Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/74211
Title: Calculus of the exponent of Kurdyka–Łojasiewicz inequality and its applications to linear convergence of first-order methods
Authors: Li, G
Pong, TK 
Issue Date: Oct-2018
Source: Foundations of Computational Mathematics, Oct. 2018, v. 18, pp. 1199-1232
Abstract: In this paper, we study the Kurdyka–Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we develop various calculus rules to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents. In addition, we show that the well-studied Luo–Tseng error bound, together with a mild assumption on the separation of stationary values, implies that the KL exponent is 1/2. The Luo–Tseng error bound is known to hold for a large class of concrete structured optimization problems, and thus we deduce the KL exponent of a large class of functions whose exponents were previously unknown. Building upon this and the calculus rules, we are then able to show that for many convex or nonconvex optimization models for applications such as sparse recovery, the objective function's KL exponent is 1/2. This includes the least squares problem with smoothly clipped absolute deviation regularization or minimax concave penalty regularization, and the logistic regression problem with ℓ₁ regularization. Since many existing local convergence rate analyses for first-order methods in the nonconvex scenario rely on the KL exponent, our results enable us to obtain explicit convergence rates for various first-order methods when they are applied to a large variety of practical optimization models. Finally, we further illustrate how our results can be applied to establishing local linear convergence of the proximal gradient algorithm and the inertial proximal algorithm with constant step sizes for some specific models that arise in sparse recovery.
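For context, the KL property referenced above can be stated in its standard form (the symbols c, ε, ν, α below are generic names, not quoted from the paper): a proper closed function f satisfies the KL inequality at a stationary point x̄ with exponent α ∈ [0, 1) if there exist c, ε, ν > 0 such that

\[
\operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; c\,\bigl(f(x) - f(\bar{x})\bigr)^{\alpha}
\quad \text{whenever } \|x - \bar{x}\| \le \epsilon \text{ and } f(\bar{x}) < f(x) < f(\bar{x}) + \nu.
\]

The case α = 1/2 is what drives the local linear convergence results described in the abstract. As an illustration of the kind of first-order method covered by the analysis, the following is a minimal proximal gradient (ISTA) sketch for the ℓ₁-regularized least squares model min_x ½‖Ax − b‖² + λ‖x‖₁ with constant step size 1/L. It is an illustrative sketch, not code from the paper; the synthetic data (A, b), the value of λ, and the stopping rule are all assumptions.

import numpy as np

def soft_threshold(v, t):
    # Proximal map of t*||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, max_iter=5000, tol=1e-10):
    # Minimize 0.5*||A x - b||^2 + lam*||x||_1 via ISTA with constant step 1/L.
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)              # gradient of 0.5*||A x - b||^2
        x_new = soft_threshold(x - grad / L, lam / L)
        if np.linalg.norm(x_new - x) <= tol:  # simple stopping rule (an illustrative choice)
            break
        x = x_new
    return x

# Illustrative usage on synthetic sparse-recovery data:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
x_hat = proximal_gradient(A, b, lam=0.1)

When the objective has KL exponent 1/2 at the limit point, iterates of this scheme converge locally at a linear rate, which is the connection the paper makes precise.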
Keywords: Convergence rate
First-order methods
Kurdyka–Łojasiewicz inequality
Linear convergence
Luo–Tseng error bound
Sparse optimization
Publisher: Springer
Journal: Foundations of Computational Mathematics
ISSN: 1615-3375
DOI: 10.1007/s10208-017-9366-8
Rights: © SFoCM 2017
This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use (https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms), but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/s10208-017-9366-8.
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: Li_Calculus_Kurdyka–Łojasiewicz_Inequality.pdf
Description: Pre-Published version
Size: 1.13 MB
Format: Adobe PDF
Open Access Information
Status: Open access
File Version: Final Accepted Manuscript

