Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/74211
Title: Calculus of the exponent of Kurdyka–Łojasiewicz inequality and its applications to linear convergence of first-order methods
Authors: Li, G; Pong, TK
Issue Date: Oct-2018
Source: Foundations of Computational Mathematics, Oct. 2018, v. 18, p. 1199-1232
Abstract: In this paper, we study the Kurdyka–Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we develop various calculus rules to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents. In addition, we show that the well-studied Luo–Tseng error bound, together with a mild assumption on the separation of stationary values, implies that the KL exponent is 1/2. The Luo–Tseng error bound is known to hold for a large class of concrete structured optimization problems, and thus we deduce the KL exponent of a large class of functions whose exponents were previously unknown. Building upon this and the calculus rules, we are then able to show that for many convex or nonconvex optimization models used in applications such as sparse recovery, the objective function's KL exponent is 1/2. This includes the least squares problem with smoothly clipped absolute deviation (SCAD) regularization or minimax concave penalty (MCP) regularization, and the logistic regression problem with ℓ1 regularization. Since many existing local convergence rate analyses for first-order methods in the nonconvex scenario rely on the KL exponent, our results enable us to obtain explicit convergence rates for various first-order methods when they are applied to a large variety of practical optimization models. Finally, we further illustrate how our results can be applied to establish local linear convergence of the proximal gradient algorithm and the inertial proximal algorithm with constant step sizes for some specific models that arise in sparse recovery.
Keywords: Convergence rate; First-order methods; Kurdyka–Łojasiewicz inequality; Linear convergence; Luo–Tseng error bound; Sparse optimization
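For context, the KL inequality referenced throughout the abstract can be stated in its standard form as follows. This is a sketch of the usual formulation with exponent α; the notation is standard and not taken from this record.

```latex
% Sketch of the Kurdyka–Łojasiewicz (KL) property with exponent \alpha.
% For a proper closed function f and a stationary point \bar{x}, f has
% the KL property at \bar{x} with exponent \alpha \in [0,1) if there
% exist c, \epsilon, \nu > 0 such that
\[
  \operatorname{dist}\bigl(0, \partial f(x)\bigr)
    \;\ge\; c \,\bigl(f(x) - f(\bar{x})\bigr)^{\alpha}
\]
% whenever \|x - \bar{x}\| \le \epsilon and
% f(\bar{x}) < f(x) < f(\bar{x}) + \nu.
% The case \alpha = 1/2 is the one the abstract highlights: it is the
% exponent that yields local linear convergence of first-order methods.
```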
Publisher: Springer
Journal: Foundations of Computational Mathematics
ISSN: 1615-3375
DOI: 10.1007/s10208-017-9366-8
Rights: © SFoCM 2017. This is a post-peer-review, pre-copyedit version of an article published in Foundations of Computational Mathematics. The final authenticated version is available online at: http://dx.doi.org/10.1007/s10208-017-9366-8
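The abstract's final point concerns local linear convergence of the proximal gradient algorithm with a constant step size on sparse-recovery models. Below is a minimal, self-contained sketch of that algorithm on an ℓ1-regularized least squares problem; the model, data, and all names are illustrative choices made here, not code from the paper (which also treats the nonconvex SCAD and MCP penalties).

```python
import numpy as np

# Minimal sketch (not the paper's code): proximal gradient with a
# constant step size on the l1-regularized least squares problem
#     min_x  0.5 * ||A x - b||^2 + lam * ||x||_1,
# a sparse-recovery model of the kind discussed in the abstract.

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, n_iter=500):
    # Constant step size 1/L, where L = ||A^T A||_2 is a Lipschitz
    # constant of the gradient of the smooth part.
    L = np.linalg.norm(A.T @ A, 2)
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)  # gradient of 0.5 * ||A x - b||^2
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = proximal_gradient(A, b, lam=0.1)
print("nonzero entries recovered:", np.count_nonzero(np.abs(x_hat) > 1e-6))
```

When the objective's KL exponent is 1/2, as the paper establishes for models of this kind, the iterates of such a scheme converge locally at a linear rate.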
Appears in Collections: Journal/Magazine Article
Files in This Item:

| File | Description | Size | Format |
|---|---|---|---|
| a0585-n03_pDCAe_v2_new.pdf | Pre-Published version | 1.1 MB | Adobe PDF |
Page views: 73 (as of May 15, 2022)
Downloads: 73 (as of May 15, 2022)
Scopus citations: 68 (as of May 20, 2022)
Web of Science citations: 65 (as of May 19, 2022)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.