Title: Calculus of Kurdyka-Łojasiewicz exponents and its applications in the analysis of first-order methods
Authors: Yu, Peiran
Degree: Ph.D.
Issue Date: 2021
Abstract: In this thesis, we study calculus rules for Kurdyka-Łojasiewicz (KL) exponents and show how KL exponents are applied in analyzing first-order methods for widely used optimization problems. First, we focus on calculus rules that derive the KL exponents of new functions from functions with known KL exponents. These include deriving the KL exponent of the inf-projection of a function from that of the original function; the KL exponent of the sum of a continuous function and the indicator function of a constraint set from that of its Lagrangian; and the KL exponent of a fractional function from that of the difference between the numerator and (a suitable scaling of) the denominator. Using these rules, we derive explicit KL exponents for some concrete optimization models, such as the fractional model in [115,116], the model of minimizing l1 subject to logistic/Poisson loss, some semidefinite-programming-representable functions, and some functions with C2-cone reducible structures.
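For reference, the KL property with exponent alpha — the notion central to the calculus rules above — can be stated as follows. This is a standard formulation for a proper closed function f at a point in the domain of its subdifferential; the constants c, epsilon, nu are local and are not specified in this abstract:

```latex
% KL property with exponent \alpha \in [0,1) for a proper closed
% function f at \bar{x} \in \operatorname{dom}\partial f:
% there exist c, \epsilon, \nu > 0 such that
\operatorname{dist}\bigl(0, \partial f(x)\bigr)
  \;\ge\; c\,\bigl(f(x) - f(\bar{x})\bigr)^{\alpha}
\quad \text{whenever } \|x - \bar{x}\| \le \epsilon
  \text{ and } f(\bar{x}) < f(x) < f(\bar{x}) + \nu .
```

The case alpha = 1/2 is the one that typically yields local linear convergence of first-order methods, which is why deriving exponents of 1/2 for concrete models is of particular interest.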
Second, we show how KL exponents are employed in analyzing an existing first-order method, the sequential convex programming method with monotone line search (SCPls) in [83], for difference-of-convex (DC) optimization problems with multiple smooth inequality constraints. By imposing suitable KL assumptions, we analyze the convergence rate of the sequence generated by SCPls in both nonconvex and convex settings. We also discuss how the various conditions required in our analysis can be verified for minimizing l1-2 [123] subject to a residual error measured by the l2 norm/Lorentzian norm [36]. Finally, to further illustrate the applications of KL exponents, we focus on minimizing the quotient of l1 and l2 (denoted l1/l2) subject to one possibly nonsmooth constraint [97]. We show that the sum of l1/l2 and the indicator function of an affine constraint set satisfies the KL property with exponent 1/2; this allows us to establish linear convergence of the algorithm proposed in [116, Eq. 11] under mild assumptions. We next extend the l1/l2 model to handle compressed sensing problems with noise. We establish solution existence for some of these models under the spherical section property [114,128], and extend the algorithm in [116, Eq. 11] to solve them. We prove subsequential convergence of our algorithm under mild conditions, and establish global convergence of the whole sequence it generates by imposing additional KL and differentiability assumptions on a specially constructed potential function. We then perform numerical experiments on robust compressed sensing and basis pursuit denoising, with residual error measured by the l2 norm or the Lorentzian norm, by solving the corresponding l1/l2 models with our algorithm. Our numerical simulations show that the algorithm recovers the original sparse vectors with reasonable accuracy.
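As an illustration, the l1/l2 model discussed above — whose objective, summed with the indicator of the affine constraint set, is shown in the thesis to have KL exponent 1/2 — takes the following form. This is a sketch of the standard formulation, with A, b, and sigma as generic problem data; the noisy variant shown is one possible constraint choice among those the thesis considers:

```latex
% Noiseless l1/l2 model with an affine constraint:
\min_{x \neq 0} \; \frac{\|x\|_1}{\|x\|_2}
  \quad \text{s.t.} \quad Ax = b ,
% and a noisy variant for compressed sensing with noise:
\min_{x \neq 0} \; \frac{\|x\|_1}{\|x\|_2}
  \quad \text{s.t.} \quad \|Ax - b\| \le \sigma .
```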
Subjects: Calculus
Mathematical optimization
Hong Kong Polytechnic University -- Dissertations
Pages: xviii, 169 pages : color illustrations
Appears in Collections: Thesis





Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.