Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/110263
DC Field                 Value
dc.contributor           Department of Applied Mathematics
dc.creator               Wu, Yuqia
dc.identifier.uri        https://theses.lib.polyu.edu.hk/handle/200/13267
dc.language.iso          English
dc.title                 Globally convergent regularized Newton methods for nonconvex sparse optimization problems
dc.type                  Thesis
dcterms.abstract         Nowadays, many algorithms have been proposed to solve the nonconvex and nonsmooth problems that arise in sparse optimization. Most of these algorithms are first-order methods, such as the proximal gradient method. First-order methods have several advantages: low computational cost per iteration, weak conditions for global convergence, and easy implementation. However, their convergence rate is at most linear, which makes them slow on large-scale problems. On the other hand, the classical Newton method, a second-order method, can achieve a locally superlinear convergence rate. However, the classical Newton method equipped with an Armijo line search achieves only subsequence convergence even for smooth optimization problems, let alone for nonsmooth sparse optimization. By exploiting the structure of two classes of nonconvex and nonsmooth sparse optimization problems arising in compressed sensing and machine learning, this thesis presents an efficient hybrid framework that combines a proximal gradient method with a Newton-type method, taking advantage of both kinds of algorithms while avoiding their disadvantages.
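To make the first-order baseline mentioned in the abstract concrete, the following minimal sketch (not code from the thesis) shows the proximal gradient iteration x^{k+1} = prox_{mu*g}(x^k - mu*grad f(x^k)). All names and parameters are illustrative assumptions; soft-thresholding, the prox of the convex l1 norm, stands in for the nonconvex lq prox studied in the thesis, which is likewise applied componentwise.

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, step=1e-2, max_iter=500, tol=1e-8):
    """Basic proximal gradient method: forward gradient step, then backward prox step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # x^{k+1} = prox_{step*g}(x^k - step * grad f(x^k))
        x_new = prox_g(x - step * grad_f(x), step)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Illustration only: soft-thresholding is the prox of lam*||.||_1; the lq (0 < q < 1)
# prox used in the thesis is likewise separable but nonconvex.
def soft_threshold(z, step, lam=0.1):
    return np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)

if __name__ == "__main__":
    # Synthetic least-squares loss f(x) = 0.5 * ||A x - b||^2 with a safe step 1/L.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 50)), rng.standard_normal(30)
    x_hat = proximal_gradient(lambda x: A.T @ (A @ x - b), soft_threshold,
                              np.zeros(50), step=1.0 / np.linalg.norm(A, 2) ** 2)
```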
dcterms.abstract         The first part of the thesis designs a hybrid of the proximal gradient method and a regularized subspace Newton method (HpgSRN) for solving ℓq (0<q<1)-norm regularized minimization problems with a twice continuously differentiable loss function. HpgSRN first uses the proximal gradient method to find a neighbourhood of a potential stationary point, and then applies a regularized Newton method in the subspace on which the objective is locally smooth, so as to accelerate convergence. We show that this hybrid algorithm eventually reduces to a regularized Newton method for minimizing a locally smooth function. If the reduced objective function satisfies the Kurdyka-Łojasiewicz property and a curve ratio condition holds, the generated sequence converges to an L-stationary point from an arbitrarily chosen initial point. Moreover, if we additionally assume that the generated sequence converges to a second-order stationary point and that an error bound condition holds there, we prove superlinear convergence of the generated sequence, without assuming either the isolatedness or the local minimality of the limit point. Numerical comparisons with the proximal gradient method and ZeroFPR, the latter being an algorithm that uses a limited-memory BFGS method to minimize the forward-backward envelope of the objective function, indicate that the proposed HpgSRN not only converges much faster but also yields comparable and even better solutions.
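The sketch below illustrates the two-phase structure that the description of HpgSRN refers to: proximal gradient steps to identify a candidate support, followed by regularized Newton steps restricted to that support, where the ℓq term is smooth. It is not the thesis's implementation; the switching rule, the regularization choice, and the plain Armijo backtracking are simplified assumptions, and grad_obj/hess_obj are assumed callables for the full objective at points with the identified support.

```python
import numpy as np

def hybrid_pg_subspace_newton(grad_f, prox_g, obj, grad_obj, hess_obj, x0,
                              step=1e-2, pg_iters=200, newton_iters=50,
                              mu0=1e-6, armijo=1e-4):
    x = np.asarray(x0, dtype=float)
    # Phase 1: proximal gradient steps, used here only to identify a candidate support.
    for _ in range(pg_iters):
        x = prox_g(x - step * grad_f(x), step)
    S = np.flatnonzero(x)                    # indices of (numerically) nonzero entries
    # Phase 2: regularized Newton steps restricted to the subspace indexed by S.
    for _ in range(newton_iters):
        g = grad_obj(x)[S]
        if np.linalg.norm(g) <= 1e-10:
            break
        H = hess_obj(x)[np.ix_(S, S)]
        mu = mu0 + np.linalg.norm(g)         # crude proxy for the thesis's regularization rule
        d = np.linalg.solve(H + mu * np.eye(S.size), -g)
        t, x_new = 1.0, None
        while t > 1e-12:                     # Armijo backtracking line search
            trial = x.copy()
            trial[S] += t * d
            if obj(trial) <= obj(x) + armijo * t * (g @ d):
                x_new = trial
                break
            t *= 0.5
        if x_new is None:                    # no sufficient decrease found; stop the Newton phase
            break
        x = x_new
    return x
```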
dcterms.abstract         The second part of the thesis studies fused zero-norms regularization problems, which are the zero-norm version of the fused Lasso plus a box constraint. We propose a polynomial-time algorithm to find an element of the proximal mapping of the fused zero-norms over a box constraint. Based on this, we propose a hybrid of the proximal gradient method and an inexact projected regularized Newton method for solving the fused zero-norms regularization problems. We prove that the algorithm eventually reduces to an inexact projected regularized Newton method for seeking a critical point of a smooth function over a convex constraint. We establish convergence of the whole sequence under a nondegeneracy condition, a curve ratio condition, and the assumption that the reduced objective is a Kurdyka-Łojasiewicz function. A superlinear convergence rate of the iterates is established under a locally Hölderian error bound condition on a second-order stationary point set, without requiring either the isolatedness or the local optimality of the limit point. Finally, numerical experiments illustrate the features of the considered model and the superiority of the proposed algorithm.
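As a rough illustration of the step type named above, a single projected regularized Newton step over a box [lo, hi] may be sketched as follows. This is an assumption-laden simplification, not the proposed method: the linear system is solved exactly rather than inexactly, the support identification is omitted, and the Armijo-type test along the projection arc is a stand-in for the line search actually used.

```python
import numpy as np

def projected_regularized_newton_step(x, grad, hess, obj, lo, hi, mu=1e-6, armijo=1e-4):
    """One step: regularized Newton direction, projection onto the box, backtracking."""
    g, H = grad(x), hess(x)
    d = np.linalg.solve(H + mu * np.eye(x.size), -g)  # solved exactly here (inexact in the thesis)
    t = 1.0
    while t > 1e-12:
        x_trial = np.clip(x + t * d, lo, hi)          # projection onto the box [lo, hi]
        if obj(x_trial) <= obj(x) + armijo * (g @ (x_trial - x)):
            return x_trial
        t *= 0.5
    return x                                          # no sufficient decrease found; keep x
```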
dcterms.accessRights     open access
dcterms.educationLevel   Ph.D.
dcterms.extent           xvi, 136 pages : color illustrations
dcterms.issued           2024
dcterms.LCSH             Mathematical optimization
dcterms.LCSH             Newton-Raphson method
dcterms.LCSH             Algorithms
dcterms.LCSH             Hong Kong Polytechnic University -- Dissertations
Appears in Collections: Thesis