Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/86174
DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | -
dc.creator | Yu, Peiran | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/9459 | -
dc.language.iso | English | -
dc.title | Adaptation of extrapolation techniques in the proximal gradient algorithm and an iteratively reweighted l₁ algorithm | -
dc.type | Thesis | -
dcterms.abstract | Many areas, such as engineering, statistics and computing, involve solving optimization models. The proximal gradient algorithm is a popular algorithm for solving such problems. This algorithm, though simple and widely used, can be slow in practice; see, for example, [64, 90, 95]. To accelerate the proximal gradient algorithm, various approaches such as extrapolation and line-search techniques have been adopted. In this thesis, we first present some existing convergence results for the proximal gradient algorithm, such as its global iteration complexity. Then, we focus on the adaptation of extrapolation techniques, which are usually easier to implement in practice and lead to provably better iteration complexity for a large class of convex problems. We review this latter fact concerning iteration complexity and also present some existing convergence properties of the proximal gradient algorithm with extrapolation. Another popular class of first-order algorithms is the iteratively reweighted algorithms. For this class, line-search techniques have been adopted for acceleration, while extrapolation techniques have not. In view of the empirical success of extrapolation in accelerating the proximal gradient algorithm, in the last part of this thesis we investigate how extrapolation techniques can be suitably incorporated into an iteratively reweighted ℓ₁ algorithm. We specifically consider extrapolation techniques motivated by three popular optimal first-order methods: the fast iterative soft-thresholding algorithm (FISTA) [10, 68], the method by Auslender and Teboulle [6], and the method by Lan, Lu and Monteiro [53]. For each algorithm, we exhibit an explicitly checkable condition on the extrapolation parameters under which the generated sequence provably clusters at a stationary point of the optimization problem. We also investigate global convergence under additional Kurdyka-Łojasiewicz assumptions on certain potential functions. Our numerical experiments show that our algorithms usually outperform the general iterative shrinkage and thresholding algorithm in [46] and an adaptation of the iteratively reweighted ℓ₁ algorithm in [56, Algorithm 7] with nonmonotone line-search, in terms of both CPU time and solution quality, when solving random instances of log-penalty regularized least squares problems. The results concerning extrapolation techniques applied to the iteratively reweighted ℓ₁ algorithm are based on the manuscript [101], available on arXiv. | -
dcterms.accessRightsopen access-
dcterms.educationLevelM.Phil.-
dcterms.extentxiv, 91 pages : illustrations-
dcterms.issued2018-
dcterms.LCSHHong Kong Polytechnic University -- Dissertations-
dcterms.LCSHAlgorithms-
dcterms.LCSHMathematical optimization-
Appears in Collections: Thesis
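The abstract describes accelerating the proximal gradient algorithm via extrapolation. As a concrete illustration, the sketch below applies FISTA-style extrapolation to the ℓ₁-regularized least squares problem min_x 0.5*||Ax - b||² + λ||x||₁. The function name fista_l1, the iteration count and the fixed step size are illustrative assumptions, not the thesis's implementation; in particular, the checkable conditions on the extrapolation parameters and the iteratively reweighted ℓ₁ variants studied in the thesis are not reproduced here.

    import numpy as np

    def fista_l1(A, b, lam, max_iter=500):
        # Minimal sketch of the proximal gradient algorithm with
        # FISTA-style extrapolation for
        #     min_x 0.5*||A x - b||^2 + lam*||x||_1.
        # Hypothetical illustration only; not the thesis's code.
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient of the smooth part
        x = x_prev = np.zeros(A.shape[1])
        t = 1.0
        for _ in range(max_iter):
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            beta = (t - 1.0) / t_next        # FISTA extrapolation parameter
            y = x + beta * (x - x_prev)      # extrapolated point
            z = y - A.T @ (A @ y - b) / L    # gradient step at y
            # Proximal step: soft-thresholding, the prox of (lam/L)*||.||_1
            x_prev, x = x, np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
            t = t_next
        return x

Setting beta to zero recovers the plain proximal gradient method. The thesis's iteratively reweighted ℓ₁ schemes instead handle nonconvex penalties (such as the log penalty mentioned in the abstract) by solving a sequence of weighted ℓ₁ subproblems, with the extrapolation parameters restricted by the explicitly checkable conditions the abstract refers to.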