Title: Adaptation of extrapolation techniques in the proximal gradient algorithm and an iteratively reweighted l₁ algorithm
Authors: Yu, Peiran
Advisors: Pong, Ting Kei (AMA)
Chen, Xiaojun (AMA)
Keywords: Algorithms
Mathematical optimization
Issue Date: 2018
Publisher: The Hong Kong Polytechnic University
Abstract: Many areas such as engineering, statistics and computing involve solving optimization models. The proximal gradient algorithm is a popular algorithm for solving these problems. This algorithm, though simple and widely used, can be slow in practice; see for example [64, 90, 95]. To accelerate the proximal gradient algorithm, various approaches such as extrapolation techniques or line-search have been adopted. In this thesis, we first present some existing convergence results of the proximal gradient algorithm, such as its global complexity. Then, we focus on the adaptation of extrapolation techniques, which are usually easier to implement in practice and lead to provably better iteration complexity for a large class of convex problems. We review this latter fact concerning iteration complexity and also present some existing convergence properties of the proximal gradient algorithm with extrapolation. Another popular class of first-order algorithms is the iteratively reweighted algorithms. For this class of algorithms, line-search techniques have also been adopted for acceleration, while extrapolation techniques have not. In view of the empirical success of extrapolation in accelerating the proximal gradient algorithm, in the last section of this thesis we investigate how extrapolation techniques can be suitably incorporated into an iteratively reweighted ℓ₁ algorithm. We specifically consider extrapolation techniques motivated by three popular optimal first-order methods: the fast iterative soft-thresholding algorithm (FISTA) [10, 68], the method by Auslender and Teboulle [6], and the method by Lan, Lu and Monteiro [53]. For each algorithm, we exhibit an explicitly checkable condition on the extrapolation parameters so that the generated sequence provably clusters at a stationary point of the optimization problem. We also investigate global convergence under additional Kurdyka-Łojasiewicz assumptions on certain potential functions.
Our numerical experiments show that our algorithms usually outperform the general iterative shrinkage and thresholding algorithm in [46] and an adaptation of the iteratively reweighted ℓ₁ algorithm in [56, Algorithm 7] with nonmonotone line-search for solving random instances of log-penalty regularized least squares problems, in terms of both CPU time and solution quality. The results concerning extrapolation techniques applied to the iteratively reweighted ℓ₁ algorithm are based on the manuscript [101], available on arXiv.
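To illustrate the kind of scheme the abstract refers to, the following is a minimal sketch of the proximal gradient method with FISTA-style extrapolation applied to ℓ₁-regularized least squares. This is an illustrative implementation assembled from the standard form of these methods, not the thesis's iteratively reweighted ℓ₁ variant; all function names and parameter choices here are assumptions for the example.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_extrapolated(A, b, lam, n_iter=500):
    # Proximal gradient with FISTA-style extrapolation for
    #   min_x  0.5 * ||A x - b||^2 + lam * ||x||_1.
    # (Illustrative sketch; not the thesis's algorithm.)
    L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    y = x.copy()                      # extrapolated point
    t = 1.0                           # FISTA momentum parameter
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step at y
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # Extrapolation: move past x_new along the most recent direction.
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

Setting the extrapolation coefficient (t - 1) / t_new to zero recovers the plain (unaccelerated) proximal gradient method, which makes the role of the extrapolation parameters discussed in the thesis easy to isolate experimentally.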
Description: xiv, 91 pages : illustrations
PolyU Library Call No.: [THS] LG51 .H577M AMA 2018 Yu
Rights: All rights reserved.
Appears in Collections: Thesis

Files in This Item:
File | Description | Size | Format
991022131144903411_link.htm | For PolyU Users | 167 B | HTML
991022131144903411_pira.pdf | For All Users (Non-printable) | 1.52 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.