Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/79655
Title: A proximal difference-of-convex algorithm with extrapolation
Authors: Wen, B 
Chen, XJ 
Pong, TK 
Issue Date: Mar-2018
Source: Computational optimization and applications, Mar. 2018, v. 69, no. 2, p. 297-324
Abstract: We consider a class of difference-of-convex (DC) optimization problems whose objective is level-bounded and is the sum of a smooth convex function with Lipschitz gradient, a proper closed convex function and a continuous concave function. While problems of this kind can be solved by the classical difference-of-convex algorithm (DCA) (Pham et al. in Acta Math Vietnam 22:289-355, 1997), the difficulty of the subproblems of this algorithm depends heavily on the choice of DC decomposition. Simpler subproblems can be obtained by using a specific DC decomposition described in Pham et al. (SIAM J Optim 8:476-505, 1998). This decomposition has been proposed in numerous works such as Gotoh et al. (DC formulations and algorithms for sparse optimization problems, 2017), and we refer to the resulting DCA as the proximal DCA. Although the subproblems are simpler, the proximal DCA is the same as the proximal gradient algorithm when the concave part of the objective is void, and hence is potentially slow in practice. In this paper, motivated by the extrapolation techniques for accelerating the proximal gradient algorithm in the convex setting, we consider a proximal difference-of-convex algorithm with extrapolation to possibly accelerate the proximal DCA. We show that any cluster point of the sequence generated by our algorithm is a stationary point of the DC optimization problem for a fairly general choice of extrapolation parameters: in particular, the parameters can be chosen as in FISTA with fixed restart (O'Donoghue and Candès in Found Comput Math 15:715-732, 2015). In addition, by assuming the Kurdyka-Łojasiewicz property of the objective and the differentiability of the concave part, we establish global convergence of the sequence generated by our algorithm and analyze its convergence rate. Our numerical experiments on two difference-of-convex regularized least squares models show that our algorithm usually outperforms the proximal DCA and the general iterative shrinkage and thresholding algorithm proposed in Gong et al. (A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems, 2013).
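The algorithm described in the abstract (pDCAe) augments each proximal DCA step with a FISTA-type extrapolation of the iterates. The sketch below illustrates the iteration in Python on an assumed instance of a DC regularized least squares model, min 0.5||Ax - b||^2 + lam*(||x||_1 - ||x||_2); the abstract does not name its two test models, so this regularizer, the function name pdca_e, the restart period and the stopping rule are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal map of t*||.||_1 (componentwise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pdca_e(A, b, lam, max_iter=1000, restart_every=200, tol=1e-8):
    # Hedged sketch of a proximal DC algorithm with extrapolation for
    #   min_x 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2),
    # with f(x) = 0.5*||Ax - b||^2 (smooth convex, Lipschitz gradient),
    # g = lam*||.||_1 (proper closed convex), and concave part -lam*||.||_2.
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of grad f
    x = np.zeros(n)
    x_old = x.copy()
    theta = theta_old = 1.0
    for k in range(max_iter):
        # FISTA-type extrapolation weight with fixed restart, one
        # admissible parameter choice named in the abstract.
        if k % restart_every == 0:
            theta = theta_old = 1.0
        beta = (theta_old - 1.0) / theta
        y = x + beta * (x - x_old)
        # Subgradient of h = lam*||.||_2 at the current iterate x.
        nx = np.linalg.norm(x)
        xi = lam * x / nx if nx > 0 else np.zeros(n)
        # Proximal gradient step on f + g, shifted by the subgradient xi.
        grad = A.T @ (A @ y - b)
        x_old, x = x, soft_threshold(y - (grad - xi) / L, lam / L)
        theta_old = theta
        theta = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2))
        if np.linalg.norm(x - x_old) <= tol * max(1.0, np.linalg.norm(x)):
            break
    return x
```

Setting beta = 0 throughout recovers the plain proximal DCA, which is the baseline the abstract compares against.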
Keywords: Difference-of-convex problems
Nonconvex
Nonsmooth
Extrapolation
Kurdyka-Łojasiewicz inequality
Publisher: Springer
Journal: Computational optimization and applications 
ISSN: 0926-6003
EISSN: 1573-2894
DOI: 10.1007/s10589-017-9954-1
Rights: © Springer Science+Business Media, LLC 2017
This is a post-peer-review, pre-copyedit version of an article published in Computational Optimization and Applications. The final authenticated version is available online at: http://dx.doi.org/10.1007/s10589-017-9954-1
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: a0585-n03_pDCAe_v2_new.pdf
Description: Pre-Published version
Size: 1.1 MB
Format: Adobe PDF
Open Access Information:
Status: open access
File Version: Final Accepted Manuscript

Page views: 191 (as of Apr 14, 2024)
Downloads: 177 (as of Apr 14, 2024)

SCOPUS™ Citations: 95 (as of Apr 19, 2024)
Web of Science™ Citations: 87 (as of Apr 18, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.