Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/92599
Title: First-order methods with extrapolation for some structured nonconvex problems and their applications
Authors: Zu, Chenchen
Degree: Ph.D.
Issue Date: 2021
Abstract: Many optimization problems nowadays arise at a large scale, and numerous techniques have accordingly been introduced to accelerate optimization algorithms. A practical one is the extrapolation strategy. In this thesis, we propose and analyze two extrapolated first-order algorithms and evaluate their efficiency through numerical experiments. We also construct sparse portfolio selection models and study their theoretical and numerical characteristics. The first part investigates an extrapolated inexact quasisubgradient method with diminishing, constant, and dynamic stepsizes for a quasiconvex minimization problem (a sketch of the basic update follows this paragraph). Convergence in objective values and the iteration complexity of the method are established under a Hölder condition. Under the additional assumption of weak sharp minima, a sublinear convergence rate for the iterates is obtained for a particular diminishing stepsize and extrapolation rule. For the constant and dynamic stepsizes, a linear rate of convergence to (a ball around) the optimal solution set is proved for a specially selected extrapolation step. In the same spirit, we study a primal-dual extrapolated quasisubgradient method with diminishing and constant stepsizes for finding a saddle point of a quasiconvex-quasiconcave function. Numerical testing shows that extrapolation improves performance in terms of the number of iterations needed to reach an approximate optimal solution.
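To make the basic update concrete, here is a minimal Python sketch of one extrapolated quasisubgradient iteration: the current point is first extrapolated along the previous displacement, then moved against a unit quasisubgradient and projected back onto the feasible set. The toy objective, the box constraint, and the stepsize and extrapolation choices below are illustrative assumptions, not the exact scheme analyzed in the thesis.

```python
import numpy as np

def extrapolated_quasisubgradient(qsub, project, x0, beta, stepsize, iters=200):
    # y_k = x_k + beta_k (x_k - x_{k-1});  x_{k+1} = P_X(y_k - v_k g_k),
    # where g_k is a unit quasisubgradient of f at the extrapolated point y_k.
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(iters):
        y = x + beta(k) * (x - x_prev)               # extrapolation step
        g = qsub(y)                                  # unit quasisubgradient at y
        x_prev, x = x, project(y - stepsize(k) * g)  # step and project onto X
    return x

# Toy quasiconvex problem: f(x) = sqrt(||x - c||) over the box X = [-5, 5]^2.
c = np.array([1.0, -2.0])
qsub = lambda x: (x - c) / (np.linalg.norm(x - c) + 1e-12)  # normalized gradient direction
project = lambda x: np.clip(x, -5.0, 5.0)                   # projection onto the box
x_best = extrapolated_quasisubgradient(
    qsub, project, x0=np.zeros(2),
    beta=lambda k: 0.3,                 # constant extrapolation parameter (illustrative)
    stepsize=lambda k: 1.0 / (k + 1))   # diminishing stepsize v_k = 1/(k+1)
print(x_best)  # approaches the minimizer c = (1, -2)
```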
In the second part, a penalty extrapolated alternating direction method of multipliers (ADMM) is proposed to solve a generalized bilinear programming problem. The algorithm is organized into inner and outer iterations: the inner iterations apply a proximal ADMM strategy with extrapolation to a quadratic penalty relaxation, while the outer iterations update the penalty parameter (a simplified sketch of this two-level structure appears after this abstract). Subsequential convergence and an O(1/k) iteration complexity are established for the inner extrapolated ADMM algorithm, and its global convergence is obtained by virtue of the Kurdyka-Łojasiewicz theory. Convergence to a stationary point is then provided for the outer algorithm. In numerical testing, the benefit of extrapolation is again demonstrated for the penalty ADMM algorithm. In addition, we compare the proposed algorithm with a semidefinite relaxation method: the resulting optimal values of the two methods are close, but the proposed ADMM algorithm terminates within a shorter time.

In the final part, we study an ℓp-sparse minimax model and an ℓ1-sparse minimax Sharpe ratio model (an illustrative formulation also follows this abstract) and observe a descent property of the ℓp norm of the optimal portfolio. A parametric algorithm is also designed for finding a global solution of the ℓ1-sparse minimax Sharpe ratio model. Numerically, we compare the three sparse minimax models with two sparse mean-variance models and test the effect of the regularization parameter on sparsity, return, risk, and short selling, using weekly historical data of 1200 stocks. We also apply the proposed penalty ADMM method to the ℓ1-sparse minimax Sharpe ratio model. As the numerical experiments indicate, sparser portfolios from all the sparse minimax models tend to have lower rates of return and lower levels of risk, whereas for the sparse mean-variance models the corresponding changes are less pronounced.
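The two-level structure of the penalty scheme can be sketched as follows. Note that this is a simplified stand-in: the inner loop below takes extrapolated alternating projected-gradient steps on a quadratic penalty of a bilinear objective with an assumed linear coupling constraint Bx + Cy = b, rather than the thesis's full proximal ADMM; all problem data and parameter values are illustrative.

```python
import numpy as np

def penalty_extrapolated_scheme(c, d, A, B, C, b, box, rho0=1.0, sigma=5.0,
                                beta=0.3, tau=0.5, inner=300, outer=6):
    # Penalized objective: c'x + d'y + x'Ay + (rho/2)*||Bx + Cy - b||^2.
    proj = lambda v: np.clip(v, box[0], box[1])  # projection onto the box X = Y
    x = x_prev = np.zeros(len(c))
    y = y_prev = np.zeros(len(d))
    rho = rho0
    for _ in range(outer):                       # outer iterations: raise the penalty
        step = tau / (1.0 + rho)                 # shrink the step as the penalty grows
        for _ in range(inner):                   # inner iterations: extrapolated steps
            u = x + beta * (x - x_prev)          # extrapolation in the x-block
            gx = c + A @ y + rho * B.T @ (B @ u + C @ y - b)
            x_prev, x = x, proj(u - step * gx)
            v = y + beta * (y - y_prev)          # extrapolation in the y-block
            gy = d + A.T @ x + rho * C.T @ (B @ x + C @ v - b)
            y_prev, y = y, proj(v - step * gy)
        rho *= sigma                             # update the penalty parameter
    return x, y

# Illustrative instance: x and y coupled through x - y = 0.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
x, y = penalty_extrapolated_scheme(rng.standard_normal(n), rng.standard_normal(n),
                                   A, np.eye(n), -np.eye(n), np.zeros(n),
                                   box=(-1.0, 1.0))
print(np.linalg.norm(x - y))  # coupling residual shrinks as rho grows
```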
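For the portfolio models, one plausible form of the ℓp-sparse minimax model adds an ℓp regularizer to the classical minimax portfolio selection rule; the exact constraints used in the thesis (for instance, on short selling) may differ, and the symbols below are assumptions for illustration:

\[
\max_{x \in \mathbb{R}^n} \; \min_{1 \le t \le T} \sum_{j=1}^{n} r_{jt} x_j \;-\; \lambda \sum_{j=1}^{n} |x_j|^{p}
\quad \text{s.t.} \quad \sum_{j=1}^{n} \mu_j x_j \ge G, \qquad \sum_{j=1}^{n} x_j = 1,
\]

where \(0 < p \le 1\), \(r_{jt}\) is the return of asset \(j\) in period \(t\), \(\mu_j\) its mean return, \(G\) a target return level, and \(\lambda > 0\) the regularization parameter; larger \(\lambda\) encourages sparser portfolios.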
Subjects: Extrapolation
First-order logic
Hong Kong Polytechnic University -- Dissertations
Pages: xviii, 143 pages : color illustrations
Appears in Collections: Thesis
