Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/114654
Title: Efficient level set methods for sparse optimization problems with least squares constraints
Authors: Li, Qian
Degree: Ph.D.
Issue Date: 2025
Abstract: This thesis is concerned with an important class of sparse optimization problems with least squares constraints. The motivation for this work arises from the well-established model known as Basis Pursuit Denoising (BPDN), which aims to find a sparse representation (or approximation) of a solution to an underdetermined least squares problem. This sparse representation problem has a wide range of applications in fields such as signal processing and statistics. In this thesis, inspired by recently developed level set approaches, we will propose an efficient sieving-based secant method to address the challenges posed by the targeted problems. Firstly, an efficient dimension reduction technique, called adaptive sieving, will be introduced for solving unconstrained sparse optimization problems. This technique addresses the original problem by solving a series of reduced problems of substantially lower dimensionality. Extensive numerical experiments demonstrate the high efficiency and promising performance of this technique in solving sparse optimization problems, especially in high-dimensional settings. The significance of this technique lies in its integration into the secant method discussed later, where it efficiently reduces the dimension of the level set subproblems used to compute the value function. Secondly, we will develop the sieving-based secant method to solve sparse optimization problems with least squares constraints. In the literature, the bisection method is used to find a root of a univariate nonsmooth equation φ(λ) = ϱ for some ϱ > 0, where φ(·) is the value function computed from a solution of the corresponding regularized least squares optimization problem.
When the objective function in the constrained problem is a polyhedral gauge function, we prove that (a) for any positive integer k, φ(·) is piecewise C^k in an open interval containing the solution λ* to the equation φ(λ) = ϱ; (b) the Clarke Jacobian of φ(·) is always positive. These results allow us to establish the essential ingredients of the fast convergence rates of the secant method. Finally, with these preparations in place, we will develop our package, called SMOP, a root-finding-based secant method for solving sparse optimization problems. The high efficiency of SMOP is demonstrated by extensive numerical results on high-dimensional real applications. We point out that, in the special case where the objective function is the ℓ1 norm, our numerical results show that the secant method and the semismooth Newton method are comparable in terms of the number of iterations, which further demonstrates the high efficiency of the secant method. Note that, unlike the semismooth Newton method, the secant method does not need to compute the generalized Jacobian of the value function. This paves the way for applying the introduced secant method to more complicated sparse optimization problems with least squares constraints, where the computation of the generalized Jacobian of the value function is impractical.
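As a toy illustration of the root-finding idea described in the abstract (not the thesis's SMOP implementation), the sketch below applies the secant method to a value function φ(λ) = ϱ. The value function here is an assumption for illustration only: with the measurement matrix taken as the identity, the ℓ1-regularized least squares subproblem has a closed-form soft-thresholding solution, so φ(λ) (the residual norm at that solution) can be evaluated cheaply and is piecewise smooth and monotone, mirroring the structure the thesis exploits.

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of lam * ||.||_1 (closed-form subproblem solution).
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def make_value_function(b):
    # Assumed toy setting with A = I: the regularized least squares problem
    #   min_x 0.5 * ||x - b||^2 + lam * ||x||_1
    # is solved by soft_threshold(b, lam), so the value function
    # phi(lam) = ||x(lam) - b||_2 is available in closed form.
    def phi(lam):
        return np.linalg.norm(soft_threshold(b, lam) - b)
    return phi

def secant_root(phi, target, lam0, lam1, tol=1e-10, max_iter=50):
    # Secant iterations on g(lam) = phi(lam) - target; no generalized
    # Jacobian of phi is needed, only function values.
    g0, g1 = phi(lam0) - target, phi(lam1) - target
    for _ in range(max_iter):
        if abs(g1) < tol:
            break
        lam0, lam1 = lam1, lam1 - g1 * (lam1 - lam0) / (g1 - g0)
        g0, g1 = g1, phi(lam1) - target
    return lam1

b = np.array([3.0, -2.0, 0.5, 0.1])
phi = make_value_function(b)
rho = 0.4
lam_star = secant_root(phi, rho, 0.1, 1.0)
print(lam_star, phi(lam_star))  # phi(lam_star) is approximately rho
```

In this toy instance φ(λ)² = Σᵢ min(|bᵢ|, λ)², so the root of φ(λ) = 0.4 can be checked by hand (λ* = √0.05 ≈ 0.2236); the secant iterates converge to it in a handful of steps without any derivative information.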
Subjects: Sparse matrices; Mathematical optimization; Least squares; Hong Kong Polytechnic University -- Dissertations
Pages: xii, 115 pages : color illustrations
Appears in Collections: Thesis
Access: View full-text via https://theses.lib.polyu.edu.hk/handle/200/13761
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.