Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114804
dc.contributor: Department of Applied Mathematics
dc.creator: He, Yifan
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/13777
dc.language.iso: English
dc.title: Error bounds and penalty methods for optimization problems over the sign-constrained Stiefel manifold
dc.type: Thesis
dcterms.abstract: Optimization over the sign-constrained Stiefel manifold requires that certain columns of a variable on the Stiefel manifold be nonnegative and the remaining columns nonpositive. When all columns are nonnegative, we obtain optimization problems with nonnegative orthogonality constraints, which have wide applications in fields such as signal and image processing. These constraints endow the models with physical meaning but also make the optimization problems hard to solve because of their combinatorial nature; the quadratic assignment problem is one example. One way to handle this difficulty is to resort to penalty methods. Error bounds are commonly used to prove the exact penalty property, but whether error bounds exist on the sign-constrained Stiefel manifold was previously unknown. In this thesis, we investigate error bounds on the sign-constrained Stiefel manifold and design an effective algorithm, the smoothing proximal reweighted method (SPR), to solve the penalized problems.
dcterms.abstract: In the first part, we study the sign-constrained Stiefel manifold in ℝ^{n×r}, the subset of the Stiefel manifold whose matrices have fixed signs (nonnegative or nonpositive) in some entries. We begin with the special case of the nonnegative Stiefel manifold to discuss the error bounds, then extend the results to the sign-constrained Stiefel manifold. We present global and local error bounds: inequalities with easily computable residual functions and explicit coefficients that bound the distance from a matrix in ℝ^{n×r} to the sign-constrained Stiefel manifold. Moreover, we show that, under mild conditions, the error bounds cannot be improved except for the multiplicative constants; this explains why two square-root terms are necessary in the bounds when 1 < r < n, and why the ℓ1 norm can be used for the sign constraints when r = n and for the orthogonality constraints when r = 1. The error bounds are applied to derive exact penalty methods for minimizing a Lipschitz continuous function subject to orthogonality and sign constraints. As an application, we demonstrate the improvement obtained by adding nonnegativity to the first column of the variable in the sparse principal component analysis problem. In addition, we compare the performance of penalizing one or both constraints in the objective function on test problems.
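The flavor of such a residual function can be sketched numerically. The snippet below is an illustrative guess at its form, not the thesis's exact bound: it combines square roots of the orthogonality violation and of the negative part of the matrix, echoing the two square-root terms the abstract says are necessary when 1 < r < n.

```python
import numpy as np

def residual(X):
    """Illustrative residual for the nonnegative Stiefel manifold
    (assumed form, for intuition only): it vanishes exactly on
    matrices with orthonormal, entrywise-nonnegative columns."""
    r = X.shape[1]
    # Orthogonality violation: ||X^T X - I||_F
    orth = np.linalg.norm(X.T @ X - np.eye(r), "fro")
    # Sign violation: Frobenius norm of the negative part of X
    neg = np.linalg.norm(np.minimum(X, 0.0), "fro")
    # Two square-root terms, as in the error bound for 1 < r < n
    return np.sqrt(orth) + np.sqrt(neg)
```

A feasible point (e.g., the first r columns of the identity) gives a zero residual, while flipping its sign makes only the sign term positive.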
dcterms.abstract: In the second part, we propose a proximal iteratively reweighted ℓ2 algorithm to solve the non-Lipschitz penalized problem. Under the assumption that the objective function of the original problem is continuous, the algorithm has the subsequence convergence property: it achieves a sufficient decrease at each iteration, the distances between adjacent iterates are square summable, and any accumulation point is a stationary point. Extensive numerical experiments, including projection onto the nonnegative Stiefel manifold 𝒮^{n,r}_+ and the quadratic assignment problem, show the effectiveness of our algorithm.
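The iteratively reweighted ℓ2 idea can be illustrated on a toy problem. The sketch below is not the thesis's SPR algorithm; it is a minimal, assumed variant that handles a non-Lipschitz ℓp penalty (p < 1) on a least-squares objective by solving a sequence of smoothed weighted ridge subproblems, with the smoothing parameter driven to zero.

```python
import numpy as np

def prox_reweighted_l2(A, b, lam=0.1, p=0.5, eps=1.0, iters=50):
    """Minimize ||Ax - b||^2 + lam * sum_i |x_i|^p (p < 1, non-Lipschitz)
    by iteratively reweighted l2: each step solves a weighted ridge
    problem whose diagonal weights come from the current iterate."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        # Smoothed reweighting: w_i = p * (x_i^2 + eps)^{p/2 - 1}
        w = p * (x**2 + eps) ** (p / 2 - 1)
        # Weighted ridge subproblem: (A^T A + lam * diag(w)) x = A^T b
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
        eps *= 0.9  # drive the smoothing parameter toward zero
    return x
```

On an identity design with a single large observation, the iterates keep the zero coordinates at zero and recover the nonzero one nearly unshrunk.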
dcterms.accessRights: open access
dcterms.educationLevel: Ph.D.
dcterms.extent: xviii, 81 pages : color illustrations
dcterms.issued: 2025
dcterms.LCSH: Stiefel manifolds
dcterms.LCSH: Mathematical optimization
dcterms.LCSH: Error analysis (Mathematics)
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
Appears in Collections: Thesis