Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/86194
DC Field: Value
dc.contributor: Department of Applied Mathematics
dc.creator: Hu, Yaohua
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/6988
dc.language.iso: English
dc.title: Subgradient methods for convex programming and quasi-convex programming
dc.type: Thesis
dcterms.abstract: The purpose of this thesis is to propose some new types of subgradient methods, investigate convergence properties of the proposed algorithms, and illustrate their high efficiency and wide applicability through numerical experiments on both convex and quasi-convex optimization problems. In the part on convex programming, we propose a primal subgradient method and a dual subgradient method, based on the gradient sampling technique, to solve a nondifferentiable convex (constrained) optimization problem. The motivation comes from the fact that, in many applications, the gradient is cheap to compute compared with the subgradient. The proposed algorithms consist of perturbing the projection vector to the (relative) interior of the effective domain of the objective function or the constraint set, approximating the subgradient via a convex combination of (relative) gradients at random nearby points, and performing the projected subgradient iteration. Using the constant/vanishing sampling radius and the constant/divergent stepsize rules, we demonstrate convergence to the (approximate) optimal value with probability 1. Numerical results demonstrate that the gradient sampling technique improves the convergence behavior of subgradient methods, and that our proposed algorithms are comparable with some existing subgradient algorithms. In the part on quasi-convex programming, motivated by practical and theoretical reasons, we consider a generic inexact subgradient method, which we call the approximate quasi-subgradient method, to solve a nondifferentiable quasi-convex constrained optimization problem. The inexact terms stem from computational errors and noise, which arise in practical applications. Assuming that the computational errors and noise are deterministic and bounded, we study the effect of the inexact terms on subgradient methods when the constraint set is compact or when the objective function satisfies a generalized weak sharp minima condition. In both cases, using the constant/diminishing stepsize rule, we describe convergence results in both objective values and iterates, where the tolerances are given explicitly in terms of errors and noise. We also consider finite convergence to the approximate optimal value and efficiency estimates of the iterates. Several numerical experiments illustrate that the approximate quasi-subgradient method is comparable with some existing algorithms and suitable for large-scale problems. Furthermore, motivated by distributed optimization problems in networks, where both the data at each node and the transmitted data are required to reach some quantization level, we propose and investigate a quantized approximate quasi-subgradient method, obtained by applying a quantization operator after performing the subgradient iteration along the approximate quasi-subgradient. (A minimal illustrative sketch of the sampled projected-subgradient step is given after the record below.)
dcterms.accessRights: open access
dcterms.educationLevel: Ph.D.
dcterms.extent: xvi, 152 p. : ill. ; 30 cm.
dcterms.issued: 2013
dcterms.LCSH: Convex programming.
dcterms.LCSH: Programming (Mathematics)
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
Appears in Collections: Thesis
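
For readers unfamiliar with gradient-sampling subgradient schemes, the following is a minimal Python sketch of the kind of sampled projected-subgradient step outlined in the abstract. It is not the thesis's algorithm or notation: the names (f_grad, project, sample_radius, step_size), the Gaussian perturbations, and the uniform convex weights are all illustrative assumptions.

```python
import numpy as np

def sampled_subgradient_step(x, f_grad, project, sample_radius, step_size,
                             num_samples=10, rng=None):
    """One projected-subgradient step using sampled gradients (hypothetical sketch).

    f_grad(y)  -- a gradient of the objective at y (assumed to exist at sampled points)
    project(y) -- Euclidean projection onto the constraint set
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random nearby points around the current iterate (Gaussian perturbations
    # of size `sample_radius`, chosen here purely for simplicity).
    points = x + sample_radius * rng.standard_normal((num_samples, x.size))
    # Approximate a subgradient by a convex combination of the sampled
    # gradients; uniform weights are the simplest convex combination.
    grads = np.array([f_grad(p) for p in points])
    g = grads.mean(axis=0)
    # Projected subgradient iteration.
    return project(x - step_size * g)

if __name__ == "__main__":
    # Toy usage: minimize ||x||_1 over the unit box, a nonsmooth convex problem.
    f_grad = lambda y: np.sign(y)            # a gradient of ||.||_1 almost everywhere
    project = lambda y: np.clip(y, -1.0, 1.0)
    x = np.array([0.8, -0.5, 0.3])
    for k in range(1, 101):                  # diminishing stepsize 1/k
        x = sampled_subgradient_step(x, f_grad, project,
                                     sample_radius=0.1, step_size=1.0 / k)
    print(x)                                 # should approach the minimizer 0
```

In the quantized approximate quasi-subgradient variant mentioned at the end of the abstract, the update would additionally pass through a quantization operator (for example, rounding each coordinate to a fixed grid) after the projection step; that operator is not shown here.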