Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/61864
DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | en_US
dc.creator | Hu, Y | en_US
dc.creator | Li, C | en_US
dc.creator | Yang, X | en_US
dc.date.accessioned | 2016-12-19T08:57:32Z | -
dc.date.available | 2016-12-19T08:57:32Z | -
dc.identifier.issn | 1052-6234 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/61864 | -
dc.language.iso | en | en_US
dc.publisher | Society for Industrial and Applied Mathematics | en_US
dc.rights | © 2016 Society for Industrial and Applied Mathematics | en_US
dc.rights | The following publication Hu, Y., Li, C., & Yang, X. (2016). On convergence rates of linearized proximal algorithms for convex composite optimization with applications. SIAM Journal on Optimization, 26(2), 1207-1235 is available at https://doi.org/10.1137/140993090 | en_US
dc.subject | Convex composite optimization | en_US
dc.subject | Feasibility problem | en_US
dc.subject | Linearized proximal algorithm | en_US
dc.subject | Quasi-regularity condition | en_US
dc.subject | Sensor network localization | en_US
dc.subject | Weak sharp minima | en_US
dc.title | On convergence rates of linearized proximal algorithms for convex composite optimization with applications | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1207 | en_US
dc.identifier.epage | 1235 | en_US
dc.identifier.volume | 26 | en_US
dc.identifier.issue | 2 | en_US
dc.identifier.doi | 10.1137/140993090 | en_US
dcterms.abstract | In the present paper, we investigate a linearized proximal algorithm (LPA) for solving a convex composite optimization problem. Each iteration of the LPA is a proximal minimization of the convex composite function with the inner function linearized at the current iterate. The LPA has the attractive computational advantage that the solution of each subproblem is a singleton, which avoids the difficulty, present in the Gauss-Newton method (GNM), of finding a minimum-norm solution among the set of minima of its subproblem, while still maintaining the same local convergence rate as the GNM. Under the assumptions of local weak sharp minima of order p (p ∈ [1,2]) and a quasi-regularity condition, we establish a local superlinear convergence rate for the LPA. We also propose a globalization strategy for the LPA based on a backtracking line search, as well as an inexact version of the LPA. We further apply the LPA to solve a (possibly nonconvex) feasibility problem and a sensor network localization problem. Our numerical results illustrate that the LPA meets the demand for an efficient and robust algorithm for the sensor network localization problem. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | SIAM journal on optimization, 2016, v. 26, no. 2, p. 1207-1235 | en_US
dcterms.isPartOf | SIAM journal on optimization | en_US
dcterms.issued | 2016 | -
dc.identifier.scopus | 2-s2.0-84976871873 | -
dc.identifier.eissn | 1095-7189 | en_US
dc.description.validate | 202208 bcvc | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | AMA-0617 | -
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 6655625 | -
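
The abstract above outlines the LPA iteration: the inner function is linearized at the current iterate and a proximal quadratic term is added, so each subproblem is strongly convex and its solution is a singleton. The following minimal Python sketch illustrates such an iteration. It is not the authors' implementation; the names lpa, f, c, jac_c, the proximal parameter mu, the toy problem data, and the use of a generic derivative-free solver for the subproblem are all assumptions made for this example.

import numpy as np
from scipy.optimize import minimize

def lpa(x0, f, c, jac_c, mu=1.0, max_iter=200, tol=1e-8):
    """Sketch of a linearized proximal algorithm for F(x) = f(c(x)),
    with f convex and c smooth; all names here are illustrative."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        cx, J = c(x), jac_c(x)               # linearize the inner map at x

        def sub(d):
            # Proximal-linearized subproblem: f(c(x) + J d) + ||d||^2 / (2 mu).
            # The quadratic term makes the subproblem strongly convex, so its
            # minimizer is unique (the "singleton" advantage over Gauss-Newton
            # noted in the abstract).
            return f(cx + J @ d) + d @ d / (2.0 * mu)

        # A generic derivative-free solver stands in for an exact convex solver.
        d = minimize(sub, np.zeros_like(x), method="Powell").x
        if np.linalg.norm(d) < tol:          # stop when the proximal step stalls
            break
        x = x + d
    return x

# Toy composite problem (hypothetical data): drive c(x) toward the target b.
if __name__ == "__main__":
    b = np.array([1.0, 2.0])
    c = lambda x: np.array([x[0] ** 2 + x[1], x[0] - x[1] ** 2])
    jac_c = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -2.0 * x[1]]])
    f = lambda y: np.linalg.norm(y - b, 1)   # convex, possibly nonsmooth outer function
    print(lpa(np.array([0.5, 0.5]), f, c, jac_c))

In the paper's setting the proximal term is what distinguishes the LPA subproblem from the Gauss-Newton subproblem; the globalized and inexact variants mentioned in the abstract are not reproduced here.
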
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
140993090.pdf | | 358.85 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 125 (as of Apr 21, 2024)
Downloads: 53 (as of Apr 21, 2024)
SCOPUS™ citations: 49 (as of Apr 19, 2024)
Web of Science™ citations: 41 (as of Apr 25, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.