Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107316
DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | en_US
dc.creator | Wu, Y | en_US
dc.creator | Pan, S | en_US
dc.creator | Yang, X | en_US
dc.date.accessioned | 2024-06-14T06:36:50Z | -
dc.date.available | 2024-06-14T06:36:50Z | -
dc.identifier.issn | 1052-6234 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/107316 | -
dc.language.iso | en | en_US
dc.publisher | Society for Industrial and Applied Mathematics | en_US
dc.rights | Copyright © by SIAM. Unauthorized reproduction of this article is prohibited. | en_US
dc.rights | The following publication Wu, Y., Pan, S., & Yang, X. (2023). A Regularized Newton Method for ℓq-Norm Composite Optimization Problems. SIAM Journal on Optimization, 33(3), 1676-1706 is available at https://doi.org/10.1137/22M1482822. | en_US
dc.subject | ℓq-norm regularized composite optimization | en_US
dc.subject | Global convergence | en_US
dc.subject | KL property | en_US
dc.subject | Local error bound | en_US
dc.subject | Regularized Newton method | en_US
dc.subject | Superlinear convergence rate | en_US
dc.title | A regularized Newton method for ℓq-norm composite optimization problems | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1676 | en_US
dc.identifier.epage | 1706 | en_US
dc.identifier.volume | 33 | en_US
dc.identifier.issue | 3 | en_US
dc.identifier.doi | 10.1137/22M1482822 | en_US
dcterms.abstract | This paper is concerned with ℓq (0 < q < 1)-norm regularized minimization problems with a twice continuously differentiable loss function. Many algorithms, most of them of the first-order type, have been proposed for this class of nonconvex and nonsmooth composite problems. In this work, we propose a hybrid of the proximal gradient method and the subspace regularized Newton method, called HpgSRN. Under a mild curve-ratio condition and the Kurdyka-Łojasiewicz property of the cost function, the whole iterate sequence produced by HpgSRN is proved to have finite length and to converge to an L-type stationary point; it converges linearly if the cost function further satisfies the Kurdyka-Łojasiewicz property with exponent 1/2. Moreover, a superlinear convergence rate for the iterate sequence is achieved under an additional local error bound condition. Our convergence results require neither isolatedness nor strict local minimality of the L-stationary point. Numerical comparisons on ℓq-norm regularized linear and logistic regressions with real data against ZeroFPR, a hybrid of the proximal gradient method and a quasi-Newton method for the forward-backward envelope of the cost function proposed in [A. Themelis, L. Stella, and P. Patrinos, SIAM J. Optim., 28 (2018), pp. 2274-2303], indicate that HpgSRN not only requires much less computing time but also yields comparable or even better sparsity and objective function values. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | SIAM journal on optimization, 2023, v. 33, no. 3, p. 1676-1706 | en_US
dcterms.isPartOf | SIAM journal on optimization | en_US
dcterms.issued | 2023 | -
dc.identifier.scopus | 2-s2.0-85171542377 | -
dc.identifier.eissn | 1095-7189 | en_US
dc.description.validate | 202406 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a2814a | -
dc.identifier.SubFormID | 48453 | -
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | VoR allowed | en_US
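For orientation, the abstract above concerns the following problem class. This is only a minimal sketch in standard notation (the symbols f, x, and λ are generic placeholders, not taken from the paper): a twice continuously differentiable loss f, such as a least-squares or logistic loss, plus an ℓq-norm regularizer with 0 < q < 1.

\min_{x \in \mathbb{R}^n} \; f(x) + \lambda \|x\|_q^q,
\qquad \|x\|_q^q := \sum_{i=1}^{n} |x_i|^q, \quad 0 < q < 1, \; \lambda > 0.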
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
22m1482822.pdf | - | 606.38 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.