Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/98607
View/Download Full Text
DC Field	Value	Language
dc.contributor	Department of Applied Mathematics	en_US
dc.creator	Shi, X	en_US
dc.creator	Huang, Y	en_US
dc.creator	Huang, J	en_US
dc.creator	Ma, S	en_US
dc.date.accessioned	2023-05-10T02:00:38Z	-
dc.date.available	2023-05-10T02:00:38Z	-
dc.identifier.issn	0167-9473	en_US
dc.identifier.uri	http://hdl.handle.net/10397/98607	-
dc.language.iso	en	en_US
dc.publisher	Elsevier BV	en_US
dc.rights	© 2018 Elsevier B.V. All rights reserved.	en_US
dc.rights	© 2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/.	en_US
dc.rights	The following publication Shi, X., Huang, Y., Huang, J., & Ma, S. (2018). A Forward and Backward Stagewise algorithm for nonconvex loss functions with adaptive Lasso. Computational statistics & data analysis, 124, 235-251 is available at https://doi.org/10.1016/j.csda.2018.03.006.	en_US
dc.subject	Forward and Backward Stagewise	en_US
dc.subject	Penalization	en_US
dc.subject	Nonconvex loss	en_US
dc.subject	Adaptive Lasso	en_US
dc.title	A Forward and Backward Stagewise algorithm for nonconvex loss functions with adaptive Lasso	en_US
dc.type	Journal/Magazine Article	en_US
dc.identifier.spage	235	en_US
dc.identifier.epage	251	en_US
dc.identifier.volume	124	en_US
dc.identifier.doi	10.1016/j.csda.2018.03.006	en_US
dcterms.abstract	Penalization is a popular tool for multi- and high-dimensional data. Most existing computational algorithms have been developed for convex loss functions, yet nonconvex loss functions can sometimes yield more robust results and have important applications. Motivated by the BLasso algorithm, this study develops the Forward and Backward Stagewise (Fabs) algorithm for nonconvex loss functions with the adaptive Lasso (aLasso) penalty. It is shown that each point along the Fabs paths is a δ-approximate solution to the aLasso problem, and that the Fabs paths converge to stationary points of the aLasso problem as δ goes to zero, provided the loss function has second-order derivatives bounded from above. The study exemplifies Fabs with an application to penalized smooth partial rank (SPR) estimation, for which effective algorithms are still lacking. Extensive numerical studies demonstrate the benefit of penalized SPR estimation using Fabs, especially in high-dimensional settings. An application to the smoothed 0-1 loss in binary classification demonstrates the algorithm's capability to work with other differentiable nonconvex loss functions. (See the illustrative sketch following this record.)	en_US
dcterms.accessRights	open access	en_US
dcterms.bibliographicCitation	Computational statistics and data analysis, Aug. 2018, v. 124, p. 235-251	en_US
dcterms.isPartOf	Computational statistics and data analysis	en_US
dcterms.issued	2018-08	-
dc.identifier.scopus	2-s2.0-85056301184	-
dc.identifier.eissn	1872-7352	en_US
dc.description.validate	202305 bcch	en_US
dc.description.oa	Accepted Manuscript	en_US
dc.identifier.FolderNumber	AMA-0357	-
dc.description.fundingSource	Self-funded	en_US
dc.description.pubStatus	Published	en_US
dc.identifier.OPUS	13241672	-
dc.description.oaCategory	Green (AAM)	en_US
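The abstract above describes Fabs only at a high level. For orientation, here is a minimal sketch, in Python/NumPy, of a BLasso-style forward and backward stagewise loop with adaptive-Lasso weights; it illustrates the roles of the step size δ and the backward tolerance ξ. The function names, step rules, and λ-relaxation below are illustrative assumptions, not the authors' exact algorithm (the paper applies Fabs to penalized smooth partial rank estimation and to the smoothed 0-1 loss).

```python
import numpy as np

def basis(j, p):
    """Unit basis vector e_j, used to probe coordinate-wise steps."""
    v = np.zeros(p)
    v[j] = 1.0
    return v

def fabs_sketch(loss, w, p, delta=0.05, xi=1e-6, max_steps=300):
    """Hedged, BLasso-flavoured sketch of a Forward and Backward Stagewise
    (Fabs) path for  min_beta  L(beta) + lam * sum_j w_j * |beta_j|,
    where L is smooth but possibly nonconvex and w holds adaptive-Lasso
    weights. The step and relaxation rules are assumptions, not the paper's
    exact algorithm. Returns the (lam, beta) pairs along the path."""
    beta = np.zeros(p)
    probe = lambda b, j, s: loss(b + s * basis(j, p))

    # Initial forward step: the weighted coordinate move that reduces L the
    # most; its gain per unit of penalty fixes the starting level lam.
    L0 = loss(beta)
    Lbest, j, s = min((probe(beta, j, s * delta / w[j]), j, s)
                      for j in range(p) for s in (-1.0, 1.0))
    beta[j] += s * delta / w[j]
    lam = (L0 - Lbest) / delta
    path = [(lam, beta.copy())]

    for _ in range(max_steps):
        Lcur = loss(beta)
        # Backward step: shrink an active coefficient toward zero whenever
        # that lowers the penalized objective by more than the tolerance xi.
        back = None
        for k in np.flatnonzero(beta):
            step = -np.sign(beta[k]) * min(delta / w[k], abs(beta[k]))
            dObj = (probe(beta, k, step) - Lcur) - lam * w[k] * abs(step)
            if dObj < -xi and (back is None or dObj < back[0]):
                back = (dObj, k, step)
        if back is not None:
            _, k, step = back
            beta[k] += step
        else:
            # Forward step: steepest weighted coordinate descent on L; relax
            # lam when the per-step gain in L falls below the current level.
            Lnew, j, s = min((probe(beta, j, s * delta / w[j]), j, s)
                             for j in range(p) for s in (-1.0, 1.0))
            beta[j] += s * delta / w[j]
            lam = min(lam, (Lcur - Lnew) / delta)
        path.append((lam, beta.copy()))
        if lam <= 0:  # the path has reached the unpenalized region
            break
    return path

if __name__ == "__main__":
    # Tiny least-squares demonstration. Adaptive-Lasso weights are taken as
    # 1/|initial estimate|, the usual aLasso prescription; any smooth
    # (possibly nonconvex) loss callable could be substituted.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 5))
    y = X @ np.array([2.0, 0.0, 0.0, -1.0, 0.0]) + 0.1 * rng.standard_normal(50)
    loss = lambda b: 0.5 * np.mean((y - X @ b) ** 2)
    w = 1.0 / (np.abs(np.linalg.lstsq(X, y, rcond=None)[0]) + 1e-3)
    for lam, beta in fabs_sketch(loss, w, p=5)[::60]:
        print(f"lam={lam:.4f}  beta={np.round(beta, 3)}")
```

The backward step is what distinguishes this family from plain forward stagewise fitting: it can undo earlier moves whenever shrinking an active coefficient lowers the penalized objective, which is what allows the path to track δ-approximate solutions even when the loss is nonconvex.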
Appears in Collections: Journal/Magazine Article
Files in This Item:
File	Description	Size	Format
Huang_Forward_Backward_Stagewise.pdf	Pre-Published version	1.55 MB	Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Access: View full-text via PolyU eLinks SFX Query

Page views: 77 (as of Apr 14, 2025)
Downloads: 43 (as of Apr 14, 2025)
SCOPUS™ citations: 10 (as of Dec 19, 2025)
Web of Science™ citations: 8 (as of Oct 10, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.