Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/98590
DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | -
dc.creator | Luo, Z | en_US
dc.creator | Sun, D | en_US
dc.creator | Toh, KC | en_US
dc.creator | Xiu, N | en_US
dc.date.accessioned | 2023-05-10T02:00:31Z | -
dc.date.available | 2023-05-10T02:00:31Z | -
dc.identifier.issn | 1532-4435 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/98590 | -
dc.language.iso | en | en_US
dc.publisher | Journal of Machine Learning Research | en_US
dc.rights | © 2019 Ziyan Luo, Defeng Sun, Kim-Chuan Toh and Naihua Xiu. | en_US
dc.rights | License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v20/18-172.html. | en_US
dc.rights | The following publication Luo, Z., Sun, D., Toh, K. C., & Xiu, N. (2019). Solving the OSCAR and SLOPE Models Using a Semismooth Newton-Based Augmented Lagrangian Method. J. Mach. Learn. Res., 20, 106 is available at https://www.jmlr.org/papers/v20/18-172.html. | en_US
dc.subject | Linear regression | en_US
dc.subject | OSCAR | en_US
dc.subject | Sparsity | en_US
dc.subject | Augmented Lagrangian method | en_US
dc.subject | Semi-smooth Newton method | en_US
dc.title | Solving the OSCAR and SLOPE models using a semismooth Newton-based augmented Lagrangian method | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1 | en_US
dc.identifier.epage | 25 | en_US
dc.identifier.volume | 20 | en_US
dcterms.abstract | The octagonal shrinkage and clustering algorithm for regression (OSCAR), equipped with the ℓ1-norm and a pair-wise ℓ∞-norm regularizer, is a useful tool for feature selection and grouping in high-dimensional data analysis. The computational challenge posed by OSCAR, for high dimensional and/or large sample size data, has not yet been well resolved due to the non-smoothness and non-separability of the regularizer involved. In this paper, we successfully resolve this numerical challenge by proposing a sparse semismooth Newton-based augmented Lagrangian method to solve the more general SLOPE (the sorted L-one penalized estimation) model. By appropriately exploiting the inherent sparse and low-rank property of the generalized Jacobian of the semismooth Newton system in the augmented Lagrangian subproblem, we show how the computational complexity can be substantially reduced. Our algorithm offers a notable computational advantage in the high-dimensional statistical regression settings. Numerical experiments are conducted on real data sets, and the results demonstrate that our algorithm is far superior, in both speed and robustness, to the existing state-of-the-art algorithms based on first-order iterative schemes, including the widely used accelerated proximal gradient (APG) method and the alternating direction method of multipliers (ADMM). | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Journal of machine learning research, 2019, v. 20, 106, p. 1-25 | en_US
dcterms.isPartOf | Journal of machine learning research | en_US
dcterms.issued | 2019 | -
dc.identifier.eissn | 1533-7928 | en_US
dc.identifier.artn | 106 | en_US
dc.description.validate | 202305 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | AMA-0283 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | PolyU | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 20280031 | -
dc.description.oaCategory | CC | en_US
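For context on the abstract above, the SLOPE model it refers to is commonly written as follows. This is a minimal sketch in standard SLOPE notation (design matrix $A \in \mathbb{R}^{m \times n}$, response $b$, nonincreasing weights $\lambda_1 \ge \cdots \ge \lambda_n \ge 0$); the symbols are not taken from this record.

\[
\min_{x \in \mathbb{R}^n} \; \frac{1}{2}\,\|Ax - b\|_2^2 \;+\; \sum_{i=1}^{n} \lambda_i\, |x|_{(i)},
\qquad |x|_{(1)} \ge |x|_{(2)} \ge \cdots \ge |x|_{(n)},
\]

where $|x|_{(i)}$ denotes the $i$-th largest entry of $x$ in absolute value. OSCAR, whose penalty is $\alpha_1 \|x\|_1 + \alpha_2 \sum_{i<j} \max\{|x_i|, |x_j|\}$, is the special case obtained by choosing the SLOPE weights $\lambda_i = \alpha_1 + \alpha_2\,(n - i)$, which is why an efficient SLOPE solver also covers OSCAR.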
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
18-172.pdf |  | 1.26 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 105 (last week: 4), as of Nov 10, 2025
Downloads: 39, as of Nov 10, 2025

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.