Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/98590
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Applied Mathematics | - |
| dc.creator | Luo, Z | en_US |
| dc.creator | Sun, D | en_US |
| dc.creator | Toh, KC | en_US |
| dc.creator | Xiu, N | en_US |
| dc.date.accessioned | 2023-05-10T02:00:31Z | - |
| dc.date.available | 2023-05-10T02:00:31Z | - |
| dc.identifier.issn | 1532-4435 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/98590 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Journal of Machine Learning Research | en_US |
| dc.rights | © 2019 Ziyan Luo, Defeng Sun, Kim-Chuan Toh and Naihua Xiu. | en_US |
| dc.rights | License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v20/18-172.html. | en_US |
| dc.rights | The following publication Luo, Z., Sun, D., Toh, K. C., & Xiu, N. (2019). Solving the OSCAR and SLOPE Models Using a Semismooth Newton-Based Augmented Lagrangian Method. J. Mach. Learn. Res., 20, 106 is available at https://www.jmlr.org/papers/v20/18-172.html. | en_US |
| dc.subject | Linear regression | en_US |
| dc.subject | OSCAR | en_US |
| dc.subject | Sparsity | en_US |
| dc.subject | Augmented Lagrangian method | en_US |
| dc.subject | Semi-smooth Newton method | en_US |
| dc.title | Solving the OSCAR and SLOPE models using a semismooth Newton-based augmented Lagrangian method | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 1 | en_US |
| dc.identifier.epage | 25 | en_US |
| dc.identifier.volume | 20 | en_US |
| dcterms.abstract | The octagonal shrinkage and clustering algorithm for regression (OSCAR), equipped with the ℓ1-norm and a pair-wise ℓ∞-norm regularizer, is a useful tool for feature selection and grouping in high-dimensional data analysis. The computational challenge posed by OSCAR, for high dimensional and/or large sample size data, has not yet been well resolved due to the non-smoothness and non-separability of the regularizer involved. In this paper, we successfully resolve this numerical challenge by proposing a sparse semismooth Newton-based augmented Lagrangian method to solve the more general SLOPE (the sorted L-one penalized estimation) model. By appropriately exploiting the inherent sparse and low-rank property of the generalized Jacobian of the semismooth Newton system in the augmented Lagrangian subproblem, we show how the computational complexity can be substantially reduced. Our algorithm offers a notable computational advantage in the high-dimensional statistical regression settings. Numerical experiments are conducted on real data sets, and the results demonstrate that our algorithm is far superior, in both speed and robustness, to the existing state-of-the-art algorithms based on first-order iterative schemes, including the widely used accelerated proximal gradient (APG) method and the alternating direction method of multipliers (ADMM). | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Journal of machine learning research, 2019, v. 20, 106, p. 1-25 | en_US |
| dcterms.isPartOf | Journal of machine learning research | en_US |
| dcterms.issued | 2019 | - |
| dc.identifier.eissn | 1533-7928 | en_US |
| dc.identifier.artn | 106 | en_US |
| dc.description.validate | 202305 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | AMA-0283 | - |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | PolyU | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.identifier.OPUS | 20280031 | - |
| dc.description.oaCategory | CC | en_US |
| Appears in Collections: | Journal/Magazine Article | |
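The SLOPE model described in the abstract penalizes the regression coefficients with a sorted-ℓ1 norm, and every solver for it (including the paper's semismooth Newton-based augmented Lagrangian method, as well as the APG and ADMM baselines) relies on the proximal operator of that norm. As a minimal illustration, not the paper's algorithm, here is a sketch of the standard stack-based pool-adjacent-violators computation of this prox; the function name `prox_sorted_l1` is chosen for this example.

```python
import numpy as np

def prox_sorted_l1(y, lam):
    """Prox of the sorted-l1 (SLOPE) norm:
    argmin_x 0.5*||x - y||^2 + sum_i lam[i] * |x|_(i),
    where |x|_(1) >= |x|_(2) >= ... and lam is non-increasing, nonnegative.
    Uses the stack-based pool-adjacent-violators scheme."""
    sign = np.sign(y)
    z = np.abs(y)
    order = np.argsort(-z)          # sort |y| in descending order
    z = z[order]
    n = len(z)
    # Each stack block is [start, end, sum of (z - lam) over the block].
    blocks = []
    for i in range(n):
        blocks.append([i, i, z[i] - lam[i]])
        # Merge adjacent blocks while their averages violate the
        # required non-increasing order.
        while (len(blocks) > 1 and
               blocks[-2][2] / (blocks[-2][1] - blocks[-2][0] + 1)
               <= blocks[-1][2] / (blocks[-1][1] - blocks[-1][0] + 1)):
            s, e, v = blocks.pop()
            blocks[-1][1] = e
            blocks[-1][2] += v
    x = np.zeros(n)
    for s, e, v in blocks:
        # Each block takes its (clipped) average value.
        x[s:e + 1] = max(v / (e - s + 1), 0.0)
    # Undo the sorting and restore the signs.
    out = np.zeros(n)
    out[order] = x
    return sign * out
```

With a constant weight vector the prox reduces to ordinary soft-thresholding, e.g. `prox_sorted_l1(np.array([3.0, 1.0, -2.0]), np.array([1.0, 1.0, 1.0]))` gives `[2., 0., -1.]`.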
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| 18-172.pdf | | 1.26 MB | Adobe PDF |
Page views: 105 (last week: 4; last month: 4), as of Nov 10, 2025
Downloads: 39, as of Nov 10, 2025
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.