Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/99267
DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | en_US
dc.creator | Li, Q | en_US
dc.creator | Jiang, B | en_US
dc.creator | Sun, D | en_US
dc.date.accessioned | 2023-07-04T08:29:57Z | -
dc.date.available | 2023-07-04T08:29:57Z | -
dc.identifier.issn | 1532-4435 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/99267 | -
dc.language.iso | en | en_US
dc.publisher | MIT Press | en_US
dc.rights | © 2023 Qian Li, Binyan Jiang, and Defeng Sun. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v24/21-0699.html. | en_US
dc.rights | The following publication Li, Q., Jiang, B., & Sun, D. (2023). MARS: A second-order reduction algorithm for high-dimensional sparse precision matrices estimation. Journal of Machine Learning Research, 24(134), 1-44 is available at http://jmlr.org/papers/v24/21-0699.html. | en_US
dc.subject | Adaptive sieving reduction strategy | en_US
dc.subject | Precision matrix | en_US
dc.subject | Semismooth Newton method | en_US
dc.subject | Sparsity | en_US
dc.subject | Solution path | en_US
dc.title | MARS: a second-order reduction algorithm for high-dimensional sparse precision matrices estimation | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1 | en_US
dc.identifier.epage | 44 | en_US
dc.identifier.volume | 24 | en_US
dc.identifier.issue | 134 | en_US
dcterms.abstract | Estimation of the precision matrix (or inverse covariance matrix) is of great importance in statistical data analysis and machine learning. However, as the number of parameters scales quadratically with the dimension p, computation becomes very challenging when p is large. In this paper, we propose an adaptive sieving reduction algorithm to generate a solution path for the estimation of precision matrices under the ℓ1-penalized D-trace loss, with each subproblem solved by a second-order algorithm. In each iteration of our algorithm, we greatly reduce the number of variables in the problem based on the Karush-Kuhn-Tucker (KKT) conditions and the sparse structure of the precision matrix estimated in the previous iteration. As a result, our algorithm can handle data sets with very high dimensions that may go beyond the capacity of existing methods. Moreover, for the subproblem in each iteration, rather than solving the primal problem directly, we develop a semismooth Newton augmented Lagrangian algorithm with a global linear convergence rate on the dual problem to improve efficiency. Theoretical properties of the proposed algorithm are established; in particular, we show that its convergence rate is asymptotically superlinear. The high efficiency and promising performance of our algorithm are illustrated via extensive simulation studies and real data applications, with comparisons to several state-of-the-art solvers. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Journal of machine learning research, Apr. 2023, v. 24, no. 134, p. 1-44 | en_US
dcterms.isPartOf | Journal of machine learning research | en_US
dcterms.issued | 2023-04 | -
dc.identifier.eissn | 1533-7928 | en_US
dc.description.validate | 202306 bcww | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a2149b | -
dc.identifier.SubFormID | 46792 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | NSFC | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
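As a gloss on the abstract above, the following is a minimal sketch of the ℓ1-penalized D-trace loss that the paper estimates, minimized here by plain proximal gradient rather than the paper's adaptive-sieving plus semismooth Newton augmented Lagrangian (MARS) method. The function names and the choice to penalize all entries (rather than off-diagonals only) are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' MARS implementation):
# proximal-gradient minimization of the l1-penalized D-trace loss
#     min_Theta  0.5 * tr(Theta S Theta) - tr(Theta) + lam * ||Theta||_1,
# where S is the sample covariance matrix. The paper instead solves each
# sieving-reduced subproblem with a semismooth Newton augmented Lagrangian
# method on the dual; this first-order loop only makes the objective and
# its sparsity mechanism concrete.
import numpy as np

def soft_threshold(X, t):
    # Entrywise soft-thresholding: the proximal map of t * ||.||_1.
    # Entries whose magnitude falls below t become exactly zero -- the same
    # KKT-driven sparsity that the adaptive sieving strategy exploits to
    # drop variables from each subproblem.
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def dtrace_prox_grad(S, lam, n_iter=500):
    p = S.shape[0]
    step = 1.0 / np.linalg.norm(S, 2)  # 1/L; L = ||S||_2 bounds the Lipschitz constant
    Theta = np.eye(p)                  # start from the identity
    for _ in range(n_iter):
        # Gradient of the smooth part 0.5 * tr(Theta S Theta) - tr(Theta),
        # symmetrized for symmetric S and Theta.
        grad = 0.5 * (S @ Theta + Theta @ S) - np.eye(p)
        Theta = soft_threshold(Theta - step * grad, step * lam)
        Theta = 0.5 * (Theta + Theta.T)  # guard against round-off asymmetry
    return Theta
```

For an n-by-p data matrix X, one would call dtrace_prox_grad(np.cov(X, rowvar=False), lam) over a decreasing grid of lam values to trace out a solution path, with larger lam giving sparser estimates.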
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
21-0699.pdf |  | 1.25 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 126 (as of Nov 9, 2025; 3 in the last week)
Downloads: 38 (as of Nov 9, 2025)
