Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/93329
DC Field | Value | Language
-------- | ----- | --------
dc.contributor | Department of Applied Mathematics | en_US
dc.creator | Hu, Y | en_US
dc.creator | Li, C | en_US
dc.creator | Meng, K | en_US
dc.creator | Qin, J | en_US
dc.creator | Yang, X | en_US
dc.date.accessioned | 2022-06-15T03:42:44Z | -
dc.date.available | 2022-06-15T03:42:44Z | -
dc.identifier.issn | 1532-4435 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/93329 | -
dc.language.iso | en | en_US
dc.publisher | MIT Press | en_US
dc.rights | © 2017 Yaohua Hu, Chong Li, Kaiwen Meng, Jing Qin, and Xiaoqi Yang. | en_US
dc.rights | License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v18/15-651.html | en_US
dc.rights | The following publication Hu, Y., Li, C., Meng, K., Qin, J., & Yang, X. (2017). Group sparse optimization via lp, q regularization. The Journal of Machine Learning Research, 18(1), 960-1011 is available at https://jmlr.org/papers/v18/15-651.html | en_US
dc.subject | Group sparse optimization | en_US
dc.subject | Lower-order regularization | en_US
dc.subject | Nonconvex optimization | en_US
dc.subject | Restricted eigenvalue condition | en_US
dc.subject | Proximal gradient method | en_US
dc.subject | Iterative thresholding algorithm | en_US
dc.subject | Gene regulation network | en_US
dc.title | Group sparse optimization via ℓp,q regularization | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1 | en_US
dc.identifier.epage | 52 | en_US
dc.identifier.volume | 18 | en_US
dc.identifier.issue | 1 | en_US
dcterms.abstract | In this paper, we investigate a group sparse optimization problem via ℓp,q regularization in three aspects: theory, algorithm and application. In the theoretical aspect, by introducing a notion of group restricted eigenvalue condition, we establish an oracle property and a global recovery bound of order O(λ^{2/(2−q)}) for any point in a level set of the ℓp,q regularization problem, and by virtue of modern variational analysis techniques, we also provide a local analysis of recovery bound of order O(λ^2) for a path of local minima. In the algorithmic aspect, we apply the well-known proximal gradient method to solve the ℓp,q regularization problems, either by analytically solving some specific ℓp,q regularization subproblems, or by using the Newton method to solve general ℓp,q regularization subproblems. In particular, we establish a local linear convergence rate of the proximal gradient method for solving the ℓ1,q regularization problem under some mild conditions and by first proving a second-order growth condition. As a consequence, the local linear convergence rate of proximal gradient method for solving the usual ℓq regularization problem (0<q<1) is obtained. Finally in the aspect of application, we present some numerical results on both the simulated data and the real data in gene transcriptional regulation. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Journal of machine learning research, 2017, v. 18, no. 1, p. 1-52 | en_US
dcterms.isPartOf | Journal of machine learning research | en_US
dcterms.issued | 2017 | -
dc.identifier.eissn | 1533-7928 | en_US
dc.description.validate | 202206 bcfc | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | AMA-0498, RGC-B2-0931 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | PolyU | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 6745964 | -
dc.description.oaCategory | CC | en_US
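The abstract applies the proximal gradient method to ℓp,q-regularized problems. As a minimal illustration only: in the convex case p = 2, q = 1 (the group lasso), the proximal subproblem has the well-known closed-form solution, group soft-thresholding. The sketch below implements that special case; it is an assumption-laden toy, not the paper's code, and all function names are illustrative. The nonconvex subproblems for 0 < q < 1 (solved in the paper analytically for specific q or via Newton's method) are not covered here.

```python
import numpy as np

def group_soft_threshold(z, groups, tau):
    """Proximal operator of tau * sum_g ||z_g||_2 (the p = 2, q = 1 case).

    Each group's coefficients are shrunk toward zero jointly: groups whose
    Euclidean norm is below tau are set exactly to zero.
    """
    x = np.zeros_like(z)
    for g in groups:
        norm = np.linalg.norm(z[g])
        if norm > tau:
            x[g] = (1.0 - tau / norm) * z[g]
    return x

def proximal_gradient(A, b, groups, lam, step=None, iters=1000):
    """Minimize 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2 by proximal gradient.

    `groups` is a list of index arrays partitioning the coordinates of x.
    The step size defaults to 1/L, where L = ||A||_2^2 is the Lipschitz
    constant of the gradient of the smooth term.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                       # gradient of smooth part
        x = group_soft_threshold(x - step * grad, groups, step * lam)
    return x
```

On noiseless data with a genuinely group-sparse signal, the iteration recovers the support exactly and the nonzero groups up to a small shrinkage bias proportional to lam.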
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
15-651.pdf | | 4.46 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record