Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/63843
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | en_US
dc.creator | Guo, X | en_US
dc.creator | Fan, J | en_US
dc.creator | Zhou, DX | en_US
dc.date.accessioned | 2017-02-09T08:30:42Z | -
dc.date.available | 2017-02-09T08:30:42Z | -
dc.identifier.issn | 1532-4435 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/63843 | -
dc.language.iso | en | en_US
dc.publisher | MIT Press | en_US
dc.rights | © 2016 Xin Guo, Jun Fan and Ding-Xuan Zhou. | en_US
dc.rights | This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Guo, X., Fan, J., & Zhou, D. X. (2016). Sparsity and error analysis of empirical feature-based regularization schemes. The Journal of Machine Learning Research, 17(89), 1-34 is available at https://www.jmlr.org/papers/v17/11-207.html | en_US
dc.subject | Sparsity | en_US
dc.subject | Concave regularizer | en_US
dc.subject | Reproducing kernel Hilbert space | en_US
dc.subject | Regularization with empirical features | en_US
dc.subject | lq-penalty | en_US
dc.subject | SCAD penalty | en_US
dc.title | Sparsity and error analysis of empirical feature-based regularization schemes | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1 | en_US
dc.identifier.epage | 34 | en_US
dc.identifier.volume | 17 | en_US
dc.identifier.issue | 89 | en_US
dcterms.abstract | We consider a learning algorithm generated by a regularization scheme with a concave regularizer for the purpose of achieving sparsity and good learning rates in a least squares regression setting. The regularization is induced for linear combinations of empirical features, constructed in the literature of kernel principal component analysis and kernel projection machines, based on kernels and samples. In addition to the separability of the involved optimization problem caused by the empirical features, we carry out sparsity and error analysis, giving bounds in the norm of the reproducing kernel Hilbert space, based on a priori conditions which do not require assumptions on sparsity in terms of any basis or system. In particular, we show that as the concave exponent q of the concave regularizer increases to 1, the learning ability of the algorithm improves. Some numerical simulations for both artificial and real MHC-peptide binding data involving the ℓq regularizer and the SCAD penalty are presented to demonstrate the sparsity and error analysis. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Journal of machine learning research, 2016, v. 17, no. 89, p. 1-34 | en_US
dcterms.isPartOf | Journal of machine learning research | en_US
dcterms.issued | 2016 | -
dc.identifier.isi | WOS:000391533200001 | -
dc.identifier.eissn | 1533-7928 | en_US
dc.identifier.rosgroupid | 2015003399 | -
dc.description.ros | 2015-2016 > Academic research: refereed > Publication in refereed journal | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a0965-n02 | -
dc.identifier.SubFormID | 2245 | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
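To make the abstract above concrete, here is a minimal Python sketch (not the authors' code) of the two penalties it names: the ℓq penalty with concave exponent 0 < q ≤ 1 and the SCAD penalty of Fan and Li (2001), applied to the coefficients of an expansion in empirical features. The function names, the regularization parameter lam, and the SCAD default a = 3.7 are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def lq_penalty(c, q, lam):
        """Concave l_q penalty: lam * sum_k |c_k|^q (concave in |c_k| for q < 1)."""
        return lam * np.sum(np.abs(c) ** q)

    def scad_penalty(c, lam, a=3.7):
        """SCAD penalty (Fan and Li, 2001), applied coordinate-wise and summed."""
        t = np.abs(c)
        linear = lam * t                                              # near zero
        quad = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))  # middle range
        const = lam ** 2 * (a + 1) / 2                                # flat tail
        return np.sum(np.where(t <= lam, linear,
                               np.where(t <= a * lam, quad, const)))

    def objective(c, Phi, y, penalty):
        """Least-squares error over an empirical-feature design matrix Phi plus a
        concave penalty. When the empirical features are orthonormal in the sample,
        the problem separates coordinate-wise, the separability the abstract notes."""
        return np.mean((Phi @ c - y) ** 2) + penalty(c)

For example, objective(c, Phi, y, lambda c: scad_penalty(c, lam=0.1)) evaluates the SCAD-regularized empirical risk for a coefficient vector c; the coordinate-wise structure is what the empirical-feature construction exploits.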
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Guo_Sparsity_Error_Analysis.pdf | - | 430.73 kB | Adobe PDF (View/Open)
Open Access Information
Status: open access
File Version: Version of Record
Access: View full-text via PolyU eLinks SFX Query
Page views: 124 (last week: 1), as of Apr 21, 2024
Downloads: 29, as of Apr 21, 2024
Web of Science citations: 24 (last week: 0), as of Apr 25, 2024

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.