Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/73883
DC Field: Value (Language)
dc.contributor: Department of Applied Mathematics (en_US)
dc.creator: Lin, SB (en_US)
dc.creator: Guo, X (en_US)
dc.creator: Zhou, DX (en_US)
dc.date.accessioned: 2018-03-29T07:15:33Z
dc.date.available: 2018-03-29T07:15:33Z
dc.identifier.issn: 1532-4435 (en_US)
dc.identifier.uri: http://hdl.handle.net/10397/73883
dc.language.iso: en (en_US)
dc.publisher: MIT Press (en_US)
dc.rights: © 2017 Lin, Guo and Zhou. (en_US)
dc.rights: License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v18/15-586.html. (en_US)
dc.rights: The following publication Lin, S. B., Guo, X., & Zhou, D. X. (2017). Distributed learning with regularized least squares. The Journal of Machine Learning Research, 18(1), 3202-3232. is available at https://jmlr.org/papers/v18/15-586.html (en_US)
dc.subject: Distributed learning (en_US)
dc.subject: Divide-and-conquer (en_US)
dc.subject: Error analysis (en_US)
dc.subject: Integral operator second order decomposition (en_US)
dc.title: Distributed learning with regularized least squares (en_US)
dc.type: Journal/Magazine Article (en_US)
dc.identifier.spage: 1 (en_US)
dc.identifier.epage: 31 (en_US)
dc.identifier.volume: 18 (en_US)
dcterms.abstract: We study distributed learning with the least squares regularization scheme in a reproducing kernel Hilbert space (RKHS). By a divide-and-conquer approach, the algorithm partitions a data set into disjoint data subsets, applies the least squares regularization scheme to each data subset to produce an output function, and then takes an average of the individual output functions as a final global estimator or predictor. We show with error bounds and learning rates in expectation in both the L2-metric and RKHS-metric that the global output function of this distributed learning is a good approximation to the algorithm processing the whole data in one single machine. Our derived learning rates in expectation are optimal and stated in a general setting without any eigenfunction assumption. The analysis is achieved by a novel second order decomposition of operator differences in our integral operator approach. Even for the classical least squares regularization scheme in the RKHS associated with a general kernel, we give the best learning rate in expectation in the literature. (en_US)
dcterms.accessRights: open access (en_US)
dcterms.bibliographicCitation: Journal of machine learning research, 2017, v. 18, p. 1-31 (en_US)
dcterms.isPartOf: Journal of machine learning research (en_US)
dcterms.issued: 2017
dc.identifier.scopus: 2-s2.0-85032928626
dc.identifier.eissn: 1533-7928 (en_US)
dc.identifier.rosgroupid: 2017000053
dc.description.ros: 2017-2018 > Academic research: refereed > Publication in refereed journal (en_US)
dc.description.validate: 201802 bcrc (en_US)
dc.description.oa: Version of Record (en_US)
dc.identifier.FolderNumber: a0965-n01
dc.identifier.SubFormID: 2244
dc.description.fundingSource: Self-funded (en_US)
dc.description.pubStatus: Published (en_US)
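The dcterms.abstract field above describes a divide-and-conquer scheme: partition the sample into m disjoint subsets, run the least squares regularization scheme (kernel ridge regression) on each subset, and average the m local output functions into one global predictor. The following minimal Python sketch illustrates that scheme; it is not the authors' code, and the Gaussian kernel, the synthetic sine regression task, the regularization value, and the helper names gaussian_kernel, krr_fit, and dkrr_predict are all assumptions made for this example.

```python
# Minimal sketch of divide-and-conquer regularized least squares (hypothetical
# names and parameters; not from the paper). Each of m machines solves kernel
# ridge regression on its own disjoint subset; predictions are averaged.
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit(X, y, lam, sigma=1.0):
    """Regularized least squares on one subset: alpha = (K + lam*n*I)^{-1} y."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return X, alpha  # keep the subset's points to evaluate its predictor later

def dkrr_predict(models, X_test, sigma=1.0):
    """Global estimator: the average (1/m) * sum_j f_j of the local predictors."""
    preds = [gaussian_kernel(X_test, Xj, sigma) @ alpha_j for Xj, alpha_j in models]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(600, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(600)

m = 6  # number of local machines / disjoint subsets
models = [krr_fit(Xj, yj, lam=1e-3)
          for Xj, yj in zip(np.array_split(X, m), np.array_split(y, m))]

X_test = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
print(dkrr_predict(models, X_test))  # should roughly track sin(pi * x)
```

On the analysis side, the "second order decomposition of operator differences" named in the abstract expands a difference of inverse operators one order beyond the usual first order identity A^{-1} - B^{-1} = B^{-1}(B - A)A^{-1}. One identity of this type, for invertible operators A and B (the paper's exact formulation may differ), is

A^{-1} - B^{-1} = B^{-1}(B - A)B^{-1} + B^{-1}(B - A)A^{-1}(B - A)B^{-1},

where the first term is linear and the second quadratic in the perturbation B - A; isolating a quadratic remainder in this way is the kind of refinement that supports the tighter error bounds and optimal learning rates claimed in the abstract.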
Appears in Collections: Journal/Magazine Article
Files in This Item:
File: 15-586.pdf (391.12 kB, Adobe PDF)
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.