Title: Distributed learning with regularized least squares
Authors: Lin, SB
Guo, X 
Zhou, DX
Keywords: Distributed learning
Error analysis
Integral operator second order decomposition
Issue Date: 2017
Publisher: MIT Press
Source: Journal of Machine Learning Research, 2017, v. 18, p. 1-31
Journal: Journal of Machine Learning Research
Abstract: We study distributed learning with the least squares regularization scheme in a reproducing kernel Hilbert space (RKHS). Following a divide-and-conquer approach, the algorithm partitions a data set into disjoint subsets, applies the least squares regularization scheme to each subset to produce a local output function, and then averages the local output functions to form a final global estimator or predictor. We show, with error bounds and learning rates in expectation in both the L2-metric and the RKHS-metric, that the global output function of this distributed learning closely approximates the output of the algorithm run on the whole data set on a single machine. Our derived learning rates in expectation are optimal and are stated in a general setting without any eigenfunction assumption. The analysis is achieved by a novel second order decomposition of operator differences in our integral operator approach. Even for the classical least squares regularization scheme in the RKHS associated with a general kernel, we give the best learning rate in expectation in the literature.
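The divide-and-conquer scheme described in the abstract can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: it assumes a Gaussian kernel, an even partition into `m` subsets, and the standard kernel ridge regression solve `(K + lam * n * I) alpha = y` on each subset, with predictions averaged across the local estimators.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two sample sets.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, lam, sigma=1.0):
    # Least squares regularization on one data subset:
    # solve (K + lam * n * I) alpha = y for the local estimator.
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return X, alpha

def krr_predict(model, Xtest, sigma=1.0):
    Xtrain, alpha = model
    return gaussian_kernel(Xtest, Xtrain, sigma) @ alpha

def distributed_krr(X, y, m, lam, sigma=1.0):
    # Partition the data into m disjoint subsets, fit a local
    # estimator on each, and average the local output functions.
    parts = np.array_split(np.arange(X.shape[0]), m)
    models = [krr_fit(X[idx], y[idx], lam, sigma) for idx in parts]

    def predict(Xtest):
        return np.mean(
            [krr_predict(mdl, Xtest, sigma) for mdl in models], axis=0
        )

    return predict
```

For example, fitting `distributed_krr(X, y, m=4, lam=1e-3, sigma=0.5)` on noisy samples of a smooth regression function recovers it with small mean squared error, while each machine only ever factorizes an (n/m) x (n/m) kernel matrix instead of the full n x n one. The kernel, the regularization parameter, and the partition rule here are illustrative choices, not prescribed by the paper.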
ISSN: 1532-4435
EISSN: 1533-7928
Appears in Collections: Journal/Magazine Article


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.