Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/88211
Title: Distributed learning with minimum error entropy principle
Authors: Guo, X 
Hu, T
Wu, Q
Issue Date: 1-Aug-2019
Source: Paper presented at Joint Statistical Meetings (JSM2019), Denver, Colorado, Jul 27 - Aug 01, 2019
Abstract: The Minimum Error Entropy (MEE) principle is an important approach in Information Theoretic Learning (ITL). It is widely applied and studied in various fields for its robustness to noise. In this paper, we study a reproducing kernel-based distributed MEE algorithm, DMEE, which is designed to work with both fully supervised and semi-supervised data. With fully supervised data, the learning rates we prove match the minimax-optimal learning rates of classical pointwise kernel-based regression. Under semi-supervised learning scenarios, we show that DMEE exploits unlabeled data effectively in two senses: first, in settings with weaker regularity assumptions, additional unlabeled data significantly improves the learning rates of DMEE; second, with sufficient unlabeled data, the labeled data can be distributed across many more computing nodes, each taking only O(1) labels, without spoiling the learning rates in terms of the number of labels.
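As background to the abstract: the MEE criterion is commonly formulated as minimizing a Parzen-window estimate of Rényi's quadratic entropy of the prediction errors, which is equivalent to maximizing the so-called information potential. A minimal sketch of that empirical criterion follows; the function names and the Gaussian bandwidth `sigma` are illustrative choices, not taken from the paper, and this is the generic pointwise MEE loss rather than the authors' distributed DMEE algorithm:

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    # Parzen-window (Gaussian kernel) estimate of the information
    # potential V = (1/n^2) * sum_{i,j} G_sigma(e_i - e_j).
    diffs = errors[:, None] - errors[None, :]
    g = np.exp(-diffs**2 / (2.0 * sigma**2))
    return g.mean()

def mee_loss(y_true, y_pred, sigma=1.0):
    # Empirical Renyi quadratic entropy of the errors,
    # H = -log V; MEE minimizes this quantity.
    return -np.log(information_potential(y_true - y_pred, sigma))
```

Because the loss depends only on pairwise differences of the errors, it is invariant to a constant shift of the predictions; practical MEE estimators therefore fix the bias separately, e.g. by centering the errors.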
Keywords: Information theoretic learning
Minimum error entropy
Distributed method
Semi-supervised data
Reproducing kernel Hilbert space
Rights: Posted with permission of the author.
Appears in Collections:Presentation

Files in This Item:
DenverJSM2019Aug.pdf (123.86 kB, Adobe PDF)
Open Access Information
Status: open access
File Version: Other Version

Page views: 30 (as of May 22, 2022)
Downloads: 7 (as of May 22, 2022)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.