Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/95579
Title: Preserving user privacy for machine learning: local differential privacy or federated machine learning?
Authors: Zheng, H 
Hu, H 
Han, Z 
Issue Date: Jul-2020
Source: IEEE intelligent systems, July-Aug. 2020, v. 35, no. 4, 9144394, p. 5-14
Abstract: The growing number of mobile and IoT devices has nourished many intelligent applications. In order to produce high-quality machine learning models, they constantly access and collect rich personal data such as photos, browsing history, and text messages. However, direct access to personal data has raised increasing public concerns about privacy risks and security breaches. To address these concerns, there are two emerging solutions to privacy-preserving machine learning, namely local differential privacy and federated machine learning. The former is a distributed data collection strategy where each client perturbs data locally before submitting to the server, whereas the latter is a distributed machine learning strategy to train models on mobile devices locally and merge their output (e.g., parameter updates of a model) through a control protocol. In this article, we conduct a comparative study on the efficiency and privacy of both solutions. Our results show that in a standard population and domain setting, both can achieve an optimal misclassification rate lower than 20% and federated machine learning generally performs better at the cost of higher client CPU usage. Nonetheless, local differential privacy can benefit more from a larger client population (> 1k). As for privacy guarantee, local differential privacy also has flexible control over the data leakage.
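The two strategies contrasted in the abstract can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: `randomized_response` is the textbook epsilon-LDP mechanism for perturbing a single bit before it leaves the client, and `federated_average` shows a FedAvg-style coordinate-wise merge of client parameter updates; both function names and the unweighted averaging scheme are assumptions for illustration.

```python
# Illustrative sketch only: hypothetical helper names, not the paper's code.
import math
import random


def randomized_response(bit, epsilon):
    """Perturb one private bit under epsilon-local differential privacy.

    Classic randomized response: report the true bit with probability
    p = e^eps / (e^eps + 1), otherwise report its flip. The server only
    ever sees the perturbed value, never the raw data.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit


def federated_average(client_updates):
    """Merge per-client model updates by coordinate-wise averaging.

    In the federated setting, raw data stays on each device; only these
    parameter updates are combined by the server's control protocol.
    """
    n = len(client_updates)
    dim = len(client_updates[0])
    return [sum(u[i] for u in client_updates) / n for i in range(dim)]
```

Note how the privacy/utility trade-off surfaces differently: in `randomized_response` a larger epsilon reports the truth more often (less privacy, less noise), while `federated_average` never transmits data at all, shifting the cost to client-side training.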
Keywords: Federated machine learning
Local differential privacy
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE intelligent systems 
ISSN: 1541-1672
EISSN: 1941-1294
DOI: 10.1109/MIS.2020.3010335
Rights: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication H. Zheng, H. Hu and Z. Han, "Preserving User Privacy for Machine Learning: Local Differential Privacy or Federated Machine Learning?," in IEEE Intelligent Systems, vol. 35, no. 4, pp. 5-14, 1 July-Aug. 2020 is available at https://doi.org/10.1109/MIS.2020.3010335.
Appears in Collections:Journal/Magazine Article

Files in This Item:
File: Zheng_Preserving_User_Privacy.pdf (Pre-Published version, 1.54 MB, Adobe PDF)
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 60 (as of Sep 22, 2024)
Downloads: 331 (as of Sep 22, 2024)
SCOPUS citations: 61 (as of Sep 26, 2024)
Web of Science citations: 47 (as of Sep 26, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.