Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/117078
Title: LabelDP leaks privacy - a tightened correlation-aware privacy model for labeled training data
Authors: Xue, Q 
Ye, Q 
Hu, H 
Lou, J
Li, J
Fang, C
Shi, J
Issue Date: Jan-2026
Source: IEEE transactions on dependable and secure computing, Jan. - Feb. 2026, v. 23, no. 1, p. 491-506
Abstract: It is well understood that the accuracy of machine learning models heavily depends on the amount of training data collected from individuals. However, the collection of sensitive information brings privacy risks to users. Recently, differential privacy (DP) has emerged as a rigorous privacy model for sensitive data collection. When applying DP to training data collection, a common practice to improve utility is to sanitize labels while leaving attribute values untouched, a.k.a. label differential privacy (LabelDP). In this paper, we point out that LabelDP can hardly guarantee the expected privacy on labels due to the correlation between attributes and labels. To address this privacy leakage, we propose a stronger privacy model, correlation-aware label local differential privacy (CLLDP), which protects each individual user while taking into account the correlations between attributes and labels. Under CLLDP, we propose a perturbation protocol, k heads response (kHR), to estimate the joint probability distribution of attributes and labels. This distribution can be used for a variety of machine learning tasks, such as Naïve Bayes and decision trees, both of which are illustrated in this paper. Through extensive experiments, we show the strong privacy guarantee of CLLDP and its effectiveness in real-life machine learning tasks.
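The abstract does not give the details of the kHR protocol, but the general idea of perturbing (attribute, label) pairs jointly and then reconstructing their joint distribution can be illustrated with a standard building block from the local-DP literature: k-ary randomized response over the product domain. This is a minimal sketch of that generic technique, not the paper's kHR protocol; all function names and parameters here are illustrative assumptions.

```python
import math
import random
from collections import Counter

def k_ary_rr(value, domain, epsilon):
    """Generic k-ary randomized response (NOT the paper's kHR protocol):
    report the true value with probability p = e^eps / (e^eps + k - 1),
    otherwise report one of the k - 1 other values uniformly at random."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def estimate_joint(reports, domain, epsilon):
    """Unbiased frequency estimate for each value of the product domain
    (attribute, label), inverting the known perturbation probabilities."""
    k = len(domain)
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)  # probability of reporting any specific wrong value
    counts = Counter(reports)
    return {v: (counts[v] / n - q) / (p - q) for v in domain}
```

Because each user perturbs the whole (attribute, label) pair as one value, the correlation between attributes and labels is itself covered by the privacy guarantee, which is the distinction the abstract draws against LabelDP, where attributes are sent in the clear.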
Keywords: Joint distribution estimation
Labeled data collection
Local differential privacy
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE transactions on dependable and secure computing 
ISSN: 1545-5971
EISSN: 1941-0018
DOI: 10.1109/TDSC.2025.3607786
Rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication Q. Xue et al., 'LabelDP Leaks Privacy — A Tightened Correlation-Aware Privacy Model for Labeled Training Data,' in IEEE Transactions on Dependable and Secure Computing, vol. 23, no. 1, pp. 491-506, Jan.-Feb. 2026 is available at https://doi.org/10.1109/TDSC.2025.3607786.
Appears in Collections:Journal/Magazine Article

Files in This Item:
Xue_LabelDP_Leaks_Privacy.pdf (Pre-Published version, 1.29 MB, Adobe PDF)
Open Access Information:
Status: open access
File Version: Final Accepted Manuscript
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.