Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/117078
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Electrical and Electronic Engineering | - |
| dc.creator | Xue, Q | en_US |
| dc.creator | Ye, Q | en_US |
| dc.creator | Hu, H | en_US |
| dc.creator | Lou, J | en_US |
| dc.creator | Li, J | en_US |
| dc.creator | Fang, C | en_US |
| dc.creator | Shi, J | en_US |
| dc.date.accessioned | 2026-02-02T03:52:54Z | - |
| dc.date.available | 2026-02-02T03:52:54Z | - |
| dc.identifier.issn | 1545-5971 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/117078 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
| dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
| dc.rights | The following publication Q. Xue et al., 'LabelDP Leaks Privacy — A Tightened Correlation-Aware Privacy Model for Labeled Training Data,' in IEEE Transactions on Dependable and Secure Computing, vol. 23, no. 1, pp. 491-506, Jan.-Feb. 2026 is available at https://doi.org/10.1109/TDSC.2025.3607786. | en_US |
| dc.subject | Joint distribution estimation | en_US |
| dc.subject | Labeled data collection | en_US |
| dc.subject | Local differential privacy | en_US |
| dc.title | LabelDP leaks privacy - a tightened correlation-aware privacy model for labeled training data | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 491 | en_US |
| dc.identifier.epage | 506 | en_US |
| dc.identifier.volume | 23 | en_US |
| dc.identifier.issue | 1 | en_US |
| dc.identifier.doi | 10.1109/TDSC.2025.3607786 | en_US |
| dcterms.abstract | It is well understood that the accuracy of machine learning models heavily depends on the amount of training data collected from individuals. However, collecting sensitive information exposes users to privacy risks. Recently, differential privacy (DP) has emerged as a rigorous privacy model for sensitive data collection. When applying DP to training data collection, a common practice to improve utility is to sanitize labels while leaving attribute values untouched, a.k.a. label differential privacy (LabelDP). In this paper, we point out that LabelDP can hardly guarantee the expected privacy on labels due to the correlations between attributes and labels. To address this privacy leakage, we propose a stronger privacy model, correlation-aware label local differential privacy (CLLDP), which protects each individual user while accounting for the correlations between attributes and labels. Under CLLDP, we propose a perturbation protocol, k heads response (kHR), to estimate the joint probability distribution of attributes and labels. This distribution can be used for a variety of machine learning tasks, such as Naïve Bayes and decision trees, both of which are illustrated in this paper. Through extensive experiments, we show the strong privacy guarantee of CLLDP and its effectiveness in real-life machine learning tasks. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | IEEE transactions on dependable and secure computing, Jan. - Feb. 2026, v. 23, no. 1, p. 491-506 | en_US |
| dcterms.isPartOf | IEEE transactions on dependable and secure computing | en_US |
| dcterms.issued | 2026-01 | - |
| dc.identifier.scopus | 2-s2.0-105015393369 | - |
| dc.identifier.eissn | 1941-0018 | en_US |
| dc.description.validate | 202602 bcjz | - |
| dc.description.oa | Accepted Manuscript | en_US |
| dc.identifier.SubFormID | G000773/2025-10 | - |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | This work was supported in part by the National Natural Science Foundation of China under Grant 62372122, Grant 92270123, and Grant 62302214, in part by Research Grants Council under Grant 15208923 and Grant 25207224, and in part by Innovation and Technology Fund under Grant GHP/392/22GD, Hong Kong SAR, China. | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
| Appears in Collections: | Journal/Magazine Article | |
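For readers unfamiliar with the label-perturbation setting the abstract describes, the sketch below shows standard k-ary randomized response, the classic local-DP primitive for reporting a single categorical label, together with an unbiased frequency estimator. This is only an illustrative baseline under assumed notation (`k` labels, privacy budget `epsilon`); it is not the paper's kHR protocol, which additionally accounts for attribute-label correlations.

```python
import math
import random

def k_rr(true_label: int, k: int, epsilon: float) -> int:
    """k-ary randomized response: report the true label with probability
    e^eps / (e^eps + k - 1), otherwise one of the k - 1 other labels
    uniformly at random. Satisfies epsilon-LDP for the label alone."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return true_label
    # Pick uniformly among the k - 1 labels other than the true one.
    other = random.randrange(k - 1)
    return other if other < true_label else other + 1

def debias_counts(reports, k: int, epsilon: float):
    """Unbiased estimate of the true label frequencies from perturbed
    reports, inverting the known perturbation probabilities."""
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)  # P(report = true)
    q = 1.0 / (math.exp(epsilon) + k - 1)                # P(report = any other)
    counts = [0] * k
    for r in reports:
        counts[r] += 1
    return [(c / n - q) / (p - q) for c in counts]
```

Because each report depends only on the user's own label, the aggregator never sees raw labels; the paper's observation is that when unperturbed attributes correlate with the label, this guarantee alone is weaker than it appears.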
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Xue_LabelDP_Leaks_Privacy.pdf | Pre-Published version | 1.29 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.