Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118556
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.contributor | Mainland Development Office | -
dc.creator | Xu, S | -
dc.creator | Hua, Z | -
dc.creator | Zheng, Y | -
dc.date.accessioned | 2026-04-23T07:48:55Z | -
dc.date.available | 2026-04-23T07:48:55Z | -
dc.identifier.issn | 1545-5971 | -
dc.identifier.uri | http://hdl.handle.net/10397/118556 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.subject | Differential privacy | en_US
dc.subject | Federated learning | en_US
dc.subject | Secret sharing | en_US
dc.title | Federated learning in the shuffle model of differential privacy: a communication-efficient and maliciously secure realization | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.doi | 10.1109/TDSC.2026.3663543 | -
dcterms.abstract | Federated learning (FL) is a compelling privacy-friendly paradigm that allows multiple clients to jointly train a model by sharing only gradient updates instead of their local datasets. Since gradient updates may still expose sensitive information, a line of research has explored the use of local differential privacy (LDP) mechanisms to formally safeguard these updates. Under LDP, each client perturbs its gradients locally prior to sharing. However, LDP often leads to a significant degradation in model utility due to the large amount of noise that must be added. To enable a better balance between privacy and utility, an increasing trend is to leverage the shuffle model of differential privacy (DP) in FL, which introduces an intermediate shuffling operation on the perturbed gradients, enabling privacy amplification. Following this trend, we present Camel, a communication-efficient and maliciously secure FL framework operating under the shuffle model of DP. A key difference of Camel from existing works is its new support for integrity checks on the shuffle computation, providing security against a malicious adversary. To achieve this, Camel builds on an emerging cryptographic technique called secret-shared shuffle, and augments it with our custom methods for system-wide communication optimization and lightweight server-side integrity verification. Furthermore, we provide a formal analysis of the privacy loss of the entire FL process using Rényi differential privacy (RDP), which allows a tighter privacy bound. Our comprehensive experimental results show that Camel outperforms current state-of-the-art approaches in achieving better privacy-utility trade-offs, while maintaining promising performance. | -
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | IEEE transactions on dependable and secure computing, Date of Publication: 11 February 2026, Early Access, https://doi.org/10.1109/TDSC.2026.3663543 | -
dcterms.isPartOf | IEEE transactions on dependable and secure computing | -
dcterms.issued | 2026 | -
dc.identifier.scopus | 2-s2.0-105029951443 | -
dc.identifier.eissn | 1941-0018 | -
dc.description.validate | 202604 bcjz | -
dc.description.oa | Not applicable | en_US
dc.identifier.SubFormID | G001475/2026-04 | en_US
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Early release | en_US
dc.date.embargo | 0000-00-00 (to be updated) | en_US
dc.description.oaCategory | Green (AAM) | en_US
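The abstract above describes the shuffle-model DP pipeline: each client clips and locally perturbs its gradient (the LDP step), a shuffler randomly permutes the anonymized reports to amplify privacy, and the server aggregates them. The following minimal sketch illustrates that pipeline only conceptually; all function names and parameters are illustrative, and it omits the paper's actual contributions (secret-shared shuffle, integrity verification, and the RDP analysis).

```python
import random

def clip(grad, c):
    # Clip a gradient vector to L2 norm at most c.
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, c / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def perturb(grad, c, sigma, rng):
    # Local DP step: clip, then add Gaussian noise scaled to the clipping bound.
    return [g + rng.gauss(0.0, sigma * c) for g in clip(grad, c)]

def shuffle_and_aggregate(reports, rng):
    # Shuffler + server: randomly permute the reports so the server cannot
    # link a report to its client (the source of privacy amplification),
    # then average them. The average itself is unchanged by the shuffle.
    shuffled = list(reports)
    rng.shuffle(shuffled)
    dim = len(shuffled[0])
    return [sum(u[i] for u in shuffled) / len(shuffled) for i in range(dim)]

rng = random.Random(0)
grads = [[0.5, -1.2], [2.0, 0.3], [-0.7, 0.9]]  # toy per-client gradients
reports = [perturb(g, c=1.0, sigma=0.5, rng=rng) for g in grads]
model_update = shuffle_and_aggregate(reports, rng)
```

Note that shuffling is aggregation-invariant: the server learns the same average it would without the shuffle, but loses the client-to-report linkage, which is what permits the tighter (amplified) privacy accounting.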
Appears in Collections: Journal/Magazine Article
Open Access Information
Status: embargoed access
Embargo End Date: 0000-00-00 (to be updated)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.