Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/118556
Title: Federated learning in the shuffle model of differential privacy: a communication-efficient and maliciously secure realization
Authors: Xu, S; Hua, Z; Zheng, Y
Issue Date: 2026
Source: IEEE Transactions on Dependable and Secure Computing, Date of Publication: 11 February 2026, Early Access, https://doi.org/10.1109/TDSC.2026.3663543
Abstract: Federated learning (FL) is a compelling privacy-friendly paradigm that allows multiple clients to jointly train a model by sharing only gradient updates instead of their local datasets. Since gradient updates may still expose sensitive information, a line of research has explored the use of local differential privacy (LDP) mechanisms to formally safeguard these updates. Under LDP, each client perturbs its gradients locally prior to sharing. However, LDP often leads to a significant degradation in model utility due to the large amount of noise added. To strike a better balance between privacy and utility, an increasing trend is to leverage the shuffle model of differential privacy (DP) in FL, which introduces an intermediate shuffling operation on the perturbed gradients, enabling privacy amplification. Following this trend, we present Camel, a communication-efficient and maliciously secure FL framework operating under the shuffle model of DP. A key difference of Camel from existing works is its new support for integrity checks on the shuffle computation, providing security against a malicious adversary. To achieve this, Camel builds on an emerging cryptographic technique called secret-shared shuffle, and augments it with custom methods for system-wide communication optimization and lightweight server-side integrity verification. Furthermore, we provide a formal analysis of privacy loss by employing Rényi differential privacy (RDP) for the entire FL process, which allows a tighter privacy bound. Our comprehensive experimental results show that Camel outperforms current state-of-the-art approaches in achieving better privacy-utility trade-offs, while maintaining promising performance.
Keywords: Differential privacy; Federated learning; Secret sharing
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE Transactions on Dependable and Secure Computing
ISSN: 1545-5971
EISSN: 1941-0018
DOI: 10.1109/TDSC.2026.3663543
Appears in Collections: Journal/Magazine Article
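The abstract's core idea — each client perturbs its value under local DP, and a shuffler then permutes the reports so the server cannot link a report to its sender — can be illustrated with a toy randomized-response pipeline. This is only a minimal sketch of the generic shuffle model, not the paper's Camel protocol: it omits gradients, the secret-shared shuffle, the integrity checks, and the RDP accounting, and the function names (`randomized_response`, `shuffle_and_aggregate`) are illustrative.

```python
import math
import random

def randomized_response(bit, eps):
    """Local DP on a single bit: report the true bit with probability
    e^eps / (e^eps + 1), otherwise report the flipped bit."""
    p_keep = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def shuffle_and_aggregate(bits, eps):
    """Each client perturbs locally; the shuffler permutes the reports,
    severing the client-to-report link (the source of privacy
    amplification); the server debiases the shuffled sum."""
    reports = [randomized_response(b, eps) for b in bits]
    random.shuffle(reports)  # the shuffler's only job: a uniform permutation
    n = len(reports)
    p_keep = math.exp(eps) / (math.exp(eps) + 1.0)
    # Unbiased estimate of the true sum from the noisy count.
    estimate = (sum(reports) - n * (1.0 - p_keep)) / (2.0 * p_keep - 1.0)
    return reports, estimate
```

In the real Camel system the shuffle is carried out obliviously via secret sharing between servers, and its correctness is verifiable; here `random.shuffle` merely stands in for that step to show where the amplification comes from.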