Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118328
DC Field | Value | Language
dc.contributor | Department of Data Science and Artificial Intelligence | en_US
dc.contributor | Department of Computing | en_US
dc.creator | Chen, J | en_US
dc.creator | Lin, Z | en_US
dc.creator | Lin, W | en_US
dc.creator | Shi, W | en_US
dc.creator | Yin, X | en_US
dc.creator | Wang, D | en_US
dc.date.accessioned | 2026-04-02T04:24:51Z | -
dc.date.available | 2026-04-02T04:24:51Z | -
dc.identifier.issn | 1556-6013 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/118328 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication J. Chen, Z. Lin, W. Lin, W. Shi, X. Yin and D. Wang, 'FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks,' in IEEE Transactions on Information Forensics and Security, vol. 20, pp. 1665-1678, 2025 is available at https://doi.org/10.1109/TIFS.2025.3531141. | en_US
dc.subject | Federated learning | en_US
dc.subject | Targeted attacks | en_US
dc.subject | Unlearning attacks | en_US
dc.title | FedMUA: exploring the vulnerabilities of federated learning to malicious unlearning attacks | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1665 | en_US
dc.identifier.epage | 1678 | en_US
dc.identifier.volume | 20 | en_US
dc.identifier.doi | 10.1109/TIFS.2025.3531141 | en_US
dcterms.abstract | Recently, the practical need for "the right to be forgotten" in federated learning gave birth to a paradigm known as federated unlearning, which enables the server to forget personal data upon a client's removal request. Existing studies on federated unlearning have primarily focused on efficiently eliminating the influence of requested data from the client's model without retraining from scratch; however, they have rarely questioned the reliability of the global model in light of the discrepancy between its prediction performance before and after unlearning. To bridge this gap, we take the first step by introducing a novel malicious unlearning attack, dubbed FedMUA, aiming to unveil vulnerabilities that emerge in federated learning during the unlearning process. Specifically, clients may act as attackers by crafting malicious unlearning requests to manipulate the prediction behavior of the global model. The crux of FedMUA is to mislead the global model into unlearning more information associated with the influential samples for the target sample than anticipated, thus inducing adverse effects on target samples from other clients. To achieve this, we design a novel two-step method, known as Influential Sample Identification and Malicious Unlearning Generation, to identify influential samples and subsequently generate malicious feature unlearning requests within them. By initiating these malicious feature unlearning requests, we can significantly alter the predictions for the target sample, deliberately manipulating the model to the user's detriment. Additionally, we design a new defense mechanism that is highly resilient against malicious unlearning attacks. Extensive experiments on three realistic datasets reveal that FedMUA effectively induces misclassification on target samples and can achieve an 80% attack success rate by triggering only 0.3% malicious unlearning requests. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on information forensics and security, 2025, v. 20, p. 1665-1678 | en_US
dcterms.isPartOf | IEEE transactions on information forensics and security | en_US
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-85215364747 | -
dc.identifier.eissn | 1556-6021 | en_US
dc.description.validate | 202604 bcjz | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.SubFormID | G001366/2025-12 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work was supported in part by the Hong Kong (HK) Research Grant Council (RGC) General Research Fund under Grant PolyU 15208222 and in part by the NSFC Young Scientist Fund under Grant PolyU A0040473. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
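The abstract outlines a two-step attack pipeline: Influential Sample Identification followed by Malicious Unlearning Generation. The sketch below illustrates the general shape of such a pipeline under an illustrative assumption that is not taken from the paper: influence is approximated by the cosine alignment between a training sample's loss gradient and the target sample's gradient, and the attacker then packages the top-ranked sample ids as an unlearning request. All function names, the gradient proxy, and the request format are hypothetical.

```python
# Hypothetical sketch of a two-step malicious-unlearning attack.
# Assumption (not from the paper): influence is proxied by cosine
# similarity between per-sample gradients and the target's gradient.
import math


def cosine(u, v):
    """Cosine similarity between two gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def identify_influential(train_grads, target_grad, k):
    """Step 1: rank training samples by gradient alignment with the
    target sample and return the indices of the top-k."""
    ranked = sorted(range(len(train_grads)),
                    key=lambda i: cosine(train_grads[i], target_grad),
                    reverse=True)
    return ranked[:k]


def craft_unlearning_request(train_grads, target_grad, k):
    """Step 2: package the influential sample ids as a (hypothetical)
    unlearning request sent to the federated server."""
    ids = identify_influential(train_grads, target_grad, k)
    return {"op": "unlearn", "sample_ids": ids}
```

Unlearning these samples removes more information tied to the target than an honest request would, which is the lever the abstract describes for flipping the target's prediction.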
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Chen_FedMUA_Exploring_Vulnerabilities.pdf | Pre-Published version | 4.92 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.