Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115243
DC Field | Value | Language
dc.contributor | Department of Industrial and Systems Engineering | -
dc.creator | Lin, C | -
dc.creator | Ji, X | -
dc.creator | Yang, Y | -
dc.creator | Li, Q | -
dc.creator | Zhao, Z | -
dc.creator | Peng, Z | -
dc.creator | Wang, R | -
dc.creator | Fang, L | -
dc.creator | Shen, C | -
dc.date.accessioned | 2025-09-17T03:46:33Z | -
dc.date.available | 2025-09-17T03:46:33Z | -
dc.identifier.issn | 1556-6013 | -
dc.identifier.uri | http://hdl.handle.net/10397/115243 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication C. Lin et al., "Hard Adversarial Example Mining for Improving Robust Fairness," in IEEE Transactions on Information Forensics and Security, vol. 20, pp. 350-363, 2025 is available at https://doi.org/10.1109/TIFS.2024.3516554. | en_US
dc.subject | Adversarial training | en_US
dc.subject | Convolutional neural network | en_US
dc.subject | Hard adversarial example mining | en_US
dc.subject | Robust fairness | en_US
dc.title | Hard adversarial example mining for improving robust fairness | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 350 | -
dc.identifier.epage | 363 | -
dc.identifier.volume | 20 | -
dc.identifier.doi | 10.1109/TIFS.2024.3516554 | -
dcterms.abstract | Adversarial training (AT) is widely considered the state-of-the-art technique for improving the robustness of deep neural networks (DNNs) against adversarial examples (AEs). Nevertheless, recent studies have revealed that adversarially trained models are prone to unfairness problems. Recent works in this field usually apply class-wise regularization methods to enhance the fairness of AT. However, this paper finds that these paradigms can be sub-optimal for improving robust fairness. Specifically, we empirically observe that the AEs against which the model is already robust (referred to as "easy AEs" in this paper) contribute little to improving robust fairness and can even harm it. To this end, we propose the hard adversarial example mining (HAM) technique, which concentrates on mining hard AEs while discarding easy AEs during AT. HAM identifies the easy and hard AEs with a fast adversarial attack method; by discarding the easy AEs and reweighting the hard AEs, it improves the robust fairness of the model both efficiently and effectively. Extensive experimental results on four image classification datasets demonstrate that HAM improves robust fairness and training efficiency compared with several state-of-the-art fair adversarial training methods. Our code is available at https://github.com/yyl-github-1896/HAM. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on information forensics and security, 2025, v. 20, p. 350-363 | -
dcterms.isPartOf | IEEE transactions on information forensics and security | -
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-85212574758 | -
dc.identifier.eissn | 1556-6021 | -
dc.description.validate | 202509 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a4025 | en_US
dc.identifier.SubFormID | 51956 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work was supported in part by the National Key Research and Development Program of China under Grant 2023YFE0209800; in part by the National Natural Science Foundation of China under Grant T2341003, Grant 62376210, Grant 62161160337, Grant 62132011, Grant U21B2018, Grant U20A20177, and Grant 62206217; in part by Shaanxi Province Key Industry Innovation Program under Grant 2023-ZDLGY-38; in part by Hong Kong Research Grants Council (RGC) General Research Fund (GRF) Projects under Grant 12202922 and Grant 15238724; and in part by Shenzhen Science and Technology Program under Grant JCYJ20230807140412025. The associate editor coordinating the review of this article and approving it for publication was Prof. Guowen Xu. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
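
The abstract above describes HAM's core step: triage AEs with a fast attack, discard the easy ones (those the model still classifies correctly), and upweight the hard ones in the training loss. The following is a minimal PyTorch sketch of that idea, not the authors' implementation (see their repository for that). Assumptions are labeled: one-step FGSM stands in for the unspecified "fast adversarial attack", inputs are assumed to lie in [0, 1], the constant hard_weight is an illustrative placeholder for the paper's reweighting scheme, and fgsm_attack / ham_loss are hypothetical names.

# Illustrative sketch only. FGSM as the fast attack, [0, 1]-ranged inputs,
# and the constant hard_weight factor are assumptions, not the paper's
# exact choices.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: a stand-in for the fast attack used to triage AEs."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def ham_loss(model, x, y, hard_weight=2.0):
    """Discard easy AEs (still classified correctly) and upweight hard ones."""
    x_adv = fgsm_attack(model, x, y)
    with torch.no_grad():
        is_hard = model(x_adv).argmax(dim=1) != y  # misclassified -> hard AE
    if not is_hard.any():                          # all AEs easy: skip batch
        return torch.zeros((), device=x.device, requires_grad=True)
    per_example = F.cross_entropy(model(x_adv[is_hard]), y[is_hard],
                                  reduction="none")
    return (hard_weight * per_example).mean()

In a standard training loop this would replace the usual AT loss: loss = ham_loss(model, x, y); loss.backward(); optimizer.step(). Because easy AEs are dropped before the backward pass, each iteration also does less work, which is consistent with the training-efficiency gain the abstract reports.
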
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Lin_Hard_Adversarial_Example.pdf | Pre-Published version | 3.86 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.