Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115243
Title: Hard adversarial example mining for improving robust fairness
Authors: Lin, C; Ji, X; Yang, Y; Li, Q; Zhao, Z; Peng, Z; Wang, R; Fang, L; Shen, C
Issue Date: 2025
Source: IEEE Transactions on Information Forensics and Security, 2025, vol. 20, pp. 350-363
Abstract: Adversarial training (AT) is widely considered the state-of-the-art technique for improving the robustness of deep neural networks (DNNs) against adversarial examples (AEs). Nevertheless, recent studies have revealed that adversarially trained models are prone to unfairness problems. Recent works in this field usually apply class-wise regularization methods to enhance the fairness of AT. However, this paper discovers that these paradigms can be sub-optimal for improving robust fairness. Specifically, we empirically observe that the AEs to which the model is already robust (referred to as "easy AEs" in this paper) are useless and even harmful for improving robust fairness. To this end, we propose the hard adversarial example mining (HAM) technique, which concentrates on mining hard AEs while discarding the easy AEs during AT. HAM identifies the easy AEs and the hard AEs with a fast adversarial attack method; by discarding the easy AEs and reweighting the hard AEs, the robust fairness of the model can be efficiently and effectively improved. Extensive experimental results on four image classification datasets demonstrate that HAM improves robust fairness and training efficiency compared with several state-of-the-art fair adversarial training methods. Our code is available at https://github.com/yyl-github-1896/HAM. (An illustrative sketch of the mining step follows this record.)
Keywords: Adversarial training; Convolutional neural network; Hard adversarial example mining; Robust fairness
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE Transactions on Information Forensics and Security
ISSN: 1556-6013
EISSN: 1556-6021
DOI: 10.1109/TIFS.2024.3516554
Rights: © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The following publication, C. Lin et al., "Hard Adversarial Example Mining for Improving Robust Fairness," IEEE Transactions on Information Forensics and Security, vol. 20, pp. 350-363, 2025, is available at https://doi.org/10.1109/TIFS.2024.3516554.
Appears in Collections: Journal/Magazine Article
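The abstract above outlines the mining loop: probe each training batch with a fast attack, discard the easy AEs (those the model still classifies correctly after the attack), and reweight the surviving hard AEs before the training update. The PyTorch sketch below is a minimal illustration of that loop under stated assumptions; the single-step FGSM probe, the `hard_weight` knob, and all hyperparameters are placeholders rather than the paper's exact procedure, which is given in the linked publication and the authors' GitHub repository.

```python
# Illustrative sketch of hard adversarial example mining (HAM) as described
# in the abstract. The attack choice (single-step FGSM), the uniform
# hard-example weight, and all hyperparameters are assumptions for
# illustration, not the authors' exact method.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps=8 / 255):
    """Fast single-step attack used to probe which AEs are easy vs. hard."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()


def ham_training_step(model, optimizer, x, y, eps=8 / 255, hard_weight=2.0):
    """One AT step that discards easy AEs and upweights hard AEs.

    "Easy" AEs are those the model still classifies correctly after the
    fast attack (the model is already robust to them); "hard" AEs are the
    rest, which drive the fairness-oriented update.
    """
    model.eval()
    x_adv = fgsm_attack(model, x, y, eps)
    with torch.no_grad():
        still_correct = model(x_adv).argmax(dim=1).eq(y)  # easy-AE mask
    hard_mask = ~still_correct
    if hard_mask.sum() == 0:  # nothing to mine in this batch
        return 0.0

    model.train()
    logits = model(x_adv[hard_mask])
    # Reweight the surviving hard AEs; hard_weight is a hypothetical knob.
    loss = hard_weight * F.cross_entropy(logits, y[hard_mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because easy AEs are filtered out before the backward pass, each step trains on a smaller, harder batch, which is consistent with the abstract's claim of improved training efficiency alongside robust fairness.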
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Lin_Hard_Adversarial_Example.pdf | Pre-Published version | 3.86 MB | Adobe PDF |