Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118560
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Ai, X | -
dc.creator | Zhu, Y | -
dc.creator | Zheng, Y | -
dc.creator | Li, G | -
dc.creator | Li, J | -
dc.creator | Zhou, K | -
dc.date.accessioned | 2026-04-23T09:24:02Z | -
dc.date.available | 2026-04-23T09:24:02Z | -
dc.identifier.issn | 1556-6013 | -
dc.identifier.uri | http://hdl.handle.net/10397/118560 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication X. Ai, Y. Zhu, Y. Zheng, G. Li, J. Li and K. Zhou, "Revisiting Adversarial Robustness of GNNs Against Structural Attacks: A Simple and Fast Approach," in IEEE Transactions on Information Forensics and Security, vol. 21, pp. 446-459, 2026 is available at https://doi.org/10.1109/TIFS.2025.3641816. | en_US
dc.subject | Graph learning | en_US
dc.subject | Mutual information | en_US
dc.subject | Robust GNN | en_US
dc.subject | Structural attacks | en_US
dc.title | Revisiting adversarial robustness of GNNs against structural attacks: a simple and fast approach | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 446 | -
dc.identifier.epage | 459 | -
dc.identifier.volume | 21 | -
dc.identifier.doi | 10.1109/TIFS.2025.3641816 | -
dcterms.abstract | To defend against adversarial structural attacks on graphs, we analyze attacks through the lens of mutual information and discover the "pairwise effect". This effect reveals that structural attacks effectively degrade the performance of victim GNNs when these GNNs receive the modified structure paired with the given node attributes as training input. Therefore, we propose a novel defense strategy that renders structural attacks ineffective by disrupting the pairing of modified structures and node attributes during the training of victim GNNs, which we call "disrupting the pairwise effect". To implement this idea, we propose two simple yet effective training strategies: Structural Fine-Tuning (SF) and Progressive Structural Training (PST), which disrupt the pairwise effect through node-attribute pre-training followed by structure fine-tuning, and through progressive structure training, respectively. Compared to existing robust GNNs, our strategies avoid time-consuming techniques, improving the robustness of GNNs while speeding up training. Additionally, these strategies can be easily applied to a wide range of commonly used GNNs, including robust GNN variants, making them highly adaptable to different models and applications. We provide a theoretical analysis of the proposed training strategies and conduct extensive experiments on various datasets to demonstrate their effectiveness. The datasets and code for this paper are available at https://github.com/Xing-Ai1003/Revisiting-Adversarial-Robustness-of-GNNs. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on information forensics and security, 2026, v. 21, p. 446-459 | -
dcterms.isPartOf | IEEE transactions on information forensics and security | -
dcterms.issued | 2026 | -
dc.identifier.scopus | 2-s2.0-105025886418 | -
dc.identifier.eissn | 1556-6021 | -
dc.description.validate | 202604 bcjz | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.SubFormID | G001518/2026-01 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work was supported in part by the National Natural Science Foundation of China under Grant 62572314 and Grant 62471301 and in part by Hong Kong Research Grants Council (RGC) Project under Grant PolyU25210821. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Ai_Revisiting_Adversarial_Robustness.pdf | Pre-Published version | 971.17 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.