Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114578
DC Field: Value
dc.contributor: Department of Computing
dc.creator: Lai, Yuni
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/13739
dc.language.iso: English
dc.title: Trustworthy graph learning : adversarial attack, empirical defense, and certified robustness
dc.type: Thesis
dcterms.abstract: With the prevalence of graph data on the Internet, graph learning models, particularly Graph Neural Networks (GNNs), have emerged as powerful tools for various applications, including social network analysis, recommender systems, and anomaly detection. These applications often play critical roles in ensuring system security and reliability, making the trustworthiness of graph models paramount. However, recent studies have revealed the vulnerability of graph learning models to adversarial attacks, highlighting the urgent need to enhance their robustness. This thesis delves into the vulnerability and robustness of graph learning models from three key perspectives: adversarial attacks, empirical defenses, and certifiable defenses.
dcterms.abstract: First, we introduce a coupled-space attack to investigate the vulnerability of the random-walk-based anomaly detection (RWAD) model. A unique characteristic of RWAD is that it can operate on both pre-existing and feature-derived graphs, which present two potential attack surfaces: graph-space and feature-space. Our proposed coupled-space attacks are the first to investigate the interplay between graph-space and feature-space attacks. We prove the NP-hardness of attacking RWAD and propose efficient strategies to solve the bi-level optimization problem associated with the attacks.
dcterms.abstract: Secondly, we propose a powerful MetaC attack against both GNN-based and matrix-factorization-based recommender systems. GraphRfi, a state-of-the-art robust GNN-based recommender system, was proposed to mitigate the effects of injected fake users. Unfortunately, we demonstrate that GraphRfi is still vulnerable to strong attacks like MetaC due to the supervised nature of its fraudster detection component, whose clean labels are hard to obtain in practice. Leveraging insights from our vulnerability analysis, we then design an adjustable fraudster detection module that explicitly considers label uncertainty. This module can serve as a plug-in that is easily integrated into different models, resulting in our empirically robust PDR system.
dcterms.abstract: Thirdly, while empirical and certified robustness techniques have been developed to defend against graph modification attacks (GMAs), the problem of certified robustness against graph injection attacks (GIAs) remains largely unexplored. To bridge this gap, we introduce the node-aware bi-smoothing framework, the first certifiably robust approach for general node classification tasks against GIAs. Specifically, it randomly deletes nodes and edges of the graph to obtain smoothed predictions over a large number of randomized inputs. Through rigorous theoretical analysis, we establish the certifiable conditions of our smoothing scheme.
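To give a flavor of the smoothing step described above, the sketch below implements a generic randomized-smoothing prediction by node and edge deletion with a Monte Carlo majority vote. It is a minimal illustration, not the thesis's actual construction: the names (`smooth_predict`, `degree_classifier`), the deletion probabilities `p_node`/`p_edge`, the choice to always keep the target node, and the toy degree-based base classifier are all assumptions for demonstration, and the certified conditions derived in the thesis are not reproduced here.

```python
import random
from collections import Counter

def smooth_predict(nodes, edges, base_classifier, target,
                   p_node=0.3, p_edge=0.3, n_samples=1000, seed=0):
    """Monte Carlo estimate of a smoothed prediction for `target`:
    each sample independently deletes nodes (with their incident edges)
    with probability p_node and remaining edges with probability p_edge,
    then majority-votes over the base classifier's outputs."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        # Keep the target node itself (simplifying assumption).
        kept = {v for v in nodes if v == target or rng.random() > p_node}
        sub_edges = [(u, v) for (u, v) in edges
                     if u in kept and v in kept and rng.random() > p_edge]
        votes[base_classifier(kept, sub_edges, target)] += 1
    return votes.most_common(1)[0][0]

# Toy base classifier: label 1 if the target keeps at least 2 neighbors.
def degree_classifier(nodes, edges, target):
    deg = sum(1 for (u, v) in edges if target in (u, v))
    return 1 if deg >= 2 else 0

nodes = list(range(6))
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (3, 5)]
print(smooth_predict(nodes, edges, degree_classifier, target=0))
```

Intuitively, an injected malicious node only influences the vote in the fraction of samples where it survives deletion, which is what makes certification against injections possible.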
dcterms.abstract: Finally, we present the first collective certificate against GIAs, significantly improving the certified performance compared to existing sample-wise certificates. Our collective certificate certifies a set of target nodes simultaneously, overcoming the limitations of previous approaches. We formulate the problem as a binary integer quadratic constrained linear programming (BQ-CLP) and develop a customized linearization technique to relax it into efficiently solvable linear programming (LP).
dcterms.abstract: Through extensive evaluations, we demonstrate the effectiveness of our proposed adversarial attacks and defense techniques, paving the way for developing more robust and trustworthy graph learning models for real-world applications.
dcterms.accessRights: open access
dcterms.educationLevel: Ph.D.
dcterms.extent: xvi, 186 pages : color illustrations
dcterms.issued: 2025
dcterms.LCSH: Graph theory -- Data processing
dcterms.LCSH: Neural networks (Computer science)
dcterms.LCSH: Machine learning -- Mathematical models
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
Appears in Collections: Thesis

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.