Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/114578
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Computing | - |
| dc.creator | Lai, Yuni | - |
| dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/13739 | - |
| dc.language.iso | English | - |
| dc.title | Trustworthy graph learning : adversarial attack, empirical defense, and certified robustness | - |
| dc.type | Thesis | - |
| dcterms.abstract | With the prevalence of graph data on the Internet, graph learning models, particularly Graph Neural Networks (GNNs), have emerged as powerful tools for various applications, including social network analysis, recommender systems, and anomaly detection. These applications often play critical roles in ensuring system security and reliability, making the trustworthiness of graph models paramount. However, recent studies have revealed the vulnerability of graph learning models to adversarial attacks, highlighting the urgent need to enhance their robustness. This thesis delves into the vulnerability and robustness of graph learning models from three key perspectives: adversarial attacks, empirical defenses, and certifiable defenses. | - |
| dcterms.abstract | First, we introduce a coupled-space attack to investigate the vulnerability of the random-walk-based anomaly detection (RWAD) model. A unique characteristic of RWAD is that it can operate on both pre-existing and feature-derived graphs, which present two potential attack surfaces: graph-space and feature-space. Our proposed coupled-space attacks are the first to investigate the interplay between graph-space and feature-space attacks. We prove the NP-hardness of attacking RWAD and propose efficient strategies to solve the bi-level optimization problem associated with the attacks. | - |
| dcterms.abstract | Secondly, we propose a powerful MetaC attack against both GNN-based and matrix-factorization-based recommender systems. Leveraging insights from our vulnerability analysis, we design a robust recommender system with an empirical defense, named the PDR system. GraphRfi, a state-of-the-art robust GNN-based recommender system, was proposed to mitigate the effects of injected fake users. Unfortunately, we demonstrate that GraphRfi remains vulnerable to strong attacks like MetaC because its fraudster detection component is supervised, and the clean labels it relies on are hard to obtain in practice. We then design an adjustable fraudster detection module that explicitly accounts for label uncertainty. This module serves as a plug-in that can be easily integrated into different models, yielding the PDR system. | - |
| dcterms.abstract | Thirdly, while empirical and certified robustness techniques have been developed to defend against graph modification attacks (GMAs), the problem of certified robustness against graph injection attacks (GIAs) remains largely unexplored. To bridge this gap, we introduce the node-aware bi-smoothing framework, the first certifiably robust approach for general node classification tasks against GIAs. Specifically, it randomly deletes nodes and edges of the graph and aggregates the predictions over a large number of randomized inputs into a smoothed prediction. Through rigorous theoretical analysis, we establish the certifiable conditions of our smoothing scheme. | - |
| dcterms.abstract | Finally, we present the first collective certificate against GIAs, significantly improving certified performance over existing sample-wise certificates. Our collective certificate certifies a set of target nodes simultaneously, overcoming the limitations of previous approaches. We formulate the problem as a binary integer quadratic constrained linear program (BQ-CLP) and develop a customized linearization technique to relax it into an efficiently solvable linear program (LP). | - |
| dcterms.abstract | Through extensive evaluations, we demonstrate the effectiveness of our proposed adversarial attacks and defense techniques, paving the way for developing more robust and trustworthy graph learning models for real-world applications. | - |
| dcterms.accessRights | open access | - |
| dcterms.educationLevel | Ph.D. | - |
| dcterms.extent | xvi, 186 pages : color illustrations | - |
| dcterms.issued | 2025 | - |
| dcterms.LCSH | Graph theory -- Data processing | - |
| dcterms.LCSH | Neural networks (Computer science) | - |
| dcterms.LCSH | Machine learning -- Mathematical models | - |
| dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | - |
| Appears in Collections: | Thesis | - |
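The node-aware bi-smoothing framework described in the abstract is not reproduced here, but its core randomized-smoothing idea — randomly deleting nodes and edges, then aggregating the base classifier's predictions by majority vote — can be sketched as follows. The function names, deletion probabilities, and toy degree-based classifier are illustrative assumptions for this sketch, not the thesis's actual implementation or its certification procedure.

```python
import random
from collections import Counter

def smoothed_predict(nodes, edges, classify, target,
                     p_node=0.3, p_edge=0.3, n_samples=200, seed=0):
    """Majority-vote prediction over randomly sparsified copies of a graph.

    nodes:    set of node ids
    edges:    set of frozenset({u, v}) undirected edges
    classify: base classifier (kept_nodes, kept_edges, target) -> label
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        # Independently delete each node (except the target) with prob p_node.
        kept = {v for v in nodes if v == target or rng.random() > p_node}
        # Independently delete each surviving edge with prob p_edge.
        kept_edges = {e for e in edges
                      if e <= kept and rng.random() > p_edge}
        votes[classify(kept, kept_edges, target)] += 1
    # The smoothed prediction is the most frequent label across samples.
    return votes.most_common(1)[0][0]
```

Because injected nodes and their edges are deleted with the same probabilities as legitimate ones, the distribution of smoothed inputs changes only slightly under a bounded injection, which is what makes a robustness certificate derivable for the majority-vote output.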
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.


