Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/110422
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Computing | - |
| dc.creator | Han, Yuwei | - |
| dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/13295 | - |
| dc.language.iso | en | - |
| dc.title | Cost-aware poisoning attack against graph neural networks | - |
| dc.type | Thesis | - |
| dcterms.abstract | Graph Neural Networks (GNNs) have achieved remarkable success in tasks such as node classification, link prediction, and anomaly detection. However, these applications are vulnerable to adversarial attacks, especially poisoning attacks, in which the attacker modifies the graph’s structure and features at training time to degrade the model’s performance. Although such attacks are well documented, efficiently using the attacker’s budget, i.e., the number and type of modifications allowed, remains an open challenge. This thesis addresses this challenge by developing cost-aware poisoning attack strategies against GNNs that maximize the degradation of the model’s performance while adhering to a constrained attack budget (an illustrative formalization of this budget-constrained objective follows the metadata table below). | - |
| dcterms.abstract | We begin by identifying the key factors that make poisoning attacks on GNNs effective, focusing on the strategic modification of graph structure. We then propose a set of novel attack methodologies designed to exploit these factors efficiently, ensuring that each modification contributes significantly to the overall impact on the GNN’s performance (a toy sketch of one such greedy, budget-constrained strategy appears after the metadata table below). Our approaches are validated through extensive empirical evaluations on standard benchmarks for node classification, link prediction, and anomaly detection, demonstrating their superiority over existing attack strategies in terms of cost-effectiveness and impact. | - |
| dcterms.abstract | Building on our empirical findings, we formalize the problem of cost-aware adversarial attacks on GNNs, deriving theoretical bounds on the minimum number of modifications required to achieve a desired level of performance degradation. This formalization not only provides a theoretical foundation for our empirical strategies but also offers insights into the inherent vulnerabilities of GNNs to poisoning attacks. | - |
| dcterms.abstract | In summary, this thesis contributes to the field of adversarial machine learning by introducing a comprehensive framework for cost-aware poisoning attacks against GNNs. Our work not only advances the understanding of GNN vulnerabilities but also provides practical tools and theoretical insights to guide the development of more robust GNN models in the face of poisoning threats. | - |
| dcterms.accessRights | open access | - |
| dcterms.educationLevel | M.Phil. | - |
| dcterms.extent | x, 76 pages : color illustrations | - |
| dcterms.issued | 2024 | - |
| dcterms.LCSH | Neural networks (Computer science) | - |
| dcterms.LCSH | Artificial intelligence | - |
| dcterms.LCSH | Computer security | - |
| dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | - |
| Appears in Collections: | Thesis | - |
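
The abstract describes poisoning under a constrained attack budget and a formalization with bounds on the minimum number of modifications. One common way such problems are cast in the graph adversarial learning literature (not necessarily the thesis's exact formulation) is as a bilevel optimization over a budget-constrained perturbation of the adjacency matrix $A$:

$$
\max_{\hat{A}} \; \mathcal{L}_{\mathrm{atk}}\!\left(f_{\theta^{*}}(\hat{A}, X)\right)
\quad \text{s.t.} \quad
\theta^{*} = \arg\min_{\theta} \mathcal{L}_{\mathrm{train}}\!\left(f_{\theta}(\hat{A}, X)\right),
\qquad
\lVert \hat{A} - A \rVert_{0} \le 2\Delta
$$

Here $X$ denotes the node features, $\Delta$ is the attack budget in edge flips (each undirected flip changes two entries of $\hat{A}$), and the inner problem reflects that the defender trains on the poisoned graph.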
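As a concrete illustration of the kind of greedy, budget-constrained structure modification the abstract alludes to, below is a minimal PyTorch sketch of a first-order edge-flip attack against a fixed surrogate GCN. All names, sizes, and the scoring rule are illustrative assumptions (in the spirit of published gradient-based attacks such as Metattack), not the thesis's actual method.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical toy graph: all sizes and weights are illustrative.
n, d, c, budget = 8, 5, 2, 3            # nodes, feature dim, classes, edge budget
A = (torch.rand(n, n) < 0.3).float()
A = torch.triu(A, 1); A = A + A.t()     # undirected, no self-loops
X = torch.randn(n, d)
y = torch.randint(0, c, (n,))
W1 = torch.randn(d, 16) * 0.1           # fixed surrogate GCN weights
W2 = torch.randn(16, c) * 0.1

def gcn_loss(adj):
    # Two-layer GCN with symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    a_hat = adj + torch.eye(n)
    d_inv_sqrt = torch.diag(a_hat.sum(1).pow(-0.5))
    s = d_inv_sqrt @ a_hat @ d_inv_sqrt
    h = torch.relu(s @ X @ W1)
    return F.cross_entropy(s @ h @ W2, y)

A_pert = A.clone()
for _ in range(budget):
    A_var = A_pert.clone().requires_grad_(True)
    grad = torch.autograd.grad(gcn_loss(A_var), A_var)[0]
    grad = grad + grad.t()              # symmetrize: (i,j) and (j,i) score alike
    # First-order score of each flip: adding an absent edge raises the loss
    # if grad > 0; removing an existing edge raises it if grad < 0.
    score = grad * (1 - 2 * A_pert)
    score.fill_diagonal_(-float('inf')) # never add self-loops
    i, j = divmod(score.argmax().item(), n)
    A_pert[i, j] = A_pert[j, i] = 1 - A_pert[i, j]  # flip the chosen edge

print(f"clean loss {gcn_loss(A).item():.4f} -> "
      f"poisoned loss {gcn_loss(A_pert).item():.4f}")
```

In practice the surrogate, scoring, and retraining loop would be far more sophisticated; the point here is only that each of the $\Delta$ flips is chosen to maximize its marginal damage, which is the cost-aware use of the budget the abstract emphasizes.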
Access: View full-text via https://theses.lib.polyu.edu.hk/handle/200/13295