Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/98235
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Computing | - |
| dc.creator | Li, Qimai | - |
| dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/12326 | - |
| dc.language.iso | English | - |
| dc.title | Learning on graphs with graph convolution | - |
| dc.type | Thesis | - |
| dcterms.abstract | Graph convolutional neural networks (GCNNs) have been the model of choice for graph representation learning, largely owing to the effective design of graph convolution, which computes the representation of a node by aggregating those of its neighbors (a minimal sketch of this aggregation step is given below the record). This thesis reveals the mechanisms behind GCNNs from the perspective of graph signal processing theory and focuses on developing theoretically grounded algorithms for modeling complex, richly labeled, and large-scale graph-structured data, with applications spanning computer vision, natural language processing, human action understanding, smart transportation, and malware detection. | - |
| dcterms.abstract | We conducted systematic research on analyzing and extending GCNNs from different theoretical perspectives, including graph signal processing and spectral graph theory. Our spatial analysis shows that the graph convolution in GCN is a special form of Laplacian smoothing, which is the key reason GCN works but which also brings the over-smoothing problem to deep GCN models. Our spectral analysis revisits GCN and classical label propagation methods under a graph filtering framework and shows that they extract useful data representations by applying a low-pass graph filter (the spectral sketch below the record illustrates both the low-pass behavior and the over-smoothing effect). | - |
| dcterms.abstract | Our research also contributes efficient and more powerful GCNN models, as well as various high-impact real-world applications. Guided by the new theoretical insights, we developed graph-convolution-based models for semi-supervised and unsupervised learning, including Improved Graph Convolutional Networks (IGCN), Generalized Label Propagation (GLP), and Adaptive Graph Convolution (AGC); a sketch of the filter-then-learn pattern behind such models follows below the record. We also extend 1-D GCNNs to 2-D GCNNs to exploit informative relational information among object attributes and propose Dimensionwise Separable 2-D Graph Convolution (DSGC). | - |
| dcterms.abstract | The results have been published in various top AI conferences, including AAAI-18 [1], IJCAI-19 [2], CVPR-19 [3], KDD-21 [4], and WWW-22 [5]. | - |
| dcterms.accessRights | open access | - |
| dcterms.educationLevel | Ph.D. | - |
| dcterms.extent | xiv, 193 pages : color illustrations | - |
| dcterms.issued | 2023 | - |
| dcterms.LCSH | Machine learning | - |
| dcterms.LCSH | Neural networks (Computer science) | - |
| dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | - |

Appears in Collections: Thesis
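
The abstract describes graph convolution as computing a node's representation by aggregating those of its neighbors. Below is a minimal NumPy sketch of one such propagation step in the style of a standard GCN layer; it is an illustration under assumed names (`gcn_layer`, `adj`, `feats`, `weight` are ours), not code from the thesis.

```python
# Minimal sketch of one GCN-style graph convolution layer (illustrative only).
import numpy as np

def gcn_layer(adj: np.ndarray, feats: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One propagation step H' = ReLU(A_hat @ H @ W), where
    A_hat = D~^(-1/2) (A + I) D~^(-1/2) is the renormalized adjacency.
    Multiplying by A_hat replaces each node's features with a degree-weighted
    average over the node and its neighbors -- the aggregation described above.
    """
    n = adj.shape[0]
    adj_tilde = adj + np.eye(n)                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(adj_tilde.sum(axis=1)))
    a_hat = d_inv_sqrt @ adj_tilde @ d_inv_sqrt      # symmetric normalization
    return np.maximum(a_hat @ feats @ weight, 0.0)   # aggregate, transform, ReLU

# Toy 4-node path graph: after one layer the "hot" end nodes share mass with
# their neighbors, i.e. the features are smoothed over the graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.array([[1.0], [0.0], [0.0], [1.0]])           # one feature per node
W = np.array([[1.0]])                                # identity transform for clarity
print(gcn_layer(A, H, W))
```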
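
The abstract further interprets this propagation spectrally: graph convolution is a form of Laplacian smoothing, i.e. a low-pass graph filter, and stacking many such steps over-smooths. The sketch below (again our own construction, reusing the renormalized adjacency from the previous snippet, not the thesis's code) prints the filter response 1 - lambda over the normalized Laplacian spectrum of a toy graph and shows node features converging as the number of propagation steps grows.

```python
# Sketch of the low-pass / Laplacian-smoothing reading of graph convolution.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)             # 4-node path graph
n = A.shape[0]
A_tilde = A + np.eye(n)                               # self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt             # renormalized adjacency
L_tilde = np.eye(n) - A_hat                           # normalized Laplacian of the self-looped graph

# Spectrally, multiplying by A_hat = I - L_tilde applies the frequency response
# g(lambda) = 1 - lambda to each Laplacian eigenvalue: low (smooth) frequencies
# pass, high frequencies are attenuated -- a low-pass graph filter.
lam = np.linalg.eigvalsh(L_tilde)
print("Laplacian eigenvalues :", np.round(lam, 3))
print("filter response 1-lam :", np.round(1 - lam, 3))

# Over-smoothing: after many propagation steps the features of connected nodes
# converge (up to a degree-dependent scaling) and become hard to distinguish.
H = np.array([[1.0], [0.0], [0.0], [1.0]])
for k in (1, 4, 16, 64):
    print(f"k={k:2d}", np.round((np.linalg.matrix_power(A_hat, k) @ H).ravel(), 3))
```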
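
Finally, the abstract mentions filter-based models (GLP, IGCN, AGC) for semi-supervised and unsupervised learning. As a hedged illustration of the general "filter the features, then learn" pattern that such models build on (the thesis's exact filter designs and hyperparameters may differ, and `low_pass_filter` and `k` here are our own names), here is a k-step low-pass smoothing of node features that could precede an ordinary classifier or clustering step.

```python
# Hedged sketch: smooth node features with a k-step low-pass graph filter,
# then feed the smoothed features to any off-the-shelf learner.
import numpy as np

def low_pass_filter(A: np.ndarray, X: np.ndarray, k: int = 2) -> np.ndarray:
    """Apply (I - L_tilde / 2)^k to the feature matrix X.
    g(lambda) = (1 - lambda/2)^k is one common low-pass choice; larger k
    means stronger smoothing.
    """
    n = A.shape[0]
    A_tilde = A + np.eye(n)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    L_tilde = np.eye(n) - d_inv_sqrt @ A_tilde @ d_inv_sqrt
    G = np.eye(n) - 0.5 * L_tilde
    return np.linalg.matrix_power(G, k) @ X

# Usage sketch: X_smooth = low_pass_filter(A, X, k=4), then train a plain
# classifier on (X_smooth[labeled_idx], y[labeled_idx]) for semi-supervised
# node classification, or cluster X_smooth for unsupervised learning.
```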
Access: View full-text via https://theses.lib.polyu.edu.hk/handle/200/12326
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.


