Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/90598
DC Field: Value [Language]
dc.contributor: Department of Chinese and Bilingual Studies [en_US]
dc.creator: Hu, J [en_US]
dc.creator: Cao, L [en_US]
dc.creator: Li, T [en_US]
dc.creator: Dong, S [en_US]
dc.creator: Li, P [en_US]
dc.date.accessioned: 2021-08-04T01:52:03Z
dc.date.available: 2021-08-04T01:52:03Z
dc.identifier.uri: http://hdl.handle.net/10397/90598
dc.language.iso: en [en_US]
dc.publisher: BioMed Central [en_US]
dc.rights: © The Author(s), 2021. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. [en_US]
dc.rights: The following publication Hu, J., Cao, L., Li, T., Dong, S., & Li, P. (2021). GAT-LI: a graph attention network based learning and interpreting method for functional brain network classification. BMC Bioinformatics, 22(1), 379, 1-20 is available at https://doi.org/10.1186/s12859-021-04295-1 [en_US]
dc.subject: Graph attention networks [en_US]
dc.subject: Functional brain networks [en_US]
dc.subject: Resting-state functional connectivity data [en_US]
dc.subject: Classification [en_US]
dc.subject: Model interpretation [en_US]
dc.title: GAT-LI: a graph attention network based learning and interpreting method for functional brain network classification [en_US]
dc.type: Journal/Magazine Article [en_US]
dc.identifier.spage: 1 [en_US]
dc.identifier.epage: 20 [en_US]
dc.identifier.volume: 22 [en_US]
dc.identifier.issue: 1 [en_US]
dc.identifier.doi: 10.1186/s12859-021-04295-1 [en_US]
dcterms.abstract: Background: Autism spectrum disorders (ASD) comprise a spectrum of symptoms rather than a single phenotype, and ASD can affect brain connectivity to different degrees depending on symptom severity. Given their excellent learning capability, graph neural network (GNN) methods have recently been used to uncover functional connectivity patterns and biological mechanisms in neuropsychiatric disorders such as ASD. However, it remains challenging to develop an accurate GNN learning model and to understand how specific decisions of these graph models are made in brain network analysis. [en_US]
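To make the underlying data representation concrete, the following minimal sketch shows one common way to build a functional brain network from regional resting-state time series: compute pairwise Pearson correlations and threshold the correlation matrix into an adjacency matrix. This is an assumed, illustrative pipeline, not necessarily the authors' exact preprocessing; the function name `connectivity_graph` and the threshold value are placeholders.

```python
# Illustrative sketch (assumed pipeline, not taken from the paper): build a
# functional brain network from regional resting-state BOLD time series.
import numpy as np

def connectivity_graph(timeseries: np.ndarray, threshold: float = 0.3):
    """timeseries: (n_regions, n_timepoints) array of regional signals.

    Returns the Pearson correlation matrix and a thresholded binary adjacency.
    """
    corr = np.corrcoef(timeseries)               # (n_regions, n_regions)
    adj = (np.abs(corr) > threshold).astype(int)  # keep strong connections
    np.fill_diagonal(adj, 0)                      # drop self-loops
    return corr, adj

# Toy example: 90 regions, 200 time points of synthetic data.
rng = np.random.default_rng(0)
corr, adj = connectivity_graph(rng.standard_normal((90, 200)))
```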
dcterms.abstract: Results: In this paper, we propose a graph attention network based learning and interpreting method, GAT-LI, which learns to classify functional brain networks of ASD individuals versus healthy controls (HC) and interprets the learned graph model with feature importance. Specifically, GAT-LI comprises a graph learning stage and an interpreting stage. First, in the graph learning stage, a new graph attention network model, GAT2, uses graph attention layers to learn node representations and a novel attention pooling layer to obtain the graph representation for functional brain network classification. We experimentally compared the GAT2 model's classification performance on the ABIDE I database of 1035 subjects against other well-known models, and GAT2 achieved the best classification performance. We also experimentally compared the influence of different brain network construction methods on the GAT2 model, and used a larger synthetic graph dataset of 4000 samples to validate the utility and power of GAT2. Second, in the interpreting stage, we used GNNExplainer to interpret the learned GAT2 model with feature importance. We experimentally compared GNNExplainer with two well-known interpretation methods, Saliency Map and DeepLIFT, and GNNExplainer achieved the best interpretation performance. We further used the interpretation method to identify the features that contributed most to classifying ASD versus HC. [en_US]
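The sketch below illustrates the kind of architecture the graph learning stage describes: graph attention layers that learn node representations, followed by an attention pooling layer that produces a graph-level embedding for classification. It is a hedged reconstruction in PyTorch Geometric with placeholder layer sizes (`hid_dim`, `heads`), not the authors' released GAT2 implementation.

```python
# Minimal sketch of a GAT2-style graph classifier (hypothetical sizes; the
# authors' exact architecture is described in the paper, not reproduced here).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_add_pool
from torch_geometric.utils import softmax

class GAT2Sketch(torch.nn.Module):
    def __init__(self, in_dim: int, hid_dim: int = 64, heads: int = 4, n_classes: int = 2):
        super().__init__()
        # Graph attention layers learn the node representations.
        self.gat1 = GATConv(in_dim, hid_dim, heads=heads)
        self.gat2 = GATConv(hid_dim * heads, hid_dim, heads=1)
        # Attention pooling: a learned gate scores each node; the graph
        # representation is the attention-weighted sum of node embeddings.
        self.gate = torch.nn.Linear(hid_dim, 1)
        self.classifier = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, edge_index, batch):
        x = F.elu(self.gat1(x, edge_index))
        x = F.elu(self.gat2(x, edge_index))
        w = softmax(self.gate(x), batch)   # per-graph softmax over node scores
        g = global_add_pool(w * x, batch)  # weighted sum -> graph embedding
        return self.classifier(g)          # logits for ASD vs. HC

# Toy forward pass: one graph with 5 nodes and 8 features per node.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
batch = torch.zeros(5, dtype=torch.long)  # all nodes belong to graph 0
model = GAT2Sketch(in_dim=8)
logits = model(x, edge_index, batch)      # shape: (1, 2)
```

The gate network scores every node and a per-graph softmax turns those scores into attention weights, so the pooled embedding emphasizes the nodes (brain regions) that are most informative for the classification decision.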
dcterms.abstract: Conclusion: We propose GAT-LI, a two-stage learning and interpreting method that classifies functional brain networks and interprets feature importance in the graph model. The method should also be useful for classification and interpretation tasks on graph data from other biomedical scenarios. [en_US]
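As a sketch of the interpreting stage, the snippet below applies GNNExplainer to a trained graph classifier to obtain feature and edge importance masks. It assumes the classic PyTorch Geometric API (pre-2.2, where `GNNExplainer` lives in `torch_geometric.nn`) and reuses `model`, `x`, `edge_index`, and `batch` from the sketch above; newer PyG versions expose the same algorithm through `torch_geometric.explain.Explainer` instead.

```python
# Hedged usage sketch of GNNExplainer (classic PyTorch Geometric API).
import torch
from torch_geometric.nn import GNNExplainer

model.eval()
# return_type='raw' because the sketch model outputs unnormalized logits.
explainer = GNNExplainer(model, epochs=200, return_type='raw')
# Learns soft masks over node features and edges that preserve the model's
# prediction for this graph; larger mask values mean higher importance.
node_feat_mask, edge_mask = explainer.explain_graph(x, edge_index, batch=batch)
top_features = torch.topk(node_feat_mask, k=5).indices  # most influential features
```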
dcterms.accessRights: open access [en_US]
dcterms.bibliographicCitation: BMC Bioinformatics, 2021, v. 22, no. 1, 379, p. 1-20 [en_US]
dcterms.isPartOf: BMC Bioinformatics [en_US]
dcterms.issued: 2021
dc.identifier.scopus: 2-s2.0-85111055504
dc.identifier.eissn: 1471-2105 [en_US]
dc.identifier.artn: 379 [en_US]
dc.description.validate: 202108 bcvc [en_US]
dc.description.oa: Version of Record [en_US]
dc.identifier.FolderNumber: a0987-n01
dc.identifier.SubFormID: 2359
dc.description.fundingSource: Others [en_US]
dc.description.fundingText: OA19-00248 [en_US]
dc.description.pubStatus: Published [en_US]
Appears in Collections: Journal/Magazine Article
Files in This Item:
File: s12859-021-04295-1.pdf (1.45 MB, Adobe PDF)
Open Access Information
Status: open access
File Version: Version of Record

Page views: 91 (last week: 0; as of Apr 21, 2024)
Downloads: 33 (as of Apr 21, 2024)
Scopus™ citations: 19 (as of Apr 26, 2024)
Web of Science™ citations: 16 (as of Apr 25, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.