Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/82016
DC Field | Value | Language
dc.contributor | Department of Chinese and Bilingual Studies | en_US
dc.creator | Hu, J | en_US
dc.creator | Kuang, Y | en_US
dc.creator | Liao, B | en_US
dc.creator | Cao, L | en_US
dc.creator | Dong, S | en_US
dc.creator | Li, P | en_US
dc.date.accessioned | 2020-05-05T05:58:18Z | -
dc.date.available | 2020-05-05T05:58:18Z | -
dc.identifier.issn | 1687-5265 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/82016 | -
dc.language.iso | en | en_US
dc.publisher | Hindawi Ltd | en_US
dc.rights | Copyright © 2019 Jinlong Hu et al. This is an open access article distributed under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | en_US
dc.rights | The following publication Hu, J. L., Kuang, Y. Z., Liao, B., Cao, L. J., Dong, S. B., & Li, P. (2019). A multichannel 2D convolutional neural network model for task-evoked fMRI data classification. Computational Intelligence and Neuroscience, 5065214, 1-9 is available at https://dx.doi.org/10.1155/2019/5065214 | en_US
dc.title | A multichannel 2D convolutional neural network model for task-evoked fMRI data classification | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1 | -
dc.identifier.epage | 9 | -
dc.identifier.volume | 2019 | en_US
dc.identifier.doi | 10.1155/2019/5065214 | en_US
dcterms.abstract | Deep learning models have been successfully applied to the analysis of various functional MRI data. Convolutional neural networks (CNNs), a class of deep neural networks, excel at extracting locally meaningful features thanks to their shared-weights architecture and spatial-invariance characteristics. In this study, we propose M2D CNN, a novel multichannel 2D CNN model, to classify 3D fMRI data. The model takes sliced 2D fMRI data as input and integrates the multichannel information learned from the 2D CNN networks. We experimentally compared the proposed M2D CNN against several widely used models, including SVM, 1D CNN, 2D CNN, 3D CNN, and 3D separable CNN, with respect to their performance in classifying task-based fMRI data. We tested M2D CNN against six benchmark models in classifying a large set of time-series whole-brain imaging data from a motor task in the Human Connectome Project (HCP). The results of our experiments demonstrate the following: (i) the convolution operations in the CNN models are advantageous for classifying high-dimensional whole-brain imaging data, as all CNN models outperform SVM; (ii) the 3D CNN models achieve higher accuracy than the 2D CNN and 1D CNN models, but they are computationally costly because an extra dimension is added to the input; (iii) the M2D CNN model proposed in this study achieves the highest accuracy and alleviates overfitting, given its smaller number of parameters compared with the 3D CNN. (An illustrative code sketch of the multichannel design follows this record.) | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Computational intelligence and neuroscience, 31 Dec. 2019, v. 2019, 5065214, p. 1-9 | en_US
dcterms.isPartOf | Computational Intelligence and Neuroscience | en_US
dcterms.issued | 2019 | -
dc.identifier.isi | WOS:000507293400001 | -
dc.identifier.scopus | 2-s2.0-85077943825 | -
dc.identifier.eissn | 1687-5273 | en_US
dc.identifier.artn | 5065214 | en_US
dc.description.validate | 202006 bcma | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a0435-n08 | en_US
dc.description.pubStatus | Published | en_US
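
The abstract above describes slicing a 3D fMRI volume into 2D planes and learning from each slice through its own 2D CNN channel before integrating the per-channel features. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' released implementation: the slicing axis, branch architecture, class count, and the 91 x 109 x 91 input grid (the 2 mm MNI volume used for HCP data) are all illustrative assumptions.

# Hypothetical sketch of a multichannel 2D CNN for sliced 3D fMRI volumes.
# Each 2D slice goes through its own small 2D CNN branch ("channel"); the
# per-slice features are concatenated and fed to a shared classifier.
# Layer sizes, the slicing axis, and n_classes are illustrative assumptions.
import torch
import torch.nn as nn


class MultichannelSliceCNN(nn.Module):
    def __init__(self, n_slices=91, n_classes=2):
        super().__init__()
        # One lightweight 2D CNN branch per slice.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),  # pool the 8 feature maps to 4 x 4
                nn.Flatten(),             # -> 8 * 4 * 4 = 128 features per slice
            )
            for _ in range(n_slices)
        )
        self.classifier = nn.Sequential(
            nn.Linear(n_slices * 128, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, volume):
        # volume: (batch, n_slices, height, width) -- one 3D brain volume per sample
        feats = [branch(volume[:, i : i + 1])  # (batch, 1, H, W) slice per branch
                 for i, branch in enumerate(self.branches)]
        return self.classifier(torch.cat(feats, dim=1))


if __name__ == "__main__":
    model = MultichannelSliceCNN()
    dummy = torch.randn(2, 91, 109, 91)  # two synthetic volumes, not real fMRI
    print(model(dummy).shape)            # torch.Size([2, 2])

Keeping each per-slice branch shallow and pooling aggressively is one way such a model can end up with fewer parameters than a full 3D convolutional stack, which is the accuracy-versus-cost trade-off the abstract highlights.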
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Hu_Convolutional_Neural_Network.pdf | - | 1.75 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 193 (last week: 2), as of Mar 24, 2024
Downloads: 146, as of Mar 24, 2024
SCOPUS™ citations: 32, as of Mar 28, 2024
Web of Science™ citations: 22, as of Mar 28, 2024

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.