Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/115005
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Chinese and Bilingual Studies | - |
| dc.creator | Yan, HJ | - |
| dc.creator | Mubonanyikuzo, V | - |
| dc.creator | Komolafe, TE | - |
| dc.creator | Zhou, L | - |
| dc.creator | Wu, T | - |
| dc.creator | Wang, NZ | - |
| dc.date.accessioned | 2025-09-02T00:32:03Z | - |
| dc.date.available | 2025-09-02T00:32:03Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/115005 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Public Library of Science | en_US |
| dc.rights | © 2025 Yan et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. | en_US |
| dc.rights | The following publication Yan H, Mubonanyikuzo V, Komolafe TE, Zhou L, Wu T, Wang N (2025) Hybrid-RViT: Hybridizing ResNet-50 and Vision Transformer for Enhanced Alzheimer’s disease detection. PLoS ONE 20(2): e0318998 is available at https://dx.doi.org/10.1371/journal.pone.0318998. | en_US |
| dc.title | Hybrid-RViT: Hybridizing ResNet-50 and Vision Transformer for enhanced Alzheimer's disease detection | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 20 | - |
| dc.identifier.issue | 2 | - |
| dc.identifier.doi | 10.1371/journal.pone.0318998 | - |
| dcterms.abstract | Alzheimer's disease (AD) is a leading cause of disability worldwide. Early detection is critical for preventing progression and formulating effective treatment plans. This study aims to develop a novel deep learning (DL) model, Hybrid-RViT, to enhance the detection of AD. The proposed Hybrid-RViT model integrates a pre-trained convolutional neural network (ResNet-50) with the Vision Transformer (ViT) to classify brain MRI images across different stages of AD. ResNet-50, adopted for transfer learning, provides inductive bias and feature extraction. Concurrently, ViT processes sequences of image patches to capture long-distance relationships via a self-attention mechanism, so the combined model functions as a joint local-global feature extractor. The Hybrid-RViT model achieved a training accuracy of 97% and a testing accuracy of 95%, outperforming previous models. This demonstrates its potential efficacy in accurately identifying and classifying AD stages from brain MRI data. The Hybrid-RViT model, combining ResNet-50 and ViT, shows superior performance in AD detection, highlighting its potential as a valuable tool for medical professionals in interpreting and analyzing brain MRI images. This model could significantly improve early diagnosis and intervention strategies for AD. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | PLoS one, 2025, v. 20, no. 2, e0318998 | - |
| dcterms.isPartOf | PLoS one | - |
| dcterms.issued | 2025 | - |
| dc.identifier.isi | WOS:001438464500022 | - |
| dc.identifier.eissn | 1932-6203 | - |
| dc.identifier.artn | e0318998 | - |
| dc.description.validate | 202509 bcrc | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | National Natural Science Foundation of China; “HaiYan Plan” Scientific Research Funding Project of Lianyungang City; The First People’s Hospital of Lianyungang–Advanced Technology Support Project; Project of Huaguoshan Mountain Talent Plan - Doctors for Innovation and Entrepreneurship, The Hong Kong Polytechnic University Start-up Fund | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
Appears in Collections: Journal/Magazine Article
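
The abstract above describes the Hybrid-RViT design: a pre-trained ResNet-50 backbone supplies convolutional inductive bias and local features, while a Vision Transformer encoder attends over the resulting tokens to capture long-range relationships. Below is a minimal PyTorch sketch of that style of hybrid; the class name `HybridResNetViT`, the 224×224 input size, the four-class output, and the transformer depth/width are illustrative assumptions, not the published Hybrid-RViT configuration.

```python
# Minimal sketch of a ResNet-50 + ViT hybrid classifier (assumes PyTorch and
# torchvision >= 0.13). Layer sizes, patch handling, and class count are
# illustrative assumptions, not the authors' exact Hybrid-RViT architecture.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights


class HybridResNetViT(nn.Module):
    def __init__(self, num_classes=4, embed_dim=768, depth=4, num_heads=8):
        super().__init__()
        # Pre-trained ResNet-50 backbone (transfer learning); drop avgpool/fc
        # so it yields a 2048-channel spatial feature map.
        backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        # Project each spatial location of the CNN feature map to a token.
        self.proj = nn.Conv2d(2048, embed_dim, kernel_size=1)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # Assumes 224x224 inputs: a 7x7 feature map -> 49 tokens + 1 class token.
        self.pos_embed = nn.Parameter(torch.zeros(1, 50, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                              # (B, 2048, 7, 7)
        tokens = self.proj(feats).flatten(2).transpose(1, 2)  # (B, 49, D)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        tokens = self.encoder(tokens)                         # global self-attention
        return self.head(tokens[:, 0])                        # classify from class token


if __name__ == "__main__":
    model = HybridResNetViT(num_classes=4)
    logits = model(torch.randn(2, 3, 224, 224))  # two dummy MRI slices
    print(logits.shape)                          # torch.Size([2, 4])
```

Feeding the ResNet feature map through a 1×1 convolution to form the transformer's input sequence follows the hybrid-ViT pattern described in the original Vision Transformer work; whether Hybrid-RViT tokenizes at this stage or elsewhere is not specified in the record above.
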
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| journal.pone.0318998.pdf | | 1.75 MB | Adobe PDF | View/Open |