Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113627
Title: ADFound: a foundation model for diagnosis and prognosis of Alzheimer's disease
Authors: Yang, G 
Du, K
Yang, Z
Du, Y 
Cheung, EYW
Zheng, Y 
Yang, M 
Kourtzi, Z
Schönlieb, CB
Wang, S 
Issue Date: 2025
Source: IEEE journal of biomedical and health informatics, Date of Publication: 03 June 2025, Early Access, https://doi.org/10.1109/JBHI.2025.3576436
Abstract: Alzheimer's disease (AD) is an incurable neurodegenerative disorder characterized by progressive cognitive and functional decline. Early diagnosis and accurate prediction of disease progression are therefore of paramount importance, yet inherently complex, necessitating the integration of multi-modal data. However, most existing methods are task-specific models that lack generalization ability, addressing only one task at a time and failing to assess disease diagnosis and progression simultaneously. In this paper, we introduce ADFound, the first foundation model for AD, which serves as a basis for various downstream tasks, such as diagnosis and prognosis, with high generalization capability. ADFound leverages a substantial amount of unlabeled 3D multi-modal neuroimaging, including paired and unpaired data, to achieve its objectives. Specifically, ADFound is built upon a multi-modal Vision Mamba (Vim) encoder, whose Vision Mamba blocks capture the long-range dependencies inherent in 3D multi-modal medical images. To efficiently pre-train ADFound on unlabeled paired and unpaired multi-modal neuroimaging data, we propose a novel self-supervised learning framework that integrates a multi-modal masked autoencoder (MAE) and contrastive learning. The multi-modal MAE learns local relations among modalities by reconstructing images from the unmasked image patches. Additionally, we introduce Dual Contrastive Learning for Multi-modal Data to enhance the discriminative capability of multi-modal representations from intra-modal and inter-modal perspectives. Our experiments demonstrate that ADFound outperforms state-of-the-art methods across a wide range of downstream tasks relevant to the diagnosis and prognosis of AD. Furthermore, the results indicate that our foundation model can be extended to further modalities, such as non-image data, showing its versatility. The code is available at https://github.com/guangqianyang/ADFound.git.
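The sketch below is a minimal, illustrative rendering (not the authors' code) of the composite self-supervised objective the abstract describes: an MAE reconstruction term computed only on masked patches, plus intra-modal and inter-modal contrastive (InfoNCE) terms. The tensor shapes, the 75% masking ratio, and the equal loss weighting are assumptions made for illustration; see the linked repository for the actual implementation.

# Illustrative sketch of the pre-training objective: multi-modal MAE
# reconstruction combined with dual (intra- and inter-modal) contrastive
# losses. All shapes and weights are assumptions, not the authors' values.
import torch
import torch.nn.functional as F

def mae_loss(pred, target, mask):
    """MSE restricted to masked patches, as in a standard masked autoencoder.
    pred/target: (B, N, D) flattened 3D patches; mask: (B, N), 1 = masked."""
    per_patch = ((pred - target) ** 2).mean(dim=-1)       # (B, N)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)

def info_nce(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE; row i of z_a and z_b form a positive pair."""
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                  # (B, B) similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

# Toy batch: B paired MRI/PET volumes, each tokenized into N patches of
# dimension D, with global embeddings of dimension E per modality and view.
B, N, D, E = 4, 216, 768, 256
mask = (torch.rand(B, N) < 0.75).float()                  # assumed 75% masking
recon = mae_loss(torch.randn(B, N, D), torch.randn(B, N, D), mask)

z_mri_v1, z_mri_v2 = torch.randn(B, E), torch.randn(B, E)  # two augmented views
z_pet_v1, z_pet_v2 = torch.randn(B, E), torch.randn(B, E)
intra = info_nce(z_mri_v1, z_mri_v2) + info_nce(z_pet_v1, z_pet_v2)
inter = info_nce(z_mri_v1, z_pet_v1)                      # paired modalities

total = recon + intra + inter   # equal weighting assumed for illustration
print(f"recon={recon:.3f} intra={intra:.3f} inter={inter:.3f} total={total:.3f}")

Note that the inter-modal term requires matched MRI/PET pairs as positives; presumably this is why the framework can also consume unpaired scans, which would contribute only to the reconstruction and intra-modal terms.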
Keywords: Alzheimer's Disease
Contrastive Learning
Dementia
Foundation Model
Multi-modal Learning
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE journal of biomedical and health informatics 
ISSN: 2168-2194
EISSN: 2168-2208
DOI: 10.1109/JBHI.2025.3576436
Appears in Collections: Journal/Magazine Article

Open Access Information
Status: Embargoed access
Embargo End Date: 0000-00-00 (to be updated)