Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113627
DC Field | Value | Language
dc.contributor | Department of Biomedical Engineering | -
dc.contributor | Research Institute for Smart Ageing | -
dc.creator | Yang, G | en_US
dc.creator | Du, K | en_US
dc.creator | Yang, Z | en_US
dc.creator | Du, Y | en_US
dc.creator | Cheung, EYW | en_US
dc.creator | Zheng, Y | en_US
dc.creator | Yang, M | en_US
dc.creator | Kourtzi, Z | en_US
dc.creator | Schönlieb, CB | en_US
dc.creator | Wang, S | en_US
dc.date.accessioned | 2025-06-16T00:36:56Z | -
dc.date.available | 2025-06-16T00:36:56Z | -
dc.identifier.issn | 2168-2194 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/113627 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.subject | Alzheimer's Disease | en_US
dc.subject | Contrastive Learning | en_US
dc.subject | Dementia | en_US
dc.subject | Foundation Model | en_US
dc.subject | Multi-modal Learning | en_US
dc.title | ADFound: a foundation model for diagnosis and prognosis of Alzheimer's disease | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.doi | 10.1109/JBHI.2025.3576436 | en_US
dcterms.abstract | Alzheimer's disease (AD) is an incurable neurodegenerative disorder characterized by progressive cognitive and functional decline. Consequently, early diagnosis and accurate prediction of disease progression are of paramount importance and inherently complex, necessitating the integration of multi-modal data. However, most existing methods are task-specific models that lack generalization ability, addressing only one task at a time and failing to simultaneously assess disease diagnosis and progression. In this paper, we introduce ADFound, the first foundation model for AD that serves as a basis for various downstream tasks, such as diagnosis and prognosis, with high generalization capability. ADFound leverages a substantial amount of unlabeled 3D multi-modal neuroimaging, including paired and unpaired data, to achieve its objectives. Specifically, ADFound is developed upon the Multi-modal Vim encoder, built from Vision Mamba blocks, to capture long-range dependencies inherent in 3D multi-modal medical images. To efficiently pre-train ADFound on unlabeled paired and unpaired multi-modal neuroimaging data, we propose a novel self-supervised learning framework that integrates a multi-modal masked autoencoder (MAE) and contrastive learning. The multi-modal MAE aims to learn local relations among modalities by reconstructing images from unmasked image patches. Additionally, we introduce a Dual Contrastive Learning for Multi-modal Data to enhance the discriminative capabilities of multi-modal representations from intra-modal and inter-modal perspectives. Our experiments demonstrate that ADFound outperforms state-of-the-art methods across a wide range of downstream tasks relevant to the diagnosis and prognosis of AD. Furthermore, the results indicate that our foundation model can be extended to more modalities, such as non-image data, showing its versatility. The code is available at https://github.com/guangqianyang/ADFound.git. | -
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | IEEE journal of biomedical and health informatics, Date of Publication: 03 June 2025, Early Access, https://doi.org/10.1109/JBHI.2025.3576436 | en_US
dcterms.isPartOf | IEEE journal of biomedical and health informatics | en_US
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-105007424227 | -
dc.identifier.eissn | 2168-2208 | en_US
dc.description.validate | 202506 bcch | -
dc.identifier.FolderNumber | a3701 | -
dc.identifier.SubFormID | 50757 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Start-up Fund of The Hong Kong Polytechnic University; Seed Fund of the Research Institute for Smart Ageing; Tsinghua-PolyU Joint Research Initiative Fund | en_US
dc.description.pubStatus | Early release | en_US
dc.date.embargo | 0000-00-00 (to be updated) | en_US
dc.description.oaCategory | Green (AAM) | en_US
Appears in Collections: Journal/Magazine Article
Open Access Information
Status: embargoed access
Embargo End Date: 0000-00-00 (to be updated)