Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/109510
DC Field | Value | Language
dc.contributor | School of Optometry | en_US
dc.contributor | Centre for Eye and Vision Research | en_US
dc.creator | Yan, Z | en_US
dc.creator | Liang, Z | en_US
dc.creator | Liu, Z | en_US
dc.creator | Wang, S | en_US
dc.creator | Chun, RKM | en_US
dc.creator | Li, J | en_US
dc.creator | Kee, CS | en_US
dc.creator | Liang, D | en_US
dc.date.accessioned | 2024-11-04T06:33:20Z | -
dc.date.available | 2024-11-04T06:33:20Z | -
dc.identifier.issn | 0302-9743 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/109510 | -
dc.description | 27th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2024, Marrakesh, Morocco, October 6-10, 2024 | en_US
dc.language.iso | en | en_US
dc.publisher | Springer | en_US
dc.subject | Denoising | en_US
dc.subject | Feature learning | en_US
dc.subject | Fundus photograph | en_US
dc.subject | Generalization | en_US
dc.subject | Vision loss estimation | en_US
dc.subject | Visual field | en_US
dc.title | Generalized robust fundus photography-based vision loss estimation for high myopia | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 700 | en_US
dc.identifier.epage | 710 | en_US
dc.identifier.volume | 15001 | en_US
dc.identifier.doi | 10.1007/978-3-031-72378-0_65 | en_US
dcterms.abstract | High myopia significantly increases the risk of irreversible vision loss. Traditional perimetry-based visual field (VF) assessment provides systematic quantification of vision loss, but it is subjective and time-consuming. Consequently, machine learning models that estimate VF from fundus photographs have emerged as promising alternatives. However, owing to the high variability and limited availability of VF data, existing VF estimation models generalize poorly, particularly on out-of-distribution data from diverse centers and populations. To tackle this challenge, we propose a novel, parameter-efficient framework to enhance the generalized robustness of VF estimation on both in- and out-of-distribution data. Specifically, we design a Refinement-by-Denoising (RED) module that refines and adapts features from pretrained vision models, aiming to learn high-entropy feature representations and to mitigate the domain gap effectively and efficiently. In independent validation on two distinct real-world datasets from separate centers, our method significantly outperforms existing approaches in RMSE, MAE, and correlation coefficient for both internal and external validation. Our proposed framework benefits both in- and out-of-distribution VF estimation, offering significant clinical implications and potential utility in real-world ophthalmic practice. | en_US
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and lecture notes in bioinformatics), 2024, v. 15001, p. 700-710 | en_US
dcterms.isPartOf | Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and lecture notes in bioinformatics) | en_US
dcterms.issued | 2024 | -
dc.relation.ispartofbook | Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 : 27th International Conference, Marrakesh, Morocco, October 6–10, 2024, Proceedings, Part I | en_US
dc.relation.conference | Medical Image Computing and Computer Assisted Intervention [MICCAI] | en_US
dc.identifier.eissn | 1611-3349 | en_US
dc.description.validate | 202411 bcch | en_US
dc.description.oa | Not applicable | en_US
dc.identifier.FolderNumber | a3267 | -
dc.identifier.SubFormID | 49855 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Centre for Eye and Vision Research (CEVR), InnoHK CEVR Project 1.5, 17W Hong Kong Science Park, Hong Kong; Research Centre for SHARP Vision (RCSV), Hong Kong Polytechnic University, Hong Kong; Research Institute for Artificial Intelligence of Things (RIAIoT), Hong Kong Polytechnic University, Hong Kong | en_US
dc.description.pubStatus | Published | en_US
dc.date.embargo | 2025-10-03 | en_US
dc.description.oaCategory | Green (AAM) | en_US
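The dcterms.abstract row above outlines the method: features from a pretrained vision model are refined by a Refinement-by-Denoising (RED) module and regressed to visual-field (VF) sensitivities, with validation reported as RMSE, MAE, and correlation coefficient. The record carries no implementation details, so the following Python sketch is a hypothetical illustration of that idea only: REDHead, FEATURE_DIM, VF_POINTS, the Gaussian noise level, and the bottleneck width are all assumed names and values, not the authors' architecture.

```python
# Hypothetical sketch only: this record does not include the paper's code,
# so every name and value below is an illustrative assumption.
import torch
import torch.nn as nn

FEATURE_DIM = 768  # assumed width of a frozen pretrained-backbone feature
VF_POINTS = 52     # assumed number of VF test points to regress


class REDHead(nn.Module):
    """Parameter-efficient refinement-by-denoising head (assumed design).

    Gaussian noise corrupts the frozen backbone features at training time;
    a small residual bottleneck learns to denoise them before VF regression,
    mirroring the abstract's idea of refining pretrained features into
    domain-robust, high-entropy representations.
    """

    def __init__(self, dim=FEATURE_DIM, bottleneck=128, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.refine = nn.Sequential(             # few trainable parameters
            nn.Linear(dim, bottleneck),
            nn.GELU(),
            nn.Linear(bottleneck, dim),
        )
        self.regressor = nn.Linear(dim, VF_POINTS)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        if self.training:                        # corrupt only during training
            feats = feats + self.noise_std * torch.randn_like(feats)
        refined = feats + self.refine(feats)     # residual denoising step
        return self.regressor(refined)           # per-point VF sensitivities


if __name__ == "__main__":
    head = REDHead().eval()
    feats = torch.randn(4, FEATURE_DIM)          # stand-in fundus features
    print(head(feats).shape)                     # torch.Size([4, 52])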
Appears in Collections: Conference Paper
Open Access Information
Status: embargoed access
Embargo End Date: 2025-10-03
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.