Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/109510
DC Field | Value | Language |
---|---|---|
dc.contributor | School of Optometry | en_US |
dc.contributor | Centre for Eye and Vision Research | en_US |
dc.creator | Yan, Z | en_US |
dc.creator | Liang, Z | en_US |
dc.creator | Liu, Z | en_US |
dc.creator | Wang, S | en_US |
dc.creator | Chun, RKM | en_US |
dc.creator | Li, J | en_US |
dc.creator | Kee, CS | en_US |
dc.creator | Liang, D | en_US |
dc.date.accessioned | 2024-11-04T06:33:20Z | - |
dc.date.available | 2024-11-04T06:33:20Z | - |
dc.identifier.issn | 0302-9743 | en_US |
dc.identifier.uri | http://hdl.handle.net/10397/109510 | - |
dc.description | 27th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2024, Marrakesh, Morocco, October 6-10, 2024 | en_US |
dc.language.iso | en | en_US |
dc.publisher | Springer | en_US |
dc.subject | Denoising | en_US |
dc.subject | Feature learning | en_US |
dc.subject | Fundus photograph | en_US |
dc.subject | Generalization | en_US |
dc.subject | Vision loss estimation | en_US |
dc.subject | Visual field | en_US |
dc.title | Generalized robust fundus photography-based vision loss estimation for high myopia | en_US |
dc.type | Conference Paper | en_US |
dc.identifier.spage | 700 | en_US |
dc.identifier.epage | 710 | en_US |
dc.identifier.volume | 15001 | en_US |
dc.identifier.doi | 10.1007/978-3-031-72378-0_65 | en_US |
dcterms.abstract | High myopia significantly increases the risk of irreversible vision loss. Traditional perimetry-based visual field (VF) assessment provides systematic quantification of visual loss, but it is subjective and time-consuming. Consequently, machine learning models that estimate VF from fundus photographs have emerged as promising alternatives. However, due to the high variability and limited availability of VF data, existing VF estimation models fail to generalize well, particularly when facing out-of-distribution data across diverse centers and populations. To tackle this challenge, we propose a novel, parameter-efficient framework to enhance the generalized robustness of VF estimation on both in- and out-of-distribution data. Specifically, we design a Refinement-by-Denoising (RED) module for feature refinement and adaptation from pretrained vision models, aiming to learn high-entropy feature representations and to mitigate the domain gap effectively and efficiently. Through independent validation on two distinct real-world datasets from separate centers, our method significantly outperforms existing approaches in RMSE, MAE, and correlation coefficient for both internal and external validation. Our proposed framework benefits both in- and out-of-distribution VF estimation, offering significant clinical implications and potential utility in real-world ophthalmic practices. | en_US |
dcterms.accessRights | embargoed access | en_US |
dcterms.bibliographicCitation | Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and lecture notes in bioinformatics), 2024, v. 15001, p. 700-710 | en_US |
dcterms.isPartOf | Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and lecture notes in bioinformatics) | en_US |
dcterms.issued | 2024 | - |
dc.relation.ispartofbook | Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 : 27th International Conference, Marrakesh, Morocco, October 6–10, 2024, Proceedings, Part I | en_US |
dc.relation.conference | Medical Image Computing and Computer Assisted Intervention [MICCAI] | en_US |
dc.identifier.eissn | 1611-3349 | en_US |
dc.description.validate | 202411 bcch | en_US |
dc.description.oa | Not applicable | en_US |
dc.identifier.FolderNumber | a3267 | - |
dc.identifier.SubFormID | 49855 | - |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | Centre for Eye and Vision Research (CEVR), InnoHK CEVR Project 1.5, 17W Hong Kong Science Park, Hong Kong; Research Centre for SHARP Vision (RCSV), Hong Kong Polytechnic University, Hong Kong; Research Institute for Artificial Intelligence of Things (RIAIoT), Hong Kong Polytechnic University, Hong Kong | en_US |
dc.description.pubStatus | Published | en_US |
dc.date.embargo | 2025-10-03 | en_US |
dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: | Conference Paper |