Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114620
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | He, Z | -
dc.creator | Xiao, Z | -
dc.creator | Xu, Z | -
dc.creator | Li, Y | -
dc.creator | Song, Z | -
dc.creator | Leighton, C | -
dc.creator | Wang, L | -
dc.creator | Liu, S | -
dc.creator | Wong, SY | -
dc.creator | Huang, W | -
dc.creator | Jia, W | -
dc.creator | Lam, KM | -
dc.date.accessioned | 2025-08-18T03:02:19Z | -
dc.date.available | 2025-08-18T03:02:19Z | -
dc.identifier.issn | 0277-786X | -
dc.identifier.uri | http://hdl.handle.net/10397/114620 | -
dc.description | International Workshop on Advanced Imaging Technology (IWAIT) 2025, 6-8 January 2025, Douliu City, Taiwan | en_US
dc.language.iso | en | en_US
dc.publisher | SPIE - International Society for Optical Engineering | en_US
dc.rights | Copyright 2024 Society of Photo-Optical Instrumentation Engineers (SPIE). One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this publication for a fee or for commercial purposes, and modification of the contents of the publication are prohibited. | en_US
dc.rights | The following publication Zongqi He, Zhe Xiao, Zhuoning Xu, Yunze Li, Zelin Song, Calvin Leighton, Li Wang, Shanru Liu, Shiun Yee Wong, Wenfeng Huang, Wenjing Jia, and Kin-Man Lam "MFGAN: OCT image super-resolution and enhancement with blind degradation and multi-frame fusion", Proc. SPIE 13510, International Workshop on Advanced Imaging Technology (IWAIT) 2025, 1351005 (5 February 2025) is available at https://doi.org/10.1117/12.3057230. | en_US
dc.subject | Blind Degradation | en_US
dc.subject | Image Enhancement | en_US
dc.subject | Multi-frame Fusion | en_US
dc.subject | OCT Image Super-resolution | en_US
dc.title | MFGAN: OCT image super-resolution and enhancement with blind degradation and multi-frame fusion | en_US
dc.type | Conference Paper | en_US
dc.identifier.volume | 13510 | -
dc.identifier.doi | 10.1117/12.3057230 | -
dcterms.abstract | Optical coherence tomography (OCT) is crucial in medical imaging, especially for retinal diagnostics. However, its effectiveness is often limited by imaging devices, resulting in high noise levels, low resolution, and reduced sampling rates, which hinder OCT image diagnosis. This paper proposes a generative adversarial network (GAN)-based OCT image super-resolution framework, namely MFGAN, which leverages a blind degradation and multi-frame fusion mechanism for retinal OCT image super-resolution. Our method jointly performs denoising, blind super-resolution, and multi-frame fusion, reconstructing high-quality OCT images without requiring paired ground-truth data. We employ a blind degradation model to handle OCT image degradation and a denoising prior to effectively process noisy inputs. Experimental results on the PKU37 dataset and the VIP Cup 2024 dataset demonstrate that MFGAN excels in both visual quality and quantitative performance, outperforming existing OCT image super-resolution methods. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Proceedings of SPIE : the International Society for Optical Engineering, 2025, v. 13510, 1351005 | -
dcterms.isPartOf | Proceedings of SPIE : the International Society for Optical Engineering | -
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-85218350553 | -
dc.relation.conference | International Workshop on Advanced Imaging Technology [IWAIT] | -
dc.identifier.eissn | 1996-756X | -
dc.identifier.artn | 1351005 | -
dc.description.validate | 202508 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Others | en_US
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | VoR allowed | en_US
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
1351005.pdf | | 696.93 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.