Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/104983
Title: Bridging the camera domain gap with image-to-image translation improves glaucoma diagnosis
Authors: He, S
Joseph, S
Bulloch, G
Jiang, F
Kasturibai, H
Kim, R
Ravilla, TD
Wang, Y
Shi, D 
He, M
Issue Date: Dec-2023
Source: Translational vision science & technology, Dec. 2023, v. 12, no. 12, 20
Abstract: Purpose: To improve the automated diagnosis of glaucomatous optic neuropathy (GON), we propose a generative adversarial network (GAN) model that translates Optain images to Topcon images.
Methods: We trained the GAN model on 725 paired images from Topcon and Optain cameras and externally validated it using an additional 843 paired images collected from the Aravind Eye Hospital in India. An optic disc segmentation model was used to assess the disparities in disc parameters across cameras. The performance of the translated images was evaluated using root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), 95% limits of agreement (LOA), Pearson's correlations, and Cohen's Kappa coefficient. The evaluation compared the performance of the GON model on Topcon photographs as a reference to that of Optain photographs and GAN-translated photographs.
Results: The GAN model significantly reduced Optain false-positive results for GON diagnosis. The RMSE, PSNR, and SSIM of GAN-translated images were 0.067, 14.31, and 0.64, respectively. The mean differences in VCDR and cup-to-disc area ratio between Topcon and GAN images were both 0.03, with 95% LOA ranging from −0.09 to 0.15 and from −0.05 to 0.10, respectively. Pearson correlation coefficients increased from 0.61 to 0.85 for VCDR and from 0.70 to 0.89 for cup-to-disc area ratio, and Cohen's kappa improved from 0.32 to 0.60 after GAN translation.
Conclusions: Image-to-image translation across cameras can be achieved using a GAN, which solves the problem of disc overexposure in Optain cameras.
Translational Relevance: Our approach enhances the generalizability of deep learning diagnostic models, ensuring their performance on cameras outside the original training data set.
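The image-quality metrics reported in the abstract (RMSE, PSNR, SSIM) can be sketched for paired fundus photographs as below. This is a minimal illustration, not the authors' evaluation code; it assumes images are float arrays scaled to [0, 1], and it computes a single global SSIM rather than the windowed average used by the standard metric.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two same-shaped image arrays."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB, assuming pixel range [0, max_val]."""
    mse = np.mean((a - b) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

def ssim_global(a, b, max_val=1.0):
    """Global (single-window) SSIM; the standard metric averages local windows."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2))
                 / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

In the study, these metrics compare each GAN-translated Optain photograph against its paired Topcon photograph; lower RMSE and higher PSNR/SSIM indicate a smaller residual camera-domain gap.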
Keywords: Deep learning model performance
Generative adversarial networks
Image-to-image translation
Portable fundus camera
Publisher: Association for Research in Vision and Ophthalmology
Journal: Translational vision science & technology 
EISSN: 2164-2591
DOI: 10.1167/tvst.12.12.20
Rights: Copyright 2023 The Authors
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nc-nd/4.0).
The following publication Shuang He, Sanil Joseph, Gabriella Bulloch, Feng Jiang, Hariharasubramanian Kasturibai, Ramasamy Kim, Thulasiraj D. Ravilla, Yueye Wang, Danli Shi, Mingguang He; Bridging the Camera Domain Gap With Image-to-Image Translation Improves Glaucoma Diagnosis. Trans. Vis. Sci. Tech. 2023;12(12):20 is available at https://doi.org/10.1167/tvst.12.12.20.
Appears in Collections:Journal/Magazine Article

Files in This Item:
File: i2164-2591-12-12-20_1703160495.98945.pdf
Size: 2.65 MB
Format: Adobe PDF
Open Access Information
Status open access
File Version Version of Record
