Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/104987
Title: ICGA-GPT: report generation and question answering for indocyanine green angiography images
Authors: Chen, X 
Zhang, W 
Zhao, Z 
Xu, P
Zheng, Y
Shi, D 
He, M 
Issue Date: 2024
Source: British journal of ophthalmology, First published March 20, 2024, Online First, https://doi.org/10.1136/bjo-2023-324446
Abstract: Background: Indocyanine green angiography (ICGA) is vital for diagnosing chorioretinal diseases, but its interpretation and the associated patient communication require extensive expertise and time-consuming effort. We aimed to develop a bilingual ICGA report generation and question-answering (QA) system.
Methods: Our dataset comprised 213 129 ICGA images from 2919 participants. The system operated in two stages: image–text alignment for report generation using a multimodal transformer architecture, followed by large language model (LLM)-based QA over the generated ICGA text reports and human-input questions. Performance was assessed using both quantitative metrics (Bilingual Evaluation Understudy (BLEU), Consensus-based Image Description Evaluation (CIDEr), Recall-Oriented Understudy for Gisting Evaluation-Longest Common Subsequence (ROUGE-L), Semantic Propositional Image Caption Evaluation (SPICE), accuracy, sensitivity, specificity, precision and F1 score; see the scoring sketch after the abstract) and subjective evaluation by three experienced ophthalmologists using 5-point scales (5 = highest quality).
Results: We produced 8757 ICGA reports covering 39 disease-related conditions after bilingual translation (66.7% English, 33.3% Chinese). For report generation, the ICGA-GPT model achieved BLEU-1 to BLEU-4 scores of 0.48, 0.44, 0.40 and 0.37; CIDEr of 0.82; ROUGE-L of 0.41; and SPICE of 0.18. For disease-based metrics, the average specificity, accuracy, precision, sensitivity and F1 score were 0.98, 0.94, 0.70, 0.68 and 0.64, respectively. In grading the quality of 50 images (100 reports), the three ophthalmologists achieved substantial agreement (kappa=0.723 for completeness, kappa=0.738 for accuracy), with mean scores ranging from 3.20 to 3.55. In an interactive QA scenario involving 100 generated answers, the ophthalmologists gave scores of 4.24, 4.22 and 4.10, displaying good consistency (kappa=0.779; see the agreement sketch after the abstract).
Conclusion: This study is the first to introduce the ICGA-GPT model for ICGA report generation and interactive QA, underscoring the potential of LLMs in assisting with automated ICGA image interpretation.
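
Note on the quantitative metrics: the overlap scores reported in Results (BLEU-1 to BLEU-4 and ROUGE-L) compare a generated report against a reference report at the n-gram and longest-common-subsequence level. The paper does not state which toolkit was used; the following minimal Python sketch, with invented example sentences, shows one standard way to compute them (NLTK for BLEU, a hand-rolled LCS for ROUGE-L).

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Toy reference/candidate report sentences (illustrative only)
reference = ["late hyperfluorescent plaque in the macula".split()]
candidate = "hyperfluorescent plaque in the macula".split()

# BLEU-1 .. BLEU-4: uniform n-gram weights, smoothed for short texts
smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))
    score = sentence_bleu(reference, candidate, weights=weights,
                          smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.2f}")

def rouge_l(ref_text, cand_text, beta=1.2):
    """ROUGE-L F-measure from the longest common subsequence (LCS)."""
    ref, cand = ref_text.split(), cand_text.split()
    # Dynamic-programming table for LCS length
    dp = [[0] * (len(cand) + 1) for _ in range(len(ref) + 1)]
    for i, r in enumerate(ref, 1):
        for j, c in enumerate(cand, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if r == c else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(cand), lcs / len(ref)
    return (1 + beta ** 2) * prec * rec / (rec + beta ** 2 * prec)

print(f"ROUGE-L: {rouge_l('late hyperfluorescent plaque in the macula', 'hyperfluorescent plaque in the macula'):.2f}")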
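
Note on the agreement statistics: the kappa values in Results (0.723, 0.738, 0.779) quantify consistency among the three grading ophthalmologists. With more than two raters, Fleiss' kappa is the usual choice, although the paper does not specify which variant was computed; the sketch below uses statsmodels with toy 5-point ratings purely to illustrate the calculation.

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = graded reports, columns = the three graders' 5-point scores (toy data)
ratings = np.array([
    [4, 4, 4],
    [3, 3, 4],
    [5, 5, 5],
    [4, 4, 3],
    [2, 2, 2],
    [4, 5, 4],
])

# Convert rater labels to per-item category counts, then compute Fleiss' kappa
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa = {fleiss_kappa(table):.3f}")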
Publisher: BMJ Publishing Group
Journal: British journal of ophthalmology 
ISSN: 0007-1161
EISSN: 1468-2079
DOI: 10.1136/bjo-2023-324446
Rights: © Author(s) (or their employer(s)) 2024. No commercial re-use. See rights and permissions. Published by BMJ.
This article has been accepted for publication in British journal of ophthalmology, 2024 following peer review, and the Version of Record can be accessed online at https://doi.org/10.1136/bjo-2023-324446.
Appears in Collections:Journal/Magazine Article

Files in This Item:
File: Chen_ICGA-GPT_Report_Generation.pdf
Description: Pre-Published version
Size: 9.65 MB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
