Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/106629
DC Field | Value | Language |
---|---|---|
dc.contributor | School of Optometry | en_US |
dc.contributor | Research Centre for SHARP Vision | en_US |
dc.creator | Chen, X | en_US |
dc.creator | Zhang, W | en_US |
dc.creator | Xu, P | en_US |
dc.creator | Zhao, Z | en_US |
dc.creator | Zheng, Y | en_US |
dc.creator | Shi, D | en_US |
dc.creator | He, M | en_US |
dc.date.accessioned | 2024-05-20T08:40:49Z | - |
dc.date.available | 2024-05-20T08:40:49Z | - |
dc.identifier.uri | http://hdl.handle.net/10397/106629 | - |
dc.language.iso | en | en_US |
dc.publisher | Nature Publishing Group | en_US |
dc.rights | © The Author(s) 2024 | en_US |
dc.rights | This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | en_US |
dc.rights | The following publication Chen, X., Zhang, W., Xu, P. et al. FFA-GPT: an automated pipeline for fundus fluorescein angiography interpretation and question-answer. npj Digit. Med. 7, 111 (2024) is available at https://doi.org/10.1038/s41746-024-01101-z. | en_US |
dc.title | FFA-GPT: an automated pipeline for fundus fluorescein angiography interpretation and question-answer | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.volume | 7 | en_US |
dc.identifier.doi | 10.1038/s41746-024-01101-z | en_US |
dcterms.abstract | Fundus fluorescein angiography (FFA) is a crucial diagnostic tool for chorioretinal diseases, but its interpretation requires significant expertise and time. Prior studies have used Artificial Intelligence (AI)-based systems to assist FFA interpretation, but these systems lack user interaction and comprehensive evaluation by ophthalmologists. Here, we used large language models (LLMs) to develop an automated interpretation pipeline for both report generation and medical question-answering (QA) for FFA images. The pipeline comprises two parts: an image-text alignment module (Bootstrapping Language-Image Pre-training) for report generation and an LLM (Llama 2) for interactive QA. The model was developed using 654,343 FFA images with 9392 reports. It was evaluated both automatically, using language-based and classification-based metrics, and manually by three experienced ophthalmologists. The automatic evaluation of the generated reports demonstrated that the system can generate coherent and comprehensible free-text reports, achieving a BERTScore of 0.70 and F1 scores ranging from 0.64 to 0.82 for detecting top-5 retinal conditions. The manual evaluation revealed acceptable accuracy (68.3%, Kappa 0.746) and completeness (62.3%, Kappa 0.739) of the generated reports. The generated free-form answers were evaluated manually, with the majority meeting the ophthalmologists’ criteria (error-free: 70.7%, complete: 84.0%, harmless: 93.7%, satisfied: 65.3%, Kappa: 0.762–0.834). This study introduces an innovative framework that combines multi-modal transformers and LLMs, enhancing ophthalmic image interpretation, and facilitating interactive communications during medical consultation. | en_US |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | npj digital medicine, 2024, v. 7, 111 | en_US |
dcterms.isPartOf | npj digital medicine | en_US |
dcterms.issued | 2024 | - |
dc.identifier.eissn | 2398-6352 | en_US |
dc.identifier.artn | 111 | en_US |
dc.description.validate | 202405 bcch | en_US |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | a2711 | - |
dc.identifier.SubFormID | 48110 | - |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | Start-up Fund for RAPs under the Strategic Hiring Scheme | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | CC | en_US |
dc.relation.rdata | https://osf.io/5rvsh/?view_only=38c6988083e54521a227aace9acb98f6 | en_US |
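The abstract above reports per-condition F1 scores (0.64–0.82 for the top-5 retinal conditions) among its classification-based metrics. A minimal sketch of how such a per-condition F1 can be computed from paired free-text reports, assuming a simple keyword-matching rule; the condition names, matching rule, and sample reports are invented for illustration and are not the authors' actual labelling scheme:

```python
# Hypothetical sketch of a classification-based report evaluation: each
# free-text report is reduced to whether it mentions a given condition,
# and an F1 score is computed against the paired reference reports.

def f1_for_condition(condition, generated_reports, reference_reports):
    """Per-condition F1 over paired (generated, reference) reports."""
    tp = fp = fn = 0
    for gen, ref in zip(generated_reports, reference_reports):
        pred = condition in gen.lower()  # condition named in generated report?
        true = condition in ref.lower()  # condition named in reference report?
        tp += pred and true
        fp += pred and not true
        fn += (not pred) and true
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented example reports (not from the study's dataset):
generated = [
    "late leakage consistent with diabetic retinopathy",
    "no abnormal fluorescence; normal study",
]
reference = [
    "diabetic retinopathy with cystoid macular edema",
    "unremarkable angiogram",
]
print(f1_for_condition("diabetic retinopathy", generated, reference))  # → 1.0
```

In the study itself, the generated reports come from the BLIP image-text module and the references are the clinician-written reports; a real pipeline would replace the keyword match with a proper condition-extraction step.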
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
s41746-024-01101-z.pdf | | 1.33 MB | Adobe PDF |
Page views: 20 (as of Jun 30, 2024)
Downloads: 6 (as of Jun 30, 2024)
SCOPUS™ citations: 1 (as of Jun 21, 2024)
Web of Science™ citations: 1 (as of Jun 27, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.