Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/112832
Title: Evaluating large language models and agents in healthcare: key challenges in clinical applications
Authors: Chen, X 
Xiang, J
Lu, S
Liu, Y
He, M 
Shi, D 
Issue Date: May-2025
Source: Intelligent medicine, May 2025, v. 5, no. 2, p. 151-163
Abstract: Large language models (LLMs) have emerged as transformative tools with significant potential across healthcare and medicine. In clinical settings, they hold promise for tasks ranging from clinical decision support to patient education. Advances in LLM agents further broaden their utility by enabling multimodal processing and multitask handling in complex clinical workflows. However, evaluating the performance of LLMs in medical contexts presents unique challenges due to the high-risk nature of healthcare and the complexity of medical data. This paper provides a comprehensive overview of current evaluation practices for LLMs and LLM agents in medicine. Our contributions are threefold. First, we summarize the data sources used in evaluations, including existing medical resources and manually designed clinical questions, offering a basis for LLM evaluation in medical settings. Second, we analyze key medical task scenarios: closed-ended tasks, open-ended tasks, image-processing tasks, and real-world multitask scenarios involving LLM agents, thereby offering guidance for further research across different medical applications. Third, we compare evaluation methods and dimensions, covering both automated metrics and human expert assessment, and addressing traditional accuracy measures alongside agent-specific dimensions such as tool usage and reasoning capability. Finally, we identify key challenges and opportunities in this evolving field, emphasizing the need for continued research and interdisciplinary collaboration between healthcare professionals and computer scientists to ensure the safe, ethical, and effective deployment of LLMs in clinical practice.
Keywords: Evaluation
Generative pre-trained transformer
Hallucination
Large language model
Medical agent
Reasoning
Publisher: Elsevier B.V.
Journal: Intelligent medicine 
EISSN: 2667-1026
DOI: 10.1016/j.imed.2025.03.002
Rights: © 2025 The Author(s). Published by Elsevier B.V. on behalf of Chinese Medical Association. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
The following publication Chen, X., Xiang, J., Lu, S., Liu, Y., He, M., & Shi, D. (2025). Evaluating large language models and agents in healthcare: key challenges in clinical applications. Intelligent Medicine, 5(2), 151-163 is available at https://doi.org/10.1016/j.imed.2025.03.002.
Appears in Collections:Journal/Magazine Article

Files in This Item:
File: 1-s2.0-S2667102625000294-main.pdf
Size: 3.15 MB
Format: Adobe PDF
Open Access Information
Status open access
File Version Version of Record


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.