Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118124
Title: Medical knowledge intervention prompt tuning for medical image classification
Authors: Du, Y 
Yu, N 
Wang, S 
Issue Date: Dec-2025
Source: IEEE transactions on medical imaging, Dec. 2025, v. 44, no. 12, p. 4945-4959
Abstract: Vision-language foundation models (VLMs) have shown great potential in feature transfer and generalization across a wide spectrum of medical downstream tasks. However, fine-tuning these models is resource-intensive due to their large number of parameters. Prompt tuning has emerged as a viable alternative that reduces memory usage and training time while maintaining competitive performance. Nevertheless, existing prompt tuning methods cannot precisely distinguish between different kinds of medical concepts, and therefore miss the disease-specific features that matter across the various imaging modalities used in medical image classification. We find that Large Language Models (LLMs), trained on extensive text corpora, are particularly adept at providing this specialized medical knowledge. Motivated by this, we propose incorporating LLMs into the prompt tuning process. Specifically, we introduce CILMP (Conditional Intervention of Large Language Models for Prompt tuning), a method that bridges LLMs and VLMs to transfer medical knowledge into VLM prompts. CILMP extracts disease-specific representations from LLMs, intervenes on them within a low-rank linear subspace, and uses them to create disease-specific prompts. Additionally, a conditional mechanism conditions the intervention on each individual medical image, generating instance-adaptive prompts and thus enhancing adaptability. Extensive experiments across diverse medical image datasets demonstrate that CILMP consistently outperforms state-of-the-art prompt tuning methods. Code is available at https://usr922.github.io/cilmp
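The abstract describes CILMP as intervening on LLM-derived, disease-specific representations within a low-rank linear subspace, conditioned on each image to yield instance-adaptive prompts. The sketch below illustrates one plausible reading of that mechanism in PyTorch; it is not the authors' implementation, and every module, dimension, and name (ConditionalLowRankIntervention, llm_proj, img_gate, etc.) is a hypothetical stand-in for whatever the paper actually uses (the official code is linked above).

```python
import torch
import torch.nn as nn

class ConditionalLowRankIntervention(nn.Module):
    """Sketch: shift learnable prompt tokens inside a low-rank subspace,
    using LLM-derived class knowledge gated by per-image features.
    All names, shapes, and defaults are illustrative assumptions."""

    def __init__(self, prompt_dim=512, llm_dim=4096, rank=8, n_ctx=4):
        super().__init__()
        # Learnable base context tokens, as in standard prompt tuning.
        self.ctx = nn.Parameter(torch.randn(n_ctx, prompt_dim) * 0.02)
        # Low-rank basis spanning the intervention subspace.
        self.basis = nn.Parameter(torch.randn(rank, prompt_dim) * 0.02)
        # Projects LLM-derived disease descriptions into subspace coefficients.
        self.llm_proj = nn.Linear(llm_dim, rank)
        # Conditions the coefficients on the image feature (instance-adaptive).
        self.img_gate = nn.Linear(prompt_dim, rank)

    def forward(self, llm_feat, img_feat):
        # llm_feat: (n_classes, llm_dim) disease-specific embeddings from an LLM
        # img_feat: (batch, prompt_dim) features from the VLM's vision encoder
        coeff = self.llm_proj(llm_feat)                 # (n_classes, rank)
        gate = torch.sigmoid(self.img_gate(img_feat))   # (batch, rank)
        # Combine class-level knowledge with per-image gating.
        mixed = gate.unsqueeze(1) * coeff.unsqueeze(0)  # (batch, n_classes, rank)
        delta = mixed @ self.basis                      # (batch, n_classes, prompt_dim)
        # Intervene: shift every context token within the low-rank subspace.
        prompts = self.ctx.unsqueeze(0).unsqueeze(0) + delta.unsqueeze(2)
        return prompts                                  # (batch, n_classes, n_ctx, prompt_dim)

if __name__ == "__main__":
    module = ConditionalLowRankIntervention()
    llm_feat = torch.randn(5, 4096)   # e.g. 5 disease classes
    img_feat = torch.randn(2, 512)    # batch of 2 images
    print(module(llm_feat, img_feat).shape)  # torch.Size([2, 5, 4, 512])
```

In this reading, the low-rank basis confines the intervention to a small subspace of the prompt embedding space, while the sigmoid gate makes the shift depend on the individual image, mirroring the instance-adaptive conditioning described in the abstract.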
Keywords: Conditional intervention; Medical image classification; Prompt tuning; Representation fine-tuning; Vision language foundation model
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE transactions on medical imaging 
ISSN: 0278-0062
EISSN: 1558-254X
DOI: 10.1109/TMI.2025.3584841
Rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication Y. Du, N. Yu and S. Wang, 'Medical Knowledge Intervention Prompt Tuning for Medical Image Classification,' in IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 4945-4959, Dec. 2025 is available at https://doi.org/10.1109/TMI.2025.3584841.
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: Du_Medical_Knowledge_Intervention.pdf (Pre-Published version, 1.26 MB, Adobe PDF)
Open Access Information:
Status: open access
File Version: Final Accepted Manuscript

SCOPUS Citations: 1 (as of May 8, 2026)
Web of Science Citations: 1 (as of Apr 23, 2026)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.