Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118124
DC Field | Value | Language
dc.contributor | Department of Biomedical Engineering | -
dc.contributor | Research Institute for Smart Ageing | -
dc.contributor | Research Institute for Artificial Intelligence of Things | -
dc.creator | Du, Y | -
dc.creator | Yu, N | -
dc.creator | Wang, S | -
dc.date.accessioned | 2026-03-18T03:14:13Z | -
dc.date.available | 2026-03-18T03:14:13Z | -
dc.identifier.issn | 0278-0062 | -
dc.identifier.uri | http://hdl.handle.net/10397/118124 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication Y. Du, N. Yu and S. Wang, 'Medical Knowledge Intervention Prompt Tuning for Medical Image Classification,' in IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 4945-4959, Dec. 2025 is available at https://doi.org/10.1109/TMI.2025.3584841. | en_US
dc.subject | Conditional intervention | en_US
dc.subject | Medical image classification | en_US
dc.subject | Prompt tuning | en_US
dc.subject | Representation fine-tuning | en_US
dc.subject | Vision language foundation model | en_US
dc.title | Medical knowledge intervention prompt tuning for medical image classification | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 4945 | -
dc.identifier.epage | 4959 | -
dc.identifier.volume | 44 | -
dc.identifier.issue | 12 | -
dc.identifier.doi | 10.1109/TMI.2025.3584841 | -
dcterms.abstract | Vision-language foundation models (VLMs) have shown great potential in feature transfer and generalization across a wide spectrum of medical downstream tasks. However, fine-tuning these models is resource-intensive due to their large number of parameters. Prompt tuning has emerged as a viable solution that reduces memory usage and training time while maintaining competitive performance. Nevertheless, existing prompt tuning methods cannot precisely distinguish different kinds of medical concepts, and therefore miss disease-specific features across the various imaging modalities used in medical image classification tasks. We find that Large Language Models (LLMs), trained on extensive text corpora, are particularly adept at providing this specialized medical knowledge. Motivated by this, we propose incorporating LLMs into the prompt tuning process. Specifically, we introduce CILMP (Conditional Intervention of Large Language Models for Prompt Tuning), a method that bridges LLMs and VLMs to facilitate the transfer of medical knowledge into VLM prompts. CILMP extracts disease-specific representations from LLMs, intervenes within a low-rank linear subspace, and uses them to create disease-specific prompts. Additionally, a conditional mechanism conditions the intervention process on each individual medical image, generating instance-adaptive prompts and thus enhancing adaptability. Extensive experiments across diverse medical image datasets show that CILMP consistently outperforms state-of-the-art prompt tuning methods, confirming its effectiveness. Code is available at https://usr922.github.io/cilmp | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on medical imaging, Dec. 2025, v. 44, no. 12, p. 4945-4959 | -
dcterms.isPartOf | IEEE transactions on medical imaging | -
dcterms.issued | 2025-12 | -
dc.identifier.scopus | 2-s2.0-105010357388 | -
dc.identifier.pmid | 40591469 | -
dc.identifier.eissn | 1558-254X | -
dc.description.validate | 202603 bcjz | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.SubFormID | G001277/2026-02 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work was supported in part by the Research Grants Council (RGC) Collaborative Research Fund under Grant C5055-24G, in part by the Start-Up Fund of The Hong Kong Polytechnic University under Grant P0045999, in part by the Seed Fund of the Research Institute for Smart Ageing under Grant P0050946, in part by the Tsinghua-PolyU Joint Research Initiative Fund under Grant P0056509, and in part by the PolyU UGC Funding under Grant P0053716. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Du_Medical_Knowledge_Intervention.pdf | Pre-Published version | 1.26 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Scopus™ Citations: 1 (as of May 8, 2026)
Web of Science™ Citations: 1 (as of Apr 23, 2026)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.