Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/109379
Title: | Anatomical structure-guided medical vision-language pre-training |
Authors: | Li, Q; Yan, X; Xu, J; Yuan, R; Zhang, Y; Feng, R; Shen, Q; Zhang, X; Wang, S |
Issue Date: | 2024 |
Source: | In M. G. Linguraru, Q. Dou, A. Feragen, S. Giannarou, B. Glocker, K. Lekadir, & J. A. Schnabel (Eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2024: 27th International Conference, Marrakesh, Morocco, October 6–10, 2024, Proceedings, Part XI (pp. 80-90). Cham, Switzerland: Springer, 2024 |
Abstract: | Learning medical visual representations through vision-language pre-training has made remarkable progress. Despite the promising performance, two challenges remain: local alignment lacks interpretability and clinical relevance, and representation learning within and across image-report pairs is insufficient. To address these issues, we propose an Anatomical Structure-Guided (ASG) framework. Specifically, we parse raw reports into triplets <anatomical region, finding, existence> and fully utilize each element as supervision to enhance representation learning. For anatomical regions, we design an automatic anatomical region-sentence alignment paradigm in collaboration with radiologists, treating regions as the minimum semantic units for fine-grained local alignment. For findings and existence, we regard them as image tags, applying an image-tag recognition decoder to associate image features with their respective tags within each sample and constructing soft labels for contrastive learning to improve the semantic association of different image-report pairs. We evaluate the proposed ASG framework on two downstream tasks across five public benchmarks. Experimental results demonstrate that our method outperforms state-of-the-art methods. |
Keywords: | Anatomical structure; Contrastive learning; Medical vision-language; Pre-training; Representation learning |
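The abstract mentions constructing soft labels from image tags for contrastive learning across image-report pairs. The sketch below is an illustrative interpretation of that idea, not the authors' released code: it softens the usual one-hot contrastive targets with the tag-set overlap between samples, so reports describing similar findings are not treated as pure negatives. The function name, the `temperature` and `alpha` parameters, and the multi-hot tag encoding are assumptions for illustration only.

```python
# Minimal sketch (assumed, not the paper's exact formulation) of image-report
# contrastive learning with soft labels derived from per-sample tag overlap.
import torch
import torch.nn.functional as F

def soft_label_contrastive_loss(img_emb, txt_emb, tag_multi_hot,
                                temperature=0.07, alpha=0.5):
    """img_emb, txt_emb: (B, D) embeddings of images and reports.
    tag_multi_hot:    (B, T) multi-hot <finding, existence> tags per sample,
                      used to soften the one-hot targets."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)

    logits = img_emb @ txt_emb.t() / temperature            # (B, B) similarity matrix

    # Soft targets: mix the identity matrix with normalised tag-overlap similarity.
    tags = F.normalize(tag_multi_hot.float(), dim=-1)
    tag_sim = tags @ tags.t()                                # (B, B) cosine overlap of tag sets
    targets = alpha * torch.eye(len(logits), device=logits.device) + (1 - alpha) * tag_sim
    targets = targets / targets.sum(dim=-1, keepdim=True)   # each row sums to 1

    # Symmetric cross-entropy against the soft targets (image->report and report->image).
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)
```

Under these assumptions, setting `alpha=1.0` recovers the standard hard-negative contrastive objective, while smaller values let semantically related pairs share probability mass.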
Publisher: | Springer |
DOI: | 10.1007/978-3-031-72120-5_8 |
Appears in Collections: | Conference Paper |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.