Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/97141
| DC Field | Value |
|---|---|
| dc.contributor | Department of Computing |
| dc.creator | Xu, Li |
| dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/12184 |
| dc.language.iso | English |
| dc.title | Multi-modal pre-training for medical vision-language understanding and generation |
| dc.type | Thesis |
| dcterms.abstract | With the availability of large-scale, comprehensive, general-purpose vision-language (VL) datasets such as MSCOCO, vision-language pre-training (VLP) has become an active area of research and has proven effective for various VL tasks such as visual question answering. However, studies of VLP in the medical domain have so far been scarce. To provide a comprehensive perspective on VLP for medical VL tasks, we conduct a thorough experimental analysis of the key factors that may affect the performance of VLP based on a popular vision-language Transformer. From this empirical analysis, we distill several key insights that can guide future medical VLP research. Since current medical VL datasets are either noisy or single-modality, we propose RadioGraphy Captions (RGC), a multi-modal radiographic dataset containing 18,434 image-caption pairs collected from MedPix, an open-access online database. Our experimental results on RGC demonstrate that a domain-specific dataset with a limited number of high-quality samples is also effective for pre-training. RGC can further serve as a new benchmark for evaluating VL models on report generation and medical image-text retrieval. By utilizing RGC and other available datasets for pre-training, we achieve new state-of-the-art or competitive results, relative to previous work, on medical VL tasks including medical visual question answering, report generation, and medical image-text retrieval; these results can serve as solid baselines for future work. Additionally, we apply a token pruning method named Learned Token Pruning in the Transformer model to further reduce inference time on downstream tasks (a brief sketch follows at the end of this record). |
| dcterms.accessRights | open access |
| dcterms.educationLevel | M.Phil. |
| dcterms.extent | xii, 76 pages : color illustrations |
| dcterms.issued | 2022 |
| dcterms.LCSH | Computer vision |
| dcterms.LCSH | Machine learning |
| dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations |
Appears in Collections: Thesis
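
The abstract above names Learned Token Pruning (LTP) as the method used to reduce inference time. For illustration only, here is a minimal PyTorch sketch of the core idea behind LTP-style pruning: tokens whose attention-based importance falls below a learned per-layer threshold are dropped. This is a sketch under assumptions, not the thesis's implementation; the class name, temperature value, and integration details are hypothetical.

```python
# Minimal sketch of Learned-Token-Pruning-style thresholding (illustrative;
# names and hyperparameters are assumptions, not the thesis's actual code).
import torch
import torch.nn as nn

class TokenPruningLayer(nn.Module):
    """Prunes tokens whose attention-based importance score falls below
    a learned per-layer threshold: soft masking during training so the
    threshold stays differentiable, hard pruning at inference."""

    def __init__(self, temperature: float = 0.01):
        super().__init__()
        self.threshold = nn.Parameter(torch.zeros(1))  # learned per layer
        self.temperature = temperature

    def forward(self, hidden, attn_probs, mask):
        # hidden: (batch, seq, dim); attn_probs: (batch, heads, seq, seq);
        # mask: (batch, seq) with 1 for live tokens, 0 for pruned ones.
        # A token's importance is the average attention it receives,
        # taken over all heads and all query positions.
        importance = attn_probs.mean(dim=(1, 2))  # (batch, seq)
        if self.training:
            # Soft mask: sigmoid ramp around the learned threshold.
            keep = torch.sigmoid((importance - self.threshold) / self.temperature)
        else:
            # Hard mask: drop tokens strictly below the threshold.
            keep = (importance >= self.threshold).float()
        keep = keep * mask  # never revive tokens pruned by earlier layers
        return hidden * keep.unsqueeze(-1), keep
```

In the original LTP formulation, a module like this sits after each self-attention block, and the per-layer thresholds are trained jointly with the task loss plus a regularizer that penalizes the number of retained tokens, trading accuracy against speed.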