Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113674
Title: Asymmetric source-free unsupervised domain adaptation for medical image diagnosis
Authors: Zhang, Y 
Huang, ZA
Wu, J 
Tan, KC 
Issue Date: 2024
Source: Proceedings of the 2024 IEEE Conference on Artificial Intelligence (CAI 2024), 25-27 June 2024, Marina Bay Sands, Singapore, p. 234-239
Abstract: Existing source-free unsupervised domain adaptation (SFUDA) methods primarily focus on addressing the domain gap for single-modal data, overlooking two crucial aspects: 1) In medical scenarios, clinicians often rely on multi-modal information for disease diagnosis; single-modal (symmetric-modality) SFUDA algorithms therefore neglect the complementary information carried by other (asymmetric) modalities. 2) Restricting SFUDA to a single modality limits downstream institutions' ability to handle modalities beyond that one. To tackle these challenges, we propose an Asymmetric Source-Free Unsupervised Domain Adaptation (A-SFUDA) algorithm, which leverages the source model and unlabeled data from both symmetric and asymmetric modalities in the target domain for disease diagnosis. A-SFUDA adopts a two-stage training approach. In the first stage, it employs knowledge distillation (KD) to obtain two models capable of handling symmetric and asymmetric data in the target domain, providing preliminary diagnostic ability. In the second stage, it optimizes the target models through a pseudo-label correction mechanism based on multi-modal prediction correction and class-centered distance correction. Incorporating these two correction modules effectively mitigates noise within the training data, thereby facilitating the learning of the target models. We validate the proposed A-SFUDA algorithm on a large chest X-ray dataset, demonstrating its strong performance for disease diagnosis in the target domain.
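The abstract's second-stage pseudo-label correction can be illustrated with a minimal sketch. The code below is not the authors' implementation; the function name, confidence threshold, and tensor shapes are assumptions made for illustration. It keeps a pseudo-label only when the symmetric- and asymmetric-modality models agree (multi-modal prediction correction) and the nearest class centroid in feature space matches that prediction (class-centered distance correction).

```python
# Minimal sketch (assumed, not from the paper): pseudo-label correction
# combining cross-modal prediction agreement and class-centroid distance.
import torch
import torch.nn.functional as F


def correct_pseudo_labels(logits_sym, logits_asym, feats_sym, centroids,
                          conf_thresh=0.8):
    """Return pseudo-labels and a boolean mask of samples kept for training.

    logits_sym  : (N, C) predictions from the symmetric-modality model
    logits_asym : (N, C) predictions from the asymmetric-modality model
    feats_sym   : (N, D) features from the symmetric-modality model
    centroids   : (C, D) class centroids in the same feature space
    """
    probs_sym = F.softmax(logits_sym, dim=1)
    probs_asym = F.softmax(logits_asym, dim=1)

    conf_sym, pred_sym = probs_sym.max(dim=1)
    conf_asym, pred_asym = probs_asym.max(dim=1)

    # 1) Multi-modal prediction correction: keep labels on which both
    #    modality-specific models agree with sufficient confidence.
    agree = (pred_sym == pred_asym) & (conf_sym > conf_thresh) & (conf_asym > conf_thresh)

    # 2) Class-centered distance correction: the nearest class centroid in
    #    feature space must also match the predicted class.
    dists = torch.cdist(feats_sym, centroids)  # (N, C) distances to centroids
    nearest = dists.argmin(dim=1)
    keep = agree & (nearest == pred_sym)

    return pred_sym, keep


if __name__ == "__main__":
    N, C, D = 16, 5, 128
    pseudo, mask = correct_pseudo_labels(
        torch.randn(N, C), torch.randn(N, C), torch.randn(N, D), torch.randn(C, D)
    )
    print(f"kept {mask.sum().item()} / {N} pseudo-labeled samples")
```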
Keywords: Asymmetric modality
Pseudo-labeling
Source-free
Unsupervised domain adaptation
Publisher: Institute of Electrical and Electronics Engineers
ISBN: 979-8-3503-5409-6
DOI: 10.1109/CAI59869.2024.00051
Description: 2024 IEEE Conference on Artificial Intelligence (CAI 2024), 25-27 June 2024, Marina Bay Sands, Singapore
Rights: © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication Y. Zhang, Z. -A. Huang, J. Wu and K. C. Tan, "Asymmetric Source-Free Unsupervised Domain Adaptation for Medical Image Diagnosis," 2024 IEEE Conference on Artificial Intelligence (CAI), Singapore, Singapore, 2024, pp. 234-239 is available at https://doi.org/10.1109/CAI59869.2024.00051.
Appears in Collections: Conference Paper

Files in This Item:
File: Zhang_Asymmetric_Source-free_Unsupervised.pdf
Description: Pre-Published version
Size: 2.63 MB
Format: Adobe PDF
Open Access Information
Status: Open access
File Version: Final Accepted Manuscript