Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/117730
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Jiang, Y | en_US
dc.creator | Qian, G | en_US
dc.creator | Wu, J | en_US
dc.creator | Huang, Q | en_US
dc.creator | Li, Q | en_US
dc.creator | Wu, Y | en_US
dc.creator | Wei, XY | en_US
dc.date.accessioned | 2026-03-04T06:02:43Z | -
dc.date.available | 2026-03-04T06:02:43Z | -
dc.identifier.issn | 0278-0062 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/117730 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication Y. Jiang et al., "Self-Paced Learning for Images of Antinuclear Antibodies," in IEEE Transactions on Medical Imaging, vol. 45, no. 4, pp. 1661-1672, April 2026 is available at https://doi.org/10.1109/TMI.2025.3637237. | en_US
dc.subject | Antinuclear antibodies | en_US
dc.subject | Multi-instance learning | en_US
dc.subject | Multi-label learning | en_US
dc.subject | Self-paced learning | en_US
dc.title | Self-paced learning for images of antinuclear antibodies | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1661 | en_US
dc.identifier.epage | 1672 | en_US
dc.identifier.volume | 45 | en_US
dc.identifier.issue | 4 | en_US
dc.identifier.doi | 10.1109/TMI.2025.3637237 | en_US
dcterms.abstract | Antinuclear antibody (ANA) testing is a critical method for diagnosing autoimmune disorders such as lupus, Sjögren's syndrome, and scleroderma. Despite its importance, manual ANA detection is slow, labor-intensive, and demands years of training. ANA detection is complicated by over 100 coexisting antibody types, resulting in vast fluorescent pattern combinations. Although machine learning and deep learning have enabled automation, ANA detection in real-world clinical settings presents unique challenges as it involves multi-instance, multi-label (MIML) learning. In this paper, a novel framework for ANA detection is proposed that handles the complexities of MIML tasks using unaltered microscope images without manual preprocessing. Inspired by human labeling logic, it identifies consistent ANA sub-regions and assigns aggregated labels accordingly. These steps are implemented using three task-specific components: an instance sampler, a probabilistic pseudo-label dispatcher, and self-paced weight learning rate coefficients. The instance sampler suppresses low-confidence instances by modeling pattern confidence, while the dispatcher adaptively assigns labels based on instance distinguishability. Self-paced learning adjusts training according to empirical label observations. Our framework overcomes limitations of traditional MIML methods and supports end-to-end optimization. Extensive experiments on one ANA dataset and three public medical MIML benchmarks demonstrate the superiority of our framework. On the ANA dataset, our model achieves up to +7.0% F1-Macro and +12.6% mAP gains over the best prior method, setting new state-of-the-art results. It also ranks top-2 across all key metrics on public datasets, reducing Hamming loss and one-error by up to 18.2% and 26.9%, respectively. The source code can be accessed at https://github.com/fletcherjiang/ANA-SelfPacedLearning. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on medical imaging, Apr. 2026, v. 45, no. 4, p. 1661-1672 | en_US
dcterms.isPartOf | IEEE transactions on medical imaging | en_US
dcterms.issued | 2026-04 | -
dc.identifier.scopus | 2-s2.0-105023198037 | -
dc.identifier.pmid | 41296939 | -
dc.identifier.eissn | 1558-254X | en_US
dc.description.validate | 202603 bcjz | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.SubFormID | G001135/2026-01 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This research was supported by the Hong Kong Research Grants Council via the General Research Fund (project no. PolyU 15200023) and the National Natural Science Foundation of China (Grant No.: 62372314). The experimental part of this work was supported by The Centre for Large AI Models (CLAIM) of The Hong Kong Polytechnic University. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
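The abstract above describes a self-paced learning scheme that adjusts training according to per-sample observations. As general background only (not the authors' implementation, whose components and hyperparameters are specific to their MIML framework), the classic "hard" self-paced regime admits a closed-form weight: a sample is admitted in the current round only if its loss falls below a pace parameter λ, which is raised over time so harder samples enter training gradually. A minimal NumPy sketch:

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced regime: weight 1 for samples whose current loss
    is below the pace parameter lam, weight 0 otherwise."""
    return (np.asarray(losses) < lam).astype(float)

# Toy per-instance losses for five training samples (illustrative values).
losses = np.array([0.2, 0.9, 1.5, 0.4, 3.0])

# Raising the pace admits progressively harder samples into training.
for lam in (0.5, 1.0, 2.0):
    v = self_paced_weights(losses, lam)
    print(f"lam={lam}: {int(v.sum())} of {len(losses)} samples admitted")
```

The paper's framework builds task-specific components (instance sampler, pseudo-label dispatcher, self-paced coefficients) on top of this general idea; see the linked GitHub repository for the actual implementation.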
Appears in Collections:Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Jiang_Self-paced_Learning_Images.pdf | Pre-Published version | 10.56 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

SCOPUS Citations: 1 (as of May 8, 2026)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.