Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/109690
dc.contributor: Department of Computing
dc.creator: He, R
dc.creator: Liu, S
dc.creator: Wu, J
dc.creator: He, S
dc.creator: Tang, K
dc.date.accessioned: 2024-11-08T06:11:20Z
dc.date.available: 2024-11-08T06:11:20Z
dc.identifier.issn: 0922-6389
dc.identifier.uri: http://hdl.handle.net/10397/109690
dc.description: 26th European Conference on Artificial Intelligence, September 30–October 4, 2023, Kraków, Poland
dc.language.iso: en
dc.publisher: IOS Press
dc.rights: © 2023 The Authors.
dc.rights: This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/).
dc.rights: The following publication He, R., Liu, S., Wu, J., He, S., & Tang, K. (2023). Multi-Domain Learning From Insufficient Annotations. Frontiers in Artificial Intelligence and Applications, 372, 1028-1035 is available at https://doi.org/10.3233/FAIA230375.
dc.title: Multi-domain learning from insufficient annotations
dc.type: Conference Paper
dc.identifier.spage: 1028
dc.identifier.epage: 1035
dc.identifier.volume: 372
dc.identifier.doi: 10.3233/FAIA230375
dcterms.abstract: Multi-domain learning (MDL) refers to simultaneously constructing a model or a set of models on datasets collected from different domains. Conventional approaches emphasize domain-shared information extraction and domain-private information preservation, following the shared-private framework (SP models), which offers significant advantages over single-domain learning. However, the limited availability of annotated data in each domain considerably hinders the effectiveness of conventional supervised MDL approaches in real-world applications. In this paper, we introduce a novel method called multi-domain contrastive learning (MDCL) to alleviate the impact of insufficient annotations by capturing both semantic and structural information from labeled and unlabeled data. Specifically, MDCL comprises two modules: inter-domain semantic alignment and intra-domain contrast. The former aims to align annotated instances of the same semantic category from distinct domains within a shared hidden space, while the latter focuses on learning a cluster structure of unlabeled instances in a private hidden space for each domain. MDCL is readily compatible with many SP models, requiring no additional model parameters and allowing for end-to-end training. Experimental results across five textual and image multi-domain datasets demonstrate that MDCL brings noticeable improvement over various SP models. Furthermore, MDCL can be employed in multi-domain active learning (MDAL) to achieve a superior initialization, eventually leading to better overall performance.
dcterms.accessRights: open access
dcterms.bibliographicCitation: Frontiers in Artificial Intelligence and Applications, 2023, v. 372, p. 1028-1035
dcterms.isPartOf: Frontiers in Artificial Intelligence and Applications
dcterms.issued: 2023
dc.identifier.scopus: 2-s2.0-85175844692
dc.relation.ispartofbook: 26th European Conference on Artificial Intelligence, September 30–October 4, 2023, Kraków, Poland – Including 12th Conference on Prestigious Applications of Intelligent Systems (PAIS 2023)
dc.relation.conference: European Conference on Artificial Intelligence [ECAI]
dc.identifier.eissn: 1879-8314
dc.description.validate: 202411 bcch
dc.description.oa: Version of Record
dc.identifier.FolderNumber: OA_Scopus/WOS
dc.description.fundingSource: Others
dc.description.fundingText: National Key Research and Development Program of China; National Natural Science Foundation of China
dc.description.pubStatus: Published
dc.description.oaCategory: CC
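
The dcterms.abstract field above describes MDCL's inter-domain semantic alignment module, which pulls annotated instances of the same semantic category from different domains together in a shared hidden space. Purely as a hedged illustration of that general idea (not the authors' actual implementation, which is not part of this record), the following is a minimal supervised contrastive-style alignment loss in PyTorch; the function name inter_domain_alignment_loss, the temperature tau, and all tensor shapes are assumptions made for this sketch.

```python
# Hypothetical sketch only: a supervised contrastive loss that pulls
# same-class annotated instances from different domains together in a
# shared embedding space, in the spirit of the abstract above.
# Names and shapes are assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def inter_domain_alignment_loss(z: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z: (N, d) shared-space embeddings of annotated instances pooled from all domains.
    labels: (N,) integer class labels."""
    z = F.normalize(z, dim=1)                        # work in cosine-similarity space
    sim = z @ z.t() / tau                            # (N, N) temperature-scaled similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-pairs from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-probability over positive pairs (same class, any domain)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()

# Example usage with random embeddings standing in for two "domains":
# z = torch.randn(8, 16); labels = torch.tensor([0, 1, 2, 3, 0, 1, 2, 3])
# print(inter_domain_alignment_loss(z, labels))
```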
Appears in Collections: Conference Paper
Files in This Item:
File: FAIA-372-FAIA230375.pdf
Size: 639.56 kB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 11 (as of Nov 17, 2024)
Downloads: 6 (as of Nov 17, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.