Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114604
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | Chien, JT | -
dc.creator | Yeh, IP | -
dc.creator | Mak, MW | -
dc.date.accessioned | 2025-08-18T03:02:08Z | -
dc.date.available | 2025-08-18T03:02:08Z | -
dc.identifier.uri | http://hdl.handle.net/10397/114604 | -
dc.description | Interspeech 2024, 1-5 September 2024, Kos, Greece | en_US
dc.language.iso | en | en_US
dc.publisher | International Speech Communication Association | en_US
dc.rights | The following publication Chien, J.-T., Yeh, I.-P., Mak, M.-W. (2024) Collaborative Contrastive Learning for Hypothesis Domain Adaptation. Proc. Interspeech 2024, 3225-3229 is available at https://doi.org/10.21437/Interspeech.2024-1800. | en_US
dc.subject | Collaborative learning | en_US
dc.subject | Contrastive learning | en_US
dc.subject | Domain adaptation | en_US
dc.subject | Speaker verification | en_US
dc.title | Collaborative contrastive learning for hypothesis domain adaptation | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 3225 | -
dc.identifier.epage | 3229 | -
dc.identifier.doi | 10.21437/Interspeech.2024-1800 | -
dcterms.abstract | Achieving desirable performance for speaker recognition under severe domain mismatch is challenging. The challenge becomes even harsher when the source data are missing. To enhance low-resource speaker representation, this study deals with a practical scenario, called hypothesis domain adaptation, where a model trained on a source domain is adapted, as a hypothesis, to a significantly different target domain without access to the source data. To pursue a domain-invariant representation, this paper proposes a novel collaborative hypothesis domain adaptation (CHDA) method in which dual encoders are collaboratively trained to estimate pseudo source data, which are then utilized to maximize domain confusion. Combined with contrastive learning, CHDA is further enhanced by increasing both domain matching and speaker discrimination. Experiments on cross-language speaker recognition show the merit of the proposed method. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2024, p. 3225-3229 | -
dcterms.issued | 2024 | -
dc.identifier.scopus | 2-s2.0-85214809119 | -
dc.description.validate | 202508 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Others | en_US
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | VoR allowed | en_US
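
Note on the method: the abstract above describes two ingredients of CHDA, dual encoders collaboratively trained so that estimated pseudo source data maximize domain confusion, and a contrastive objective that strengthens speaker discrimination. The following is a minimal sketch of how those two losses could be combined, assuming PyTorch; the encoder architecture, the NT-Xent contrastive formulation, the discriminator, the loss weight, and all dimensions are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy speaker-embedding encoder (stand-in for a real speaker network)."""
    def __init__(self, in_dim=80, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def nt_xent(z1, z2, tau=0.1):
    """Contrastive (NT-Xent) loss: matched rows of z1/z2 are positives."""
    logits = (z1 @ z2.t()) / tau                      # pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def domain_confusion(disc, z):
    """Push the discriminator's domain posterior toward 0.5 (max confusion)."""
    p = torch.sigmoid(disc(z))
    return F.binary_cross_entropy(p, torch.full_like(p, 0.5))

# Dual encoders trained collaboratively. enc_pseudo plays the role of the
# pseudo-source estimator in the abstract; here it just encodes an augmented
# view of the target batch as a stand-in. The domain discriminator is kept
# fixed in this sketch (in practice it would be trained adversarially).
enc_tgt, enc_pseudo = Encoder(), Encoder()
disc = nn.Linear(128, 1)
opt = torch.optim.Adam(
    list(enc_tgt.parameters()) + list(enc_pseudo.parameters()), lr=1e-4)

x = torch.randn(32, 80)                   # a batch of target-domain features
x_aug = x + 0.05 * torch.randn_like(x)    # augmented second view

z_tgt = enc_tgt(x)
z_pseudo = enc_pseudo(x_aug)              # stands in for pseudo source data

loss = nt_xent(z_tgt, z_pseudo) \
     + 0.1 * (domain_confusion(disc, z_tgt) + domain_confusion(disc, z_pseudo))
opt.zero_grad(); loss.backward(); opt.step()

The domain-confusion term drives embeddings from both encoders toward the discriminator's decision boundary (domain matching), while the contrastive term keeps same-speaker pairs close and different speakers apart (speaker discrimination), mirroring the two enhancements the abstract claims.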
Appears in Collections:Conference Paper
Files in This Item:
File | Description | Size | Format
chien24c_interspeech.pdf | - | 598.99 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.