Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107147
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | Li, L | -
dc.creator | Mak, MW | -
dc.creator | Chien, JT | -
dc.date.accessioned | 2024-06-13T01:04:11Z | -
dc.date.available | 2024-06-13T01:04:11Z | -
dc.identifier.issn | 2162-237X | -
dc.identifier.uri | http://hdl.handle.net/10397/107147 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication L. Li, M.-W. Mak and J.-T. Chien, "Contrastive Adversarial Domain Adaptation Networks for Speaker Recognition," in IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 5, pp. 2236-2245, May 2022 is available at https://doi.org/10.1109/TNNLS.2020.3044215. | en_US
dc.subject | Adversarial learning | en_US
dc.subject | Domain adaptation | en_US
dc.subject | Domain adversarial networks (DANs) | en_US
dc.subject | Domain invariance | en_US
dc.subject | Speaker recognition | en_US
dc.title | Contrastive adversarial domain adaptation networks for speaker recognition | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 2236 | -
dc.identifier.epage | 2245 | -
dc.identifier.volume | 33 | -
dc.identifier.issue | 5 | -
dc.identifier.doi | 10.1109/TNNLS.2020.3044215 | -
dcterms.abstract | Domain adaptation aims to reduce the mismatch between the source and target domains. A domain adversarial network (DAN) has recently been proposed to incorporate adversarial learning into deep neural networks to create a domain-invariant space. However, DAN's major drawback is that it is difficult to find the domain-invariant space using a single feature extractor. In this article, we propose to split the feature extractor into two contrastive branches, with one branch dedicated to class-dependence in the latent space and the other focusing on domain-invariance. The feature extractor achieves these contrastive goals by sharing the first and last hidden layers but possessing decoupled branches in the middle hidden layers. To encourage the feature extractor to produce class-discriminative embedded features, the label predictor is adversarially trained to produce equal posterior probabilities across all of its outputs instead of one-hot outputs. We refer to the resulting domain adaptation network as the 'contrastive adversarial domain adaptation network' (CADAN). We evaluated the embedded features' domain-invariance via a series of speaker identification experiments under both clean and noisy conditions. The results demonstrate that the embedded features produced by CADAN lead to a 33% improvement in speaker identification accuracy compared with the conventional DAN. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on neural networks and learning systems, May 2022, v. 33, no. 5, p. 2236-2245 | -
dcterms.isPartOf | IEEE transactions on neural networks and learning systems | -
dcterms.issued | 2022-05 | -
dc.identifier.scopus | 2-s2.0-85099088002 | -
dc.identifier.pmid | 33373306 | -
dc.identifier.eissn | 2162-2388 | -
dc.description.validate | 202403 bckw | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | EIE-0252 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Taiwan Ministry of Science and Technology (MOST) | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 43305333 | en_US
dc.description.oaCategory | Green (AAM) | en_US
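The abstract above describes two key mechanisms of CADAN: a feature extractor whose first and last hidden layers are shared while the middle layers split into two contrastive branches (one for class-dependence, one for domain-invariance), and a label predictor adversarially trained toward equal posterior probabilities. A minimal NumPy sketch of these two ideas follows; the layer sizes, ReLU activations, and sum fusion of the branches are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of the two CADAN mechanisms named in the abstract.
# Everything not stated there (dimensions, activations, branch fusion) is assumed.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class TwoBranchExtractor:
    """Shared first/last hidden layers, decoupled middle branches (per the abstract)."""
    def __init__(self, in_dim=40, hid=64, emb_dim=32):
        s = 0.1
        self.W_in = rng.standard_normal((in_dim, hid)) * s    # shared first hidden layer
        self.W_cls = rng.standard_normal((hid, hid)) * s      # class-dependence branch
        self.W_dom = rng.standard_normal((hid, hid)) * s      # domain-invariance branch
        self.W_out = rng.standard_normal((hid, emb_dim)) * s  # shared last hidden layer

    def forward(self, x):
        h = relu(x @ self.W_in)
        h_cls = relu(h @ self.W_cls)   # branch for class-discriminative structure
        h_dom = relu(h @ self.W_dom)   # branch for domain-invariant structure
        return relu((h_cls + h_dom) @ self.W_out)  # assumed: branches fused by summation

def uniform_posterior_loss(logits):
    """Adversarial target for the label predictor: cross-entropy against the
    uniform distribution 1/K, i.e. -mean_k log p_k, a sketch of the abstract's
    'equal posterior probabilities across all outputs' objective."""
    m = logits.max(axis=1, keepdims=True)  # stabilized log-softmax
    log_p = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    return float(-log_p.mean())

x = rng.standard_normal((8, 40))                             # batch of 8 feature vectors
emb = TwoBranchExtractor().forward(x)                        # shape (8, 32)
loss = uniform_posterior_loss(rng.standard_normal((8, 5)))   # 5 hypothetical speakers
```

Minimizing `uniform_posterior_loss` drives the predictor's posteriors toward 1/K, so its lower bound is log K; the full adversarial game between extractor and predictor is described in the paper itself.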
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Li_Contrastive_Adversarial_Domain.pdf | Pre-Published version | 1.01 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 2 (as of Jun 30, 2024)
Downloads: 2 (as of Jun 30, 2024)
SCOPUS citations: 15 (as of Jun 27, 2024)
Web of Science citations: 11 (as of Jun 27, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.