Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114106
dc.contributor: Department of Electrical and Electronic Engineering
dc.creator: Li, Z
dc.creator: Mak, MW
dc.creator: Pilanci, M
dc.creator: Meng, H
dc.date.accessioned: 2025-07-11T09:11:55Z
dc.date.available: 2025-07-11T09:11:55Z
dc.identifier.issn: 1558-7916
dc.identifier.uri: http://hdl.handle.net/10397/114106
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers
dc.subject: Additive angular margin
dc.subject: Contrastive learning
dc.subject: Mutual information
dc.subject: Speaker verification
dc.title: Mutual information-enhanced contrastive learning with margin for maximal speaker separability
dc.type: Journal/Magazine Article
dc.identifier.doi: 10.1109/TASLPRO.2025.3583485
dcterms.abstract: Contrastive learning across various augmentations of the same utterance can enhance speaker representations' ability to distinguish new speakers. This paper introduces a supervised contrastive learning objective that optimizes a speaker embedding space using label information from training data. Besides augmenting different segments of an utterance to form a positive pair, our approach generates multiple positive pairs by augmenting various utterances from the same speaker. However, employing contrastive learning for speaker verification (SV) presents two challenges: (1) the softmax loss is ineffective in reducing intra-class variation, and (2) previous research has shown that contrastive learning can share information across the augmented views of an object but may discard task-relevant non-shared information, suggesting that it is essential to keep non-shared speaker information across the augmented views when constructing a speaker representation space. To overcome the first challenge, we incorporate an additive angular margin in the contrastive loss. For the second challenge, we maximize the mutual information (MI) between the squeezed low-level features and speaker representations to retain the non-shared information. Evaluations on VoxCeleb, CN-Celeb, and CU-MARVEL validate that our new learning objective enables ECAPA-TDNN to identify an embedding space that exhibits robust speaker discrimination.
dcterms.accessRights: embargoed access
dcterms.bibliographicCitation: IEEE Transactions on Audio, Speech and Language Processing, Date of Publication: 26 June 2025, Early Access, https://doi.org/10.1109/TASLPRO.2025.3583485
dcterms.isPartOf: IEEE Transactions on Audio, Speech and Language Processing
dcterms.issued: 2025
dc.identifier.eissn: 1558-7924
dc.description.validate: 202507 bcch
dc.identifier.FolderNumber: a3850a
dc.identifier.SubFormID: 51337
dc.description.fundingSource: RGC
dc.description.pubStatus: Early release
dc.date.embargo: 0000-00-00 (to be updated)
dc.description.oaCategory: Green (AAM)
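
The abstract above names two techniques: a supervised contrastive objective with an additive angular margin, and an MI term between squeezed low-level features and speaker embeddings. The following is a minimal PyTorch sketch of both ideas, not the authors' implementation: the function names, the margin and temperature values, and the `proj` projection head are illustrative assumptions, and the exact loss composition is specified in the paper itself.

```python
# Minimal sketch, assuming PyTorch; not the paper's released code.
# aam_supcon_loss and infonce_mi_lower_bound are illustrative names,
# and margin/temperature defaults are placeholders.
import torch
import torch.nn.functional as F

def aam_supcon_loss(embeddings, labels, margin=0.2, temperature=0.07):
    """Supervised contrastive loss with an additive angular margin.

    embeddings: (N, D) speaker embeddings; labels: (N,) speaker IDs.
    The margin is applied to positive (same-speaker) pairs, tightening
    intra-speaker clusters, which a plain softmax loss handles poorly.
    """
    z = F.normalize(embeddings, dim=1)
    cos = (z @ z.t()).clamp(-1 + 1e-7, 1 - 1e-7)    # pairwise cosines

    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    pos = same & ~eye                                # positives, self excluded

    # cos(theta + m) for positives: a harder target angle, as in AAM-softmax.
    theta = torch.acos(cos)
    logits = torch.where(pos, torch.cos(theta + margin), cos) / temperature
    logits = logits.masked_fill(eye, float('-inf'))

    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -(log_prob * pos.float()).sum(1) / pos.sum(1).clamp(min=1)
    return loss[pos.any(1)].mean()                   # anchors with a positive

def infonce_mi_lower_bound(low_feats, spk_embs, proj, temperature=0.07):
    """InfoNCE-style lower bound on the MI between squeezed low-level
    features and speaker embeddings; maximizing it encourages retaining
    speaker information that the augmented views do not share."""
    u = F.normalize(proj(low_feats), dim=1)          # map into embedding space
    v = F.normalize(spk_embs, dim=1)
    scores = (u @ v.t()) / temperature
    targets = torch.arange(len(u), device=u.device)  # matching pairs on diagonal
    return -F.cross_entropy(scores, targets)         # higher = more MI retained
```

A combined training objective would then look something like `loss = aam_supcon_loss(e, y) - lambda_mi * infonce_mi_lower_bound(f, e, proj)`, where `lambda_mi` is a hypothetical weighting between the two terms.
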
Appears in Collections: Journal/Magazine Article