Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/114106
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Electrical and Electronic Engineering | - |
dc.creator | Li, Z | - |
dc.creator | Mak, MW | - |
dc.creator | Pilanci, M | - |
dc.creator | Meng, H | - |
dc.date.accessioned | 2025-07-11T09:11:55Z | - |
dc.date.available | 2025-07-11T09:11:55Z | - |
dc.identifier.issn | 1558-7916 | - |
dc.identifier.uri | http://hdl.handle.net/10397/114106 | - |
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.subject | Additive angular margin | en_US |
dc.subject | Contrastive learning | en_US |
dc.subject | Mutual information | en_US |
dc.subject | Speaker verification | en_US |
dc.title | Mutual information-enhanced contrastive learning with margin for maximal speaker separability | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.doi | 10.1109/TASLPRO.2025.3583485 | - |
dcterms.abstract | Contrastive learning across multiple augmentations of the same utterance can enhance the ability of speaker representations to discriminate unseen speakers. This paper introduces a supervised contrastive learning objective that optimizes a speaker embedding space using the label information in the training data. Besides augmenting different segments of an utterance to form a positive pair, our approach generates multiple positive pairs by augmenting different utterances from the same speaker. However, applying contrastive learning to speaker verification (SV) presents two challenges: (1) the softmax loss is ineffective at reducing intra-class variation, and (2) previous research has shown that contrastive learning captures the information shared across the augmented views of an object but may discard task-relevant information that is not shared, suggesting that it is essential to retain nonshared speaker information across the augmented views when constructing a speaker representation space. To address the first challenge, we incorporate an additive angular margin into the contrastive loss. For the second challenge, we maximize the mutual information (MI) between squeezed low-level features and the speaker representations to preserve the nonshared information. Evaluations on VoxCeleb, CN-Celeb, and CU-MARVEL confirm that the proposed objective enables ECAPA-TDNN to learn an embedding space with robust speaker discrimination. | - |
dcterms.accessRights | embargoed access | en_US |
dcterms.bibliographicCitation | IEEE transactions on audio, speech and language processing, Date of Publication: 26 June 2025, Early Access, https://doi.org/10.1109/TASLPRO.2025.3583485 | - |
dcterms.isPartOf | IEEE transactions on audio, speech and language processing | - |
dcterms.issued | 2025 | - |
dc.identifier.eissn | 1558-7924 | - |
dc.description.validate | 202507 bcch | - |
dc.identifier.FolderNumber | a3850a | en_US |
dc.identifier.SubFormID | 51337 | en_US |
dc.description.fundingSource | RGC | en_US |
dc.description.pubStatus | Early release | en_US |
dc.date.embargo | 0000-00-00 (to be updated) | en_US |
dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: | Journal/Magazine Article |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
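The abstract's first remedy, adding an additive angular margin to a supervised contrastive loss, can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the paper's implementation; the function name and the hyperparameter values (`margin`, `tau`) are illustrative assumptions.

```python
import numpy as np

def aam_supcon_loss(embeddings, labels, margin=0.2, tau=0.07):
    """Supervised contrastive loss with an additive angular margin (sketch).

    embeddings: (N, D) array of speaker embeddings
    labels:     (N,) array of speaker identities
    margin:     angular margin m applied to same-speaker (positive) pairs
    tau:        softmax temperature
    """
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cos = z @ z.T
    theta = np.arccos(np.clip(cos, -1 + 1e-7, 1 - 1e-7))

    n = len(labels)
    labels = np.asarray(labels)
    # Positive mask: same speaker, excluding each sample paired with itself
    pos = (labels[None, :] == labels[:, None]) & ~np.eye(n, dtype=bool)

    # Penalize positives by the angular margin: cos(theta + m) <= cos(theta)
    logits = np.where(pos, np.cos(theta + margin), cos) / tau
    np.fill_diagonal(logits, -1e9)  # remove self-similarity from the softmax

    # Log-softmax over all other samples for each anchor
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Average negative log-likelihood over the positive pairs of each anchor
    pos_count = np.maximum(pos.sum(axis=1), 1)
    loss = -np.where(pos, log_prob, 0.0).sum(axis=1) / pos_count
    return loss.mean()
```

With `margin=0` this reduces to a plain supervised contrastive (SupCon-style) loss; the margin tightens same-speaker clusters by demanding that positives remain closer than negatives even after their angle is inflated by `m`.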