Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/111709
Title: Mutual information enhanced training for speaker embedding
Authors: Tu, Y; Mak, MW
Issue Date: 2021
Source: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2021, p. 91-95
Abstract: Mutual information (MI) is useful in unsupervised and self-supervised learning. Maximizing the MI between the low-level features and the learned embeddings preserves meaningful information in the embeddings, which can contribute to performance gains. This strategy is called deep InfoMax (DIM) in representation learning. In this paper, we follow the DIM framework so that the speaker embeddings can capture more information from the frame-level features. However, a straightforward implementation of DIM poses a dimensionality imbalance problem, because the dimensionality of the frame-level features is much larger than that of the speaker embeddings. This imbalance can make MI estimation unreliable and can even be detrimental to speaker verification. To overcome this problem, we propose to squeeze the frame-level features through global pooling before MI estimation. We call the proposed method squeeze-DIM. Although the squeeze operation inevitably introduces some information loss, we empirically show that squeeze-DIM achieves performance gains on both the VoxCeleb1 and VOiCES-19 tasks. This suggests that the squeeze operation facilitates MI estimation and maximization in a dimensionally balanced space, which helps learn more informative speaker embeddings.
Publisher: International Speech Communication Association
DOI: 10.21437/Interspeech.2021-1436
Description: 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021, Brno, Czechia, August 30 - September 3, 2021
Rights: Copyright © 2021 ISCA. The following publication Tu, Y., Mak, M.-W. (2021) Mutual Information Enhanced Training for Speaker Embedding. Proc. Interspeech 2021, 91-95 is available at https://doi.org/10.21437/Interspeech.2021-1436.
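The abstract's core idea — pooling the (T, D) frame-level feature matrix down to a fixed-dimensional vector before scoring it against the speaker embedding with an MI critic — can be sketched as below. This is an illustrative reconstruction, not the authors' code: the dimensions (200 frames, 1500-dim frame features, 192-dim embedding), the function names, and the simple bilinear critic are all assumptions chosen to show why pooling balances the two sides of the MI estimate.

```python
import numpy as np

def squeeze_frames(frames, mode="mean"):
    """Global pooling over the time axis: (T, D) -> (D,) for mean pooling,
    or (2D,) for statistics (mean + std) pooling."""
    if mode == "mean":
        return frames.mean(axis=0)
    if mode == "stats":
        return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])
    raise ValueError(f"unknown pooling mode: {mode}")

def mi_critic_score(pooled, embedding, W):
    """A DIM-style bilinear critic f(x, z) = x^T W z; higher scores for
    matched (feature, embedding) pairs raise the MI lower bound."""
    return float(pooled @ W @ embedding)

rng = np.random.default_rng(0)
T, D, E = 200, 1500, 192                    # hypothetical sizes
frames = rng.standard_normal((T, D))        # frame-level features
embedding = rng.standard_normal(E)          # speaker embedding
W = rng.standard_normal((D, E)) * 0.01      # critic parameters

# Without squeezing, the critic would compare a T*D-dimensional tensor
# against an E-dimensional embedding; pooling reduces one side to D.
pooled = squeeze_frames(frames, mode="mean")
score = mi_critic_score(pooled, embedding, W)
print(pooled.shape, score)
```

In an actual training loop, scores for matched and mismatched pairs would feed a Jensen-Shannon-style MI lower bound, which is maximized jointly with the speaker classification loss.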
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
---|---|---|---
tu21_interspeech.pdf | | 297.96 kB | Adobe PDF
Page views: 11 (as of Apr 1, 2025)
Downloads: 3 (as of Apr 1, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.