Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/95740
Title: Aggregating frame-level information in the spectral domain with self-attention for speaker embedding
Authors: Tu, Y; Mak, MW
Issue Date: 2022
Source: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022, v. 30, p. 944-957
Abstract: Most pooling methods in state-of-the-art speaker embedding networks are implemented in the temporal domain. However, because the feature maps produced by the last frame-level layer are highly non-stationary, the global statistics (e.g., means and standard deviations) of the temporal feature maps make poor aggregated embeddings. This motivates us to explore stationary spectral representations and perform aggregation in the spectral domain. In this paper, we propose attentive short-time spectral pooling (attentive STSP) from a Fourier perspective to exploit the local stationarity of the feature maps. In attentive STSP, for each utterance, we compute spectral representations by taking an attention-weighted average of the windowed segments within each spectrogram and aggregate their lowest spectral components to form the speaker embedding. Because most of the feature-map energy is concentrated in the low-frequency region of the spectral domain, attentive STSP facilitates information aggregation by retaining only the low spectral components. Attentive STSP is shown to consistently outperform attentive pooling on VoxCeleb1, VOiCES19-eval, SRE16-eval, and SRE18-CMN2-eval. This observation suggests that applying segment-level attention and leveraging low spectral components can produce discriminative speaker embeddings.
Keywords: Self-attention; Short-time Fourier transform; Speaker embedding; Speaker verification; Statistics pooling
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
ISSN: 2329-9290
EISSN: 2329-9304
DOI: 10.1109/TASLP.2022.3153267
Rights: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The following publication Y. Tu and M.-W. Mak, "Aggregating Frame-Level Information in the Spectral Domain With Self-Attention for Speaker Embedding," in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 944-957, 2022 is available at https://dx.doi.org/10.1109/TASLP.2022.3153267.
Appears in Collections: Journal/Magazine Article
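The pooling operation described in the abstract can be sketched as follows. This is a minimal illustrative NumPy implementation, not the authors' code: the window length, hop size, number of retained low-frequency components, and the energy-based softmax attention are all assumptions (the paper learns segment-level attention weights with a network).

```python
import numpy as np

def attentive_stsp(features, win_len=8, hop=4, k_low=2):
    """Sketch of attentive short-time spectral pooling (attentive STSP).

    features: (C, T) frame-level feature map of one utterance
              (C channels, T frames).
    Returns a (C * k_low,) embedding built from the k_low lowest
    spectral components of an attention-weighted average spectrogram.
    """
    C, T = features.shape
    # Slice the temporal axis into overlapping windows (STFT-style framing).
    starts = range(0, T - win_len + 1, hop)
    segs = np.stack([features[:, s:s + win_len] for s in starts])  # (S, C, win_len)
    # Per-segment magnitude spectra via an FFT along the time axis.
    spec = np.abs(np.fft.rfft(segs, axis=-1))                      # (S, C, F)
    # Illustrative segment attention: softmax over mean segment energy.
    # (Hypothetical stand-in for the learned attention in the paper.)
    energy = spec.mean(axis=(1, 2))                                # (S,)
    w = np.exp(energy - energy.max())
    w /= w.sum()
    # Attention-weighted average spectrogram across segments.
    avg_spec = np.einsum('s,scf->cf', w, spec)                     # (C, F)
    # Retain only the k_low lowest-frequency components and flatten.
    return avg_spec[:, :k_low].reshape(-1)
```

With C = 4 channels, T = 32 frames, and k_low = 2, the sketch yields an 8-dimensional embedding; in practice C is the width of the last frame-level layer, so the embedding is much larger.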
Files in This Item:
File | Description | Size | Format
---|---|---|---
att_stsp_j.pdf | Pre-Published version | 1.37 MB | Adobe PDF
Scopus citations: 9 (as of Oct 17, 2024)
Web of Science citations: 7 (as of Oct 10, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.