Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114605
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Electrical and Electronic Engineering | - |
dc.creator | Li, G | - |
dc.creator | Deng, J | - |
dc.creator | Chen, Y | - |
dc.creator | Geng, M | - |
dc.creator | Hu, S | - |
dc.creator | Li, Z | - |
dc.creator | Jin, Z | - |
dc.creator | Wang, T | - |
dc.creator | Xie, X | - |
dc.creator | Meng, H | - |
dc.creator | Liu, X | - |
dc.date.accessioned | 2025-08-18T03:02:09Z | - |
dc.date.available | 2025-08-18T03:02:09Z | - |
dc.identifier.uri | http://hdl.handle.net/10397/114605 | - |
dc.description | Interspeech 2024, 1-5 September 2024, Kos, Greece | en_US |
dc.language.iso | en | en_US |
dc.publisher | International Speech Communication Association | en_US |
dc.rights | The following publication Li, G., Deng, J., Chen, Y., Geng, M., Hu, S., Li, Z., Jin, Z., Wang, T., Xie, X., Meng, H., Liu, X. (2024) Joint Speaker Features Learning for Audio-visual Multichannel Speech Separation and Recognition. Proc. Interspeech 2024, 1925-1929 is available at https://doi.org/10.21437/Interspeech.2024-1063. | en_US |
dc.subject | Speaker features | en_US |
dc.subject | Speech recognition | en_US |
dc.subject | Speech separation | en_US |
dc.subject | Zero-shot adaptation | en_US |
dc.title | Joint speaker features learning for audio-visual multichannel speech separation and recognition | en_US |
dc.type | Conference Paper | en_US |
dc.identifier.spage | 1925 | - |
dc.identifier.epage | 1929 | - |
dc.identifier.doi | 10.21437/Interspeech.2024-1063 | - |
dcterms.abstract | This paper proposes joint speaker feature learning methods for zero-shot adaptation of audio-visual multichannel speech separation and recognition systems. xVector and ECAPA-TDNN speaker encoders are connected using purpose-built fusion blocks and tightly integrated with training of the complete system. Experiments conducted on multichannel overlapped speech simulated from the LRS3-TED dataset suggest that joint speaker feature learning consistently improves speech separation and recognition performance over baselines without joint speaker feature estimation. Further analyses reveal that the performance improvements are strongly correlated with increased inter-speaker discrimination, measured using cosine similarity. The best-performing joint speaker feature learning adapted system outperformed the baseline fine-tuned WavLM model by statistically significant WER reductions of 21.6% and 25.3% absolute (67.5% and 83.5% relative) on the Dev and Test sets after incorporating WavLM features and the video modality. (See the illustrative sketch after this table.) | - |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2024, p. 1925-1929 | - |
dcterms.issued | 2024 | - |
dc.identifier.scopus | 2-s2.0-85214844713 | - |
dc.description.validate | 202508 bcch | - |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | OA_Others | en_US |
dc.description.fundingSource | RGC | en_US |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | TRS under Grant T45-407/19N; Innovation & Technology Fund grant No. ITS/218/21; National Natural Science Foundation of China (NSFC) Grant 62106255; Youth Innovation Promotion Association CAS Grant 2023119 | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | VoR allowed | en_US |
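The abstract above attributes the gains to increased inter-speaker discrimination measured with cosine similarity between learned speaker embeddings, obtained by fusing xVector and ECAPA-TDNN encoder outputs. The record does not include the authors' implementation, so the following is only a minimal Python/NumPy sketch of that idea: the embedding dimensions, the concatenate-and-project "fusion block", and all function names are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: fuse two speaker embeddings and measure
# inter-speaker discrimination via cosine similarity.
# Dimensions (512 xVector-style, 192 ECAPA-TDNN-style) and the
# concat+projection fusion are illustrative assumptions only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def fuse_embeddings(xvec: np.ndarray, ecapa: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Toy fusion block: concatenate the two embeddings and apply a
    linear projection (here a random stand-in for a learned matrix)."""
    return proj @ np.concatenate([xvec, ecapa])

rng = np.random.default_rng(0)
dim_x, dim_e, dim_out = 512, 192, 256           # assumed dimensions
proj = rng.standard_normal((dim_out, dim_x + dim_e)) / np.sqrt(dim_x + dim_e)

# Stand-in embeddings for the two speakers in an overlapped utterance.
spk1 = fuse_embeddings(rng.standard_normal(dim_x), rng.standard_normal(dim_e), proj)
spk2 = fuse_embeddings(rng.standard_normal(dim_x), rng.standard_normal(dim_e), proj)

# Lower cosine similarity between different speakers' embeddings indicates
# greater inter-speaker discrimination, the quantity the abstract correlates
# with separation and recognition improvements.
print(f"inter-speaker cosine similarity: {cosine_similarity(spk1, spk2):.3f}")
```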
Appears in Collections: | Conference Paper |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
li24v_interspeech.pdf | | 771.16 kB | Adobe PDF |