Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114605
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | Li, G | -
dc.creator | Deng, J | -
dc.creator | Chen, Y | -
dc.creator | Geng, M | -
dc.creator | Hu, S | -
dc.creator | Li, Z | -
dc.creator | Jin, Z | -
dc.creator | Wang, T | -
dc.creator | Xie, X | -
dc.creator | Meng, H | -
dc.creator | Liu, X | -
dc.date.accessioned | 2025-08-18T03:02:09Z | -
dc.date.available | 2025-08-18T03:02:09Z | -
dc.identifier.uri | http://hdl.handle.net/10397/114605 | -
dc.description | Interspeech 2024, 1-5 September 2024, Kos, Greece | en_US
dc.language.iso | en | en_US
dc.publisher | International Speech Communication Association | en_US
dc.rights | The following publication Li, G., Deng, J., Chen, Y., Geng, M., Hu, S., Li, Z., Jin, Z., Wang, T., Xie, X., Meng, H., Liu, X. (2024) Joint Speaker Features Learning for Audio-visual Multichannel Speech Separation and Recognition. Proc. Interspeech 2024, 1925-1929 is available at https://doi.org/10.21437/Interspeech.2024-1063. | en_US
dc.subject | Speaker features | en_US
dc.subject | Speech recognition | en_US
dc.subject | Speech separation | en_US
dc.subject | Zero-shot adaptation | en_US
dc.title | Joint speaker features learning for audio-visual multichannel speech separation and recognition | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 1925 | -
dc.identifier.epage | 1929 | -
dc.identifier.doi | 10.21437/Interspeech.2024-1063 | -
dcterms.abstract | This paper proposes joint speaker feature learning methods for zero-shot adaptation of audio-visual multichannel speech separation and recognition systems. xVector and ECAPA-TDNN speaker encoders are connected using purpose-built fusion blocks and tightly integrated with the complete system training. Experiments conducted on LRS3-TED data simulated multichannel overlapped speech suggest that joint speaker feature learning consistently improves speech separation and recognition performance over the baselines without joint speaker feature estimation. Further analyses reveal performance improvements are strongly correlated with increased inter-speaker discrimination measured using cosine similarity. The best-performing joint speaker feature learning adapted system outperformed the baseline fine-tuned WavLM model by statistically significant WER reductions of 21.6% and 25.3% absolute (67.5% and 83.5% relative) on Dev and Test sets after incorporating WavLM features and video modality. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2024, p. 1925-1929 | -
dcterms.issued | 2024 | -
dc.identifier.scopus | 2-s2.0-85214844713 | -
dc.description.validate | 202508 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Others | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | TRS under Grant T45-407/19N; Innovation & Technology Fund grant No. ITS/218/21; National Natural Science Foundation of China (NSFC) Grant 62106255; Youth Innovation Promotion Association CAS Grant 2023119 | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | VoR allowed | en_US
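The abstract above notes that separation and recognition gains correlate with increased inter-speaker discrimination measured using cosine similarity between learned speaker features, and it reports WER reductions in both absolute and relative terms. The Python sketch below is purely illustrative and is not the authors' code: it shows one common way such a cosine-similarity measure can be computed between two speaker embeddings, and how the quoted absolute and relative reductions relate to an implied baseline WER. The embedding dimension, random vectors, and function names are assumptions made for the example only.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy inter-speaker discrimination check on two hypothetical speaker
# embeddings; the 192-dim size (typical of ECAPA-TDNN outputs) is an
# assumption for illustration, not taken from the paper.
rng = np.random.default_rng(seed=0)
spk_a = rng.standard_normal(192)
spk_b = rng.standard_normal(192)
print(f"inter-speaker cosine similarity: {cosine_similarity(spk_a, spk_b):.3f}")

# Relating the abstract's absolute and relative WER reductions:
# relative = absolute / baseline, so baseline = absolute / relative.
for split, abs_red, rel_red in [("Dev", 21.6, 0.675), ("Test", 25.3, 0.835)]:
    baseline_wer = abs_red / rel_red
    print(f"{split}: implied baseline WER of roughly {baseline_wer:.1f}%")
```

Lower cosine similarity between the features extracted for the two speakers in a mixture indicates stronger inter-speaker discrimination, which the abstract reports as strongly correlated with improved separation and recognition performance.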
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
li24v_interspeech.pdf |  | 771.16 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.