Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/37659
Title: Robust speech recognition based on the second-order difference cochlear model
Authors: Wan, WG
Au, OC
Keywords: Cepstral analysis
Hearing
Speech recognition
White noise
Issue Date: 2001
Source: Proceedings of 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, 2-4 May 2001, Hong Kong, p. 543-546
Abstract: MFCC (Mel-Frequency Cepstral Coefficients) are traditional speech features widely used in speech recognition. The error rate of speech recognition algorithms using MFCC and CDHMM (continuous density hidden Markov models) is known to be very low in a clean speech environment, but it increases greatly in noisy environments, especially under white noise. We propose a new kind of speech feature, the auditory spectrum based feature (ASBF), which is based on the second-order difference cochlear model of the human auditory system. This new speech feature can track the speech formants, and its selection scheme is based on both the second-order difference cochlear model and the primary auditory nerve processing model of the human auditory system. In our experiments, the performance of MFCC and ASBF is compared in both clean and noisy environments using a left-to-right CDHMM with 6 states and 5 Gaussian mixtures. The experimental results show that ASBF is much more robust to noise than MFCC. When only 5 frequency components are used in ASBF, the error rate is approximately 38% lower than that of the traditional 39-parameter MFCC at an S/N of 10 dB with white noise.
URI: http://hdl.handle.net/10397/37659
ISBN: 962-85766-2-3
DOI: 10.1109/ISIMP.2001.925453
Appears in Collections: Conference Paper
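The 39-parameter MFCC baseline mentioned in the abstract is the conventional configuration of 13 static Mel-frequency cepstral coefficients per frame plus their first- and second-order time derivatives (deltas and delta-deltas). A minimal sketch of that standard pipeline follows; the librosa package and the file name "speech.wav" are illustrative assumptions, as the record does not specify the paper's exact front-end settings.

```python
# Standard 39-parameter MFCC feature extraction: 13 static
# coefficients plus delta and delta-delta derivatives per frame.
# librosa and "speech.wav" are assumptions for illustration.
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=None)  # keep native sample rate

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, n_frames)
delta = librosa.feature.delta(mfcc)                 # first derivative
delta2 = librosa.feature.delta(mfcc, order=2)       # second derivative

features = np.vstack([mfcc, delta, delta2])         # (39, n_frames)
```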
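The recognizer the abstract describes is a left-to-right CDHMM with 6 states and 5 Gaussian mixtures per state. Below is a hedged sketch of such a topology using the third-party hmmlearn package as a stand-in; the model class, the fixed self/forward transition probabilities, and the per-word training comments are assumptions for illustration, not the paper's implementation.

```python
# Sketch of a left-to-right CDHMM: 6 states, 5 Gaussian mixtures per
# state, over 39-dimensional feature frames. hmmlearn's GMMHMM is an
# assumed stand-in for the paper's (unspecified) implementation.
import numpy as np
from hmmlearn.hmm import GMMHMM

n_states, n_mix = 6, 5

model = GMMHMM(n_components=n_states, n_mix=n_mix,
               covariance_type="diag", n_iter=20,
               init_params="mcw",  # keep our start/transition matrices
               params="tmcw")      # re-estimate transitions, mixtures

# Left-to-right topology: begin in state 0; each state either stays
# put or advances one state. Zero transitions remain zero under
# Baum-Welch re-estimation, so the structure survives training.
model.startprob_ = np.eye(n_states)[0]
transmat = np.zeros((n_states, n_states))
for i in range(n_states - 1):
    transmat[i, i] = transmat[i, i + 1] = 0.5  # assumed initial values
transmat[-1, -1] = 1.0
model.transmat_ = transmat

# Training: one model per word, each utterance an (n_frames, 39) array
# stacked along the frame axis.
# model.fit(np.vstack(feats), lengths=[f.shape[0] for f in feats])
# Recognition: pick the word model with the highest log-likelihood.
# best_word = max(models, key=lambda w: models[w].score(test_feat))
```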
