Title: Intramodal and intermodal fusion for audio-visual biometric authentication
Authors: Cheung, MC
Mak, MW 
Kung, SY
Keywords: Audio-visual systems
Biometrics (access control)
Decision making
Image recognition
Sensor fusion
Speaker recognition
Video signal processing
Issue Date: 2004
Source: Proceedings of the International Symposium on Intelligent Multimedia, Video and Speech Processing (ISIMP'2004), Hong Kong, 20-22 Oct. 2004, p. 25-28
Abstract: The paper proposes a multiple-source, multiple-sample fusion approach to identity verification. Fusion is performed at two levels, intramodal and intermodal. In intramodal fusion, the scores of multiple samples (e.g., utterances or video shots) obtained from the same modality are linearly combined, where the combination weights are dependent on the difference between the score values and a user-dependent reference score obtained during enrollment. This is followed by intermodal fusion in which the means of intramodal fused scores obtained from different modalities are fused. The final fused score is then used for decision making. This two-level fusion approach was applied to audio-visual biometric authentication; experimental results based on the XM2VTSDB corpus show that the proposed fusion approach can achieve an error rate reduction of up to 83%.
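The two-level fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the weighting function (an exponential of the distance from the user-dependent reference score), the `alpha` parameter, the averaging used at the intermodal level, and the decision threshold are all assumptions made for the example.

```python
import math

def intramodal_fuse(scores, ref_score, alpha=1.0):
    """Linearly combine multiple sample scores (e.g., utterances or
    video shots) from one modality. Each combination weight depends on
    the difference between the sample's score and a user-dependent
    reference score obtained during enrollment. The exponential form
    and alpha are assumptions for illustration."""
    weights = [math.exp(alpha * abs(s - ref_score)) for s in scores]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, scores)) / total

def intermodal_fuse(modality_scores):
    """Fuse the intramodal fused scores of the different modalities.
    Plain averaging is an assumed choice for this sketch."""
    return sum(modality_scores) / len(modality_scores)

# Hypothetical verification scores for one claimed identity
audio_score = intramodal_fuse([0.2, 0.8, 0.5], ref_score=0.4)
video_score = intramodal_fuse([0.6, 0.7], ref_score=0.5)
final_score = intermodal_fuse([audio_score, video_score])
accept = final_score > 0.5  # assumed decision threshold
```

Because the intramodal weights are normalized, each modality's fused score stays within the range of its sample scores, so the final decision is driven by how samples spread around the enrollment reference rather than by raw score magnitudes.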
URI: http://hdl.handle.net/10397/37680
ISBN: 0-7803-8687-6
DOI: 10.1109/ISIMP.2004.1433991
Appears in Collections: Conference Paper

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.