Title: Realistic Human Action Recognition with Audio Context
Authors: Wu, Q
Wang, Z
Deng, F
Feng, DD
Keywords: Audio signal processing
Decision theory
Feature extraction
Signal classification
Signal representation
Support vector machines
Issue Date: 2010
Source: Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA'2010), Sydney, Australia, 1-3 Dec. 2010, p. 288-293
Abstract: Recognizing human actions in realistic scenes has emerged as a challenging topic due to factors such as dynamic backgrounds. In this paper, we present a novel approach that takes audio context into account for better action recognition performance, since audio can provide strong evidence for certain actions, such as phone ringing for answer-phone. First, classifiers are established for the visual and audio modalities, respectively. Specifically, a bag-of-visual-words model is employed to represent human actions in the visual modality, a number of audio features are extracted for the audio modality, and a Support Vector Machine (SVM) is employed as the classification technique. Then, a decision fusion scheme is utilized to fuse the classification results from the two modalities. Since audio context is not always helpful, two simple yet effective decision rules are developed for selective fusion. Experimental results on the Hollywood Human Actions (HOHA) dataset demonstrate that the proposed approach achieves better recognition performance than integrating scene context. Therefore, our work provides strong motivation to further explore how audio context influences realistic human action recognition.
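The abstract describes selective decision fusion: audio evidence is combined with the visual classifier's output only when the audio cue is informative. The paper's actual decision rules are not given here, so the sketch below is purely illustrative; the threshold, fusion weight, and function names are assumptions, and the rule shown (ignore audio unless its confidence clears a cut-off, then take a weighted sum) is one plausible instance of such a scheme.

```python
# Hypothetical sketch of a selective decision-fusion rule for two
# per-class confidence scores (e.g., from visual and audio SVMs).
# Threshold and weight values are illustrative, not from the paper.

AUDIO_CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off for "audio is informative"
AUDIO_WEIGHT = 0.4                # assumed fusion weight for the audio score


def selective_fuse(visual_score, audio_score,
                   threshold=AUDIO_CONFIDENCE_THRESHOLD,
                   w=AUDIO_WEIGHT):
    """Return a fused confidence for one action class.

    Rule: trust the visual classifier alone unless the audio
    classifier is confident enough; in that case, combine the
    two scores with a weighted sum.
    """
    if audio_score < threshold:
        # Audio context is not helpful here; keep the visual decision.
        return visual_score
    return (1 - w) * visual_score + w * audio_score


# Example: a strong phone-ringing cue boosts the "answer-phone" score,
# while weak audio leaves the visual score untouched.
boosted = selective_fuse(visual_score=0.55, audio_score=0.90)
unchanged = selective_fuse(visual_score=0.55, audio_score=0.20)
```

In a full system, one such fused score would be computed per action class, with the final label taken as the argmax over classes.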
ISBN: 978-1-4244-8816-2
978-0-7695-4271-3 (E-ISBN)
DOI: 10.1109/DICTA.2010.57
Appears in Collections:Conference Paper

