Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/29926
Title: Video action recognition with spatio-temporal graph embedding and spline modeling
Authors: Yuan, Y
Zheng, H
Li, Z
Zhang, D 
Keywords: Appearance modeling
Graph embedding
Spline modeling
Video event analysis
Issue Date: 2010
Source: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 2010, p. 2422-2425
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 
Abstract: In recent years, video analysis and event recognition have become popular research topics with wide applications in surveillance and security. In this paper, we propose video action appearance modeling based on spatio-temporal graph embedding, and video action recognition based on spline modeling and aligned matching of video luminance field trajectories. Graphs are computed from spline re-sampling of the training video data set. Matching is achieved by minimizing the average projection distance between query clips and training groups. Simulations on the Cambridge hand gesture data set demonstrate the effectiveness of the proposed solution.
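Note: The matching step described in the abstract (assigning a query clip to the training group that minimizes the average projection distance) can be viewed as a nearest-subspace classifier. The sketch below is a minimal illustration under that assumption only; the feature dimensions, PCA-based subspace fitting, and all function names are hypothetical and do not reproduce the authors' implementation.

```python
import numpy as np

def fit_group_subspace(clips, dim=10):
    """Fit a linear subspace (via PCA) to the pooled frame features of one
    training group. `clips` is a list of (frames x features) arrays."""
    X = np.vstack(clips)                       # pool all frames of the group
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:dim].T                         # (features x dim) orthonormal basis
    return mean, basis

def avg_projection_distance(query, mean, basis):
    """Average distance of the query clip's frames to a group's subspace."""
    Xc = query - mean
    proj = Xc @ basis @ basis.T                # projection onto the subspace
    residual = Xc - proj
    return np.linalg.norm(residual, axis=1).mean()

def classify(query, group_models):
    """Assign the query clip to the group with minimal average projection distance."""
    dists = {label: avg_projection_distance(query, m, B)
             for label, (m, B) in group_models.items()}
    return min(dists, key=dists.get)

# Hypothetical usage with random data standing in for spline-resampled
# luminance-field trajectories (dimensions are illustrative only).
rng = np.random.default_rng(0)
group_models = {
    g: fit_group_subspace([rng.standard_normal((30, 64)) for _ in range(5)])
    for g in ("gesture_A", "gesture_B")
}
query_clip = rng.standard_normal((30, 64))
print(classify(query_clip, group_models))
```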
Description: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010, Dallas, TX, 14-19 March 2010
URI: http://hdl.handle.net/10397/29926
ISBN: 9781424442966
ISSN: 1520-6149
DOI: 10.1109/ICASSP.2010.5496275
Appears in Collections: Conference Paper
