Title: Facial expression recognition using depth map estimation of light field camera
Authors: Shen, TW
Fu, H
Chen, J
Yu, WK
Lau, CY
Lo, WL
Chi, Z 
Keywords: Facial component detection
Facial expression recognition
HOG features
Light field camera
Issue Date: 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Source: ICSPCC 2016 - IEEE International Conference on Signal Processing, Communications and Computing, Conference Proceedings, 2016, 7753695
Abstract: Facial expression recognition has gained growing attention from both industry and academia, because it can be widely used in many fields such as Human Computer Interface (HCI) and medical assessment. In this paper, we evaluate the strengths of the light field camera for facial expression recognition. The light field camera can capture the directions of the incoming light rays, which is not possible with a conventional 2D camera. In addition, the light field camera can estimate depth maps, which provide further information for the facial expression recognition problem. First, a new facial expression dataset is collected with the light field camera. The depth map is estimated, and Histogram of Oriented Gradients (HOG) is applied to encode the facial components as features. A linear SVM is then trained to perform the facial expression classification. The performance of the proposed approach is evaluated on the new dataset with the estimated depth maps. Experimental results show that significant improvements in accuracy are achieved compared to the traditional approach.
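The feature-extraction step in the abstract (HOG over a depth or intensity map, followed by a linear SVM) can be illustrated with a minimal NumPy sketch of cell-wise HOG. This is not the authors' implementation; the cell size, bin count, and per-cell L2 normalisation are assumptions chosen for illustration:

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal HOG sketch: per-cell orientation histograms of gradient magnitude."""
    # Image gradients (rows = y, cols = x) via finite differences.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation in [0, 180)

    h, w = img.shape
    ch, cw = h // cell, w // cell
    feats = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            # Vote each pixel's gradient magnitude into its orientation bin.
            idx = (a / (180.0 / bins)).astype(int) % bins
            for b in range(bins):
                feats[i, j, b] = m[idx == b].sum()

    # L2-normalise each cell histogram for illumination robustness.
    norm = np.linalg.norm(feats, axis=2, keepdims=True) + 1e-6
    return (feats / norm).ravel()
```

The resulting feature vector would then be passed to a linear SVM trainer (e.g. scikit-learn's `LinearSVC`) for expression classification, as the abstract describes.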
Description: 2016 IEEE International Conference on Signal Processing, Communications and Computing, ICSPCC 2016, Hong Kong, 5-8 August 2016
ISBN: 9781509027088
DOI: 10.1109/ICSPCC.2016.7753695
Appears in Collections:Conference Paper
