Title: Multimodal human attention detection for reading
Authors: Li, JJ
Ngai, G 
Leong, HV 
Chan, S 
Keywords: Facial features
Mouse dynamics
Human attention level
Multimodal interaction
Issue Date: 2016
Publisher: ACM Press
Source: Proceedings of the 31st Annual ACM Symposium on Applied Computing (SAC '16), Pisa, Italy, 4-8 April 2016, pp. 187-192
Abstract: Affective computing in human-computer interaction research enables computers to understand human affects or emotions in order to provide better service. In this paper, we investigate the detection of human attention, which is useful in intelligent e-learning applications. Our principle is to use only ubiquitous hardware available in most computer systems, namely, the webcam and the mouse. Information from multiple modalities is fused together for effective human attention detection. We invited human subjects to read articles while being subjected to different kinds of distraction designed to induce different attention levels. Machine-learning techniques are applied to identify features useful for recognizing human attention level. Our results indicate improved performance with multimodal inputs, suggesting an interesting direction for affective computing.
ISBN: 978-1-4503-3739-7 (print)
DOI: 10.1145/2851613.2851681
Appears in Collections: Conference Paper
