|Title:||Understanding human intention via non-intrusive capturing of interaction and body signals|
|Authors:||Fu, Yujun|
|Degree:||Ph.D.|
|Issue Date:||2019|
|Abstract:||Computers are commonplace, and their ability to understand human intention opens up the possibility of providing useful assistance in many applications, including human social interactions and daily human-computer interactions. In this thesis, I investigate these interactions to understand human intention. For human social interactions, I focus on a common type of real-life social interaction: human fights. Prior studies on fight detection are limited by several constraints, including reliance on costly high-level feature recognition (human gestures, actions, or visual words) and on simulated fight events, owing to the scarcity of datasets. I propose two sets of motion analysis-based features to build detection models for real human fights without recognizing human gestures or visual words. For evaluation, I collect my own datasets of real human fights. Experiments demonstrate that my models outperform state-of-the-art counterparts in real fight detection. I further extend the investigation to compare fights driven by real fight intention with simulated fights. The findings suggest that there are fundamental differences between real and simulated human fights, and that my motion analysis-based features can effectively distinguish them. State-of-the-art data-driven approaches, such as deep learning, are constrained by the limited amount of spontaneous real human fight data available. To address this, I propose an ensemble-based method for cross-species fight detection that adapts knowledge from real animal fight events. Experiments demonstrate that it is feasible to build well-performing real human fight detection models via cross-species learning.
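The abstract does not define the motion analysis-based features themselves, but their spirit — summarizing raw motion intensity without recognizing gestures or visual words — can be sketched minimally. The following is an illustrative stand-in (my assumption, not the thesis's actual feature set) that turns a grayscale clip into global motion statistics via frame differencing:

```python
import numpy as np

def motion_features(frames):
    """Summarize frame-to-frame motion intensity for a grayscale clip.

    frames: array of shape (T, H, W). Returns simple global motion
    statistics (mean, std, max of the per-frame average absolute
    inter-frame difference). This is a hypothetical stand-in for the
    thesis's motion analysis-based features, whose exact definitions
    the abstract does not give.
    """
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    per_frame = diffs.reshape(diffs.shape[0], -1).mean(axis=1)
    return np.array([per_frame.mean(), per_frame.std(), per_frame.max()])

# Example: a synthetic 16-frame, 32x32 clip
rng = np.random.default_rng(0)
clip = rng.random((16, 32, 32))
feats = motion_features(clip)
```

A vector like this could feed any standard classifier, which matches the abstract's claim of avoiding costly high-level recognition such as gestures or visual words.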
For daily human-computer interaction tasks, I study the user intention prediction problem. The challenges include the limited scope of prescribed tasks, the lack of full modalities, and the need for expensive intrusive devices to capture interaction and body signals, especially physiological signals. First, to overcome the limitation of studying only simple prescribed tasks, I study a common but more complex and open-ended daily computer interaction task: web search. Second, I propose two feature representations to encode users' interaction and body signals, including mouse, gaze, head, and body motion signals. I combine these signal features with historical activity sequences to build effective multimodal user intention prediction models. Experiments indicate that the proposed features successfully encode these signals and that the models achieve encouraging performance. I further extend this work to the detection of user slips. Experiments provide evidence that building useful intention-based user slip detection models is feasible. Third, I capture interaction and body signals with non-intrusive devices, as opposed to contemporary physiological signal measurements that rely on expensive intrusive devices. Since physiological signals can indicate human emotions and even intentions, measuring them in a non-intrusive, low-cost manner would benefit the understanding of human emotion and intention. I propose a physiological mouse and build a prototype that non-intrusively measures heart rate and respiratory rate. Experiments illustrate that the mouse achieves promising performance in measuring these two physiological signals. Further experiments also suggest that it is feasible to correlate the measured signals with human emotions via the physiological mouse prototype.|
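The abstract does not describe how the physiological mouse recovers heart rate from its sensor trace, but a typical pipeline for any noisy quasi-periodic signal is to pick the dominant spectral peak within the physiologically plausible band. The sketch below is an assumption-laden illustration of that idea (the `estimate_bpm` function and the 0.7-3.0 Hz cardiac band are my choices, not the prototype's documented processing):

```python
import numpy as np

def estimate_bpm(signal, fs, lo=0.7, hi=3.0):
    """Estimate heart rate (beats/min) from a 1-D sensor trace.

    Picks the dominant FFT frequency within the typical cardiac band
    [lo, hi] Hz. Illustrative only: the physiological mouse prototype's
    actual signal processing is not described in the abstract.
    """
    x = signal - np.mean(signal)                   # remove DC offset
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)    # frequency bins
    power = np.abs(np.fft.rfft(x)) ** 2            # power spectrum
    band = (freqs >= lo) & (freqs <= hi)           # plausible pulse range
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 1.2 Hz (72 bpm) pulse plus noise, sampled at 30 Hz for 20 s
fs = 30.0
t = np.arange(0, 20, 1.0 / fs)
trace = np.sin(2 * np.pi * 1.2 * t) \
    + 0.3 * np.random.default_rng(1).standard_normal(t.size)
bpm = estimate_bpm(trace, fs)  # ≈ 72
```

Restricting the search to a fixed band is what makes a low-cost, noisy non-intrusive sensor workable here; the same scheme with a lower band (roughly 0.1-0.5 Hz) would target respiratory rate.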
|Subjects:||Hong Kong Polytechnic University -- Dissertations|
|Pages:||xviii, 170 pages : color illustrations|
|Appears in Collections:||Thesis|
View full-text via https://theses.lib.polyu.edu.hk/handle/200/10024
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.