Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/89783
Title: Practical algorithms for vision-based human activity recognition and human action evaluation
Authors: Yu, Xinbo
Degree: Ph.D.
Issue Date: 2020
Abstract: Human Activity Recognition (HAR) and Human Action Evaluation (HAE) are the two main tasks of human activity analysis addressed in this thesis, with applications in domains such as healthcare and physical rehabilitation, interactive entertainment, and video surveillance. Such applications could help address the growing challenge of population ageing by improving people's quality of life. Existing HAR methods rely on various sensors, including vision, wearable, and ambient sensors. After a comprehensive review of these sensing options, this thesis focuses on vision-based HAR. To test the effectiveness of existing methods, we collect a small real-world Activities of Daily Living (ADLs) dataset and implement several representative skeleton-based methods. We also propose an HAR framework, HARELCARE, for developing practical HAR algorithms. Within the HARELCARE framework, two HAR algorithms are developed and tested on the collected ADLs dataset: one based on feature extraction and the other on transfer learning. Both significantly outperform existing methods on our real-world ADLs dataset.

Beyond small datasets, we also propose a Model-based Multimodal Network (MMNet) to handle HAR on the increasingly large public datasets. Since most public datasets are collected with Kinect sensors, multiple data modalities such as skeleton and RGB video are available; however, effective multimodal methods that further improve on existing approaches are still lacking. Our MMNet fuses the different data modalities at the feature level. Extensive experiments show that MMNet is effective and achieves state-of-the-art performance on three public datasets: NTU RGB+D, PKU-MMD, and Northwestern-UCLA Multiview. These results indicate the strong potential of our HAR methods for a wide range of applications.

Unlike HAR, which focuses on activity classification, HAE is concerned with judging the abnormality and the quality of human actions. If performed effectively, HAE based on skeleton data can be used to monitor the outcomes of behavioural therapies for Alzheimer's disease (AD). To this end, we propose a two-task Graph Convolutional Network (2T-GCN) that represents skeleton data for both HAE tasks: abnormality detection and quality evaluation. The network is first evaluated on the UI-PRMD dataset and performs well on abnormality detection. For quality evaluation, in addition to the laboratory-collected UI-PRMD data, we test it on real exercise data collected from AD patients. Experimental results show that the numerical scores for some exercises performed by AD patients are consistent with their AD severity levels assigned by clinical staff, demonstrating the potential of our approach for monitoring AD and other neurodegenerative diseases.
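The snippet below is a minimal, illustrative sketch of the "two-task" idea mentioned in the abstract: a shared skeleton-feature backbone with one classification head for abnormality detection and one regression head for a quality score. It is not the thesis's 2T-GCN; the graph convolutions over the skeleton graph are replaced with plain fully connected layers, and the input dimension (assumed 25 joints × 3 coordinates) and all names are hypothetical, chosen only to make the example runnable.

```python
# Illustrative sketch only (not the thesis's 2T-GCN): a shared backbone over
# flattened skeleton features with two task-specific heads.
import torch
import torch.nn as nn

class TwoTaskSketch(nn.Module):
    def __init__(self, in_dim=75, hidden_dim=128):  # 75 = 25 joints x 3 coords (assumed)
        super().__init__()
        # Shared representation of a skeleton frame/sequence (stand-in for graph convolutions).
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.abnormality_head = nn.Linear(hidden_dim, 2)  # task 1: normal vs. abnormal
        self.quality_head = nn.Linear(hidden_dim, 1)      # task 2: continuous quality score

    def forward(self, x):
        z = self.backbone(x)
        return self.abnormality_head(z), self.quality_head(z).squeeze(-1)

# Hypothetical usage with random data: joint training on both tasks.
model = TwoTaskSketch()
features = torch.randn(8, 75)                       # batch of 8 flattened skeletons
logits, scores = model(features)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,))) \
     + nn.functional.mse_loss(scores, torch.rand(8))
loss.backward()
```

Sharing the backbone while keeping separate heads is one common way to let the abnormality and quality tasks regularise each other; the thesis's actual architecture and training details are described in the full text.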
Subjects: Human activity recognition
Machine learning
Hong Kong Polytechnic University -- Dissertations
Pages: iii, vii, 109 pages : color illustrations
Appears in Collections: Thesis
