Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/87843
DC Field | Value | Language
dc.contributor | Department of Land Surveying and Geo-Informatics | -
dc.creator | Ye, JH | -
dc.creator | Li, X | -
dc.creator | Zhang, XD | -
dc.creator | Zhang, Q | -
dc.creator | Chen, W | -
dc.date.accessioned | 2020-08-19T06:27:42Z | -
dc.date.available | 2020-08-19T06:27:42Z | -
dc.identifier.uri | http://hdl.handle.net/10397/87843 | -
dc.language.iso | en | en_US
dc.publisher | Molecular Diversity Preservation International | en_US
dc.rights | © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Ye, J.; Li, X.; Zhang, X.; Zhang, Q.; Chen, W. Deep Learning-Based Human Activity Real-Time Recognition for Pedestrian Navigation. Sensors 2020, 20, 2574 is available at https://dx.doi.org/10.3390/s20092574 | en_US
dc.subject | LSTM | en_US
dc.subject | CNN | en_US
dc.subject | Tensorflow | en_US
dc.subject | Deep learning | en_US
dc.subject | Pedestrian navigation | en_US
dc.title | Deep learning-based human activity real-time recognition for pedestrian navigation | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1 | -
dc.identifier.epage | 30 | -
dc.identifier.volume | 20 | -
dc.identifier.issue | 9 | -
dc.identifier.doi | 10.3390/s20092574 | -
dcterms.abstract | Several pedestrian navigation solutions have been proposed to date, and most of them are based on smartphones. Real-time recognition of pedestrian motion mode and smartphone posture is a key issue in navigation. Traditional ML (Machine Learning) classification methods have drawbacks such as insufficient recognition accuracy and poor real-time performance. This paper presents a real-time recognition scheme for comprehensive human activities that combines deep learning algorithms with MEMS (Micro-Electro-Mechanical System) sensor measurements. In this study, we performed four main experiments: pedestrian motion mode recognition, smartphone posture recognition, real-time comprehensive pedestrian activity recognition, and pedestrian navigation. For recognition, we designed and trained deep learning models using LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network) networks based on the TensorFlow framework. The accuracy of traditional ML classification methods was used for comparison. Test results show that the accuracy of motion mode recognition was improved from 89.9% (the best traditional result, obtained by an SVM, Support Vector Machine) to 90.74% (LSTM) and 91.92% (CNN); the accuracy of smartphone posture recognition was improved from 81.60% (the best traditional result, obtained by an NN, Neural Network) to 93.69% (LSTM) and 95.55% (CNN). We describe a model transformation procedure based on the trained CNN model that yields a converted .tflite model, which can run on Android devices for real-time recognition. Real-time recognition experiments were performed in multiple scenes: a recognition model trained by the CNN network was deployed on a Huawei Mate20 smartphone, and the five most used pedestrian activities were designed and verified, with an overall accuracy of up to 89.39%. Overall, the improvement in recognition capability from deep learning algorithms was significant, so the solution is helpful for recognizing comprehensive pedestrian activities during navigation. On the basis of the trained model, a navigation test was performed; the mean bias was reduced by more than 1.1 m. Accordingly, positioning accuracy was improved markedly, which shows that applying DL (Deep Learning) to pedestrian navigation is worthwhile. | -
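The abstract describes feeding MEMS sensor measurements to LSTM/CNN classifiers; pipelines of this kind typically segment the raw sensor stream into fixed-length, overlapping windows before inference. The sketch below shows that preprocessing step only; the window length and overlap are illustrative assumptions, not parameters taken from the paper.

```python
from typing import List, Sequence

def sliding_windows(samples: Sequence[float], size: int, step: int) -> List[List[float]]:
    """Segment a 1-D sensor stream into fixed-length windows.

    `size` is the window length in samples; `step` controls the overlap
    (step < size yields overlapping windows). Trailing samples that do
    not fill a complete window are dropped, as is common when building
    training batches for sequence classifiers.
    """
    if size <= 0 or step <= 0:
        raise ValueError("size and step must be positive")
    return [list(samples[i:i + size])
            for i in range(0, len(samples) - size + 1, step)]

# Example: 10 accelerometer readings, windows of 4 samples with 50% overlap.
readings = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
windows = sliding_windows(readings, size=4, step=2)
# -> 4 windows: [0.1..0.4], [0.3..0.6], [0.5..0.8], [0.7..1.0]
```

Each window would then be classified independently by the trained model, with the window length trading off recognition latency against the amount of motion context the network sees.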
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Sensors, 1 May 2020, v. 20, no. 9, 2574, p. 1-30 | -
dcterms.isPartOf | Sensors | -
dcterms.issued | 2020-05-01 | -
dc.identifier.isi | WOS:000537106200138 | -
dc.identifier.scopus | 2-s2.0-85084276498 | -
dc.identifier.pmid | 32366055 | -
dc.identifier.eissn | 1424-8220 | -
dc.identifier.artn | 2574 | -
dc.description.validate | 202008 bcrc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.pubStatus | Published | en_US
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Ye_Deep_Learning-Based_Human.pdf | | 68.92 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 147 (last week: 0), as of May 19, 2024
Downloads: 26, as of May 19, 2024
Scopus citations: 18, as of May 16, 2024
Web of Science citations: 13, as of May 16, 2024


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.