Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/91958
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Building and Real Estate | - |
dc.creator | Su, MC | - |
dc.creator | Chen, JH | - |
dc.creator | Azzizi, VT | - |
dc.creator | Chang, HL | - |
dc.creator | Wei, HH | - |
dc.date.accessioned | 2022-02-07T07:04:34Z | - |
dc.date.available | 2022-02-07T07:04:34Z | - |
dc.identifier.issn | 0957-4174 | - |
dc.identifier.uri | http://hdl.handle.net/10397/91958 | - |
dc.language.iso | en | en_US |
dc.publisher | Pergamon Press | en_US |
dc.rights | © 2021 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). | en_US |
dc.rights | The following publication Su, M. C., Chen, J. H., Azzizi, V. T., Chang, H. L., & Wei, H. H. (2021). Smart training: Mask R-CNN oriented approach. Expert Systems with Applications, 185, 115595 is available at https://doi.org/10.1016/j.eswa.2021.115595 | en_US |
dc.subject | Augmented reality | en_US |
dc.subject | Finger-pointing analysis | en_US |
dc.subject | Hand gesture recognition | en_US |
dc.subject | Mask Regions with Convolutional Neural Network (R-CNN) | en_US |
dc.subject | Smart training | en_US |
dc.title | Smart training: Mask R-CNN oriented approach | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.volume | 185 | - |
dc.identifier.doi | 10.1016/j.eswa.2021.115595 | - |
dcterms.abstract | This paper presents an augmented reality-assisted system running on smart glasses for training activities. A literature review comparing related technologies shows that a Mask Regions with Convolutional Neural Network (R-CNN) oriented approach best fits the study's needs. The proposed method comprises (1) pointing-gesture capture, (2) finger-pointing analysis, and (3) virtual-tool positioning and rotation-angle estimation. Results show that object-detection recognition accuracy is 95.5%, the Kappa value for gesture detection is 0.93, and the average time to detect a pointing gesture is 0.26 seconds. Furthermore, even under different lighting conditions, indoor and outdoor, pointing-analysis accuracy reaches 79%, and the error between the analyzed angle and the actual angle is only 1.32 degrees. These results demonstrate that the system presents augmented-reality effects well, making it applicable to real-world use. | - |
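The finger-pointing analysis step mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names and the idea of taking fingertip and finger-base pixel coordinates (e.g. from keypoints on a Mask R-CNN hand mask) are illustrative assumptions. Given two such points, the pointing direction and its angular error reduce to basic vector math:

```python
import math

def pointing_angle(base, tip):
    """Angle (degrees) of the vector from finger base to fingertip,
    measured counter-clockwise from the positive x-axis.

    `base` and `tip` are (x, y) pixel coordinates; in practice these
    might be derived from a Mask R-CNN hand mask (an illustrative
    assumption -- the paper's actual pipeline may differ)."""
    dx = tip[0] - base[0]
    dy = tip[1] - base[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def angle_error(estimated, actual):
    """Smallest absolute difference between two angles, in degrees."""
    diff = abs(estimated - actual) % 360.0
    return min(diff, 360.0 - diff)

# Example: a finger pointing diagonally (up-right in y-up coordinates).
angle = pointing_angle((100, 100), (150, 150))
print(round(angle, 2))                      # 45.0
# The paper reports a mean error of 1.32 degrees between the analyzed
# and actual angles; angle_error expresses that comparison:
print(round(angle_error(46.32, 45.0), 2))   # 1.32
```

Note that image coordinates usually have the y-axis pointing down, so a real system would flip the sign of `dy` (or reinterpret the angle) accordingly.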
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | Expert systems with applications, 15 Dec. 2021, v. 185, 115595 | - |
dcterms.isPartOf | Expert systems with applications | - |
dcterms.issued | 2021-12 | - |
dc.identifier.scopus | 2-s2.0-85111293710 | - |
dc.identifier.eissn | 1873-6793 | - |
dc.identifier.artn | 115595 | - |
dc.description.validate | 202202 bcvc | - |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | This paper was partly supported by the Ministry of Science and Technology (MOST), Taiwan, for promoting academic excellent of universities under grant numbers MOST 109-2221-E-008-059-MY3 and MOST 110-2634-F-008-005. | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | CC | en_US |
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
1-s2.0-S0957417421009957-main.pdf | | 6.44 MB | Adobe PDF | View/Open |
Page views: 95 (last week: 0; last month: 0) as of Apr 13, 2025
Downloads: 52 as of Apr 13, 2025
SCOPUS™ citations: 8 as of May 8, 2025
Web of Science™ citations: 4 as of May 8, 2025
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.