Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/111355
DC Field | Value | Language
dc.contributor | Department of Building and Real Estate | -
dc.creator | Gong, Y | en_US
dc.creator | Seo, J | en_US
dc.creator | Kang, KS | en_US
dc.creator | Shi, M | en_US
dc.date.accessioned | 2025-02-20T04:09:53Z | -
dc.date.available | 2025-02-20T04:09:53Z | -
dc.identifier.issn | 0926-5805 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/111355 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier | en_US
dc.rights | © 2025 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/). | en_US
dc.rights | The following publication Gong, Y., Seo, J., Kang, K. S., & Shi, M. (2025). Automated recognition of construction worker activities using multimodal decision-level fusion. Automation in Construction, 172, 106032 is available at https://doi.org/10.1016/j.autcon.2025.106032. | en_US
dc.subject | Accelerometer | en_US
dc.subject | Activity recognition | en_US
dc.subject | Automation | en_US
dc.subject | Computer vision | en_US
dc.subject | Decision-level fusion | en_US
dc.subject | Dempster-Shafer theory | en_US
dc.subject | Multimodal fusion | en_US
dc.title | Automated recognition of construction worker activities using multimodal decision-level fusion | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 172 | en_US
dc.identifier.doi | 10.1016/j.autcon.2025.106032 | en_US
dcterms.abstract | This paper proposes an automated approach for construction worker activity recognition by integrating video and acceleration data, employing a decision-level fusion method that combines classification results from each data modality using the Dempster-Shafer Theory (DS). To address uneven sensor reliability, the Category-wise Weighted Dempster-Shafer (CWDS) approach is further proposed, estimating category-wise weights during training and embedding them into the fusion process. An experimental study with ten participants performing eight construction activities showed that models trained using DS and CWDS outperformed single-modal approaches, achieving accuracies of 91.8% and 95.6%, about 7% and 10% higher than those of vision-based and acceleration-based models, respectively. Category-wise improvements were also observed, indicating that the proposed multimodal fusion approaches result in a more robust and balanced model. These results highlight the effectiveness of integrating vision and accelerometer data through decision-level fusion to reduce uncertainty in multimodal data and leverage the strengths of single sensor-based approaches. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Automation in construction, Apr. 2025, v. 172, 106032 | en_US
dcterms.isPartOf | Automation in construction | en_US
dcterms.issued | 2025-04 | -
dc.identifier.scopus | 2-s2.0-85216923474 | -
dc.identifier.eissn | 1872-7891 | en_US
dc.identifier.artn | 106032 | en_US
dc.description.validate | 202502 bcwh | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_TA | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.TA | Elsevier (2025) | en_US
dc.description.oaCategory | TA | en_US
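The abstract above describes decision-level fusion of vision-based and acceleration-based classification results using the Dempster-Shafer theory, plus a Category-wise Weighted variant (CWDS) that embeds category-wise reliability weights estimated during training. The paper's own formulation is not reproduced in this record, so the Python sketch below only illustrates the general mechanism under simple assumptions: each classifier's softmax output is treated as a mass function over singleton activity labels, discounted by per-category reliability weights, and combined with Dempster's rule. The activity labels, probabilities, and weight values are all hypothetical and are not taken from the paper.

# Illustrative sketch (not the paper's implementation): Dempster-Shafer
# decision-level fusion of two single-modal classifiers' outputs, with
# hypothetical per-category reliability weights standing in for the
# category-wise weights that CWDS estimates during training.

ACTIVITIES = ["hammering", "sawing", "rebar tying"]  # hypothetical label set

def discount(probs, weights):
    # Scale each category's mass by a reliability weight in [0, 1] and move
    # the removed mass to the full frame THETA (explicit ignorance).
    m = {label: weights[label] * p for label, p in probs.items()}
    m["THETA"] = 1.0 - sum(m.values())
    return m

def ds_combine(m1, m2, labels):
    # Dempster's rule of combination for mass functions whose focal elements
    # are the singleton activity labels plus the full frame THETA.
    fused = {}
    conflict = 0.0
    for a in labels:
        fused[a] = (m1[a] * m2[a]             # both sources support a
                    + m1[a] * m2["THETA"]     # second source is ignorant
                    + m1["THETA"] * m2[a])    # first source is ignorant
        conflict += sum(m1[a] * m2[b] for b in labels if b != a)
    fused["THETA"] = m1["THETA"] * m2["THETA"]
    norm = 1.0 - conflict                     # renormalise away conflicting mass
    return {k: v / norm for k, v in fused.items()}

# Hypothetical softmax outputs for one time window from the two modalities.
vision_probs = {"hammering": 0.60, "sawing": 0.30, "rebar tying": 0.10}
accel_probs = {"hammering": 0.70, "sawing": 0.10, "rebar tying": 0.20}

# Hypothetical category-wise reliability weights (CWDS estimates such weights
# from training data; the values below are made up for illustration).
vision_weights = {"hammering": 0.85, "sawing": 0.95, "rebar tying": 0.80}
accel_weights = {"hammering": 0.90, "sawing": 0.75, "rebar tying": 0.85}

fused = ds_combine(discount(vision_probs, vision_weights),
                   discount(accel_probs, accel_weights),
                   ACTIVITIES)
print(max(ACTIVITIES, key=fused.get), fused)

In this sketch the window is assigned to the activity with the largest fused mass; the discounting step is only a rough stand-in for how category-wise weights could be embedded into the fusion process described in the abstract.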
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
1-s2.0-S092658052500072X-main.pdf | - | 4.58 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.