Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114880
DC Field | Value | Language
dc.contributor | Department of Building and Real Estate | -
dc.creator | Zhang, M | -
dc.creator | Guo, W | -
dc.creator | Zhang, J | -
dc.creator | Han, S | -
dc.creator | Li, H | -
dc.creator | Yue, H | -
dc.date.accessioned | 2025-09-01T01:53:15Z | -
dc.date.available | 2025-09-01T01:53:15Z | -
dc.identifier.issn | 1093-9687 | -
dc.identifier.uri | http://hdl.handle.net/10397/114880 | -
dc.language.iso | en | en_US
dc.publisher | Wiley-Blackwell Publishing, Inc. | en_US
dc.rights | This is an open access article under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited. | en_US
dc.rights | © 2025 The Author(s). Computer-Aided Civil and Infrastructure Engineering published by Wiley Periodicals LLC on behalf of Editor. | en_US
dc.rights | The following publication Zhang, M., Guo, W., Zhang, J., Han, S., Li, H., & Yue, H. (2025). Excavator 3D pose estimation from point cloud with self-supervised deep learning. Computer-Aided Civil and Infrastructure Engineering, 1–19 is available at https://doi.org/10.1111/mice.13500. | en_US
dc.title | Excavator 3D pose estimation from point cloud with self-supervised deep learning | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.doi | 10.1111/mice.13500 | -
dcterms.abstract | Pose estimation of excavators is a fundamental yet challenging task with significant implications for intelligent construction. Traditional methods based on cameras or sensors are often limited in their ability to perceive spatial structures. To address this, 3D light detection and ranging (LiDAR) has emerged as a promising paradigm for excavator pose estimation. However, LiDAR-based methods face significant challenges: (1) accurate 3D pose annotations are labor-intensive and costly, and (2) excavators exhibit complex kinematics and geometric structures, further complicating pose estimation. In this study, a novel framework is proposed for full-body excavator pose estimation directly from 3D point clouds, without relying on manual 3D annotations. The excavator pose is parameterized using pose parameters of geometric primitives under kinematic constraints. A unified deep network is designed to predict these pose parameters from point clouds. The network is first pre-trained on synthetic data to initialize its parameters and then fine-tuned on real-world data. To enable label-free training, self-supervised loss functions are designed that exploit the geometric and kinematic consistency between point clouds and excavators. Experimental results on real-world construction sites demonstrate the effectiveness and robustness of the proposed method, which achieves an average pose estimation accuracy of 0.26 m. The method also performs well across various excavator operational scenarios, highlighting its potential for real-world applications. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Computer-aided civil and infrastructure engineering, First published: 03 May 2025, Early View, https://doi.org/10.1111/mice.13500 | -
dcterms.isPartOf | Computer-aided civil and infrastructure engineering | -
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-105004319243 | -
dc.identifier.eissn | 1467-8667 | -
dc.description.validate | 202509 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_TA | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This research was funded by the National Natural Science Foundation of China [No. 42302322], Internal Research Fund of PolyU (UGC) [No. P0047899], and Collaborative Research Fund (CRF) from the Research Grants Council (Hong Kong) [No. C6044-23GF]. | en_US
dc.description.pubStatus | Early release | en_US
dc.description.TA | Wiley (2025) | en_US
dc.description.oaCategory | TA | en_US
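
The abstract above outlines self-supervised losses that exploit geometric and kinematic consistency between observed point clouds and a kinematically parameterized excavator model. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: joint angles drive a simplified three-link kinematic chain, and a Chamfer-distance loss aligns the posed model with the scan. All names (chamfer_distance, forward_kinematics, link_points, link_offsets) and the planar three-joint geometry are illustrative assumptions.

```python
# Hypothetical sketch of a geometric-consistency loss for pose estimation.
# Assumed: a three-link chain (boom -> arm -> bucket) with revolute joints.
import torch

def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3)."""
    d = torch.cdist(p, q)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def rot_y(angle: torch.Tensor) -> torch.Tensor:
    """Rotation about the y-axis, standing in for a revolute excavator joint."""
    c, s = torch.cos(angle), torch.sin(angle)
    zero, one = torch.zeros_like(c), torch.ones_like(c)
    return torch.stack([c, zero, s, zero, one, zero, -s, zero, c]).reshape(3, 3)

def forward_kinematics(joint_angles, link_points, link_offsets):
    """Pose each link's template points by accumulating joint rotations and
    translating along the chain (simplified planar kinematic model)."""
    posed, transform, origin = [], torch.eye(3), torch.zeros(3)
    for angle, pts, offset in zip(joint_angles, link_points, link_offsets):
        transform = transform @ rot_y(angle)          # accumulate rotation down the chain
        posed.append(pts @ transform.T + origin)      # place this link's points
        origin = origin + offset @ transform.T        # advance to the next joint
    return torch.cat(posed, dim=0)

# Toy usage: recover joint angles by minimizing the Chamfer loss against a "scan".
torch.manual_seed(0)
link_points = [torch.rand(200, 3) * torch.tensor([2.0, 0.3, 0.3]) for _ in range(3)]
link_offsets = [torch.tensor([2.0, 0.0, 0.0])] * 3
target_angles = torch.tensor([0.3, -0.5, 0.8])
with torch.no_grad():
    observed = forward_kinematics(target_angles, link_points, link_offsets)

joint_angles = torch.zeros(3, requires_grad=True)
optimizer = torch.optim.Adam([joint_angles], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    loss = chamfer_distance(observed, forward_kinematics(joint_angles, link_points, link_offsets))
    loss.backward()
    optimizer.step()
print(joint_angles.detach(), "vs target", target_angles)
```

In the paper's setting, the pose parameters would be predicted by the deep network rather than optimized per scan, and the geometric term would be one of several consistency losses; the sketch only illustrates that such a loss is differentiable end-to-end, enabling label-free training.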
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Size | Format
Zhang_Excavator_3D_Pose.pdf | 4.48 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.