Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115178
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Industrial and Systems Engineering | -
dc.creator | Zhou, Q | -
dc.creator | Zuo, J | -
dc.creator | Kang, W | -
dc.creator | Ren, W | -
dc.date.accessioned | 2025-09-15T02:22:43Z | -
dc.date.available | 2025-09-15T02:22:43Z | -
dc.identifier.uri | http://hdl.handle.net/10397/115178 | -
dc.language.iso | en | en_US
dc.publisher | MDPI AG | en_US
dc.rights | Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Zhou, Q., Zuo, J., Kang, W., & Ren, M. (2025). High-Precision 3D Reconstruction in Complex Scenes via Implicit Surface Reconstruction Enhanced by Multi-Sensor Data Fusion. Sensors, 25(9), 2820 is available at https://doi.org/10.3390/s25092820. | en_US
dc.subject | 3D reconstruction | en_US
dc.subject | Deep learning | en_US
dc.subject | Implicit surface | en_US
dc.subject | Multi-sensor fusion | en_US
dc.subject | Signed distance function (SDF) | en_US
dc.title | High-precision 3D reconstruction in complex scenes via implicit surface reconstruction enhanced by multi-sensor data fusion | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 25 | -
dc.identifier.issue | 9 | -
dc.identifier.doi | 10.3390/s25092820 | -
dcterms.abstract | In this paper, we investigate implicit surface reconstruction methods based on deep learning, enhanced by multi-sensor data fusion, to improve the accuracy of 3D reconstruction in complex scenes. Existing single-sensor approaches often struggle with occlusions and incomplete observations. By fusing complementary information from multiple sensors (e.g., multiple cameras or a combination of cameras and depth sensors), our proposed framework alleviates the issue of missing or partial data and further increases reconstruction fidelity. We introduce a novel deep neural network that learns a continuous signed distance function (SDF) for scene geometry, conditioned on fused multi-sensor feature representations. The network seamlessly merges multi-modal data into a unified implicit representation, enabling precise and watertight surface reconstruction. We conduct extensive experiments on 3D datasets, demonstrating superior accuracy compared to single-sensor baselines and classical fusion methods. Quantitative and qualitative results reveal that multi-sensor fusion significantly improves reconstruction completeness and geometric detail, while our implicit approach provides smooth, high-resolution surfaces. Additionally, we analyze the influence of the number and diversity of sensors on reconstruction quality, the model’s ability to generalize to unseen data, and computational considerations. Our work highlights the potential of coupling deep implicit representations with multi-sensor fusion to achieve robust 3D reconstruction in challenging real-world conditions. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Sensors, May 2025, v. 25, no. 9, 2820 | -
dcterms.isPartOf | Sensors | -
dcterms.issued | 2025-05 | -
dc.identifier.scopus | 2-s2.0-105004903390 | -
dc.identifier.eissn | 1424-8220 | -
dc.identifier.artn | 2820 | -
dc.description.validate | 202509 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
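The abstract above describes a deep network that learns a continuous signed distance function (SDF) conditioned on fused multi-sensor feature representations. The record does not include the paper's architecture, so the following is only a minimal PyTorch sketch of the general idea: hypothetical camera and depth feature vectors are concatenated into a single conditioning vector, and an MLP maps a 3D query point plus that vector to a signed distance. All names and sizes (FusedSDFNet, feat_dim, the concatenation-based fusion, the two-sensor setup) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (NOT the authors' code): a conditional SDF MLP that takes a
# 3D query point together with a fused multi-sensor feature vector and
# predicts a signed distance. Fusion by concatenation is an assumption.
import torch
import torch.nn as nn

class FusedSDFNet(nn.Module):
    def __init__(self, feat_dim: int = 256, hidden: int = 256):
        super().__init__()
        # Fuse per-sensor features (e.g., image and depth encodings) into one
        # conditioning vector via concatenation + linear projection.
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)
        # MLP mapping (xyz, fused feature) -> signed distance.
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz, cam_feat, depth_feat):
        # xyz: (B, N, 3) query points; cam_feat/depth_feat: (B, feat_dim).
        cond = self.fuse(torch.cat([cam_feat, depth_feat], dim=-1))   # (B, feat_dim)
        cond = cond.unsqueeze(1).expand(-1, xyz.shape[1], -1)         # (B, N, feat_dim)
        return self.mlp(torch.cat([xyz, cond], dim=-1)).squeeze(-1)   # (B, N) SDF values

# Toy usage: random tensors stand in for encoder outputs from two sensors.
if __name__ == "__main__":
    net = FusedSDFNet()
    pts = torch.rand(4, 1024, 3) * 2 - 1   # query points in [-1, 1]^3
    sdf = net(pts, torch.randn(4, 256), torch.randn(4, 256))
    print(sdf.shape)  # torch.Size([4, 1024])
```

In such a setup the surface is recovered as the zero level set of the learned SDF (e.g., by running marching cubes over predicted distances on a grid), which is what yields the watertight, high-resolution reconstructions the abstract refers to.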
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
sensors-25-02820.pdf | - | 3.83 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Access: View full-text via PolyU eLinks
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.