Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115178
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Industrial and Systems Engineering | - |
| dc.creator | Zhou, Q | - |
| dc.creator | Zuo, J | - |
| dc.creator | Kang, W | - |
| dc.creator | Ren, W | - |
| dc.date.accessioned | 2025-09-15T02:22:43Z | - |
| dc.date.available | 2025-09-15T02:22:43Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/115178 | - |
| dc.language.iso | en | en_US |
| dc.publisher | MDPI AG | en_US |
| dc.rights | Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | en_US |
| dc.rights | The following publication Zhou, Q., Zuo, J., Kang, W., & Ren, M. (2025). High-Precision 3D Reconstruction in Complex Scenes via Implicit Surface Reconstruction Enhanced by Multi-Sensor Data Fusion. Sensors, 25(9), 2820 is available at https://doi.org/10.3390/s25092820. | en_US |
| dc.subject | 3D reconstruction | en_US |
| dc.subject | Deep learning | en_US |
| dc.subject | Implicit surface | en_US |
| dc.subject | Multi-sensor fusion | en_US |
| dc.subject | Signed distance function (SDF) | en_US |
| dc.title | High-precision 3D reconstruction in complex scenes via implicit surface reconstruction enhanced by multi-sensor data fusion | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 25 | - |
| dc.identifier.issue | 9 | - |
| dc.identifier.doi | 10.3390/s25092820 | - |
| dcterms.abstract | In this paper, we investigate implicit surface reconstruction methods based on deep learning, enhanced by multi-sensor data fusion, to improve the accuracy of 3D reconstruction in complex scenes. Existing single-sensor approaches often struggle with occlusions and incomplete observations. By fusing complementary information from multiple sensors (e.g., multiple cameras or a combination of cameras and depth sensors), our proposed framework alleviates the issue of missing or partial data and further increases reconstruction fidelity. We introduce a novel deep neural network that learns a continuous signed distance function (SDF) for scene geometry, conditioned on fused multi-sensor feature representations. The network seamlessly merges multi-modal data into a unified implicit representation, enabling precise and watertight surface reconstruction. We conduct extensive experiments on 3D datasets, demonstrating superior accuracy compared to single-sensor baselines and classical fusion methods. Quantitative and qualitative results reveal that multi-sensor fusion significantly improves reconstruction completeness and geometric detail, while our implicit approach provides smooth, high-resolution surfaces. Additionally, we analyze the influence of the number and diversity of sensors on reconstruction quality, the model’s ability to generalize to unseen data, and computational considerations. Our work highlights the potential of coupling deep implicit representations with multi-sensor fusion to achieve robust 3D reconstruction in challenging real-world conditions. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Sensors, May 2025, v. 25, no. 9, 2820 | - |
| dcterms.isPartOf | Sensors | - |
| dcterms.issued | 2025-05 | - |
| dc.identifier.scopus | 2-s2.0-105004903390 | - |
| dc.identifier.eissn | 1424-8220 | - |
| dc.identifier.artn | 2820 | - |
| dc.description.validate | 202509 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Self-funded | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
Appears in Collections: Journal/Magazine Article
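The abstract above centres on learning a continuous signed distance function (SDF) conditioned on fused multi-sensor features: the network maps a 3D query point x to its signed distance from the surface, and the reconstructed surface is the zero level set {x : f(x) = 0}. The sketch below is a minimal illustration of that idea only, not the authors' implementation; the module name `FusedSDFNet`, the concatenation-based fusion, and all layer sizes are assumptions made for this example.

```python
# Minimal sketch (not the paper's code): an MLP that predicts a signed
# distance for each 3D query point, conditioned on a fused multi-sensor
# feature vector. Fusion by concatenation and all dimensions are
# illustrative assumptions.
import torch
import torch.nn as nn

class FusedSDFNet(nn.Module):
    def __init__(self, cam_feat_dim=64, depth_feat_dim=64, hidden=256):
        super().__init__()
        fused_dim = cam_feat_dim + depth_feat_dim  # simple concat fusion
        self.mlp = nn.Sequential(
            nn.Linear(3 + fused_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar SDF value per point
        )

    def forward(self, xyz, cam_feat, depth_feat):
        # xyz: (N, 3) query points; features: (N, D) per-point descriptors
        fused = torch.cat([cam_feat, depth_feat], dim=-1)
        return self.mlp(torch.cat([xyz, fused], dim=-1))

# Convention: negative inside the surface, positive outside, zero on it.
net = FusedSDFNet()
pts = torch.rand(8, 3)
sdf = net(pts, torch.rand(8, 64), torch.rand(8, 64))  # shape (8, 1)
```

In practice the per-point camera and depth features would come from learned encoders over the sensor inputs, and a watertight mesh is typically extracted from the predicted field's zero level set (e.g., with marching cubes).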
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| sensors-25-02820.pdf | - | 3.83 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.