Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/118681
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Aeronautical and Aviation Engineering | - |
| dc.creator | Zhang, Z | - |
| dc.creator | Fang, L | - |
| dc.creator | Yan, Z | - |
| dc.creator | Chen, T | - |
| dc.creator | Wang, B | - |
| dc.creator | Wen, CY | - |
| dc.date.accessioned | 2026-05-11T02:49:49Z | - |
| dc.date.available | 2026-05-11T02:49:49Z | - |
| dc.identifier.issn | 1083-4435 | - |
| dc.identifier.uri | http://hdl.handle.net/10397/118681 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
| dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
| dc.rights | The following publication Z. Zhang, L. Fang, Z. Yan, T. Chen, B. Wang and C. -y. Wen, 'Spatial–Temporal Diffusion Model for Underwater Scene Reconstruction With Application to AUV Navigation,' in IEEE/ASME Transactions on Mechatronics, vol. 30, no. 6, pp. 4142-4153, Dec. 2025 is available at https://doi.org/10.1109/TMECH.2025.3600436. | en_US |
| dc.subject | Autonomous underwater vehicle (AUV) | en_US |
| dc.subject | Diffusion model | en_US |
| dc.subject | Scene reconstruction | en_US |
| dc.subject | Subsea terrain perception | en_US |
| dc.title | Spatial–temporal diffusion model for underwater scene reconstruction with application to AUV navigation | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 4142 | - |
| dc.identifier.epage | 4153 | - |
| dc.identifier.volume | 30 | - |
| dc.identifier.issue | 6 | - |
| dc.identifier.doi | 10.1109/TMECH.2025.3600436 | - |
| dcterms.abstract | Autonomous underwater vehicles (AUVs) have been extensively utilized in subsea exploration and surveying. However, accurately perceiving the surrounding environment remains a significant challenge for AUVs due to the complexities of subsea terrains. To address this issue, we propose a novel generative scene reconstruction method to enhance AUVs’ perception capabilities. Our method is primarily designed for reconstructing dense subsea terrain from 3-D multibeam echosounder data. We leverage local diffusion and denoising strategies to reconstruct complete subsea terrain directly at the scene scale, without requiring normalization of point clouds. Considering the motion dynamics of AUVs and the overlap between consecutive sonar frames, we introduce a spatial–temporal attention mechanism that aggregates features from consecutive point clouds and guides the reconstruction process as a conditioning signal. The reconstructed point cloud is then utilized for probabilistic terrain modeling through Bayesian updating, enabling path planning. Experiments conducted on simulation and real-world datasets demonstrate that our method generates more accurate and complete terrain maps. Furthermore, path planning based on our reconstruction method achieves the shortest and smoothest motion path, further validating that our reconstruction method provides more complete perception information for AUV navigation. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | IEEE/ASME transactions on mechatronics, Dec. 2025, v. 30, no. 6, p. 4142-4153 | - |
| dcterms.isPartOf | IEEE/ASME transactions on mechatronics | - |
| dcterms.issued | 2025-12 | - |
| dc.identifier.scopus | 2-s2.0-105017084815 | - |
| dc.identifier.eissn | 1941-014X | - |
| dc.description.validate | 202605 bcjz | - |
| dc.description.oa | Accepted Manuscript | en_US |
| dc.identifier.SubFormID | G001605/2026-03 | en_US |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | This work was supported in part by the Young Scientists Fund of the National Natural Science Foundation of China under Grant 42301520, in part by the Major Research Project on Scientific Instrument Development of National Natural Science Foundation of China under Grant 42327901, in part by the Research Grants Council of Hong Kong under Grant 25206524, in part by the Innovation and Technology Fund under Grant PRP/068/23FX, in part by the Platform Project of Unmanned Autonomous Systems Research Centre under Grant P0049516, in part by the Guangdong-Hong Kong Joint Laboratory for Marine Infrastructure under Grant 2025B1212150001, and in part by the Seed Projects of Smart Cities Research Institute under Grant P0051028 and Grant P0054511. | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
| dc.relation.rdata | https://github.com/sam-zyzhang/SonarPC-Diff | - |
Appears in Collections: Journal/Magazine Article
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Zhang_Spatial_Temporal_Diffusion.pdf | Pre-Published version | 5.59 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.