Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/115601
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Land Surveying and Geo-Informatics | - |
| dc.contributor | Research Centre for Deep Space Explorations | - |
| dc.creator | Ma, Y | - |
| dc.creator | Li, Z | - |
| dc.creator | Wu, B | - |
| dc.creator | Duan, R | - |
| dc.date.accessioned | 2025-10-08T01:16:56Z | - |
| dc.date.available | 2025-10-08T01:16:56Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/115601 | - |
| dc.language.iso | en | en_US |
| dc.publisher | American Geophysical Union | en_US |
| dc.rights | © 2025 The Author(s). | en_US |
| dc.rights | This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. | en_US |
| dc.rights | The following publication Ma, Y., Li, Z., Wu, B., & Duan, R. (2025). DepthFormer: Depth-enhanced transformer network for semantic segmentation of the Martian surface from rover images. Earth and Space Science, 12, e2024EA003812 is available at https://doi.org/10.1029/2024EA003812. | en_US |
| dc.title | DepthFormer : depth-enhanced transformer network for semantic segmentation of the Martian surface from rover images | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 12 | - |
| dc.identifier.issue | 6 | - |
| dc.identifier.doi | 10.1029/2024EA003812 | - |
| dcterms.abstract | The Martian surface, with its diverse landforms that reflect the planet's evolution, has attracted increasing scientific interest. While extensive data are needed for interpretation, identifying landform types is crucial: this semantic information reveals underlying features and patterns, offering valuable scientific insights. Advanced deep-learning techniques, particularly Transformers, can enhance semantic segmentation and image interpretation, deepening our understanding of Martian surface features. However, currently available, publicly released neural networks are trained in the context of Earth, which prevents their direct use on Martian surface imagery. In addition, the Martian surface features poorly textured and homogeneous scenes, making it difficult to segment the images into meaningful semantic classes. In this paper, an innovative depth-enhanced Transformer network, DepthFormer, is developed for the semantic segmentation of Martian surface images. The stereo images acquired by the Zhurong rover along its traverse are used for training and testing the DepthFormer network. Unlike regular deep-learning networks that process only the three image bands (red, green, and blue), DepthFormer incorporates the depth information available from the stereo images as a fourth band, enabling more accurate segmentation of various surface features. Experimental evaluations and comparisons using synthesized and actual Mars image data sets show that DepthFormer achieves an average accuracy of 98%, superior to that of conventional segmentation methods. The proposed method is the first deep-learning model to incorporate depth information for accurate semantic segmentation of the Martian surface, which is of significance for future Mars exploration missions and scientific studies. (A minimal illustrative sketch of the four-band input idea appears after the metadata table below.) | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Earth and space science, June 2025, v. 12, no. 6, e2024EA003812 | - |
| dcterms.isPartOf | Earth and space science | - |
| dcterms.issued | 2025-06 | - |
| dc.identifier.scopus | 2-s2.0-105008205731 | - |
| dc.identifier.eissn | 2333-5084 | - |
| dc.identifier.artn | e2024EA003812 | - |
| dc.description.validate | 202510 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_TA | en_US |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | This work was supported by grants from the Research Grants Council of Hong Kong (Project PolyU 15210520, Project PolyU 15215822, Project PolyU 15236524, RIF Project R5043-19, CRF Project C7004-21GF). The authors would like to thank all those who worked on the archive of the data sets to make them publicly available. | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.TA | Wiley (2025) | en_US |
| dc.description.oaCategory | TA | en_US |
| Appears in Collections: | Journal/Magazine Article | |
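The dcterms.abstract above notes that DepthFormer stacks stereo-derived depth as a fourth input band alongside the red, green, and blue channels. The published implementation is not part of this record, so the following is only a minimal, hypothetical PyTorch sketch of that four-band (RGB-D) input idea; the class count, patch size, embedding width, and the `RGBDSegmenter` name are illustrative placeholders, not the authors' DepthFormer architecture.

```python
# Minimal sketch only: this is NOT the authors' DepthFormer implementation.
# It illustrates the "depth as a fourth band" idea from the abstract, i.e.
# feeding RGB-D rover images into a transformer-style encoder for per-pixel
# classification. All sizes and names below are illustrative placeholders.
import torch
import torch.nn as nn


class RGBDSegmenter(nn.Module):
    def __init__(self, num_classes=6, patch=8, dim=128, num_layers=4, heads=4):
        super().__init__()
        # Patch embedding over 4 input channels: R, G, B + stereo-derived depth.
        self.embed = nn.Conv2d(4, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # 1x1 convolution producing per-patch class logits.
        self.head = nn.Conv2d(dim, num_classes, kernel_size=1)
        self.patch = patch

    def forward(self, rgbd):                        # rgbd: (B, 4, H, W)
        x = self.embed(rgbd)                        # (B, dim, H/p, W/p)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)       # (B, h*w, dim)
        tokens = self.encoder(tokens)               # self-attention over patches
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        logits = self.head(x)                       # (B, num_classes, H/p, W/p)
        # Upsample per-patch logits back to the input resolution.
        return nn.functional.interpolate(logits, scale_factor=self.patch,
                                         mode="bilinear", align_corners=False)


if __name__ == "__main__":
    rgb = torch.rand(1, 3, 128, 128)       # rover image (placeholder data)
    depth = torch.rand(1, 1, 128, 128)      # depth map from stereo matching
    rgbd = torch.cat([rgb, depth], dim=1)   # stack depth as the fourth band
    print(RGBDSegmenter()(rgbd).shape)      # torch.Size([1, 6, 128, 128])
```

Concatenating the depth map with the RGB image before patch embedding is one simple way to realize a four-band input; the published network may fuse depth differently.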
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Ma_DepthFormer_Depth_Enhanced.pdf |  | 3.92 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.