Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/116427
Title: Semantic-aware image matching for large-scale 3-D reconstruction of the Martian surface from rover images
Authors: Li, Z; Wu, B
Issue Date: 2025
Source: IEEE Transactions on Geoscience and Remote Sensing, 2025, vol. 63, Art. no. 4600717

Abstract: High-resolution images of the Martian surface, collected by cameras onboard rovers, offer unique insights unavailable from satellite images and are crucial for rover navigation and geological study of the Martian surface. However, the large variations in spatial resolution and viewpoint across images acquired from different rover stations, exacerbated by the textureless nature of the Martian surface, pose significant challenges for effective 3-D surface reconstruction, a fundamental task in planetary topographic mapping. This paper therefore proposes a deep-learning-based approach that enables robust image matching built on multi-level semantic cues for large-scale 3-D reconstruction from rover images. First, a Siamese transformer-based neural network performs semantic segmentation of the rover images, extracting multi-level semantic cues. Second, feature matching is performed on rover images collected from different stations, with the semantic cues integrated to enhance feature descriptor construction, contextual aggregation, and outlier removal, yielding robust cross-station matches. These matches enable the bundle adjustment to link cross-station rover images accurately. Third, for dense matching of rover images, frequency-domain matching embedded with semantic cues is proposed to improve matching reliability and preserve surface discontinuities. Lastly, the disparity maps generated from the matching results are used to derive 3-D point clouds, which are then meshed to produce 3-D surface models. Experiments were conducted on two image datasets of typical Martian scenes collected by the Zhurong rover to evaluate the performance of the proposed method. The results indicate that image residuals of around 1.5 pixels on average are achieved for the bundle adjustment of cross-station images using the matched feature points, and the final 3-D models exhibit an accuracy better than 0.5 m. Compared with cutting-edge commercial software, the 3-D models generated by our method exhibit superior quality in terms of both accuracy and coverage, highlighting the effectiveness of the semantic-aware image matching algorithm.

Keywords: 3-D reconstruction; Image matching; Mars; Rover; Semantic

Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE Transactions on Geoscience and Remote Sensing
ISSN: 0196-2892
EISSN: 1558-0644
DOI: 10.1109/TGRS.2025.3594413

Rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The following publication Z. Li and B. Wu, "Semantic-Aware Image Matching for Large-Scale 3-D Reconstruction of the Martian Surface From Rover Images," in IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-17, 2025, Art. no. 4600717 is available at https://doi.org/10.1109/TGRS.2025.3594413.
Appears in Collections: Journal/Magazine Article
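As a point of reference for the final step described in the abstract (deriving 3-D point clouds from disparity maps), the snippet below is a minimal sketch of the standard rectified-stereo back-projection Z = f·B/d. It is illustrative only: the function name and all camera parameters are hypothetical placeholders, not values from the paper, and the authors' actual pipeline may differ.

```python
import numpy as np

def disparity_to_point_cloud(disparity, focal_px, baseline_m, cx, cy, min_disp=0.1):
    """Back-project a rectified-stereo disparity map (in pixels) into an
    N x 3 point cloud (in metres) using the textbook relation Z = f * B / d.
    All parameter names and values here are illustrative placeholders,
    not the Zhurong rover's actual camera calibration."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel column (u) and row (v) grids
    valid = disparity > min_disp                     # discard unmatched / near-zero disparities
    d = disparity[valid]
    z = focal_px * baseline_m / d                    # depth along the optical axis
    x = (u[valid] - cx) * z / focal_px               # lateral offsets from the principal point
    y = (v[valid] - cy) * z / focal_px
    return np.column_stack((x, y, z))

# Hypothetical usage (parameter values invented for illustration only):
# points = disparity_to_point_cloud(disp, focal_px=1200.0, baseline_m=0.27,
#                                   cx=639.5, cy=479.5)
```

In such a rectified-stereo setup, depth uncertainty grows roughly quadratically with range, so single-station point clouds degrade quickly with distance from the rover; meshing and cross-station image linking are what extend coverage to larger scenes.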
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Li_Semantic-aware_Image_Matching.pdf | Pre-Published version | 2.82 MB | Adobe PDF |