Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116427
DC Field: Value
dc.contributor: Department of Land Surveying and Geo-Informatics
dc.creator: Li, Z
dc.creator: Wu, B
dc.date.accessioned: 2025-12-29T03:32:45Z
dc.date.available: 2025-12-29T03:32:45Z
dc.identifier.issn: 0196-2892
dc.identifier.uri: http://hdl.handle.net/10397/116427
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers
dc.rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.rights: The following publication Z. Li and B. Wu, "Semantic-Aware Image Matching for Large-Scale 3-D Reconstruction of the Martian Surface From Rover Images," in IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-17, 2025, Art no. 4600717 is available at https://doi.org/10.1109/TGRS.2025.3594413.
dc.subject: 3-D Reconstruction
dc.subject: Image matching
dc.subject: Mars
dc.subject: Rover
dc.subject: Semantic
dc.title: Semantic-aware image matching for large-scale 3-D reconstruction of the Martian surface from rover images
dc.type: Journal/Magazine Article
dc.description.otherinformation: Title on author's file: Semantic-Aware Image Matching for Large-Scale 3D Reconstruction of the Martian Surface from Rover Images
dc.identifier.volume: 63
dc.identifier.doi: 10.1109/TGRS.2025.3594413
dcterms.abstract: High-resolution images of the Martian surface, collected by cameras onboard rovers, offer unique insights unavailable from satellite images and are crucial for rover navigation and geological study on the Martian surface. However, the large variations in spatial resolution and viewpoint across images acquired from different rover stations, exacerbated by the textureless nature of the Martian surface, pose significant challenges for effective 3D surface reconstruction, a fundamental task in planetary topographic mapping. This paper therefore proposes a deep-learning-based approach that enables robust image matching guided by multi-level semantic cues for large-scale 3D reconstruction from rover images. First, a Siamese transformer-based neural network performs semantic segmentation of the rover images, extracting multi-level semantic cues. Second, feature matching is performed on rover images collected from different stations, with these semantic cues integrated to enhance feature descriptor construction, contextual aggregation, and outlier removal, yielding robust cross-station matches. These matches allow the bundle adjustment to accurately link cross-station rover images. Third, for the dense matching of rover images, a frequency-domain matching method embedded with semantic cues is proposed to improve matching reliability and preserve surface discontinuities. Lastly, the disparity maps generated from the matching results are used to derive 3D point clouds, which are then meshed into 3D surface models. Experiments are conducted on two image datasets of typical Martian scenes collected by the Zhurong rover to evaluate the performance of the proposed method. The results indicate that average image residuals of around 1.5 pixels are achieved in the bundle adjustment of cross-station images using the matched feature points, and the final 3D models exhibit an accuracy better than 0.5 m. Compared with cutting-edge commercial software, the 3D models generated by our method exhibit superior quality in terms of both accuracy and coverage, highlighting the effectiveness of the semantic-aware image matching algorithm.
dcterms.accessRights: open access
dcterms.bibliographicCitation: IEEE transactions on geoscience and remote sensing, 2025, v. 63, 4600717
dcterms.isPartOf: IEEE transactions on geoscience and remote sensing
dcterms.issued: 2025
dc.identifier.scopus: 2-s2.0-105012303947
dc.identifier.eissn: 1558-0644
dc.identifier.artn: 4600717
dc.description.validate: 202512 bcjz
dc.description.oa: Accepted Manuscript
dc.identifier.SubFormID: G000486/2025-08
dc.description.fundingSource: RGC
dc.description.fundingText: 10.13039/501100004787 - Research Grants Council of Hong Kong (Grant Numbers: PolyU 15215822, PolyU 15236524, and CRF C7004-21GF)
dc.description.pubStatus: Published
dc.description.oaCategory: Green (AAM)
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: Li_Semantic-aware_Image_Matching.pdf
Description: Pre-Published version
Size: 2.82 MB
Format: Adobe PDF

Open Access Information
Status: open access
File Version: Final Accepted Manuscript

SCOPUS Citations: 1 (as of Apr 3, 2026)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.