Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/117540
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Xiao, Y | -
dc.creator | Yang, Y | -
dc.creator | Du, S | -
dc.creator | Liu, M | -
dc.creator | Chen, X | -
dc.creator | Sun, M | -
dc.date.accessioned | 2026-02-26T03:46:44Z | -
dc.date.available | 2026-02-26T03:46:44Z | -
dc.identifier.issn | 1939-1404 | -
dc.identifier.uri | http://hdl.handle.net/10397/117540 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2025 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/ | en_US
dc.rights | The following publication Y. Xiao, Y. Yang, S. Du, M. Liu, X. Chen and M. Sun, "3-D Point-Guided Aerial–Ground Image Matching for Robust Multiview Reconstruction," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 18, pp. 25939-25951, 2025 is available at https://doi.org/10.1109/JSTARS.2025.3616417. | en_US
dc.subject | 3-D point-guided matching (PGM) | en_US
dc.subject | 3-D reconstruction | en_US
dc.subject | Aerial–ground image matching and alignment | en_US
dc.subject | Transformer-based regression | en_US
dc.title | 3-D point-guided aerial-ground image matching for robust multiview reconstruction | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 25939 | -
dc.identifier.epage | 25951 | -
dc.identifier.volume | 18 | -
dc.identifier.doi | 10.1109/JSTARS.2025.3616417 | -
dcterms.abstract | Matching and aligning ground and aerial images is critical for enhancing the accuracy and completeness of 3-D reconstruction. However, significant differences in perspective and radiometric characteristics between aerial and ground images make this task highly challenging. Existing mesh-based approaches often overlook the geometric properties of 3-D points in the structure-from-motion model and suffer from limited track length. To address these issues, we propose a 3-D point-guided matching framework that leverages reconstructed 3-D points to guide the matching between aerial and ground images. Our method introduces a 3-D point-guided transformer to encode point coordinates into embeddings and integrate them into image features, enabling effective correspondence between synthetic aerial views and real ground images. In addition, we design a Transformer-based regression module to refine matching positions within local windows, improving the accuracy of aerial–ground correspondences. Our pipeline reduces matching errors, enables long-track correspondences, and facilitates robust multiview integration. Furthermore, we construct two challenging aerial–ground datasets to validate the effectiveness of our method in city-scale 3-D reconstruction. Extensive experiments on public benchmarks and our datasets demonstrate that our framework significantly outperforms state-of-the-art methods in both matching accuracy and reconstruction quality. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE journal of selected topics in applied earth observations and remote sensing, 2025, v. 18, p. 25939-25951 | -
dcterms.isPartOf | IEEE journal of selected topics in applied earth observations and remote sensing | -
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-105018330983 | -
dc.identifier.eissn | 2151-1535 | -
dc.description.validate | 202602 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Xiao_3D_Point_Guided.pdf | | 29.03 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.