Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/117497
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Land Surveying and Geo-Informatics | - |
| dc.contributor | Research Centre for Artificial Intelligence in Geomatics | - |
| dc.contributor | Research Institute for Land and Space | - |
| dc.creator | Hou, Q | - |
| dc.creator | Hou, C | - |
| dc.creator | Zhang, F | - |
| dc.creator | Weng, Q | - |
| dc.date.accessioned | 2026-02-26T03:46:18Z | - |
| dc.date.available | 2026-02-26T03:46:18Z | - |
| dc.identifier.issn | 0924-2716 | - |
| dc.identifier.uri | http://hdl.handle.net/10397/117497 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier BV | en_US |
| dc.rights | © 2025 The Author(s). Published by Elsevier B.V. on behalf of International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). This is an open access article under the CC BY-NC-ND license ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ). | en_US |
| dc.rights | The following publication Hou, Q., Hou, C., Zhang, F., & Weng, Q. (2025). Multi-source geo-localization in urban built environments for crowd-sourced images by contrastive learning. ISPRS Journal of Photogrammetry and Remote Sensing, 230, 616-629 is available at https://doi.org/10.1016/j.isprsjprs.2025.09.024. | en_US |
| dc.subject | Crowd-sourced images | en_US |
| dc.subject | High-resolution satellite images | en_US |
| dc.subject | Image geo-localization | en_US |
| dc.subject | Multi-source data fusion | en_US |
| dc.subject | Street-view images | en_US |
| dc.subject | Urban spatial analytics | en_US |
| dc.title | Multi-source geo-localization in urban built environments for crowd-sourced images by contrastive learning | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 616 | - |
| dc.identifier.epage | 629 | - |
| dc.identifier.volume | 230 | - |
| dc.identifier.doi | 10.1016/j.isprsjprs.2025.09.024 | - |
| dcterms.abstract | Crowd-sourced images (CSIs) offer an unprecedented opportunity for gaining deeper insights into urban built environments. However, the lack of precise geographic information limits their effectiveness in various urban applications. Traditional geo-localization methods, which rely on matching CSIs with geo-tagged street-view images (SVIs), face significant challenges due to sparse coverage and temporal misalignment of reference data, especially in developing countries. To overcome these limitations, this paper proposes a novel contrastive learning framework that integrates SVIs and satellite images (SIs), utilizing a multi-scale channel attention module and InfoNCE loss to enhance the geo-localization accuracy of CSIs. Additionally, we leverage SIs to generate synthetic SVIs in areas where actual SVIs are unavailable or outdated, ensuring comprehensive coverage across diverse urban environments. A simple yet efficient data preprocessing method is proposed to align multi-view images for enhanced feature fusion. As part of our research efforts, we introduce a Multi-Source Geo-localization Dataset (MSGD) consisting of 500k geo-tagged pairs collected from 12 cities across six continents, encompassing diverse urban typologies from dense skyscraper districts to low-density areas, providing a valuable resource for future research and advancements in geo-localization methods. Our experiments show that the proposed method outperforms state-of-the-art approaches on the challenging MSGD dataset, highlighting the importance of incorporating SIs as a complementary data source for accurate geo-localization. Our code and dataset will be released at https://github.com/RCAIG/CrowdsourcingGeoLocalization. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | ISPRS journal of photogrammetry and remote sensing, Dec. 2025, v. 230, p. 616-629 | - |
| dcterms.isPartOf | ISPRS journal of photogrammetry and remote sensing | - |
| dcterms.issued | 2025-12 | - |
| dc.identifier.scopus | 2-s2.0-105018176301 | - |
| dc.identifier.eissn | 1872-8235 | - |
| dc.description.validate | 202602 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | This research has received funding from Global STEM Professorship, Hong Kong SAR Government (P0039329) and Hong Kong Polytechnic University (P0046482 and P0038446). | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
Appears in Collections: Journal/Magazine Article
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 1-s2.0-S092427162500382X-main.pdf | - | 7.13 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
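The abstract above notes that the proposed framework is trained with the InfoNCE loss, which pulls each crowd-sourced query toward its matching reference view and pushes it away from the other references in the batch. The following is a minimal illustrative sketch of that loss over a precomputed similarity matrix, not the authors' released implementation; the function name, temperature value, and toy similarity scores are all assumptions for demonstration.

```python
import math

def infonce_loss(sim, tau=0.07):
    """InfoNCE over a batch: sim[i][j] is the similarity between
    query i and reference j; matching pairs lie on the diagonal.
    Returns the mean negative log-softmax at the positive entries."""
    n = len(sim)
    total = 0.0
    for i in range(n):
        logits = [s / tau for s in sim[i]]
        m = max(logits)  # subtract the max for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[i]  # -log softmax at the positive pair
    return total / n

# Toy 2x2 batch: each query is most similar to its own reference,
# so the loss is lower than for an uninformative similarity matrix.
separated = infonce_loss([[0.9, 0.1], [0.2, 0.8]], tau=0.5)
uniform = infonce_loss([[0.5, 0.5], [0.5, 0.5]], tau=0.5)
```

With uniform similarities the loss reduces to log(N) for a batch of N (here log 2), while well-separated positives drive it toward zero, which is the training signal the abstract's cross-view matching relies on.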