Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/112819
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Land Surveying and Geo-Informatics | - |
| dc.creator | Guan, F | - |
| dc.creator | Zhao, N | - |
| dc.creator | Fang, Z | - |
| dc.creator | Jiang, L | - |
| dc.creator | Zhang, J | - |
| dc.creator | Yu, Y | - |
| dc.creator | Huang, H | - |
| dc.date.accessioned | 2025-05-09T00:55:08Z | - |
| dc.date.available | 2025-05-09T00:55:08Z | - |
| dc.identifier.issn | 1009-5020 | - |
| dc.identifier.uri | http://hdl.handle.net/10397/112819 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Taylor & Francis Asia Pacific (Singapore) | en_US |
| dc.rights | © 2025 Wuhan University. Published by Informa UK Limited, trading as Taylor & Francis Group. | en_US |
| dc.rights | This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The terms on which this article has been published allow the posting of the Accepted Manuscript in a repository by the author(s) or with their consent. | en_US |
| dc.rights | The following publication Guan, F., Zhao, N., Fang, Z., Jiang, L., Zhang, J., Yu, Y., & Huang, H. (2025). Multi-level representation learning via ConvNeXt-based network for unaligned cross-view matching. Geo-Spatial Information Science, 1–14 is available at https://doi.org/10.1080/10095020.2024.2439385. | en_US |
| dc.subject | ConvNeXt | en_US |
| dc.subject | Cross-view matching | en_US |
| dc.subject | Drone view | en_US |
| dc.subject | Multilevel feature | en_US |
| dc.subject | Satellite view | en_US |
| dc.title | Multi-level representation learning via ConvNeXt-based network for unaligned cross-view matching | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.doi | 10.1080/10095020.2024.2439385 | - |
| dcterms.abstract | Cross-view matching uses images from different platforms (e.g. drone and satellite views) to retrieve the most relevant images; the key challenge is the difference in viewpoint and spatial resolution between views. However, most existing methods focus on extracting fine-grained features and ignore the contextual relationships within the image. We therefore propose a novel ConvNeXt-based multi-level representation learning model for this task. First, global features are extracted with a ConvNeXt backbone. To obtain a joint part-based representation from these global features, the feature map is duplicated: one copy is processed with spatial attention and the other with a standard convolution. The features of the different branches are then aggregated by a multi-level feature fusion module for cross-view matching. Finally, a new hybrid loss function constrains these features and helps mine the crucial information in the global representation. Experiments show state-of-the-art performance on two common datasets, University-1652 and SUES-200, reaching 89.79% and 95.75% in drone target matching and 94.87% and 98.80% in drone navigation. (A brief illustrative sketch of this multi-branch design follows the record below.) | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Geo-spatial information science (地球空间信息科学学报), Published online: 17 Jan 2025, Latest Articles, https://doi.org/10.1080/10095020.2024.2439385 | - |
| dcterms.isPartOf | Geo-spatial information science (地球空间信息科学学报) | - |
| dcterms.issued | 2025 | - |
| dc.identifier.scopus | 2-s2.0-85215119583 | - |
| dc.identifier.eissn | 1993-5153 | - |
| dc.description.validate | 202505 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | The National Natural Science Foundation of China [grant number 42401517]; the Open Fund of State Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University [grant number 23S01]; the Open Research Fund of Anhui Province Key Laboratory of Physical Geographic Environment, Chuzhou University [grant number 2023PGE001]; the Fundamental Research Funds for the Provincial Universities of Zhejiang [grant number GK249909299001-023]; the Zhiyuan Laboratory [grant number ZYL2024023]; Excellent Scientific Research and Innovation Team of Universities in Anhui Province [grant number 2023AH010071]; Major Project on Natural Science Foundation of Universities in Anhui Province [grant number 2022AH040156]; Excellent Young Scientists Project of Universities in Anhui province [grant number 2022AH030112]; Academic Foundation for Top Talents in Disciplines of Anhui Universities [grant number gxbjZD2022069] | en_US |
| dc.description.pubStatus | Early release | en_US |
| dc.description.oaCategory | CC | en_US |
Appears in Collections: Journal/Magazine Article
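
As referenced in the abstract above, the following is a minimal PyTorch sketch of the multi-branch idea it describes (ConvNeXt backbone, a spatial-attention branch and a convolution branch, then feature fusion). The `SpatialAttention`, `conv_branch`, and `fuse` modules here are illustrative stand-ins under assumed shapes, not the authors' exact implementation, and the hybrid loss is omitted.

```python
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny


class SpatialAttention(nn.Module):
    """Illustrative spatial attention: weight each location by a learned saliency map."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.score(x))


class MultiBranchMatcher(nn.Module):
    """ConvNeXt backbone -> two parallel branches -> fused image embedding."""
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        # convnext_tiny feature extractor outputs (B, 768, H/32, W/32)
        self.backbone = convnext_tiny(weights=None).features
        self.attn_branch = SpatialAttention(768)          # copy 1: spatial attention
        self.conv_branch = nn.Conv2d(768, 768, 3, padding=1)  # copy 2: standard convolution
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fuse = nn.Linear(768 * 2, embed_dim)          # fusion by concatenation + projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        a = self.pool(self.attn_branch(feats)).flatten(1)
        c = self.pool(self.conv_branch(feats)).flatten(1)
        return self.fuse(torch.cat([a, c], dim=1))


# Usage: embed a drone image and a satellite image, then compare by cosine similarity.
model = MultiBranchMatcher()
drone, sat = torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)
similarity = torch.cosine_similarity(model(drone), model(sat))
```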
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Guan_Multi-level_Representation_Learning.pdf | | 18.48 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.