Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/112819
View/Download Full Text
DC Field	Value	Language
dc.contributor	Department of Land Surveying and Geo-Informatics	-
dc.creator	Guan, F	-
dc.creator	Zhao, N	-
dc.creator	Fang, Z	-
dc.creator	Jiang, L	-
dc.creator	Zhang, J	-
dc.creator	Yu, Y	-
dc.creator	Huang, H	-
dc.date.accessioned	2025-05-09T00:55:08Z	-
dc.date.available	2025-05-09T00:55:08Z	-
dc.identifier.issn	1009-5020	-
dc.identifier.uri	http://hdl.handle.net/10397/112819	-
dc.language.iso	en	en_US
dc.publisher	Taylor & Francis Asia Pacific (Singapore)	en_US
dc.rights	© 2025 Wuhan University. Published by Informa UK Limited, trading as Taylor & Francis Group.	en_US
dc.rights	This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The terms on which this article has been published allow the posting of the Accepted Manuscript in a repository by the author(s) or with their consent.	en_US
dc.rights	The following publication Guan, F., Zhao, N., Fang, Z., Jiang, L., Zhang, J., Yu, Y., & Huang, H. (2025). Multi-level representation learning via ConvNeXt-based network for unaligned cross-view matching. Geo-Spatial Information Science, 1–14 is available at https://doi.org/10.1080/10095020.2024.2439385.	en_US
dc.subject	ConvNeXt	en_US
dc.subject	Cross-view matching	en_US
dc.subject	Drone view	en_US
dc.subject	Multilevel feature	en_US
dc.subject	Satellite view	en_US
dc.title	Multi-level representation learning via ConvNeXt-based network for unaligned cross-view matching	en_US
dc.type	Journal/Magazine Article	en_US
dc.identifier.doi	10.1080/10095020.2024.2439385	-
dcterms.abstract	Cross-view matching refers to the use of images from different platforms (e.g. drone and satellite views) to retrieve the most relevant images, where the key challenge lies in the differences in viewpoint and spatial resolution between views. However, most existing methods focus on extracting fine-grained features and ignore the connection of contextual information in the image. Therefore, we propose a novel ConvNeXt-based multi-level representation learning model to solve this task. First, we extract global features through the ConvNeXt model. To obtain a joint part-based representation from the global features, we then replicate them, processing one copy with spatial attention and the other with a standard convolutional operation. In addition, the features of the different branches are aggregated through a multilevel feature fusion module to prepare for cross-view matching. Finally, we design a new hybrid loss function to better constrain these features and assist in mining crucial information from the global features. The experimental results indicate that we achieve advanced performance on two common datasets, University-1652 and SUES-200, reaching 89.79% and 95.75% in drone target matching and 94.87% and 98.80% in drone navigation.	-
dcterms.accessRights	open access	en_US
dcterms.bibliographicCitation	Geo-spatial information science (地球空间信息科学学报), Published online: 17 Jan 2025, Latest Articles, https://doi.org/10.1080/10095020.2024.2439385	-
dcterms.isPartOf	Geo-spatial information science (地球空间信息科学学报)	-
dcterms.issued	2025	-
dc.identifier.scopus	2-s2.0-85215119583	-
dc.identifier.eissn	1993-5153	-
dc.description.validate	202505 bcch	-
dc.description.oa	Version of Record	en_US
dc.identifier.FolderNumber	OA_Scopus/WOS	en_US
dc.description.fundingSource	Others	en_US
dc.description.fundingText	The National Natural Science Foundation of China [grant number 42401517]; the Open Fund of State Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University [grant number 23S01]; the Open Research Fund of Anhui Province Key Laboratory of Physical Geographic Environment, Chuzhou University [grant number 2023PGE001]; the Fundamental Research Funds for the Provincial Universities of Zhejiang [grant number GK249909299001-023]; the Zhiyuan Laboratory [grant number ZYL2024023]; Excellent Scientific Research and Innovation Team of Universities in Anhui Province [grant number 2023AH010071]; Major Project on Natural Science Foundation of Universities in Anhui Province [grant number 2022AH040156]; Excellent Young Scientists Project of Universities in Anhui Province [grant number 2022AH030112]; Academic Foundation for Top Talents in Disciplines of Anhui Universities [grant number gxbjZD2022069]	en_US
dc.description.pubStatus	Early release	en_US
dc.description.oaCategory	CC	en_US
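The abstract above describes a dual-branch design: global features from ConvNeXt are replicated, one copy passes through spatial attention and the other through a standard convolution, and the branch features are then fused into a joint descriptor. The following is a minimal NumPy sketch of that data flow only; it is not the authors' implementation, and all function names (`spatial_attention`, `conv_branch`, `fuse`), shapes, and operations are illustrative assumptions:

```python
import numpy as np

def spatial_attention(feat):
    """Reweight a (C, H, W) feature map by a sigmoid spatial mask."""
    avg = feat.mean(axis=0, keepdims=True)    # (1, H, W) channel-average map
    mx = feat.max(axis=0, keepdims=True)      # (1, H, W) channel-max map
    attn = 1.0 / (1.0 + np.exp(-(avg + mx)))  # sigmoid gate per spatial location
    return feat * attn                        # broadcast back to (C, H, W)

def conv_branch(feat, weight):
    """Stand-in for a 1x1 convolution: linear mix across channels."""
    return np.tensordot(weight, feat, axes=([1], [0]))  # (C_out, H, W)

def fuse(branches):
    """Multilevel fusion sketch: concatenate globally pooled descriptors."""
    return np.concatenate([b.mean(axis=(1, 2)) for b in branches])

rng = np.random.default_rng(0)
global_feat = rng.standard_normal((8, 4, 4))   # mock ConvNeXt global feature map
attended = spatial_attention(global_feat)      # attention branch
convolved = conv_branch(global_feat, rng.standard_normal((8, 8)))  # conv branch
descriptor = fuse([attended, convolved])       # joint descriptor, shape (16,)
```

In the paper's setting such a descriptor would be extracted for both drone and satellite images and compared by similarity for retrieval; the hybrid loss mentioned in the abstract would supervise these descriptors during training.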
Appears in Collections:Journal/Magazine Article
Files in This Item:
File	Description	Size	Format
Guan_Multi-level_Representation_Learning.pdf		18.48 MB	Adobe PDF	View/Open
Open Access Information
Status: open access
File Version: Version of Record
Access: View full-text via PolyU eLinks SFX Query

SCOPUS Citations: 5 (as of Dec 19, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.