Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/102294
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.contributor | School of Design | -
dc.creator | Zhao, Y | en_US
dc.creator | Zhang, H | en_US
dc.creator | Lu, P | en_US
dc.creator | Li, P | en_US
dc.creator | Wu, E | en_US
dc.creator | Sheng, B | en_US
dc.date.accessioned | 2023-10-18T07:50:57Z | -
dc.date.available | 2023-10-18T07:50:57Z | -
dc.identifier.issn | 2096-5796 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/102294 | -
dc.language.iso | en | en_US
dc.publisher | KeAi Publishing Communications Ltd. | en_US
dc.rights | ©Copyright 2022 Beijing Zhongke Journal Publishing Co. Ltd., Publishing services by Elsevier B.V. on behalf of KeAi Communication Co. Ltd. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). | en_US
dc.rights | The following publication Zhao, Y., Zhang, H., Lu, P., Li, P., Wu, E., & Sheng, B. (2022). DSD-MatchingNet: Deformable Sparse-to-Dense Feature Matching for Learning Accurate Correspondences. Virtual Reality & Intelligent Hardware, 4(5), 432-443 is available at https://doi.org/10.1016/j.vrih.2022.08.007. | en_US
dc.subject | Deformable convolution network | en_US
dc.subject | Image matching | en_US
dc.subject | Sparse-to-dense matching | en_US
dc.title | DSD-MatchingNet: deformable sparse-to-dense feature matching for learning accurate correspondences | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 432 | en_US
dc.identifier.epage | 443 | en_US
dc.identifier.volume | 4 | en_US
dc.identifier.issue | 5 | en_US
dc.identifier.doi | 10.1016/j.vrih.2022.08.007 | en_US
dcterms.abstract | Background: Exploring the correspondences across multi-view images is the basis of many computer vision tasks. However, most existing methods have limited accuracy under challenging conditions. To learn more robust and accurate correspondences, we propose DSD-MatchingNet for local feature matching in this paper. First, we develop a deformable feature extraction module to obtain multi-level feature maps, which harvests contextual information from dynamic receptive fields. The dynamic receptive fields provided by the deformable convolution network enable our method to obtain dense and robust correspondences. Second, we utilize sparse-to-dense matching with the symmetry of correspondence to implement accurate pixel-level matching, which enables our method to produce more accurate correspondences. Experiments show that DSD-MatchingNet achieves better performance on image matching benchmarks, as well as on a visual localization benchmark. Specifically, our method achieves 91.3% mean matching accuracy on the HPatches dataset and 99.3% visual localization recall on the Aachen Day-Night dataset. [Both components are sketched in code after the metadata record below.] | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Virtual reality & intelligent hardware, Oct. 2022, v. 4, no. 5, p. 432-443 | en_US
dcterms.isPartOf | Virtual reality & intelligent hardware | en_US
dcterms.issued | 2022-10 | -
dc.identifier.scopus | 2-s2.0-85143790327 | -
dc.identifier.eissn | 2666-1209 | en_US
dc.description.validate | 202310 bcvc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China; Science and Technology Commission of Shanghai Municipality | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
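
The two components named in the abstract can be illustrated with short, hedged sketches. First, a minimal sketch of a deformable feature extraction block in PyTorch, assuming torchvision's DeformConv2d; the block structure and channel sizes here are illustrative assumptions, not the authors' published configuration. A plain convolution predicts per-location sampling offsets, which is what gives the deformable convolution its dynamic receptive field:

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableFeatureBlock(nn.Module):
    # Illustrative block, not the paper's exact architecture: a regular conv
    # predicts two offsets (dy, dx) per kernel tap, and the deformable conv
    # samples the input at those shifted locations.
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        pad = k // 2
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=pad)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=pad)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(x)                  # (N, 2*k*k, H, W)
        return self.relu(self.deform_conv(x, offsets))

block = DeformableFeatureBlock(64, 128)
feat = block(torch.randn(1, 64, 120, 160))             # -> (1, 128, 120, 160)

Stacking several such blocks at different resolutions would yield the multi-level feature maps the abstract refers to. Second, the "symmetry of correspondence" can be read as a cycle-consistency constraint on matches. A generic version of that idea, not the paper's exact formulation, is the mutual nearest-neighbour filter: a candidate pair (i, j) is kept only if descriptor i's best match in image B is j and descriptor j's best match in image A is i:

def mutual_nearest_matches(desc_a: torch.Tensor, desc_b: torch.Tensor) -> torch.Tensor:
    # desc_a: (Na, D) and desc_b: (Nb, D) L2-normalised descriptors.
    # Returns (M, 2) index pairs that are each other's nearest neighbours.
    sim = desc_a @ desc_b.t()          # cosine similarities, (Na, Nb)
    ab = sim.argmax(dim=1)             # best B index for every A descriptor
    ba = sim.argmax(dim=0)             # best A index for every B descriptor
    idx_a = torch.arange(desc_a.shape[0])
    keep = ba[ab] == idx_a             # cycle A -> B -> A returns to its start
    return torch.stack([idx_a[keep], ab[keep]], dim=1)
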
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
1-s2.0-S2096579622000821-main.pdf | - | 1.74 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 66 (as of Apr 14, 2025)
Downloads: 47 (as of Apr 14, 2025)
SCOPUS™ citations: 23 (as of Sep 12, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.