Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/102294
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Computing | - |
| dc.contributor | School of Design | - |
| dc.creator | Zhao, Y | en_US |
| dc.creator | Zhang, H | en_US |
| dc.creator | Lu, P | en_US |
| dc.creator | Li, P | en_US |
| dc.creator | Wu, E | en_US |
| dc.creator | Sheng, B | en_US |
| dc.date.accessioned | 2023-10-18T07:50:57Z | - |
| dc.date.available | 2023-10-18T07:50:57Z | - |
| dc.identifier.issn | 2096-5796 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/102294 | - |
| dc.language.iso | en | en_US |
| dc.publisher | KeAi Publishing Communications Ltd. | en_US |
| dc.rights | ©Copyright 2022 Beijing Zhongke Journal Publishing Co. Ltd., Publishing services by Elsevier B.V. on behalf of KeAi Communication Co. Ltd. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). | en_US |
| dc.rights | The following publication Zhao, Y., Zhang, H., Lu, P., Li, P., Wu, E., & Sheng, B. (2022). DSD-MatchingNet: Deformable Sparse-to-Dense Feature Matching for Learning Accurate Correspondences. Virtual Reality & Intelligent Hardware, 4(5), 432-443 is available at https://doi.org/10.1016/j.vrih.2022.08.007. | en_US |
| dc.subject | Deformable convolution network | en_US |
| dc.subject | Image matching | en_US |
| dc.subject | Sparse-to-dense matching | en_US |
| dc.title | DSD-MatchingNet: deformable sparse-to-dense feature matching for learning accurate correspondences | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 432 | en_US |
| dc.identifier.epage | 443 | en_US |
| dc.identifier.volume | 4 | en_US |
| dc.identifier.issue | 5 | en_US |
| dc.identifier.doi | 10.1016/j.vrih.2022.08.007 | en_US |
| dcterms.abstract | Background: Exploring correspondences across multi-view images is the basis of many computer vision tasks. However, most existing methods have limited accuracy under challenging conditions. To learn more robust and accurate correspondences, we propose DSD-MatchingNet for local feature matching in this paper. First, we develop a deformable feature extraction module to obtain multi-level feature maps, which harvests contextual information from dynamic receptive fields. The dynamic receptive fields provided by the deformable convolution network enable our method to obtain dense and robust correspondences. Second, we employ sparse-to-dense matching with correspondence symmetry to achieve accurate pixel-level matching, which enables our method to produce more accurate correspondences. Experiments show that DSD-MatchingNet achieves better performance on both image matching and visual localization benchmarks. Specifically, our method achieves 91.3% mean matching accuracy on the HPatches dataset and 99.3% visual localization recall on the Aachen Day-Night dataset. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Virtual reality & intelligent hardware, Oct. 2022, v. 4, no. 5, p. 432-443 | en_US |
| dcterms.isPartOf | Virtual reality & intelligent hardware | en_US |
| dcterms.issued | 2022-10 | - |
| dc.identifier.scopus | 2-s2.0-85143790327 | - |
| dc.identifier.eissn | 2666-1209 | en_US |
| dc.description.validate | 202310 bcvc | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | - |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | National Natural Science Foundation of China; Science and Technology Commission of Shanghai Municipality | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
| Appears in Collections: | Journal/Magazine Article | |
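The abstract above names two technical components: a deformable feature-extraction module and sparse-to-dense matching that exploits the symmetry of correspondences. The sketches below are hypothetical illustrations of those ideas, not the authors' released code. The first is a minimal PyTorch sketch of a deformable feature extractor built on torchvision's `DeformConv2d`; the class names, channel sizes, and network depth are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the paper) of a deformable feature extractor
# that returns multi-level feature maps with dynamic receptive fields.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableBlock(nn.Module):
    """Conv block whose sampling grid is predicted per pixel (dynamic receptive field)."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Offsets: one (x, y) pair per kernel tap, predicted from the input features.
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        offsets = self.offset_pred(x)              # (N, 2*k*k, H, W)
        return self.act(self.deform_conv(x, offsets))


class DeformableFeatureExtractor(nn.Module):
    """Stack of deformable blocks; keeps every level for coarse-to-fine matching."""

    def __init__(self, channels=(3, 64, 128, 256)):   # channel sizes are assumptions
        super().__init__()
        self.blocks = nn.ModuleList(
            DeformableBlock(c_in, c_out)
            for c_in, c_out in zip(channels[:-1], channels[1:])
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, img):
        feats, x = [], img
        for block in self.blocks:
            x = block(x)
            feats.append(x)        # multi-level feature maps at strides 1, 2, 4, ...
            x = self.pool(x)
        return feats


if __name__ == "__main__":
    maps = DeformableFeatureExtractor()(torch.randn(1, 3, 128, 128))
    print([f.shape for f in maps])
```

For the second component, one common way to enforce the symmetry of correspondences is a mutual nearest-neighbour check between descriptor sets; the sketch below shows that generic technique only and is not claimed to be the paper's exact pixel-level matching procedure.

```python
# Generic mutual nearest-neighbour check: a match (i, j) is kept only if i is
# the best match for j AND j is the best match for i (symmetric consistency).
import torch


def mutual_nearest_neighbours(desc_a: torch.Tensor, desc_b: torch.Tensor):
    """desc_a: (Na, D), desc_b: (Nb, D); descriptors assumed L2-normalised."""
    sim = desc_a @ desc_b.t()          # cosine-similarity matrix, shape (Na, Nb)
    nn_ab = sim.argmax(dim=1)          # best match in B for each descriptor in A
    nn_ba = sim.argmax(dim=0)          # best match in A for each descriptor in B
    idx_a = torch.arange(desc_a.shape[0])
    keep = nn_ba[nn_ab] == idx_a       # symmetric: A -> B -> A returns to the start
    return idx_a[keep], nn_ab[keep]    # matched index pairs
```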
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 1-s2.0-S2096579622000821-main.pdf | | 1.74 MB | Adobe PDF | View/Open |
Page views: 66 (as of Apr 14, 2025)
Downloads: 47 (as of Apr 14, 2025)
SCOPUS™ citations: 23 (as of Sep 12, 2025)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.



