Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/92053
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Land Surveying and Geo-Informatics | - |
dc.creator | Li, M | - |
dc.creator | Qin, JY | - |
dc.creator | Li, DR | - |
dc.creator | Chen, RZ | - |
dc.creator | Liao, X | - |
dc.creator | Guo, BX | - |
dc.date.accessioned | 2022-02-07T07:05:48Z | - |
dc.date.available | 2022-02-07T07:05:48Z | - |
dc.identifier.issn | 1009-5020 | - |
dc.identifier.uri | http://hdl.handle.net/10397/92053 | - |
dc.language.iso | en | en_US |
dc.publisher | Taylor & Francis Asia Pacific (Singapore) | en_US |
dc.rights | © 2021 Wuhan University. Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/). | en_US |
dc.rights | The following publication Li, M., Qin, J., Li, D., Chen, R., Liao, X., & Guo, B. (2021). VNLSTM-PoseNet: A novel deep ConvNet for real-time 6-DOF camera relocalization in urban streets. Geo-Spatial Information Science, 24(3), 422-437 is available at https://doi.org/10.1080/10095020.2021.1960779 | en_US |
dc.subject | Camera relocalization | en_US |
dc.subject | Pose regression | en_US |
dc.subject | Deep convnet | en_US |
dc.subject | RGB image | en_US |
dc.subject | Camera pose | en_US |
dc.title | VNLSTM-PoseNet: a novel deep ConvNet for real-time 6-DOF camera relocalization in urban streets | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.spage | 422 | - |
dc.identifier.epage | 437 | - |
dc.identifier.volume | 24 | - |
dc.identifier.issue | 3 | - |
dc.identifier.doi | 10.1080/10095020.2021.1960779 | - |
dcterms.abstract | Image-based relocalization has attracted renewed interest in outdoor environments because it is an important problem with many applications. PoseNet was the first to apply a Convolutional Neural Network (CNN) to real-time camera pose estimation from a single image. To address the limited precision and robustness of PoseNet and its derivatives in complex environments, this paper proposes and implements a new visual relocalization method based on deep convolutional neural networks (VNLSTM-PoseNet). First, the method resizes the input image directly, without cropping, to enlarge the receptive field over the training images. Then, the images and their corresponding pose labels are fed into an improved Long Short-Term Memory based (LSTM-based) PoseNet for training, with the network optimized by the Nadam optimizer. Finally, the trained network localizes a query image to recover the camera pose. Experimental results on outdoor public datasets show that VNLSTM-PoseNet yields substantial improvements in relocalization performance over existing state-of-the-art CNN-based methods. | - |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | Geo-spatial information science (地球空间信息科学学报), 2021, v. 24, no. 3, p. 422-437 | - |
dcterms.isPartOf | Geo-spatial information science (地球空间信息科学学报) | - |
dcterms.issued | 2021 | - |
dc.identifier.isi | WOS:000686822400001 | - |
dc.identifier.eissn | 1993-5153 | - |
dc.description.validate | 202202 bchy | - |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | This work is supported by the National Key R&D Program of China [grant number 2018YFB0505400], the National Natural Science Foundation of China (NSFC) [grant number 41901407], the LIESMARS Special Research Funding [grant number 2021] and the College Students' Innovative Entrepreneurial Training Plan Program [grant number S2020634016]. | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | CC | en_US |
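
The abstract above describes a concrete pipeline: whole-image resizing instead of cropping, an LSTM head on top of a PoseNet-style CNN, 6-DOF pose regression (translation plus orientation quaternion), and Nadam optimization. For orientation only, here is a minimal PyTorch sketch of that pipeline; it is not the authors' released code, and the GoogLeNet backbone, the 32×32 feature reshape, the layer sizes, and the class name `VNLSTMPoseNetSketch` are all assumptions.

```python
# Minimal sketch of the pipeline described in the abstract, NOT the authors'
# released code: backbone choice, reshape, and layer sizes are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T

class VNLSTMPoseNetSketch(nn.Module):
    """CNN feature extractor + LSTM + 6-DOF pose regression (hypothetical)."""
    def __init__(self, hidden=256):
        super().__init__()
        # Original PoseNet used a GoogLeNet backbone; we reuse it here.
        backbone = models.googlenet(weights=None, aux_logits=False)
        backbone.fc = nn.Identity()            # keep the 1024-d feature vector
        self.backbone = backbone
        # LSTM over the feature vector treated as a short sequence, following
        # the LSTM-PoseNet idea of structured dimensionality reduction.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.fc_xyz = nn.Linear(hidden, 3)     # translation
        self.fc_q = nn.Linear(hidden, 4)       # orientation quaternion

    def forward(self, img):
        f = self.backbone(img)                 # (B, 1024)
        seq = f.view(f.size(0), 32, 32)        # reshape into a 32-step sequence
        out, _ = self.lstm(seq)
        h = out[:, -1]                         # last hidden state
        q = self.fc_q(h)
        # Normalize the quaternion, the usual PoseNet-style convention.
        return self.fc_xyz(h), q / q.norm(dim=1, keepdim=True)

# Resize the whole frame instead of cropping, as the abstract describes,
# so the network sees the full field of view of the scene.
resize = T.Compose([T.Resize((224, 224)), T.ToTensor()])

model = VNLSTMPoseNetSketch()
optimizer = torch.optim.NAdam(model.parameters(), lr=1e-4)  # Nadam, per the abstract
```
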
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Li_novel_deep_ConvNet.pdf | | 9.24 MB | Adobe PDF | View/Open |
Page views: 81 (as of May 11, 2025; 0 in the last week, 0 in the last month)
Downloads: 72 (as of May 11, 2025)
Scopus™ citations: 16 (as of Jun 21, 2024)
Web of Science™ citations: 14 (as of Jun 5, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.