Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/92053
DC Field | Value | Language
dc.contributor | Department of Land Surveying and Geo-Informatics | -
dc.creator | Li, M | -
dc.creator | Qin, JY | -
dc.creator | Li, DR | -
dc.creator | Chen, RZ | -
dc.creator | Liao, X | -
dc.creator | Guo, BX | -
dc.date.accessioned | 2022-02-07T07:05:48Z | -
dc.date.available | 2022-02-07T07:05:48Z | -
dc.identifier.issn | 1009-5020 | -
dc.identifier.uri | http://hdl.handle.net/10397/92053 | -
dc.language.iso | en | en_US
dc.publisher | Taylor & Francis Asia Pacific (Singapore) | en_US
dc.rights | © 2021 Wuhan University. Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | en_US
dc.rights | The following publication Li, M., Qin, J., Li, D., Chen, R., Liao, X., & Guo, B. (2021). VNLSTM-PoseNet: A novel deep ConvNet for real-time 6-DOF camera relocalization in urban streets. Geo-Spatial Information Science, 24(3), 422-437 is available at https://doi.org/10.1080/10095020.2021.1960779 | en_US
dc.subject | Camera relocalization | en_US
dc.subject | Pose regression | en_US
dc.subject | Deep convnet | en_US
dc.subject | RGB image | en_US
dc.subject | Camera pose | en_US
dc.title | VNLSTM-PoseNet: a novel deep ConvNet for real-time 6-DOF camera relocalization in urban streets | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 422 | -
dc.identifier.epage | 437 | -
dc.identifier.volume | 24 | -
dc.identifier.issue | 3 | -
dc.identifier.doi | 10.1080/10095020.2021.1960779 | -
dcterms.abstract | Image-based relocalization has attracted renewed interest in outdoor environments because it is an important problem with many applications. PoseNet was the first to use a Convolutional Neural Network (CNN) to estimate the camera pose in real time from a single image. To address the limited precision and robustness of PoseNet and its improved variants in complex environments, this paper proposes and implements a new visual relocalization method based on deep convolutional neural networks (VNLSTM-PoseNet). First, the input image is directly resized without cropping, which enlarges the receptive field over the training image. Then, the image and its corresponding pose labels are fed into an improved Long Short-Term Memory based (LSTM-based) PoseNet network for training, and the network is optimized with the Nadam optimizer. Finally, the trained network is used to localize a query image and recover the camera pose. Experimental results on outdoor public datasets show that our VNLSTM-PoseNet achieves substantial improvements in relocalization performance over existing state-of-the-art CNN-based methods. (An illustrative code sketch of this pipeline follows the record table below.) | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Geo-spatial information science (地球空间信息科学学报), 2021, v. 24, no. 3, p. 422-437 | -
dcterms.isPartOf | Geo-spatial information science (地球空间信息科学学报) | -
dcterms.issued | 2021 | -
dc.identifier.isi | WOS:000686822400001 | -
dc.identifier.eissn | 1993-5153 | -
dc.description.validate | 202202 bchy | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work is supported by the National Key R&D Program of China [grant number 2018YFB0505400], the National Natural Science Foundation of China (NSFC) [grant number 41901407], the LIESMARS Special Research Funding [grant number 2021] and the College Students' Innovative Entrepreneurial Training Plan Program [grant number S2020634016]. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
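
Illustrative code sketch of the pipeline described in dcterms.abstract, for readers who want the shape of the method in code. This is a minimal, hypothetical PyTorch rendering, not the authors' implementation: it assumes a ResNet-34 backbone standing in for the paper's CNN, a bidirectional LSTM reading the final feature map as a spatial sequence, separate regression heads for position and orientation, and a PoseNet-style weighted loss. The layer sizes, learning rate, and beta weight are assumptions; only the use of an LSTM-based PoseNet variant and the Nadam optimizer comes from the abstract itself.

import torch
import torch.nn as nn
import torchvision.models as models

class LSTMPoseNetSketch(nn.Module):
    """Hypothetical LSTM-augmented pose regressor (not the authors' code)."""
    def __init__(self, hidden=256):
        super().__init__()
        # Assumption: ResNet-34 stands in for the paper's CNN backbone.
        backbone = models.resnet34(weights="DEFAULT")
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # Read the 512-channel feature map as a sequence of spatial cells,
        # mirroring the LSTM-based PoseNet idea named in the abstract.
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.fc_xyz = nn.Linear(2 * hidden, 3)   # position (x, y, z)
        self.fc_wpqr = nn.Linear(2 * hidden, 4)  # orientation quaternion

    def forward(self, x):
        f = self.features(x)                # (B, 512, H/32, W/32)
        seq = f.flatten(2).transpose(1, 2)  # (B, n_cells, 512)
        out, _ = self.lstm(seq)
        h = out[:, -1]                      # last step summarizes the sequence
        return self.fc_xyz(h), self.fc_wpqr(h)

def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=500.0):
    # PoseNet-style weighted loss: ||t - t*|| + beta * ||q/|q| - q*||;
    # beta trades position error against orientation error (value assumed).
    q_pred = q_pred / q_pred.norm(dim=1, keepdim=True)
    return ((t_pred - t_gt).norm(dim=1)
            + beta * (q_pred - q_gt).norm(dim=1)).mean()

model = LSTMPoseNetSketch()
optimizer = torch.optim.NAdam(model.parameters(), lr=1e-4)  # Nadam, per abstract

At test time a single resized RGB image passes through the network once and the two heads return the 6-DOF pose (3-D position plus a unit quaternion) directly; this single forward pass is what makes CNN-based relocalization real-time.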
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Li_novel_deep_ConvNet.pdf |  | 9.24 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Page views: 81 (last week: 0; as of May 11, 2025)
Downloads: 72 (as of May 11, 2025)
Scopus citations: 16 (as of Jun 21, 2024)
Web of Science citations: 14 (as of Jun 5, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.