Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/94797
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | -
dc.creator | Li, CT | -
dc.creator | Siu, WC | -
dc.creator | Lun, DPK | -
dc.date.accessioned | 2022-08-30T07:30:56Z | -
dc.date.available | 2022-08-30T07:30:56Z | -
dc.identifier.isbn | 978-1-5386-7024-8 (Electronic) | -
dc.identifier.isbn | 978-1-5386-7023-1 (USB) | -
dc.identifier.isbn | 978-1-5386-7025-5 (Print on Demand (PoD)) | -
dc.identifier.uri | http://hdl.handle.net/10397/94797 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication C. Li, W. Siu and D. P. K. Lun, "Vision-based Place Recognition Using ConvNet Features and Temporal Correlation Between Consecutive Frames," 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 3062-3067 is available at https://dx.doi.org/10.1109/ITSC.2019.8917364. | en_US
dc.title | Vision-based place recognition using ConvNet features and temporal correlation between consecutive frames | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 3062 | -
dc.identifier.epage | 3067 | -
dc.identifier.doi | 10.1109/ITSC.2019.8917364 | -
dcterms.abstract | The most challenging part of vision-based place recognition is the wide variation in the appearance of places. However, temporal information between consecutive frames can be used to infer the next locations of a vehicle and to obtain information about its ego-motion. Effective use of this temporal information narrows the search range for the next locations, which makes an efficient place recognition system possible. This paper presents a robust vision-based place recognition method that uses recent discriminative ConvNet features and proposes a flexible tubing strategy that groups consecutive frames based on their similarities. With the tubing strategy, effective pair searching can be achieved. We also suggest adding further variations in the appearance of places to enhance the variety of the training data, and we fine-tune an off-the-shelf network model, CALC, so that its extracted features generalize better. Experimental results show that the proposed temporal-correlation-based recognition strategy with the fine-tuned model achieves the best F1 score (0.572), an improvement over the original CALC model. The proposed place recognition method is also faster than a linear full search by a factor of 2.15. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | The 2019 IEEE Intelligent Transportation Systems Conference - ITSC: Auckland, New Zealand, 27-30 October 2019, 8917364, p. 3062-3067 | -
dcterms.issued | 2019 | -
dc.identifier.scopus | 2-s2.0-85076801728 | -
dc.relation.conference | Intelligent Transportation Systems Conference [ITSC] | -
dc.description.validate | 202208 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a1420 | en_US
dc.identifier.SubFormID | 44915 | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | The Hong Kong Polytechnic University under research grant 1-ZE1B | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
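Note: the abstract above describes grouping consecutive frames into "tubes" by feature similarity so that a query frame is compared against a narrowed set of candidates rather than the whole sequence. The sketch below is purely illustrative and is not the authors' implementation: the cosine-similarity measure, the sim_threshold and top_k parameters, and the choice of each tube's first frame as its representative are all assumptions made only to give a concrete picture of this kind of two-stage search; the paper's actual CALC-based features and tubing rules are not reproduced here.

```python
# Illustrative sketch only (not from the paper): group consecutive frame
# features into "tubes" by similarity, then search tube representatives
# before searching frames inside the best-matching tubes.
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors (assumed measure).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def build_tubes(features, sim_threshold=0.9):
    """Group consecutive frame features into tubes.

    A new tube starts whenever the current frame is no longer similar
    enough to the representative (here: first frame) of the open tube.
    The threshold value is an assumption for illustration.
    """
    tubes = []            # each tube is a list of frame indices
    representatives = []  # one representative feature per tube
    for idx, feat in enumerate(features):
        if tubes and cosine_sim(feat, representatives[-1]) >= sim_threshold:
            tubes[-1].append(idx)
        else:
            tubes.append([idx])
            representatives.append(feat)
    return tubes, representatives

def query_place(query_feat, features, tubes, representatives, top_k=2):
    """Two-stage search: rank tubes by representative similarity, then
    compare the query only against frames inside the top-k tubes."""
    ranked = sorted(range(len(tubes)),
                    key=lambda t: cosine_sim(query_feat, representatives[t]),
                    reverse=True)[:top_k]
    best_idx, best_sim = -1, -1.0
    for t in ranked:
        for idx in tubes[t]:
            s = cosine_sim(query_feat, features[idx])
            if s > best_sim:
                best_idx, best_sim = idx, s
    return best_idx, best_sim

# Example usage with random vectors standing in for ConvNet descriptors:
# feats = [np.random.rand(128) for _ in range(1000)]
# tubes, reps = build_tubes(feats)
# match_idx, match_sim = query_place(feats[42], feats, tubes, reps)
```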
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Li_Vision-Based_Place_Recognition.pdf | Pre-Published version | 1.04 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript