Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/109540
DC FieldValueLanguage
dc.contributorDepartment of Aeronautical and Aviation Engineering-
dc.creatorLeung, YT-
dc.creatorZheng, X-
dc.creatorHo, HY-
dc.creatorWen, W-
dc.creatorHsu, LT-
dc.date.accessioned2024-11-08T06:09:34Z-
dc.date.available2024-11-08T06:09:34Z-
dc.identifier.issn1682-1750-
dc.identifier.urihttp://hdl.handle.net/10397/109540-
dc.description12th International Symposium on Mobile Mapping Technology (MMT 2023), 24-26 May 2023, Padua, Italyen_US
dc.language.isoenen_US
dc.publisherCopernicus GmbHen_US
dc.rights© Author(s) 2023. CC BY 4.0 License (https://creativecommons.org/licenses/by/4.0/).en_US
dc.rightsThe following publication Leung, Y.-T., Zheng, X., Ho, H.-Y., Wen, W., and Hsu, L.-T.: Cost-effective camera localization aided by prior point clouds maps for level 3 autonomous driving vehicles, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLVIII-1/W1-2023, 227–234 is available at https://doi.org/10.5194/isprs-archives-XLVIII-1-W1-2023-227-2023.en_US
dc.subject3D LiDAR mapsen_US
dc.subjectAutonomous Driving Vehiclesen_US
dc.subjectImage reconstructionen_US
dc.subjectMatching geometryen_US
dc.subjectPrior Point Clouds Mapsen_US
dc.subjectVisual localizationen_US
dc.titleCost-effective camera localization aided by prior point clouds maps for level 3 autonomous driving vehiclesen_US
dc.typeConference Paperen_US
dc.identifier.spage227-
dc.identifier.epage234-
dc.identifier.volumeXLVIII-1/W1-2023-
dc.identifier.doi10.5194/isprs-archives-XLVIII-1-W1-2023-227-2023-
dcterms.abstractPrecise and robust localization is critical for many navigation tasks, especially autonomous driving systems. The most popular localization approach is the global navigation satellite system (GNSS). However, GNSS suffers from several shortcomings, such as multipath and non-line-of-sight (NLOS) reception. Vision-based localization is an alternative approach that does not rely on GNSS. This paper applies visual localization with a prior 3D LiDAR map. Unlike common visual localization methods that rely on camera-acquired maps, the proposed method tracks image features and the pose of a monocular camera and matches them against the prior 3D LiDAR map. Image features are reconstructed into sets of 3D points by a local bundle adjustment-based visual odometry system, and these 3D points are then matched with the prior 3D point cloud map to track the global pose of the user. The visual localization approach has several advantages: (1) since it relies only on matching geometry, it is robust to changes in ambient illumination; (2) the prior 3D map provides viewpoint invariance. Moreover, the proposed method requires only low-cost, lightweight camera sensors.-
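The core matching step the abstract describes, aligning sparse 3D points reconstructed by visual odometry against a prior point cloud map, can be illustrated with a minimal point-to-point ICP sketch in NumPy. This is not the authors' implementation: the function names, the brute-force nearest-neighbour association, and the SVD-based (Kabsch) pose fit are illustrative assumptions about how such geometry-only matching is commonly done.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Correct for a possible reflection so R is a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def icp(points, map_points, iters=20):
    """Align VO-reconstructed 3D points to a prior map by iterating
    nearest-neighbour matching and a Kabsch pose fit; returns the
    accumulated global transform (R_total, t_total)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = points.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour association (a k-d tree would
        # replace this in practice)
        d2 = ((cur[:, None, :] - map_points[None, :, :]) ** 2).sum(-1)
        matched = map_points[d2.argmin(axis=1)]
        R, t = kabsch(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The recovered transform plays the role of the global camera pose relative to the map frame; in the paper's pipeline the input points come from local bundle adjustment rather than being given directly.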
dcterms.accessRightsopen accessen_US
dcterms.bibliographicCitationInternational archives of the photogrammetry, remote sensing and spatial information sciences, 2023, v. XLVIII-1/W1-2023, p. 227-234-
dcterms.isPartOfInternational archives of the photogrammetry, remote sensing and spatial information sciences-
dcterms.issued2023-
dc.identifier.scopus2-s2.0-85162083703-
dc.relation.conferenceInternational Symposium on Mobile Mapping Technology [MMT]-
dc.identifier.eissn2194-9034-
dc.description.validate202411 bcch-
dc.description.oaVersion of Recorden_US
dc.identifier.FolderNumberOA_Scopus/WOSen_US
dc.description.fundingSourceSelf-fundeden_US
dc.description.pubStatusPublisheden_US
dc.description.oaCategoryCCen_US
Appears in Collections:Conference Paper
Files in This Item:
File: isprs-archives-XLVIII-1-W1-2023-227-2023.pdf | Size: 5.08 MB | Format: Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.