Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/109191
dc.contributor: Department of Aeronautical and Aviation Engineering
dc.creator: Leung, Yan Tung
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/13141
dc.language.iso: English
dc.title: Cost-effective camera localization aided by prior point clouds maps for level 3 autonomous driving vehicles
dc.type: Thesis
dcterms.abstract: For navigation tasks, particularly within autonomous driving systems, accurate and robust localization is critical. While global navigation satellite systems (GNSS) are a widespread choice for localization, they exhibit drawbacks such as susceptibility to multipath and non-line-of-sight reception. Vision-based localization offers an alternative that relies on visual cues, circumventing the use of GNSS signals. In this study, we propose a visual localization method aided by a prior 3D LiDAR map. Our approach reconstructs image features into multiple sets of 3D points using a localized bundle adjustment-based visual odometry system. These reconstructed 3D points are then aligned with the prior 3D point cloud map, enabling tracking of the user's global pose. The proposed visual localization methodology offers several advantages. First, the prior map improves robustness against variations in ambient lighting and appearance. Second, the approach exploits the prior 3D map to confer viewpoint invariance. The key idea of the proposed approach is point cloud registration: geometric matching establishes the accurate position and orientation of the camera within its surroundings by contrasting the geometric features present in the camera's image with those stored in a reference map. The method identifies and aligns the geometric points between the camera image and the prior 3D point cloud map. Notably, our method is also well suited to the cost-effective, lightweight camera sensors available to end-users. Experimental results show that the proposed method achieves accurate localization at practical frame rates without the need for supplementary information.
dcterms.accessRights: open access
dcterms.educationLevel: M.Phil.
dcterms.extent: 68 pages : color illustrations
dcterms.issued: 2024
dcterms.LCSH: Automated vehicles
dcterms.LCSH: Motor vehicles -- Automatic location systems
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
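The registration step described in the abstract — aligning reconstructed 3D feature points with a prior LiDAR point cloud map to recover the camera's global pose — can be illustrated in its simplest form as a rigid alignment over corresponding point sets via the Kabsch/SVD method. This is a minimal sketch under the assumption of known point correspondences; the function name and data are illustrative and not taken from the thesis:

```python
import numpy as np

def kabsch_align(src, dst):
    """Estimate the rigid transform (R, t) that best maps src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points (e.g. points
    reconstructed by visual odometry and their matches in a prior map).
    Returns rotation R (3x3) and translation t (3,) minimizing
    sum_i || R @ src[i] + t - dst[i] ||^2 in the least-squares sense.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Center both point sets, then solve for rotation via SVD.
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection (det = -1 case).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

In a full pipeline, correspondences are not known in advance, so this alignment is typically iterated inside a matching loop (as in ICP-style registration) rather than applied once.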
Appears in Collections: Thesis

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.