Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/103136
DC Field: Value
dc.contributor: Department of Computing
dc.creator: He, Chenhang
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/12714
dc.language.iso: English
dc.title: Efficient feature learning for point cloud-based 3D object detection
dc.type: Thesis
dcterms.abstract: 3D object detection is one of the fundamental techniques for autonomous driving; it aims to locate and track objects such as vehicles, pedestrians, and cyclists in real time. While image-based computer vision techniques have been used successfully in many scenarios, most autonomous driving systems still rely on high-end LiDAR sensors to provide 3D measurements for high-precision object detection. However, processing the unstructured and unordered data in LiDAR point clouds is challenging and requires efficient feature extraction models.
dcterms.abstract: In this thesis, we explore efficient feature learning algorithms for point cloud-based 3D object detection. We first examine the trade-offs between point-based and voxel-based representations for point cloud detection models. To combine the strengths of both approaches, we propose a novel structure-aware single-stage detector (SASSD) that enables voxel-based models to learn fine-grained details from the point-based representation through an auxiliary network. This yields a significant improvement in detection accuracy without increasing computational cost.
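To make the voxel-based side of this trade-off concrete, here is a minimal NumPy sketch of voxelizing a point cloud with per-voxel mean pooling. This is a generic illustration, not code from the thesis; the function name `voxelize` and its parameters are assumptions:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Quantize (N, 3) points into integer voxel coordinates and
    return the unique occupied voxels with per-voxel mean features."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    # Sum the points falling into each voxel, then average (a simple encoder).
    feats = np.zeros((len(uniq), points.shape[1]))
    np.add.at(feats, inverse, points)
    counts = np.bincount(inverse).reshape(-1, 1)
    return uniq, feats / counts

pts = np.array([[0.1, 0.2, 0.0], [0.2, 0.1, 0.0], [1.5, 0.0, 0.0]])
vox, feats = voxelize(pts, voxel_size=1.0)
# Points 0 and 1 share voxel (0, 0, 0); point 2 occupies (1, 0, 0).
```

Point-based models would instead operate directly on `pts`, preserving exact coordinates at a higher per-point cost; the averaging above is precisely the fine-grained information a voxel grid discards.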
dcterms.abstract: We then investigate transformer-based models for feature extraction on point cloud data. While transformers are well suited to modeling large receptive regions, they are challenging to apply to sparse and spatially imbalanced point cloud data. To overcome this issue, we propose a voxel set transformer (VoxSeT) model that performs attention modeling over multiple voxel sets with arbitrary numbers of points. Our VoxSeT model outperforms commonly used sparse convolutional models in both accuracy and efficiency and is straightforward to deploy.
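The core difficulty the abstract mentions, that each voxel holds a different number of points, can be sketched with cross-attention from a small set of latent queries to a variable-size point set, producing a fixed-size code per voxel. This is a hedged illustration of the general set-attention idea, not the VoxSeT implementation; the latent-query shapes and scale are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def set_attention(point_feats, latents, scale):
    """Cross-attention from K latent queries (K, C) to an arbitrary
    number of point features (N, C), yielding a fixed (K, C) code."""
    attn = softmax(latents @ point_feats.T / scale, axis=-1)  # (K, N)
    return attn @ point_feats                                  # (K, C)

rng = np.random.default_rng(0)
feats = rng.normal(size=(7, 16))   # 7 points in one voxel, 16-dim features
lats = rng.normal(size=(2, 16))    # 2 learned latent queries (assumed)
code = set_attention(feats, lats, scale=4.0)
```

Because the output shape depends only on the number of latent queries, voxels with 3 points and voxels with 300 points produce codes of identical size, which is what makes batching over sparse, imbalanced voxels tractable.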
dcterms.abstract: We further explore how to enhance 3D object detection with sequential point cloud data. We introduce a motion-guided sequential fusion (MSF) method that efficiently fuses multi-frame features through a proposal propagation algorithm. This approach achieves leading performance on the Waymo dataset while incurring a computational cost similar to that of a single-frame detector.
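As a toy stand-in for motion-guided propagation, a box proposal from the current frame can be shifted backward through earlier frames under a constant-velocity motion model. This is only an assumed simplification to illustrate the idea of reusing proposals across frames, not the MSF algorithm itself:

```python
def propagate_proposal(box, velocity, dt):
    """Shift a 3D box (x, y, z, l, w, h, yaw) by a constant-velocity
    motion model over a time offset dt (seconds)."""
    x, y, z, l, w, h, yaw = box
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt, z, l, w, h, yaw)

# Propagate one proposal back through 3 earlier frames at 10 Hz.
box = (10.0, 2.0, 0.0, 4.5, 2.0, 1.6, 0.3)
history = [propagate_proposal(box, velocity=(5.0, 0.0), dt=-0.1 * k)
           for k in range(1, 4)]
```

Features cropped around each propagated box can then be fused into the current-frame proposal, avoiding a full re-detection pass per historical frame.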
dcterms.abstract: Finally, we present a novel-view-synthesis-based augmentation framework, namely AugMono3D, for monocular 3D object detection. We leverage point clouds to reconstruct the scene geometry of a camera image and generate synthetic image data by augmenting camera views at multiple virtual depths. By training on a large number of synthetic images with virtual depth, our framework consistently improves detection accuracy.
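The effect of rendering the reconstructed scene from virtual depths can be illustrated with a pinhole projection under a translated camera. The intrinsics, the pure-translation virtual view, and the function name `project` are all assumptions for illustration, not details from the thesis:

```python
import numpy as np

def project(points, K, t):
    """Pinhole projection of (N, 3) world points into a camera
    translated by t, returning (N, 2) pixel coordinates."""
    cam = points + t                 # assumed pure-translation virtual view
    uv = (K @ cam.T).T               # homogeneous image coordinates
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.0]])
near = project(pts, K, np.array([0.0, 0.0, -5.0]))  # camera moved closer
far = project(pts, K, np.array([0.0, 0.0, 5.0]))    # camera moved farther
# The same object spans more pixels in the nearer virtual view,
# so each virtual depth yields a distinct training image.
```

Rendering at several such virtual depths exposes a monocular detector to consistent geometry under varied apparent scales, which is the augmentation effect the abstract describes.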
dcterms.abstract: Overall, the four proposed methods demonstrate their effectiveness in learning efficient point cloud features for high-quality 3D object detection. We evaluate our methods on benchmark datasets such as Waymo and KITTI and show significant improvements in accuracy and efficiency.
dcterms.accessRights: open access
dcterms.educationLevel: Ph.D.
dcterms.extent: xix, 115 pages : color illustrations
dcterms.issued: 2023
dcterms.LCSH: Computer vision
dcterms.LCSH: Image analysis -- Data processing
dcterms.LCSH: Machine learning
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
Appears in Collections: Thesis
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.