Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/103136
Title: Efficient feature learning for point cloud-based 3D object detection
Authors: He, Chenhang
Degree: Ph.D.
Issue Date: 2023
Abstract: 3D object detection is one of the fundamental techniques for autonomous driving: it aims to locate and track objects such as vehicles, pedestrians, and cyclists in real time. While image-based computer vision techniques have been used successfully in many scenarios, most autonomous driving systems still rely on high-end LiDAR sensors to provide the 3D measurements needed for high-precision object detection. However, the data from LiDAR point clouds is unstructured and unordered, so processing it is challenging and requires efficient feature extraction models.
In this thesis, we explore efficient feature learning algorithms for point cloud-based 3D object detection. We first examine the trade-offs between point-based and voxel-based representations for point cloud detection models. To combine the strengths of both approaches, we propose a novel structure-aware single-stage detector (SASSD), in which an auxiliary network enables a voxel-based model to learn fine-grained details from the point-based representation. This yields a significant improvement in detection accuracy without increasing computational cost.
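As background for the voxel-based representation discussed above, the sketch below shows the basic voxelization step that converts an unordered point set into a regular grid. This is only an illustration, not the thesis's implementation: the voxel size and the scatter-mean pooling are assumptions made for the example.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Assign each 3D point to a cubic voxel and average the points per voxel.

    points: (N, 3) array of xyz coordinates.
    voxel_size: edge length of the cubic voxels (metres).
    Returns the unique integer voxel coordinates and per-voxel mean positions.
    """
    # Integer grid coordinate for every point.
    coords = np.floor(points / voxel_size).astype(np.int64)
    # Unique occupied voxels, plus an inverse map from point -> voxel slot.
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    # Scatter-mean the raw points into their voxels.
    sums = np.zeros((len(uniq), 3))
    counts = np.zeros(len(uniq))
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return uniq, sums / counts[:, None]

pts = np.array([[0.1, 0.1, 0.0],
                [0.2, 0.0, 0.1],   # falls in the same 0.5 m voxel as the first point
                [1.4, 0.3, 0.2]])  # falls in a different voxel
voxels, centroids = voxelize(pts, voxel_size=0.5)
```

Point-based models instead operate on the raw point set directly, which preserves fine geometry but gives up the regular grid structure; the trade-off between the two representations is what motivates combining them.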
We then investigate transformer-based models for feature extraction on point cloud data. While transformers are well suited to large receptive regions, they are challenging to apply to sparse and spatially imbalanced point cloud data. To overcome this issue, we propose a voxel set transformer (VoxSeT) that performs attention modeling on multiple voxel sets, each containing an arbitrary number of points. VoxSeT outperforms commonly used sparse convolutional models in both accuracy and efficiency and is straightforward to deploy.
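To illustrate why attention suits sets of varying size, here is a minimal NumPy sketch of cross-attention from a fixed number of learned latent queries to a variable-size point set. The function, dimensions, and latent-query design are illustrative assumptions, not the actual VoxSeT architecture; the point is only that the output size is independent of the number of input points.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def set_attention(points_feat, latents):
    """Cross-attention from K latent queries to one voxel's point set.

    points_feat: (N, D) features of the points in a voxel (N varies per voxel).
    latents:     (K, D) learned query vectors shared across voxels.
    Returns a fixed-size (K, D) summary regardless of N.
    """
    d = latents.shape[1]
    scores = latents @ points_feat.T / np.sqrt(d)  # (K, N) similarity scores
    weights = softmax(scores, axis=-1)             # attention over the N points
    return weights @ points_feat                   # (K, D) pooled features

rng = np.random.default_rng(0)
latents = rng.normal(size=(4, 16))       # K=4 latent queries, D=16 channels
small_voxel = rng.normal(size=(3, 16))   # a voxel containing 3 points
large_voxel = rng.normal(size=(50, 16))  # a voxel containing 50 points
# Both voxels map to the same fixed-size representation.
out_small = set_attention(small_voxel, latents)
out_large = set_attention(large_voxel, latents)
```

Because the pooled output has a fixed shape, downstream layers need not care how many points each voxel holds, which is what makes attention attractive for sparse, unevenly populated grids.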
We further explore how to enhance 3D object detection with sequential point cloud data. We introduce a motion-guided sequential fusion (MSF) method that efficiently fuses multi-frame features through a proposal propagation algorithm. This approach achieves leading performance on the Waymo dataset while incurring a cost similar to that of a single-frame detector.
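As a loose illustration of motion-guided propagation (the thesis's MSF algorithm is more involved; the constant-velocity model and the function below are assumptions made purely for this example), a proposal detected in the current frame can be shifted backward in time to gather features from earlier frames:

```python
def propagate_proposal(center, velocity, dt):
    """Shift a proposal's center backward in time under a constant-velocity model.

    center:   (x, y, z) of the box in the current frame (metres).
    velocity: (vx, vy, vz) estimated velocity (metres/second).
    dt:       time elapsed since the earlier frame (seconds).
    """
    return tuple(c - v * dt for c, v in zip(center, velocity))

# A box detected at (10, 2, 0) moving at 5 m/s along x; estimate where it
# was in the frame captured 0.1 s earlier.
prev_center = propagate_proposal((10.0, 2.0, 0.0), (5.0, 0.0, 0.0), 0.1)
```

Propagating proposals this way lets a detector reuse its single-frame region proposals across a point cloud sequence instead of re-detecting from scratch in every frame.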
Finally, we present a novel-view-synthesis-based augmentation framework, AugMono3D, for monocular 3D object detection. We leverage point clouds to reconstruct the scene geometry of a camera image and generate synthetic image data by augmenting camera views at multiple virtual depths. Trained on a large number of synthetic images with virtual depth, our framework consistently improves detection accuracy.
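The virtual-depth idea can be related to the standard pinhole camera model: an object's apparent size on the image is inversely proportional to its depth, so rendering the same object at a different virtual depth rescales it on the image plane. A minimal sketch, where the focal length and object size are made-up values for illustration:

```python
def apparent_size(focal_px, real_size_m, depth_m):
    """Pinhole projection: on-image size (pixels) of an object at a given depth."""
    return focal_px * real_size_m / depth_m

# A 1.5 m tall object at 30 m, viewed with a 700 px focal length.
h_original = apparent_size(700.0, 1.5, 30.0)
# The same object rendered at a virtual depth of 15 m appears twice as tall.
h_virtual = apparent_size(700.0, 1.5, 15.0)
```

Generating training images across a range of virtual depths exposes a monocular detector to many size-depth combinations that a single fixed camera viewpoint would never produce.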
Overall, the four proposed methods demonstrate the effectiveness of efficient point cloud feature learning for high-quality 3D object detection. We evaluate them on benchmark datasets such as Waymo and KITTI and show significant improvements in both accuracy and efficiency.
Subjects: Computer vision
Image analysis -- Data processing
Machine learning
Hong Kong Polytechnic University -- Dissertations
Pages: xix, 115 pages : color illustrations
Appears in Collections:Thesis
