Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/109486
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | He, C | en_US
dc.creator | Li, R | en_US
dc.creator | Zhang, Y | en_US
dc.creator | Li, S | en_US
dc.creator | Zhang, L | en_US
dc.date.accessioned | 2024-11-01T08:04:34Z | -
dc.date.available | 2024-11-01T08:04:34Z | -
dc.identifier.isbn | 979-8-3503-0129-8 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/109486 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication C. He, R. Li, Y. Zhang, S. Li and L. Zhang, "MSF: Motion-guided Sequential Fusion for Efficient 3D Object Detection from Point Cloud Sequences," 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 2023, pp. 5196-5205 is available at https://doi.org/10.1109/CVPR52729.2023.00503. | en_US
dc.title | MSF : motion-guided sequential fusion for efficient 3D object detection from point cloud sequences | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 5196 | en_US
dc.identifier.epage | 5205 | en_US
dc.identifier.doi | 10.1109/CVPR52729.2023.00503 | en_US
dcterms.abstract | Point cloud sequences are commonly used to accurately detect 3D objects in applications such as autonomous driving. Current top-performing multi-frame detectors mostly follow a Detect-and-Fuse framework, which extracts features from each frame of the sequence and fuses them to detect the objects in the current frame. However, this inevitably leads to redundant computation since adjacent frames are highly correlated. In this paper, we propose an efficient Motion-guided Sequential Fusion (MSF) method, which exploits the continuity of object motion to mine useful sequential contexts for object detection in the current frame. We first generate 3D proposals on the current frame and propagate them to preceding frames based on the estimated velocities. The points-of-interest are then pooled from the sequence and encoded as proposal features. A novel Bidirectional Feature Aggregation (BiFA) module is further proposed to facilitate the interactions of proposal features across frames. Besides, we optimize the point cloud pooling by a voxel-based sampling technique so that millions of points can be processed in several milliseconds. The proposed MSF method achieves not only better efficiency than other multi-frame detectors but also leading accuracy, with 83.12% and 78.30% mAP on the LEVEL1 and LEVEL2 test sets of Waymo Open Dataset, respectively. Codes can be found at https://github.com/skyhehe123/MSF. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition : Vancouver, Canada, 18 - 22 June 2023, p. 5196-5205 | en_US
dcterms.issued | 2023 | -
dc.identifier.scopus | 2-s2.0-85173958238 | -
dc.relation.ispartofbook | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition : Vancouver, Canada, 18 - 22 June 2023 | en_US
dc.relation.conference | Conference on Computer Vision and Pattern Recognition [CVPR] | -
dc.description.validate | 202411 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | OA_Others | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
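
The abstract above describes motion-guided proposal propagation: proposals detected on the current frame are shifted back to preceding frames using estimated velocities before points-of-interest are pooled around them. The following is a minimal illustrative sketch of that step in plain NumPy. The array layouts, function names, constant-velocity model, and radius-based pooling are assumptions made here for brevity, not the authors' implementation; the actual code is at https://github.com/skyhehe123/MSF.

    # Sketch of motion-guided proposal propagation (illustrative assumptions only).
    import numpy as np

    def propagate_proposals(boxes, velocities, frame_offsets, dt=0.1):
        """Propagate current-frame proposals to preceding frames.

        boxes:         (N, 7) array of [x, y, z, dx, dy, dz, heading] (assumed layout).
        velocities:    (N, 2) array of estimated planar [vx, vy] in m/s.
        frame_offsets: iterable of non-negative ints; k means "k frames before current".
        dt:            assumed time gap between consecutive frames, in seconds.

        Returns a dict mapping each offset k to the (N, 7) propagated boxes.
        """
        propagated = {}
        for k in frame_offsets:
            shifted = boxes.copy()
            # Constant-velocity assumption: move each box backwards along its
            # estimated velocity to where the object was k frames earlier.
            shifted[:, :2] -= velocities * (k * dt)
            propagated[k] = shifted
        return propagated

    def pool_points_near_boxes(points, boxes, radius=1.0):
        """Naively collect points near each propagated box centre.

        points: (M, 3) xyz coordinates from one preceding frame.
        boxes:  (N, 7) propagated proposals for that frame.
        A real implementation would use rotated-box membership tests and the
        voxel-based sampling mentioned in the abstract; a simple radius test
        keeps this sketch short.
        """
        centres = boxes[:, :3]
        dists = np.linalg.norm(points[None, :, :] - centres[:, None, :], axis=-1)
        return [points[dists[i] < radius] for i in range(len(boxes))]

    # Example: two proposals propagated to the two preceding frames.
    boxes = np.array([[10.0, 2.0, 0.5, 4.5, 1.9, 1.6, 0.0],
                      [-5.0, 8.0, 0.4, 4.2, 1.8, 1.5, 1.57]])
    vels = np.array([[5.0, 0.0], [0.0, -3.0]])   # m/s, as estimated by the detector
    prev = propagate_proposals(boxes, vels, frame_offsets=[1, 2])
    print(prev[2][:, :2])   # box centres two frames (0.2 s) earlier
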
Appears in Collections: Conference Paper

Files in This Item:
File | Description | Size | Format
He_MSF_Motion-guided_Sequential.pdf | Pre-Published version | 1.94 MB | Adobe PDF

Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Page views: 40 (as of Apr 14, 2025)
Downloads: 12 (as of Apr 14, 2025)
Scopus citations: 16 (as of May 29, 2025)