Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/106305
DC Field | Value | Language
dc.contributor | Department of Mechanical Engineering | -
dc.creator | Sun, Y | -
dc.creator | Zuo, W | -
dc.creator | Huang, H | -
dc.creator | Cai, P | -
dc.creator | Liu, M | -
dc.date.accessioned | 2024-05-09T00:52:36Z | -
dc.date.available | 2024-05-09T00:52:36Z | -
dc.identifier.uri | http://hdl.handle.net/10397/106305 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication Y. Sun, W. Zuo, H. Huang, P. Cai and M. Liu, "PointMoSeg: Sparse Tensor-Based End-to-End Moving-Obstacle Segmentation in 3-D Lidar Point Clouds for Autonomous Driving," in IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 510-517, April 2021 is available at https://doi.org/10.1109/LRA.2020.3047783. | en_US
dc.subject | 3-D Lidar | en_US
dc.subject | Autonomous driving | en_US
dc.subject | End-to-end | en_US
dc.subject | Moving obstacle | en_US
dc.subject | Point cloud | en_US
dc.subject | Sparse tensor | en_US
dc.title | PointMoSeg: sparse tensor-based end-to-end moving-obstacle segmentation in 3-D lidar point clouds for autonomous driving | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 510 | -
dc.identifier.epage | 517 | -
dc.identifier.volume | 6 | -
dc.identifier.issue | 2 | -
dc.identifier.doi | 10.1109/LRA.2020.3047783 | -
dcterms.abstract | Moving-obstacle segmentation is an essential capability for autonomous driving; for example, it can serve as a fundamental component of motion planning in dynamic traffic environments. Most current 3-D Lidar-based methods use road segmentation to find obstacles and then employ ego-motion compensation to distinguish whether the obstacles are static or moving. However, when a road has a slope, the widely used flat-road assumption behind road segmentation may be violated. Moreover, owing to signal attenuation, GPS-based ego-motion compensation is often unreliable in urban environments. To address these issues, this letter proposes an end-to-end sparse tensor-based deep neural network for moving-obstacle segmentation that uses neither GPS nor the planar-road assumption. The inputs to our network are merely the two consecutive (previous and current) point clouds, and the output is directly the point-wise moving-obstacle mask for the current frame. We train and evaluate our network on the public nuScenes dataset. The experimental results confirm the effectiveness of our network and its superiority over the baselines. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE robotics and automation letters, Apr. 2021, v. 6, no. 2, p. 510-517 | -
dcterms.isPartOf | IEEE robotics and automation letters | -
dcterms.issued | 2021-04 | -
dc.identifier.scopus | 2-s2.0-85099096399 | -
dc.identifier.eissn | 2377-3766 | -
dc.description.validate | 202405 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | ME-0095 | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Young Scientists Fund of the National Natural Science Foundation of China; Start-up Fund of The Hong Kong Polytechnic University | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 48283679 | en_US
dc.description.oaCategory | Green (AAM) | en_US
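
The abstract above describes the pipeline only at the interface level: two consecutive Lidar frames enter the network in sparse-tensor form, and a point-wise moving/static mask for the current frame comes out. The minimal numpy sketch below illustrates the kind of sparse-tensor preprocessing such a pipeline might use; the voxel size, the mean-point features, the frame/time index as a fourth coordinate, and the helper name sparse_quantize are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sparse_quantize(points, voxel_size=0.1):
    """Quantize an (N, 3) point cloud into unique integer voxel coordinates
    plus one averaged feature vector per occupied voxel -- the kind of
    coordinate/feature pair a sparse-tensor backbone consumes.
    (Illustrative preprocessing, not the paper's actual pipeline.)"""
    coords = np.floor(points / voxel_size).astype(np.int32)
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    feats = np.zeros((len(uniq), points.shape[1]), dtype=np.float64)
    counts = np.zeros(len(uniq), dtype=np.int64)
    np.add.at(feats, inverse, points)      # sum the points falling in each voxel
    np.add.at(counts, inverse, 1)          # count the points per voxel
    return uniq, feats / counts[:, None]   # mean point as the voxel feature

# Two consecutive Lidar frames (previous and current); random stand-ins here.
rng = np.random.default_rng(0)
prev_pts = rng.uniform(-50.0, 50.0, size=(2048, 3))
curr_pts = rng.uniform(-50.0, 50.0, size=(2048, 3))

prev_c, prev_f = sparse_quantize(prev_pts)
curr_c, curr_f = sparse_quantize(curr_pts)

# Tag each frame with a time index as a 4th coordinate so both frames can share
# one sparse tensor; a segmentation head would then predict a moving/static
# label for every point (or voxel) of the current frame.
coords = np.vstack([
    np.hstack([prev_c, np.full((len(prev_c), 1), 0, dtype=np.int32)]),
    np.hstack([curr_c, np.full((len(curr_c), 1), 1, dtype=np.int32)]),
])
feats = np.vstack([prev_f, curr_f])
print(coords.shape, feats.shape)  # (M, 4) coordinates, (M, 3) features
```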
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Sun_Pointmoseg_Sparse_Tensor-Based.pdf | Pre-Published version | 3.56 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 8 (as of Jun 30, 2024)
Downloads: 2 (as of Jun 30, 2024)

SCOPUS™ Citations: 15 (as of Jul 4, 2024)
Web of Science™ Citations: 13 (as of Jul 4, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.