Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105458
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Tang, J | en_US
dc.creator | Xu, D | en_US
dc.creator | Jia, K | en_US
dc.creator | Zhang, L | en_US
dc.date.accessioned | 2024-04-15T07:34:30Z | -
dc.date.available | 2024-04-15T07:34:30Z | -
dc.identifier.isbn | 978-1-6654-4509-2 (Electronic) | en_US
dc.identifier.isbn | 978-1-6654-4510-8 (Print on Demand (PoD)) | en_US
dc.identifier.uri | http://hdl.handle.net/10397/105458 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication J. Tang, D. Xu, K. Jia and L. Zhang, "Learning Parallel Dense Correspondence from Spatio-Temporal Descriptors for Efficient and Robust 4D Reconstruction," 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2021, pp. 6018-6027 is available at https://doi.org/10.1109/CVPR46437.2021.00596. | en_US
dc.title | Learning parallel dense correspondence from spatio-temporal descriptors for efficient and robust 4D reconstruction | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 6018 | en_US
dc.identifier.epage | 6027 | en_US
dc.identifier.doi | 10.1109/CVPR46437.2021.00596 | en_US
dcterms.abstract | This paper focuses on the task of 4D shape reconstruction from a sequence of point clouds. Despite the recent success of extending deep implicit representations into 4D space [29], two challenges remain: how to design a flexible framework for learning robust spatio-temporal shape representations from 4D point clouds, and how to develop an efficient mechanism for capturing shape dynamics. In this work, we present a novel pipeline that learns the temporal evolution of the 3D human shape through spatially continuous transformation functions among cross-frame occupancy fields. The key idea is to establish, in parallel, dense correspondences between predicted occupancy fields at different time steps by explicitly learning continuous displacement vector fields from robust spatio-temporal shape representations. Extensive comparisons against previous state-of-the-art methods show the superior accuracy of our approach for 4D human reconstruction on the problems of 4D shape auto-encoding and completion, while a roughly 8x speedup in network inference demonstrates its efficiency. The trained models and implementation code are available at https://github.com/tangjiapeng/LPDC-Net. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19-25 June 2021, p. 6018-6027 | en_US
dcterms.issued | 2021 | -
dc.identifier.scopus | 2-s2.0-85119644404 | -
dc.relation.conference | IEEE/CVF Conference on Computer Vision and Pattern Recognition [CVPR] | -
dc.description.validate | 202402 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | COMP-0041 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China; the Program for Guangdong Introducing Innovative and Entrepreneurial Teams; the Guangdong R&D key project of China; Alibaba DAMO Academy | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 56309878 | -
dc.description.oaCategory | Green (AAM) | en_US
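The displacement-field idea described in the abstract above can be sketched as follows. This is a toy numerical illustration, not the authors' LPDC-Net code: `occupancy_t0` and `displacement_field` are hand-written stand-ins for the networks that the paper learns from spatio-temporal point-cloud descriptors.

```python
import numpy as np

def occupancy_t0(points):
    """Toy occupancy field at t=0: occupied inside a unit sphere at the origin."""
    return (np.linalg.norm(points, axis=-1) < 1.0).astype(np.float32)

def displacement_field(points, t):
    """Stand-in for a learned continuous displacement vector field:
    here the shape simply translates along x proportionally to time t."""
    d = np.zeros_like(points)
    d[..., 0] = 0.5 * t
    return d

def occupancy_at_time(points, t):
    """Occupancy at time t, evaluated by warping query points back to t=0.
    The displacement field gives dense cross-frame correspondence, and all
    query points (and time steps) can be evaluated in one parallel batch."""
    corresponding = points - displacement_field(points, t)
    return occupancy_t0(corresponding)

queries = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0]])
occ0 = occupancy_at_time(queries, t=0.0)  # sphere centred at the origin
occ1 = occupancy_at_time(queries, t=2.0)  # same sphere, shifted to x = 1.0
```

Because the cross-frame transformation is an explicit displacement of query points rather than an ODE integrated through time, every frame's occupancy can be queried independently, which is the source of the inference speedup the abstract reports.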
Appears in Collections:Conference Paper
Files in This Item:
File: Zhang_Learning_Parallel_Dense.pdf | Description: Pre-Published version | Size: 1.9 MB | Format: Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 106 (last week: 9), as of Nov 9, 2025
Downloads: 63, as of Nov 9, 2025

Scopus™ citations: 25, as of Dec 19, 2025
Web of Science™ citations: 21, as of Dec 18, 2025


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.