Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105458
Title: Learning parallel dense correspondence from spatio-temporal descriptors for efficient and robust 4D reconstruction
Authors: Tang, J
Xu, D
Jia, K
Zhang, L 
Issue Date: 2021
Source: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19-25 June 2021, pp. 6018-6027
Abstract: This paper focuses on the task of 4D shape reconstruction from a sequence of point clouds. Despite the recent success achieved by extending deep implicit representations into 4D space [29], two major challenges remain: how to design a flexible framework for learning robust spatio-temporal shape representations from 4D point clouds, and how to develop an efficient mechanism for capturing shape dynamics. In this work, we present a novel pipeline that learns the temporal evolution of the 3D human shape through spatially continuous transformation functions among cross-frame occupancy fields. The key idea is to establish, in parallel, dense correspondences between the predicted occupancy fields at different time steps by explicitly learning continuous displacement vector fields from robust spatio-temporal shape representations. Extensive comparisons against previous state-of-the-art methods show the superior accuracy of our approach for 4D human reconstruction on the problems of 4D shape auto-encoding and completion, while roughly 8x faster network inference demonstrates its significant efficiency. The trained models and implementation code are available at https://github.com/tangjiapeng/LPDC-Net.
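To make the abstract's key idea concrete, below is a minimal PyTorch sketch of how a continuous displacement vector field, conditioned on a spatio-temporal descriptor, can establish dense correspondence across per-frame occupancy fields in parallel. This is an illustrative reconstruction under assumed shapes and module names (DisplacementField, OccupancyDecoder, a 128-dimensional descriptor, correspondence to a canonical frame), not the authors' actual LPDC-Net code.

# Hypothetical sketch of the paper's key idea: a displacement field maps
# query points at every time step to a shared canonical frame in one batched
# pass, so the occupancy decoder yields dense cross-frame correspondence.
# All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class DisplacementField(nn.Module):
    """MLP mapping (point, time, spatio-temporal descriptor) -> 3D displacement."""
    def __init__(self, desc_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1 + desc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # displacement vector per query point
        )

    def forward(self, pts, t, desc):
        # pts: (B, T, N, 3) query points; t: (B, T, 1) time stamps;
        # desc: (B, desc_dim) global spatio-temporal shape descriptor
        B, T, N, _ = pts.shape
        t = t[:, :, None, :].expand(B, T, N, 1)
        desc = desc[:, None, None, :].expand(B, T, N, desc.shape[-1])
        return self.net(torch.cat([pts, t, desc], dim=-1))

class OccupancyDecoder(nn.Module):
    """MLP mapping (canonical-frame point, descriptor) -> occupancy logit."""
    def __init__(self, desc_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + desc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, pts, desc):
        B, T, N, _ = pts.shape
        desc = desc[:, None, None, :].expand(B, T, N, desc.shape[-1])
        return self.net(torch.cat([pts, desc], dim=-1)).squeeze(-1)

# Parallel evaluation over all time steps in a single forward pass.
B, T, N, D = 2, 17, 1024, 128
pts = torch.rand(B, T, N, 3)                    # query points per frame
t = torch.linspace(0, 1, T).view(1, T, 1).expand(B, T, 1)
desc = torch.randn(B, D)                        # e.g. from a point-cloud encoder
flow, occ = DisplacementField(D), OccupancyDecoder(D)
canonical_pts = pts + flow(pts, t, desc)        # dense correspondence to a canonical frame
occupancy = occ(canonical_pts, desc)            # (B, T, N) occupancy logits

Because the displacement field is queried at all time steps in one batched pass rather than integrated sequentially through time (as in ODE-based flow models), all frames are decoded in parallel, which is consistent with the inference speedup the abstract reports.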
Publisher: Institute of Electrical and Electronics Engineers
ISBN: 978-1-6654-4509-2 (Electronic)
978-1-6654-4510-8 (Print on Demand (PoD))
DOI: 10.1109/CVPR46437.2021.00596
Rights: ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication J. Tang, D. Xu, K. Jia and L. Zhang, "Learning Parallel Dense Correspondence from Spatio-Temporal Descriptors for Efficient and Robust 4D Reconstruction," 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2021, pp. 6018-6027 is available at https://doi.org/10.1109/CVPR46437.2021.00596.
Appears in Collections: Conference Paper

Files in This Item:
File: Zhang_Learning_Parallel_Dense.pdf
Description: Pre-Published version
Size: 1.9 MB
Format: Adobe PDF
Open Access Information
Status: Open access
File Version: Final Accepted Manuscript

Page views: 106 (last week: 9), as of Nov 9, 2025

Downloads: 63, as of Nov 9, 2025

Scopus™ Citations: 25, as of Dec 19, 2025

Web of Science™ Citations: 21, as of Dec 18, 2025

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.