Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115571
Title: Distilling complementary information from temporal context for enhancing human appearance in human-specific NeRF
Authors: Zhang, R 
Wang, X 
Baciu, G 
Li, P 
Issue Date: Jul-2025
Source: Visual computer, July 2025, v. 41, no. 9, p. 6479-6491
Abstract: Reconstructing and animating digital avatars that can be rendered from free viewpoints using only monocular videos has long been an active research task in computer vision. Recently, a new class of methods has leveraged the neural radiance field (NeRF) to represent the human body in a canonical space with the help of the SMPL model. By deforming sampled points from the observation space into the canonical space, human appearance can be learned across a variety of poses and viewpoints. However, previous methods rely heavily on pose-dependent representations learned through frame-independent optimization and ignore the temporal context that spans the continuous motion video, which degrades the generation of dynamic appearance textures. To overcome these problems, we propose TMIHuman, a novel free-viewpoint rendering framework that introduces temporal information into NeRF-based rendering and distills task-relevant information from complex pixel-wise representations. Specifically, we build a temporal fusion encoder that injects timestamps into the learning of non-rigid deformation and fuses visual features from other frames into the human representation. We then disentangle the fused features and extract useful visual cues via mutual information objectives. Extensive evaluation shows that our method achieves state-of-the-art performance on multiple public datasets.
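The two ideas sketched in the abstract can be illustrated with a short example. The snippet below is a hypothetical PyTorch illustration, not the authors' released code: NonRigidDeformer, info_nce, and all dimensions are assumed names, showing (a) a non-rigid offset MLP conditioned on a timestamp embedding, as a stand-in for how temporal context could enter the deformation, and (b) an InfoNCE contrastive bound as one common way to realize a mutual-information objective for disentangling fused features.

```python
# Hypothetical sketch (PyTorch). Names and dimensions are illustrative
# assumptions, not the TMIHuman implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonRigidDeformer(nn.Module):
    """Stand-in for timestamp-conditioned non-rigid deformation: predicts a
    small offset for observation-space points so temporal context enters
    the deformation before SMPL-based skinning into the canonical space."""
    def __init__(self, time_dim=16, hidden=128):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, time_dim), nn.ReLU())
        self.offset_mlp = nn.Sequential(
            nn.Linear(3 + time_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts_obs, t):
        # pts_obs: (N, 3) sampled points in observation space;
        # t: (N, 1) normalized frame timestamps in [0, 1].
        h = torch.cat([pts_obs, self.time_embed(t)], dim=-1)
        return pts_obs + self.offset_mlp(h)

def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE contrastive loss, a standard lower bound on mutual
    information, used here as an assumed proxy for the paper's
    mutual-information objectives over fused temporal features."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature                    # (B, B) similarities
    labels = torch.arange(a.size(0), device=a.device)   # matches on diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors in place of real frames/features.
pts = torch.randn(1024, 3)
t = torch.rand(1024, 1)
canonical_pts = NonRigidDeformer()(pts, t)              # (1024, 3)
loss_mi = info_nce(torch.randn(32, 64), torch.randn(32, 64))
```

In a full pipeline of this kind, the deformed points would be mapped through SMPL-driven skinning into the canonical space before querying the NeRF, and the contrastive loss would be applied between fused cross-frame features and the target frame's representation.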
Keywords: Avatar reconstruction
Mutual information
Neural rendering
Novel view synthesis
Publisher: Springer
Journal: Visual computer 
ISSN: 0178-2789
EISSN: 1432-2315
DOI: 10.1007/s00371-025-03948-z
Description: Computer Graphics International 2025, The Hong Kong Polytechnic University, Kowloon, Hong Kong, July 14-18, 2025
Rights: © The Author(s) 2025
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
The following publication Zhang, R., Wang, X., Baciu, G. et al. Distilling complementary information from temporal context for enhancing human appearance in human-specific NeRF. Vis Comput 41, 6479–6491 (2025) is available at https://doi.org/10.1007/s00371-025-03948-z.
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: s00371-025-03948-z.pdf (3.01 MB, Adobe PDF)
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.