Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115571
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Zhang, R | -
dc.creator | Wang, X | -
dc.creator | Baciu, G | -
dc.creator | Li, P | -
dc.date.accessioned | 2025-10-08T01:16:33Z | -
dc.date.available | 2025-10-08T01:16:33Z | -
dc.identifier.issn | 0178-2789 | -
dc.identifier.uri | http://hdl.handle.net/10397/115571 | -
dc.description | Computer Graphics International 2025, The Hong Kong Polytechnic University, Kowloon, Hong Kong, July 14-18, 2025 | en_US
dc.language.iso | en | en_US
dc.publisher | Springer | en_US
dc.rights | © The Author(s) 2025 | en_US
dc.rights | Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | en_US
dc.rights | The following publication Zhang, R., Wang, X., Baciu, G. et al. Distilling complementary information from temporal context for enhancing human appearance in human-specific NeRF. Vis Comput 41, 6479–6491 (2025) is available at https://doi.org/10.1007/s00371-025-03948-z. | en_US
dc.subject | Avatar reconstruction | en_US
dc.subject | Mutual information | en_US
dc.subject | Neural rendering | en_US
dc.subject | Novel view synthesis | en_US
dc.title | Distilling complementary information from temporal context for enhancing human appearance in human-specific NeRF | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 6479 | -
dc.identifier.epage | 6491 | -
dc.identifier.volume | 41 | -
dc.identifier.issue | 9 | -
dc.identifier.doi | 10.1007/s00371-025-03948-z | -
dcterms.abstract | Reconstructing and animating free-viewpoint digital avatars from monocular videos has long been an active research task in computer vision. Recently, a new category of methods has leveraged the neural radiance field (NeRF) to represent the human body in a canonical space with the help of the SMPL model. By deforming points from an observation space into the canonical space, the human appearance can be learned across various poses and viewpoints. However, previous methods rely heavily on pose-dependent representations learned through frame-independent optimization and ignore the temporal context across a continuous motion video, which degrades the generation of dynamic appearance textures. To overcome these problems, we propose TMIHuman, a novel free-viewpoint rendering framework that introduces temporal information into NeRF-based rendering and distills task-relevant information from complex pixel-wise representations. Specifically, we build a temporal fusion encoder that incorporates timestamps into the learning of non-rigid deformation and fuses visual features from other frames into the human representation. We then disentangle the fused features and extract useful visual cues via mutual information objectives. Extensive evaluation shows that our method achieves state-of-the-art performance on different public datasets. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Visual computer, July 2025, v. 41, no. 9, p. 6479-6491 | -
dcterms.isPartOf | Visual computer | -
dcterms.issued | 2025-07 | -
dc.identifier.scopus | 2-s2.0-105006442791 | -
dc.relation.conference | Computer Graphics International conference [CGI] | -
dc.identifier.eissn | 1432-2315 | -
dc.description.validate | 202510 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_TA | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | The authors would like to thank the editors and anonymous reviewers for their insightful comments and suggestions. This work was supported by The Hong Kong Polytechnic University under Grants P0044520, P0048387, P0050657, and P0049586. | en_US
dc.description.pubStatus | Published | en_US
dc.description.TA | Springer Nature (2025) | en_US
dc.description.oaCategory | TA | en_US
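The abstract above describes a temporal fusion encoder that injects timestamps into the non-rigid deformation stage and a mutual-information objective that distills task-relevant cues from the fused features. The sketch below is purely illustrative, assuming a PyTorch setting: the module name TemporalFusionEncoder, the feature dimensions, and the InfoNCE-style bound are hypothetical stand-ins for the paper's actual design, not the authors' implementation.

# Illustrative sketch only (hypothetical names and dimensions), not the
# TMIHuman implementation: a timestamp-conditioned feature fusion module and
# an InfoNCE-style lower bound on mutual information between fused features
# and appearance-relevant target features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalFusionEncoder(nn.Module):
    """Fuse per-point features with a timestamp embedding (assumed design)."""

    def __init__(self, feat_dim=64, time_dim=16, out_dim=64):
        super().__init__()
        self.time_mlp = nn.Sequential(nn.Linear(1, time_dim), nn.ReLU(),
                                      nn.Linear(time_dim, time_dim))
        self.fuse = nn.Sequential(nn.Linear(feat_dim + time_dim, out_dim),
                                  nn.ReLU(), nn.Linear(out_dim, out_dim))

    def forward(self, point_feats, t):
        # point_feats: (N, feat_dim); t: (N, 1) normalized frame timestamps
        t_emb = self.time_mlp(t)
        return self.fuse(torch.cat([point_feats, t_emb], dim=-1))


def infonce_mi_lower_bound(fused, target, temperature=0.1):
    """InfoNCE lower bound on I(fused; target) over a batch of paired samples."""
    fused = F.normalize(fused, dim=-1)
    target = F.normalize(target, dim=-1)
    logits = fused @ target.t() / temperature          # (N, N) similarity matrix
    labels = torch.arange(fused.size(0), device=fused.device)
    return -F.cross_entropy(logits, labels)            # maximize this bound


if __name__ == "__main__":
    enc = TemporalFusionEncoder()
    feats = torch.randn(32, 64)                        # per-point features
    t = torch.rand(32, 1)                              # normalized timestamps
    fused = enc(feats, t)
    target = torch.randn(32, 64)                       # e.g., appearance features
    loss = -infonce_mi_lower_bound(fused, target)      # minimize negative bound
    loss.backward()
    print(float(loss))

In the paper's full pipeline, such an objective would be combined with the usual NeRF photometric losses; this sketch only demonstrates the general idea of maximizing a mutual-information lower bound between temporally fused features and appearance-relevant targets.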
Appears in Collections: Journal/Magazine Article

Files in This Item:
File | Description | Size | Format
s00371-025-03948-z.pdf | - | 3.01 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.