Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/112268
DC Field | Value | Language
dc.contributor | School of Fashion and Textiles | en_US
dc.contributor | Research Centre of Textiles for Future Fashion | en_US
dc.creator | Li, J | en_US
dc.creator | Zhou, Y | en_US
dc.creator | Fan, J | en_US
dc.creator | Shou, D | en_US
dc.creator | Xu, S | en_US
dc.creator | Mok, PY | en_US
dc.date.accessioned | 2025-04-08T00:44:13Z | -
dc.date.available | 2025-04-08T00:44:13Z | -
dc.identifier.issn | 0262-8856 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/112268 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier | en_US
dc.rights | © 2025 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). | en_US
dc.rights | The following publication Li, J., Zhou, Y., Fan, J., Shou, D., Xu, S., & Mok, P. Y. (2025). Feature Field Fusion for few-shot novel view synthesis. Image and Vision Computing, 156, 105465, is available at https://doi.org/10.1016/j.imavis.2025.105465. | en_US
dc.subject | Feature field fusion | en_US
dc.subject | Few-shot learning | en_US
dc.subject | Neural radiance fields | en_US
dc.subject | Novel view synthesis | en_US
dc.subject | Regularizations | en_US
dc.subject | Sparse setting | en_US
dc.title | Feature field fusion for few-shot novel view synthesis | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 156 | en_US
dc.identifier.doi | 10.1016/j.imavis.2025.105465 | en_US
dcterms.abstract | Reconstructing neural radiance fields from limited or sparse views has shown great promise in this field of research. Previous methods usually constrain the reconstruction with additional priors, e.g. semantic-based or patch-based regularization. However, such regularization is applied to the synthesis of unseen views, which may not effectively assist the learning of the radiance field, particularly when the training views are sparse. Instead, this paper proposes a Feature Field Fusion (FFusion) NeRF that learns structure and finer details from features extracted by pre-trained neural networks from the sparse training views, and uses them as an extra guide for training the RGB field. With such extra feature guidance, FFusion predicts more accurate color and density when synthesizing novel views. Experimental results show that FFusion effectively improves the quality of the synthesized novel views with only limited or sparse inputs. (An illustrative sketch of this feature-guidance idea appears after the metadata table below.) | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Image and Vision Computing, Apr. 2025, v. 156, 105465 | en_US
dcterms.isPartOf | Image and Vision Computing | en_US
dcterms.issued | 2025-04 | -
dc.identifier.scopus | 2-s2.0-85218416258 | -
dc.identifier.artn | 105465 | en_US
dc.description.validate | 202504 bcfc | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_TA | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Hong Kong Polytechnic University (Grant Numbers Q-8883; CD95/P0049355; BDVH/P0051330; BBFL/P0052601) | en_US
dc.description.pubStatus | Published | en_US
dc.description.TA | Elsevier (2025) | en_US
dc.description.oaCategory | TA | en_US
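The abstract above describes guiding NeRF training with a feature field rendered alongside the RGB field. The following is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' implementation: the class `FeatureFieldNeRF`, the dimensions, the loss weight `lam`, and the assumption that target features `feat_gt` come from a frozen pre-trained 2D encoder evaluated on the sparse training views are all illustrative choices; see the publication at the DOI above for the actual method.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a NeRF-style MLP that predicts, per 3D sample,
# a density, a color, and an auxiliary feature vector. The feature field
# is rendered with the same volume-rendering weights as color and is
# supervised by features from a pre-trained network, mirroring the
# high-level idea in the abstract.
class FeatureFieldNeRF(nn.Module):
    def __init__(self, pos_dim=63, hidden=256, feat_dim=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)        # volume density
        self.rgb_head = nn.Linear(hidden, 3)          # color
        self.feat_head = nn.Linear(hidden, feat_dim)  # auxiliary feature field

    def forward(self, x):
        h = self.trunk(x)
        sigma = torch.relu(self.sigma_head(h))
        rgb = torch.sigmoid(self.rgb_head(h))
        feat = self.feat_head(h)
        return sigma, rgb, feat

def composite(sigma, values, deltas):
    """Standard volume-rendering compositing of per-sample values along rays."""
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)       # [R, S]
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alpha * trans                                    # [R, S]
    return (weights.unsqueeze(-1) * values).sum(dim=1)         # [R, C]

def feature_guided_loss(model, pts, deltas, rgb_gt, feat_gt, lam=0.1):
    """RGB reconstruction loss plus feature-field supervision on training views.

    pts: [R, S, pos_dim] sample positions; deltas: [R, S] sample spacings;
    rgb_gt: [R, 3] pixel colors; feat_gt: [R, feat_dim] target features
    (assumed to come from a frozen pre-trained 2D encoder).
    """
    sigma, rgb, feat = model(pts)
    rgb_pred = composite(sigma, rgb, deltas)    # rendered color per ray
    feat_pred = composite(sigma, feat, deltas)  # rendered feature per ray
    return ((rgb_pred - rgb_gt) ** 2).mean() + lam * ((feat_pred - feat_gt) ** 2).mean()

# Usage sketch with made-up shapes: 1024 rays, 64 samples per ray.
model = FeatureFieldNeRF()
loss = feature_guided_loss(
    model,
    torch.randn(1024, 64, 63),     # sampled points (after positional encoding)
    torch.full((1024, 64), 0.01),  # sample spacings along each ray
    torch.rand(1024, 3),           # ground-truth pixel colors
    torch.randn(1024, 64),         # target features (stand-in values)
)
loss.backward()
```

Because the feature field shares the density-derived compositing weights with the RGB field, the feature loss also constrains geometry, which is one plausible way extra feature guidance could help under sparse views.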
Appears in Collections:Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
1-s2.0-S0262885625000538-main.pdf | - | 3.95 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.