Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/96492
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | -
dc.creator | Liu, ZS | en_US
dc.creator | Cani, MP | en_US
dc.creator | Siu, WC | en_US
dc.date.accessioned | 2022-12-07T02:55:11Z | -
dc.date.available | 2022-12-07T02:55:11Z | -
dc.identifier.issn | 1057-7149 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/96492 | en_US
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | en_US
dc.rights | The following publication Liu, Z. S., Cani, M. P., & Siu, W. C. (2022). See360: Novel Panoramic View Interpolation. IEEE Transactions on Image Processing, 31, 1857-1869 is available at https://doi.org/10.1109/TIP.2022.3148819. | en_US
dc.subject | 3D scene | en_US
dc.subject | Adversarial network | en_US
dc.subject | View rendering | en_US
dc.title | See360 : novel panoramic view interpolation | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1857 | en_US
dc.identifier.epage | 1869 | en_US
dc.identifier.volume | 31 | en_US
dc.identifier.doi | 10.1109/TIP.2022.3148819 | en_US
dcterms.abstract | We present See360, a versatile and efficient framework for 360° panoramic view interpolation using latent space viewpoint estimation. Most existing view rendering approaches focus only on indoor or synthetic 3D environments and render new views of small objects. In contrast, we tackle camera-centered view synthesis as a 2D affine transformation without using point clouds or depth maps, which enables effective 360° panoramic scene exploration. Given a pair of reference images, the See360 model learns to render novel views with a proposed Multi-Scale Affine Transformer (MSAT), enabling coarse-to-fine feature rendering. We also propose a Conditional Latent space AutoEncoder (C-LAE) to achieve view interpolation at any arbitrary angle. To show the versatility of our method, we introduce four training datasets, namely UrbanCity360, Archinterior360, HungHom360 and Lab360, collected from indoor and outdoor environments for both real and synthetic rendering. Experimental results show that the proposed method is generic enough to achieve real-time rendering of arbitrary views for all four datasets. In addition, our See360 model can be applied to view synthesis in the wild: with only a short extra training time (approximately 10 minutes), it is able to render unknown real-world scenes. The superior performance of See360 opens up a promising direction for camera-centered view rendering and 360° panoramic view interpolation. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on image processing, 2022, v. 31, p. 1857-1869 | en_US
dcterms.isPartOf | IEEE transactions on image processing | en_US
dcterms.issued | 2022 | -
dc.identifier.scopus | 2-s2.0-85124744557 | -
dc.identifier.pmid | 35139016 | -
dc.identifier.eissn | 1941-0042 | en_US
dc.description.validate | 202212 bckw | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | -
dc.description.pubStatus | Published | en_US
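The abstract above casts camera-centered view synthesis as a 2D affine transformation of image features, conditioned on the target viewing angle. The snippet below is a minimal sketch of that idea, not the authors' released code: it warps the feature maps of two reference views with angle-dependent affine grids and blends them. The function names (angle_to_theta, warp_features), the rotation-only transform, and the linear blending are illustrative assumptions; the paper's MSAT applies such warps coarse-to-fine across multiple feature scales.

```python
import torch
import torch.nn.functional as F


def angle_to_theta(angle_rad: torch.Tensor) -> torch.Tensor:
    """Map a batch of angles (B,) to rotation-only 2x3 affine matrices (B, 2, 3)."""
    cos, sin = torch.cos(angle_rad), torch.sin(angle_rad)
    zero = torch.zeros_like(cos)
    row0 = torch.stack([cos, -sin, zero], dim=-1)  # [cos, -sin, tx=0]
    row1 = torch.stack([sin, cos, zero], dim=-1)   # [sin,  cos, ty=0]
    return torch.stack([row0, row1], dim=-2)


def warp_features(feats: torch.Tensor, angle_rad: torch.Tensor) -> torch.Tensor:
    """Warp a (B, C, H, W) feature map by a per-sample 2D affine transform."""
    theta = angle_to_theta(angle_rad)
    grid = F.affine_grid(theta, list(feats.shape), align_corners=False)
    return F.grid_sample(feats, grid, align_corners=False)


if __name__ == "__main__":
    # Hypothetical usage: blend two reference views toward an in-between angle.
    feats_a = torch.randn(1, 64, 32, 32)  # features of reference view A
    feats_b = torch.randn(1, 64, 32, 32)  # features of reference view B
    t = 0.3  # interpolation factor in [0, 1] between the two viewpoints
    novel = (1 - t) * warp_features(feats_a, torch.tensor([0.2 * t])) \
        + t * warp_features(feats_b, torch.tensor([-0.2 * (1 - t)]))
    print(novel.shape)  # torch.Size([1, 64, 32, 32])
```

Because the warp is a plain affine grid-sample rather than a depth-based reprojection, no point clouds or depth maps are needed, which is what makes the real-time rendering claimed in the abstract plausible.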
Appears in Collections:Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
See360_Novel_Panoramic_View_Interpolation.pdf | - | 4.54 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 56 (last week: 3) as of May 12, 2024

Downloads: 30 as of May 12, 2024

Scopus™ citations: 2 as of May 16, 2024

Web of Science™ citations: 2 as of May 16, 2024

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.