Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/95954
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | en_US
dc.creator | Li, CT | en_US
dc.creator | Siu, WC | en_US
dc.creator | Liu, ZS | en_US
dc.creator | Wang, LW | en_US
dc.creator | Lun, DPK | en_US
dc.date.accessioned | 2022-10-28T07:28:26Z | -
dc.date.available | 2022-10-28T07:28:26Z | -
dc.identifier.issn | 0302-9743 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/95954 | -
dc.description | 16th European Conference on Computer Vision 2020, Online, August 23–28, 2020 | en_US
dc.language.iso | en | en_US
dc.publisher | Springer | en_US
dc.rights | © Springer Nature Switzerland AG 2020 | en_US
dc.rights | This version of the contribution has been accepted for publication, after peer review (when applicable), but is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/978-3-030-66823-5_1. Use of this Accepted Version is subject to the publisher’s Accepted Manuscript terms of use: https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms | en_US
dc.subject | Image inpainting | en_US
dc.subject | Attention | en_US
dc.subject | Back projection | en_US
dc.title | DeepGIN : deep generative inpainting network for extreme image inpainting | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 5 | en_US
dc.identifier.epage | 22 | en_US
dc.identifier.volume | 12538 | en_US
dc.identifier.doi | 10.1007/978-3-030-66823-5_1 | en_US
dcterms.abstract | The degree of difficulty in image inpainting depends on the types and sizes of the missing parts. Existing image inpainting approaches usually struggle to complete missing parts in the wild with visually and contextually pleasing results, as they are trained either to handle one specific type of missing pattern (mask) or under unilateral assumptions about the shapes and/or sizes of the masked areas. We propose a deep generative inpainting network, named DeepGIN, to handle various types of masked images. We design a Spatial Pyramid Dilation (SPD) ResNet block to enable the use of distant features for reconstruction. We also employ a Multi-Scale Self-Attention (MSSA) mechanism and a Back Projection (BP) technique to enhance our inpainting results. Our DeepGIN generally outperforms state-of-the-art approaches on two publicly available datasets (FFHQ and Oxford Buildings), both quantitatively and qualitatively. We also demonstrate that our model is capable of completing masked images in the wild. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and lecture notes in bioinformatics), 2020, v. 12538, p. 5-22 | en_US
dcterms.isPartOf | Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and lecture notes in bioinformatics) | en_US
dcterms.issued | 2020 | -
dc.relation.ispartofbook | Computer vision–ECCV 2020 workshops : Glasgow, UK, August 23–28, 2020 proceedings, part IV | en_US
dc.relation.conference | European Conference on Computer Vision [ECCV] | en_US
dc.identifier.eissn | 1611-3349 | en_US
dc.description.validate | 202210 bckw | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a1422 | -
dc.identifier.SubFormID | 44929 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | The Hong Kong Polytechnic University under a studentship for Mr. Chu-Tak Li. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
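The abstract above mentions a Spatial Pyramid Dilation (SPD) ResNet block that lets reconstruction draw on distant features through parallel dilated convolutions. Below is a minimal, hypothetical PyTorch sketch of such a block; the dilation rates, channel split, and plain residual connection are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of a Spatial Pyramid Dilation (SPD) ResNet block.
# Assumed details (not from the paper): dilation rates (1, 2, 4, 8),
# even channel split across branches, ReLU activations, plain residual add.
import torch
import torch.nn as nn


class SPDResNetBlock(nn.Module):
    """Residual block whose second stage is split into parallel dilated
    3x3 convolutions, so the block aggregates both nearby and distant features."""

    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        assert channels % len(dilations) == 0, "channels must split evenly across branches"
        branch_ch = channels // len(dilations)
        self.conv_in = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # One 3x3 conv per dilation rate; padding=dilation preserves spatial size.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, branch_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.conv_out = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv_in(x)
        # Concatenate the dilated branches back to the original channel count.
        h = torch.cat([branch(h) for branch in self.branches], dim=1)
        h = self.conv_out(h)
        return self.act(x + h)  # residual connection


if __name__ == "__main__":
    block = SPDResNetBlock(channels=64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

Because each branch pads by its own dilation rate, all branches keep the input's spatial size and can be concatenated back to the original channel count before the residual addition.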
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
EIE-0092_Li_Deepgin_Deep_Generative.pdf | Pre-Published version | 15.4 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
