Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/100001
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Saxena, D | en_US
dc.creator | Cao, J | en_US
dc.creator | Xu, J | en_US
dc.creator | Kulshrestha, T | en_US
dc.date.accessioned | 2023-07-26T07:37:47Z | -
dc.date.available | 2023-07-26T07:37:47Z | -
dc.identifier.uri | http://hdl.handle.net/10397/100001 | -
dc.description | IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023, 18-22 June 2023, Vancouver Convention Center, Vancouver, Canada. | en_US
dc.language.iso | en | en_US
dc.publisher | Computer Vision Foundation | en_US
dc.rights | This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. | en_US
dc.title | Re-GAN: data-efficient GANs training via architectural reconfiguration | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 16230 | en_US
dc.identifier.epage | 16240 | en_US
dcterms.abstract | Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires a vast number of training images. Recent research on GAN tickets reveals that dense GAN models contain sparse sub-networks, or "lottery tickets", that yield better results under limited data when trained separately. However, finding GAN tickets requires an expensive train-prune-retrain process. In this paper, we propose Re-GAN, a data-efficient GAN training method that dynamically reconfigures the GAN architecture to explore different sub-network structures during training. Our method repeatedly prunes unimportant connections to regularize the GAN network and regrows them to reduce the risk of prematurely pruning important connections. Re-GAN stabilizes GAN models with less data and offers an alternative to existing GAN-tickets and progressive-growing methods. We demonstrate that Re-GAN is a generic training methodology that achieves stability on datasets of varying sizes, domains, and resolutions (CIFAR-10, Tiny-ImageNet, and multiple few-shot generation datasets), as well as on different GAN architectures (SNGAN, ProGAN, StyleGAN2, and AutoGAN). Re-GAN also improves performance when combined with recent augmentation approaches. Moreover, by removing unimportant connections during training, Re-GAN requires fewer floating-point operations (FLOPs) and less training time while generating samples of comparable or even higher quality. Compared with the state-of-the-art StyleGAN2, our method performs better without requiring any additional fine-tuning step. Code can be found at this link: https://github.com/IntellicentAI-Lab/Re-GAN (an illustrative sketch of the prune-and-regrow loop follows this metadata table). | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023 (pp. 16230-16240). Computer Vision Foundation, 2023. | en_US
dcterms.issued | 2023 | -
dc.relation.conference | IEEE/CVF Computer Vision and Pattern Recognition Conference [CVPR] | en_US
dc.description.validate | 202307 bckw | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a2256 | -
dc.identifier.SubFormID | 47248 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | PolyU Internal Start-up Fund; Research Impact Fund | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
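
The abstract above describes Re-GAN's core mechanism: during GAN training, periodically prune unimportant connections and later regrow them so that important connections are not lost for good. Below is a minimal illustrative sketch of such a prune-and-regrow loop, not the authors' implementation (see their repository for that); the magnitude-based pruning criterion, the schedule parameters `prune_every`/`regrow_every`/`prune_ratio`, and the helper names are all assumptions made for illustration.

```python
# Hypothetical prune-and-regrow training skeleton in the spirit of Re-GAN's
# architectural reconfiguration. The criterion, schedule, and names are
# assumed for illustration; this is not the authors' code.
import torch

def magnitude_masks(model, prune_ratio):
    """Mask out the lowest-magnitude fraction of weights in each weight matrix."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases and norm parameters
            continue
        k = int(param.numel() * prune_ratio)
        if k == 0:
            masks[name] = torch.ones_like(param, dtype=torch.bool)
            continue
        threshold = param.abs().flatten().kthvalue(k).values
        masks[name] = param.abs() > threshold  # True = keep this connection
    return masks

def apply_masks(model, masks):
    """Zero the pruned connections (re-applied after every optimizer step)."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

def train(generator, discriminator, steps, gan_step,
          prune_every=1000, regrow_every=5000, prune_ratio=0.3):
    """gan_step is a user-supplied callable performing one G/D update."""
    masks = None
    for step in range(1, steps + 1):
        gan_step(generator, discriminator)
        if step % regrow_every == 0:
            masks = None  # regrow: stop masking so pruned weights can return
        elif step % prune_every == 0:
            masks = magnitude_masks(generator, prune_ratio)
        if masks is not None:
            apply_masks(generator, masks)
```

In this sketch, "regrowing" simply means the masks are dropped: masked weights still receive gradients while zeroed, so once masking stops they can grow back from zero, which matches the abstract's point about reducing the risk of prematurely pruning important connections.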
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Saxena_Re-GAN_Data-Efficient_GANs.pdf | Pre-Published version | 7.54 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 140 (as of Apr 14, 2025)
Downloads: 196 (as of Apr 14, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.