Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/100001
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Computing | en_US |
| dc.creator | Saxena, D | en_US |
| dc.creator | Cao, J | en_US |
| dc.creator | Xu, J | en_US |
| dc.creator | Kulshrestha, T | en_US |
| dc.date.accessioned | 2023-07-26T07:37:47Z | - |
| dc.date.available | 2023-07-26T07:37:47Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/100001 | - |
| dc.description | IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023, 18-22 June 2023, Vancouver Convention Center, Vancouver, Canada. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | Computer Vision Foundation | en_US |
| dc.rights | This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. | en_US |
| dc.title | Re-GAN: data-efficient GANs training via architectural reconfiguration | en_US |
| dc.type | Conference Paper | en_US |
| dc.identifier.spage | 16230 | en_US |
| dc.identifier.epage | 16240 | en_US |
| dcterms.abstract | Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires a vast number of training images. Recent research on GAN tickets reveals that dense GAN models contain sparse sub-networks, or "lottery tickets", that yield better results under limited data when trained in isolation. However, finding GAN tickets requires an expensive train-prune-retrain process. In this paper, we propose Re-GAN, a data-efficient GAN training method that dynamically reconfigures the GAN architecture to explore different sub-network structures during training. Our method repeatedly prunes unimportant connections to regularize the GAN and regrows them to reduce the risk of prematurely pruning important connections. Re-GAN stabilizes GAN training with less data and offers an alternative to existing GAN-ticket and progressive-growing methods. We demonstrate that Re-GAN is a generic training methodology that achieves stability on datasets of varying sizes, domains, and resolutions (CIFAR-10, Tiny-ImageNet, and multiple few-shot generation datasets) as well as across different GAN architectures (SNGAN, ProGAN, StyleGAN2, and AutoGAN). Re-GAN also improves performance when combined with recent augmentation approaches. Moreover, by removing unimportant connections during training, Re-GAN requires fewer floating-point operations (FLOPs) and less training time while generating samples of comparable or even higher quality. Compared to the state-of-the-art StyleGAN2, our method performs better without requiring any additional fine-tuning step. Code can be found at this link: https://github.com/IntellicentAI-Lab/Re-GAN (a minimal illustrative sketch of the prune-and-regrow cycle appears below the metadata table). | en_US |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023 (pp. 16230-16240). Computer Vision Foundation, 2023. | en_US |
| dcterms.issued | 2023 | - |
| dc.relation.conference | IEEE/CVF Computer Vision and Pattern Recognition Conference [CVPR] | en_US |
| dc.description.validate | 202307 bckw | en_US |
| dc.description.oa | Accepted Manuscript | en_US |
| dc.identifier.FolderNumber | a2256 | - |
| dc.identifier.SubFormID | 47248 | - |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | PolyU Internal Start-up Fund; Research Impact Fund | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: Conference Paper
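
The abstract above describes Re-GAN's core loop as repeatedly pruning unimportant connections and later regrowing them during GAN training. Below is a minimal, illustrative PyTorch sketch of such a prune-and-regrow cycle, not the authors' implementation: the toy `Generator`, the magnitude-based pruning criterion, and the `prune_steps`, `regrow_steps`, and `prune_ratio` values are all assumptions made for illustration; the actual Re-GAN code is available at the GitHub link in the abstract.

```python
# Illustrative prune-and-regrow GAN training sketch (assumptions throughout;
# not the authors' Re-GAN implementation).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy generator standing in for SNGAN/StyleGAN2-style backbones."""
    def __init__(self, z_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def magnitude_masks(model, prune_ratio=0.3):
    """Build binary masks that zero out the smallest-magnitude weights per layer."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases
            continue
        k = int(param.numel() * prune_ratio)
        if k == 0:
            continue
        threshold = param.detach().abs().flatten().kthvalue(k).values
        masks[name] = (param.detach().abs() > threshold).float()
    return masks

def apply_masks(model, masks):
    """Keep pruned connections at zero during the sparse phase."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

G = Generator()
opt = torch.optim.Adam(G.parameters(), lr=2e-4)
prune_steps, regrow_steps = 500, 500  # lengths of the sparse and dense phases (assumed values)
masks = None

for step in range(5000):
    phase = step % (prune_steps + regrow_steps)
    if phase == 0:
        masks = magnitude_masks(G, prune_ratio=0.3)  # prune: enter a sparse sub-network
    elif phase == prune_steps:
        masks = None  # regrow: return to the dense network
    # A real GAN loop would compute adversarial losses for G and a discriminator here;
    # a placeholder loss keeps the sketch self-contained and runnable.
    loss = G(torch.randn(8, 64)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if masks is not None:
        apply_masks(G, masks)  # re-zero pruned weights after the optimizer update
```

In the paper the reconfiguration is applied to full adversarial training of the generator and discriminator; the placeholder loss above only keeps the sketch runnable.
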
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Saxena_Re-GAN_Data-Efficient_GANs.pdf | Pre-Published version | 7.54 MB | Adobe PDF |
Page views: 140 (as of Apr 14, 2025)
Downloads: 196 (as of Apr 14, 2025)