Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116815
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Li, R | -
dc.creator | Guo, J | -
dc.creator | Zhou, Q | -
dc.creator | Guo, S | -
dc.date.accessioned | 2026-01-21T03:52:53Z | -
dc.date.available | 2026-01-21T03:52:53Z | -
dc.identifier.isbn | 979-8-4007-0686-8 | -
dc.identifier.uri | http://hdl.handle.net/10397/116815 | -
dc.description | 32nd ACM International Conference on Multimedia, Melbourne VIC, Australia, 28 October 2024 - 1 November 2024 | en_US
dc.language.iso | en | en_US
dc.publisher | The Association for Computing Machinery | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | © 2024 Copyright held by the owner/author(s). | en_US
dc.rights | The following publication Li, R., Guo, J., Zhou, Q., & Guo, S. (2024). FreePIH: Training-free painterly image harmonization with diffusion model. In Proceedings of the 32nd ACM International Conference on Multimedia, Melbourne VIC, Australia, is available at https://doi.org/10.1145/3664647.3680780. | en_US
dc.subject | Diffusion model | en_US
dc.subject | Image editing | en_US
dc.subject | Image harmonization | en_US
dc.title | FreePIH: training-free painterly image harmonization with diffusion model | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 7464 | -
dc.identifier.epage | 7473 | -
dc.identifier.doi | 10.1145/3664647.3680780 | -
dcterms.abstract | This paper presents an efficient training-free painterly image harmonization (PIH) method, dubbed FreePIH, that leverages only a pre-trained diffusion model to achieve state-of-the-art harmonization results. Unlike existing methods that must train auxiliary networks, fine-tune a large pre-trained backbone, or both, to harmonize a foreground object with a painterly-style background image, FreePIH tames the denoising process as a plug-in module for foreground image style transfer. Specifically, we find that the last few steps of the denoising (i.e., generation) process strongly correspond to the stylistic information of images; based on this, we propose to augment the latent features of both the foreground and background images with Gaussians for direct denoising-based harmonization. To guarantee the fidelity of the harmonized image, we use latent features to enforce content consistency and stability of the foreground objects in the latent space, while aligning the foreground and background to the same style. Moreover, to enrich the generation with more structural and textural details, we further integrate text prompts to attend to the latent features, improving generation quality. Quantitative and qualitative evaluations on the COCO and LAION-5B datasets demonstrate that our method surpasses representative baselines by large margins. | -
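The core idea in the abstract (Gaussian augmentation of the latent, then style alignment applied only in the last few denoising steps) can be sketched as a toy loop. Everything here is illustrative, not the authors' code: numpy arrays stand in for diffusion latents, `denoise_step` is a placeholder for a pre-trained model's denoiser, and the AdaIN-style statistic matching is one plausible reading of "aligning fore-/back-grounds with the same style" in latent space.

```python
import numpy as np

def align_fg_to_bg(z, mask, eps=1e-5):
    """AdaIN-style alignment: shift the foreground latent statistics
    (channel-wise mean/std) to match the background's."""
    out = z.copy()
    for c in range(z.shape[0]):
        fg, bg = z[c][mask], z[c][~mask]
        mu_f, std_f = fg.mean(), fg.std()
        mu_b, std_b = bg.mean(), bg.std()
        out[c][mask] = (fg - mu_f) / (std_f + eps) * std_b + mu_b
    return out

def harmonize(latent, mask, denoise_step, total_steps=50, style_steps=5, seed=0):
    """Toy denoising loop over a composite latent of shape (C, H, W).
    mask is a boolean (H, W) foreground mask. The latent is perturbed
    with Gaussian noise, denoised step by step, and style alignment is
    applied only in the last `style_steps` steps, which the paper
    observes carry most of the stylistic information."""
    rng = np.random.default_rng(seed)
    z = latent + 0.1 * rng.standard_normal(latent.shape)  # Gaussian augmentation
    for t in range(total_steps):
        z = denoise_step(z, t)
        if t >= total_steps - style_steps:
            z = align_fg_to_bg(z, mask)
    return z
```

After the loop, the foreground region's latent statistics match the background's, which is the harmonization effect the method targets; content preservation and text-prompt attention from the paper are omitted from this sketch.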
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In MM ’24: Proceedings of the 32nd ACM International Conference on Multimedia, p. 7464-7473. New York, NY: The Association for Computing Machinery, 2024 | -
dcterms.issued | 2024 | -
dc.identifier.scopus | 2-s2.0-85209810229 | -
dc.relation.ispartofbook | MM ’24: Proceedings of the 32nd ACM International Conference on Multimedia | -
dc.relation.conference | ACM International Conference on Multimedia [MM] | -
dc.publisher.place | New York, NY | en_US
dc.description.validate | 202601 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This research was supported by funding from the Key-Area Research and Development Program of Guangdong Province (No. 2021B0101400003), the Hong Kong RGC Research Impact Fund (No. R5060-19, No. R5034-18), the Areas of Excellence Scheme (AoE/E-601/22-R), the General Research Fund (No. 152203/20E, 152244/21E, 152169/22E, 152228/23E), the Hong Kong RGC General Research Fund (No. 152211/23E and 15216424/24E), the National Natural Science Foundation of China (No. 62102327), and the PolyU Internal Fund (No. P0043932). This research was also supported by the NVIDIA AI Technology Center (NVAITC). | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
3664647.3680780.pdf | - | 5.77 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.