Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/112797
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | en_US
dc.creator | Chan, CY | en_US
dc.creator | Siu, WC | en_US
dc.creator | Chan, YH | en_US
dc.creator | Chan, HA | en_US
dc.date.accessioned | 2025-05-09T00:55:00Z | -
dc.date.available | 2025-05-09T00:55:00Z | -
dc.identifier.issn | 0098-3063 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/112797 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2024 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | en_US
dc.rights | The following publication C.-Y. Chan, W.-C. Siu, Y.-H. Chan and H. A. Chan, "Back Projection Generative Strategy for Low and Normal Light Image Pairs With Enhanced Statistical Fidelity and Diversity," in IEEE Transactions on Consumer Electronics, vol. 71, no. 2, pp. 3575-3586, May 2025 is available at https://doi.org/10.1109/TCE.2024.3516366. | en_US
dc.subject | Data Augmentation | en_US
dc.subject | Diffusion | en_US
dc.subject | Generative Model | en_US
dc.subject | Image Synthesis | en_US
dc.subject | Low Light Image Enhancement | en_US
dc.title | Back projection generative strategy for low and normal light image pairs with enhanced statistical fidelity and diversity | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 3575 | en_US
dc.identifier.epage | 3586 | en_US
dc.identifier.volume | 71 | en_US
dc.identifier.issue | 2 | en_US
dc.identifier.doi | 10.1109/TCE.2024.3516366 | en_US
dcterms.abstract | Low light image enhancement (LLIE) using supervised deep learning is limited by the scarcity of matched low/normal light image pairs. We propose the Back Projection Normal-to-Low Diffusion Model (N2LDiff-BP), a novel diffusion-based generative model that realistically transforms normal-light images into diverse low-light counterparts. By injecting noise perturbations over multiple timesteps, our model synthesizes low-light images with authentic noise, blur, and color distortions. We introduce innovative architectural components (Back Projection Attention, BP Feedforward, and BP Transformer Blocks) that integrate back projection to model the narrow dynamic range and nuanced noise of real low-light images. Experiments show that N2LDiff-BP significantly outperforms prior augmentation techniques, enabling effective data augmentation for robust LLIE. We also introduce LOL-Diff, a large-scale synthetic low-light dataset. Our novel framework, architectural innovations, and dataset advance deep learning for low-light vision tasks by addressing data scarcity. N2LDiff-BP establishes a new state of the art in realistic low-light image synthesis for LLIE. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on consumer electronics, May 2025, v. 71, no. 2, p. 3575-3586 | en_US
dcterms.isPartOf | IEEE transactions on consumer electronics | en_US
dcterms.issued | 2025-05 | -
dc.identifier.scopus | 2-s2.0-85212042359 | -
dc.identifier.eissn | 1558-4127 | en_US
dc.description.validate | 202505 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Saint Francis University (1SG200206); The Hong Kong Polytechnic University, Hong Kong SAR | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
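The abstract above describes synthesizing low-light images by injecting noise perturbations over multiple diffusion timesteps. As a rough illustration only (the record does not give the paper's actual model, schedule, or back-projection details), a standard DDPM-style forward noising step can be sketched as follows; the linear beta schedule and all names here are assumptions, not the authors' implementation:

```python
import numpy as np

def forward_diffusion_step(x0, t, betas):
    """Sample x_t ~ q(x_t | x_0) in closed form for a generic DDPM
    forward process (illustrative only, not N2LDiff-BP itself)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]          # cumulative product up to step t
    noise = np.random.randn(*x0.shape)          # Gaussian perturbation
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# Toy "normal-light" image patch in [0, 1] (hypothetical data).
x0 = np.random.rand(8, 8)
betas = np.linspace(1e-4, 0.02, 1000)           # assumed linear schedule
xt = forward_diffusion_step(x0, 500, betas)     # partially noised image
```

Sampling x_t directly from x_0 via the cumulative product of the alphas is the standard closed-form shortcut that avoids iterating through all t noising steps one by one.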
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Chan_Back_Projection_Generative.pdf | | 4.9 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.