Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115487
DC Field | Value | Language
dc.contributor | Department of Aeronautical and Aviation Engineering | en_US
dc.creator | Hu, H | en_US
dc.creator | Bin, Y | en_US
dc.creator | Wen, CY | en_US
dc.creator | Wang, B | en_US
dc.date.accessioned | 2025-10-02T01:25:56Z | -
dc.date.available | 2025-10-02T01:25:56Z | -
dc.identifier.uri | http://hdl.handle.net/10397/115487 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier | en_US
dc.subject | Diffusion prior | en_US
dc.subject | Domain adaptation | en_US
dc.subject | Underwater Image Enhancement | en_US
dc.subject | Underwater Image Formation Model | en_US
dc.title | Underwater sequential images enhancement via diffusion and physics priors fusion | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 124 | en_US
dc.identifier.doi | 10.1016/j.inffus.2025.103365 | en_US
dcterms.abstract | Although learning-based Underwater Image Enhancement (UIE) methods have demonstrated remarkable performance, several issues remain to be addressed. A critical research gap is that the various water effects, including color bias, low contrast, and blur, are not properly removed. This is mainly due to the synthetic-real domain gap of the training data, which consist of either (1) real underwater images with synthetic pseudo-labels or (2) synthetic underwater images with accurate labels. Collecting real-world data with true labels is extremely challenging, since the water itself would have to be removed to obtain true references. Moreover, inter-frame consistency is not preserved because previous works are designed for single-image enhancement. To address these two issues, a novel UIE framework fusing both diffusion and physics priors is presented in this work. The extensive prior knowledge embedded in a pre-trained video diffusion model is leveraged for the first time to achieve zero-shot generalization from synthetic to real-world UIE, covering both single-frame quality and inter-frame consistency. In addition, a synthetic data augmentation strategy based on the physical imaging model is proposed to further alleviate the synthetic-real inconsistency. Qualitative and quantitative experiments on various real-world underwater scenes demonstrate the effectiveness of our approach, producing results superior to existing works in terms of both visual fidelity and quantitative metrics. | en_US
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | Information fusion, Dec. 2025, v. 124, 103365 | en_US
dcterms.isPartOf | Information fusion | en_US
dcterms.issued | 2025-12 | -
dc.identifier.scopus | 2-s2.0-105008173462 | -
dc.identifier.eissn | 1566-2535 | en_US
dc.identifier.artn | 103365 | en_US
dc.description.validate | 202510 bchy | en_US
dc.description.oa | Not applicable | en_US
dc.identifier.SubFormID | G000174/2025-07 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work was jointly supported by the Young Scientists Fund of the National Natural Science Foundation of China (42301520), the Research Grants Council of Hong Kong (25206524), the Platform Project of Unmanned Autonomous Systems Research Centre (P0049516), and the Seed Project of Smart Cities Research Institute (P0051028). | en_US
dc.description.pubStatus | Published | en_US
dc.date.embargo | 2027-12-31 | en_US
dc.description.oaCategory | Green (AAM) | en_US
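For context, the Underwater Image Formation Model named in the subject terms is commonly written in its simplified form as

$$I^c(x) = J^c(x)\, t^c(x) + B^c \bigl(1 - t^c(x)\bigr), \qquad t^c(x) = e^{-\beta^c d(x)}, \quad c \in \{R, G, B\},$$

where $I^c$ is the observed underwater image, $J^c$ the clean scene radiance, $B^c$ the background (veiling) light, $\beta^c$ the channel-wise attenuation coefficient, and $d(x)$ the scene depth; the paper's exact formulation may differ.

Below is a minimal sketch of physics-based underwater image synthesis under this simplified model, illustrating the general idea behind augmenting synthetic training data from clean images. The function name, coefficient values, and veiling-light values are hypothetical and do not reflect the paper's actual augmentation strategy.

```python
# Hypothetical sketch: synthesize an underwater image from a clean image and a
# depth map using the simplified formation model I = J * t + B * (1 - t),
# with per-channel transmission t = exp(-beta * d).
import numpy as np

def synthesize_underwater(clean: np.ndarray,
                          depth: np.ndarray,
                          beta: np.ndarray = np.array([0.35, 0.10, 0.05]),
                          background: np.ndarray = np.array([0.2, 0.5, 0.6])) -> np.ndarray:
    """clean:      H x W x 3 in-air RGB image, values in [0, 1].
    depth:      H x W scene depth map in metres.
    beta:       per-channel attenuation (red attenuates fastest; values assumed).
    background: per-channel veiling light B (bluish-green; values assumed).
    """
    t = np.exp(-beta[None, None, :] * depth[:, :, None])  # transmission map, H x W x 3
    return clean * t + background[None, None, :] * (1.0 - t)

if __name__ == "__main__":
    # Toy 4x4 scene whose depth increases from 1 m to 10 m left to right.
    J = np.full((4, 4, 3), 0.8)
    d = np.tile(np.linspace(1.0, 10.0, 4), (4, 1))
    I = synthesize_underwater(J, d)
    print(I[0, 0], I[0, -1])  # near pixel vs far (more color-cast, veiled) pixel
```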
Appears in Collections:Journal/Magazine Article
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.