Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/115487
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Aeronautical and Aviation Engineering | en_US |
| dc.creator | Hu, H | en_US |
| dc.creator | Bin, Y | en_US |
| dc.creator | Wen, CY | en_US |
| dc.creator | Wang, B | en_US |
| dc.date.accessioned | 2025-10-02T01:25:56Z | - |
| dc.date.available | 2025-10-02T01:25:56Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/115487 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier | en_US |
| dc.subject | Diffusion prior | en_US |
| dc.subject | Domain adaptation | en_US |
| dc.subject | Underwater Image Enhancement | en_US |
| dc.subject | Underwater Image Formation Model | en_US |
| dc.title | Underwater sequential images enhancement via diffusion and physics priors fusion | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 124 | en_US |
| dc.identifier.doi | 10.1016/j.inffus.2025.103365 | en_US |
| dcterms.abstract | Although learning-based Underwater Image Enhancement (UIE) methods have demonstrated remarkable performance, several issues remain to be addressed. A critical research gap is that different water effects, including color bias, low contrast, and blur, are not properly removed. This is mainly due to the synthetic-real domain gap of the training data, which are either (1) real underwater images with synthetic pseudo-labels or (2) synthetic underwater images with accurate labels. However, it is extremely challenging to collect real-world data with true labels, since the water effects would have to be removed to obtain true references. Besides, inter-frame consistency is not preserved because previous works are designed for single-image enhancement. To address these two issues, a novel UIE framework fusing both diffusion and physics priors is presented in this work. The extensive prior knowledge embedded in a pre-trained video diffusion model is leveraged for the first time to achieve zero-shot generalization from the synthetic to the real-world UIE task, covering both single-frame quality and inter-frame consistency. In addition, a synthetic data augmentation strategy based on the physical imaging model is proposed to further alleviate the synthetic-real inconsistency. Qualitative and quantitative experiments on various real-world underwater scenes demonstrate the significance of our approach, producing results superior to existing works in terms of both visual fidelity and quantitative metrics. | en_US |
| dcterms.accessRights | embargoed access | en_US |
| dcterms.bibliographicCitation | Information fusion, Dec. 2025, v. 124, 103365 | en_US |
| dcterms.isPartOf | Information fusion | en_US |
| dcterms.issued | 2025-12 | - |
| dc.identifier.scopus | 2-s2.0-105008173462 | - |
| dc.identifier.eissn | 1566-2535 | en_US |
| dc.identifier.artn | 103365 | en_US |
| dc.description.validate | 202510 bchy | en_US |
| dc.description.oa | Not applicable | en_US |
| dc.identifier.SubFormID | G000174/2025-07 | - |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | This work was jointly supported by the Young Scientists Fund of the National Natural Science Foundation of China (42301520), the Research Grants Council of Hong Kong (25206524), the Platform Project of Unmanned Autonomous Systems Research Centre (P0049516), the Seed Project of Smart Cities Research Institute (P0051028). | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.date.embargo | 2027-12-31 | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: Journal/Magazine Article
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
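The abstract refers to synthetic data augmentation based on a physical underwater imaging model. As an illustrative sketch only (the paper's exact formulation is not given in this record), the widely used simplified underwater image formation model degrades an in-air image `J` with a per-channel transmission `t = exp(-beta * d)` and veiling light `B`; the function name, parameter values, and model variant below are assumptions for illustration:

```python
import numpy as np

def synthesize_underwater(clean, depth, beta, backscatter):
    """Simplified underwater image formation model (illustrative):
        I(x) = J(x) * t(x) + B * (1 - t(x)),   t(x) = exp(-beta * d(x))
    clean:       HxWx3 in-air image, values in [0, 1]
    depth:       HxW scene depth map (meters)
    beta:        per-channel attenuation coefficients, shape (3,)
    backscatter: veiling light B per channel, shape (3,)
    """
    t = np.exp(-depth[..., None] * np.asarray(beta))   # per-channel transmission
    return clean * t + np.asarray(backscatter) * (1.0 - t)

# Red light attenuates fastest underwater, so distant pixels drift blue-green.
img = np.ones((2, 2, 3)) * 0.8                       # bright gray patch
depth = np.array([[1.0, 1.0], [10.0, 10.0]])         # near row vs. far row
out = synthesize_underwater(img, depth,
                            beta=[0.6, 0.15, 0.1],   # R attenuated most
                            backscatter=[0.05, 0.3, 0.35])
```

Sweeping `beta`, `depth`, and `backscatter` over plausible ranges is one common way such a model is used to turn clean images into diverse synthetic underwater training pairs.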