Title: Improving image quality by dynamic range and resolution enhancement
Authors: Li, Hui
Degree: Ph.D.
Issue Date: 2020
Abstract: Digital cameras play a vital role in recording images in our daily life. However, the quality of captured images may not meet our requirements due to the limited dynamic range and resolution of imaging sensors. High dynamic range imaging (HDRI) and super-resolution techniques have been developed to improve image quality. In this thesis, we investigate several key issues in HDRI and super-resolution. Specifically, we study the problems of tone mapping, multi-exposure fusion (MEF), and real-world single image super-resolution. Generally, there are two approaches to HDRI: tone mapping of high dynamic range (HDR) data and multi-exposure fusion. For HDR data captured by high-bit sensors, a process called tone mapping is needed to display the image on conventional low dynamic range displays. We present a novel clustering-based, content- and colour-adaptive tone mapping method. First, the radiance map containing the HDR content is partitioned into clusters. A principal component analysis (PCA) dictionary is then learned for each cluster. Each input patch is adaptively assigned to its closest cluster and projected onto the dictionary associated with that cluster. By applying an effective compression function to the coefficients, tone mapping is achieved while noise and trivial details are suppressed. To reduce the computational cost, an off-line version of the proposed method is built by pre-learning PCA dictionaries from natural images. Experimental results demonstrate that the proposed tone mapping method produces high-quality images with well-preserved local contrast and vivid colour appearance, with few artefacts. MEF is another popular approach to dynamic range enhancement. We propose a fast multi-scale MEF approach based on the structural patch decomposition of images.
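As a loose illustration of the clustering-based tone mapping pipeline described above (not the thesis implementation: the grayscale setting, patch size, cluster count, and power-law coefficient compression are all assumptions made here for a minimal sketch):

```python
import numpy as np

def extract_patches(img, p=8):
    """Slice an image into non-overlapping p x p patches (one patch per row)."""
    h, w = img.shape
    img = img[: h - h % p, : w - w % p]
    patches = img.reshape(h // p, p, w // p, p).swapaxes(1, 2).reshape(-1, p * p)
    return patches, (h - h % p, w - w % p)

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means; returns centroids and per-row cluster labels."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(0)
    return C, labels

def tone_map(radiance, k=4, p=8, gamma=0.5):
    """Cluster log-radiance patches, project each patch onto its cluster's
    PCA dictionary, compress the coefficients, and reconstruct."""
    logr = np.log1p(radiance)                 # work in the log-radiance domain
    X, (h, w) = extract_patches(logr, p)
    means = X.mean(1, keepdims=True)
    Xc = X - means                            # zero-mean patches for PCA
    _, labels = kmeans(Xc, k)
    out = np.empty_like(Xc)
    for j in range(k):
        idx = labels == j
        if not np.any(idx):
            continue
        # PCA dictionary of this cluster via SVD of its patch matrix
        _, _, Vt = np.linalg.svd(Xc[idx], full_matrices=False)
        coef = Xc[idx] @ Vt.T
        # power-law compression of coefficients suppresses noise/trivial detail
        coef = np.sign(coef) * np.abs(coef) ** gamma
        out[idx] = coef @ Vt
    out = out + means ** gamma                # compress the patch means (DC) too
    Y = out.reshape(h // p, w // p, p, p).swapaxes(1, 2).reshape(h, w)
    return (Y - Y.min()) / (Y.max() - Y.min() + 1e-8)
```

The key idea the sketch captures is adaptivity: each patch is compressed with a dictionary tuned to its own content cluster rather than a single global basis.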
The proposed method recasts patch decomposition as image decomposition and merges the weights of the signal strength and structure components. As a result, each image is decomposed into two components: a base layer and an implicit detail layer. We show that patch-decomposition-based aggregation is essentially mean filtering of the weight maps, which allows the computational complexity of patch aggregation to be greatly reduced and made independent of patch size. The multi-scale scheme is implemented by progressively decomposing the base layer, which helps alleviate the halo effect. The weights at each scale are designed in a scale-aware manner based on simple image statistics. Our approach produces pleasing MEF results with fewer artefacts and lower computational cost than previous state-of-the-art methods for both static and dynamic scenes.
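The mean-filtering view of patch aggregation can be sketched as follows. The well-exposedness weight and the filter radius are illustrative assumptions, not the thesis design, but the integral-image box filter shows why the per-pixel cost is independent of the patch (window) size:

```python
import numpy as np

def box_filter(img, r):
    """Mean filter with a (2r+1) x (2r+1) window via an integral image:
    O(1) per pixel, independent of the window (patch) size."""
    h, w = img.shape
    pad = np.pad(img, r, mode='edge')
    ii = np.pad(pad, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
    s = ii[2*r+1:, 2*r+1:] - ii[:h, 2*r+1:] - ii[2*r+1:, :w] + ii[:h, :w]
    return s / (2 * r + 1) ** 2

def fuse(exposures, r=4, sigma=0.2):
    """Fuse grayscale exposures: per-pixel well-exposedness weights,
    smoothed by a box filter (the mean-filtering view of patch
    aggregation), then a normalized weighted sum."""
    W = [np.exp(-((im - 0.5) ** 2) / (2 * sigma ** 2)) for im in exposures]
    W = [box_filter(w, r) for w in W]
    norm = np.sum(W, axis=0) + 1e-8
    return sum(w * im for w, im in zip(W, exposures)) / norm
```

Because the box filter's cost does not grow with `r`, enlarging the aggregation window is free, which is the efficiency point made above.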
We also investigate deep learning based MEF methods. In particular, we make the first attempt to fuse multi-exposure images with deep features in an unsupervised manner, where the features are extracted by a pre-trained network. We employ the shallow features, guided by the deeper semantic features of a classification network, to design fusion weight maps computed from local visibility and temporal consistency. The proposed method works well in both static and dynamic scenarios, producing pleasing fusion results at low computational cost. We then explore an end-to-end network for MEF trained on two public datasets. The trained network can effectively fuse multi-exposure images from the test dataset; however, its generalization ability is limited by the small number of available dynamic multi-exposure images with ground truth. In addition to dynamic range, resolution is another important factor affecting image quality. Deep convolutional neural networks (DCNNs) have achieved impressive performance in super-resolving low-resolution (LR) images that are bicubically downsampled from their high-resolution (HR) counterparts. However, DCNN models trained on such simulated data become less effective on real-world LR images, because the practical degradation of real images is far more complicated than bicubic downsampling. To improve super-resolution performance on real-world images, we construct a novel dataset of LR and HR pairs captured by adjusting the lens focus of digital cameras. With the new dataset, a plain regression network with simple loss functions can generate desirable results for real-world image super-resolution. Compared with other DCNN models driven by simulated data, our model better preserves fine-scale image edges and textures.
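As a rough illustration of visibility-guided fusion weights for the MEF approach above, with local contrast standing in for the pre-trained network features (a simplification made here, not the thesis method; the softmax temperature is also an assumed parameter):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_visibility(im, r=2):
    """Proxy for feature visibility: local standard deviation (contrast)
    over a (2r+1) x (2r+1) neighbourhood."""
    pad = np.pad(im, r, mode='edge')
    win = sliding_window_view(pad, (2 * r + 1, 2 * r + 1))
    return win.std(axis=(-1, -2))

def fuse_by_visibility(exposures, tau=0.1):
    """Softmax over per-exposure visibility maps gives per-pixel fusion
    weights; the fused image is the weighted sum of the exposures."""
    V = np.stack([local_visibility(im) for im in exposures])
    W = np.exp(V / tau)
    W /= W.sum(0)                      # weights sum to 1 at every pixel
    return (W * np.stack(exposures)).sum(0)
```

In the thesis, the visibility maps come from shallow network features guided by deeper semantic ones; the sketch only conveys how such maps turn into fusion weights.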
To sum up, in this thesis we propose a locally adaptive tone mapping method, a fast multi-scale patch-decomposition MEF method, deep learning based MEF methods, and a new real-world image super-resolution method. The developed methods demonstrate competitive performance in improving image quality with high efficiency.
Subjects: Hong Kong Polytechnic University -- Dissertations
High dynamic range imaging
High resolution imaging
Image processing -- Digital techniques
Imaging systems -- Image quality
Pages: xviii, 120 pages : color illustrations
Appears in Collections:Thesis

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.