Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/85798
dc.contributor: Department of Electronic and Information Engineering
dc.creator: Zhang, Xiang
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/6067
dc.language.iso: English
dc.title: Reconstruction of super-resolution image from low-resolution images using adaptive Wiener filtering
dc.type: Thesis
dcterms.abstract: Super-resolution image reconstruction is an image fusion problem in which multiple low-resolution images are fused into a higher-resolution image. Among the various approaches proposed to realize super-resolution image reconstruction, the approach proposed by Hardie in 2007 is one of the most efficient, as it is based on adaptive Wiener filtering (AWF), in which the core estimation of each high-resolution pixel requires only a weighted summation of the observed low-resolution pixels. In Hardie's AWF algorithm, the weights are derived with an isotropic covariance model based on (1) the geometric distance between two samples and (2) the local variance of the observed low-resolution pixels in the region of interest. Though this simple model reflects the correlation of two pixels in the general situation to a certain extent, it is not appropriate for describing the pixel correlation in a local region containing an edge or texture. Obviously, the covariance model adopted in AWF plays a significant role in the success of the algorithm. In this work, we devote our effort to improving the covariance model so as to improve the performance of AWF, and we investigate the impact of the covariance model on the performance of AWF. Hardie's covariance model adapts only to the local sample variance and does not take edge characteristics into account. In practice, pixels are more correlated in a non-edge region than in an edge region, as an edge breaks the correlation between pixels on its two sides. A simple trick is exploited in our first attempt to improve the model. In particular, separate covariance models are trained for edge regions and non-edge regions. To process an image, local regions are classified as either edge regions or non-edge regions, and the corresponding covariance model is then used to derive the weights for each region.
dcterms.abstract: The success of the simple trick exploited in our first attempt confirms the direction of improvement, and the idea is further extended to improve the covariance model. Obviously, pixels along an edge should be more correlated than pixels on different sides of the edge. As a result, the covariance model function should be anisotropic, oriented according to the edge direction in a local region. In our second attempt, we evaluate the sample variances along various directions in a local region and, based on the evaluation result, adjust the shape of the covariance model function. By doing so, the orientation of the covariance model function and the extent of its asymmetry are fully adaptive to the local characteristics. Unlike the approach adopted in our first attempt, no explicit region classification is required. Though a visible improvement in super-resolution image reconstruction performance can be achieved with the aforementioned improved covariance models, there is still room for further improvement. To a certain extent, our second attempt tries to detect the edges and then rectify the covariance model according to the detected edge direction. This is not always an easy task. For example, it is not easy to detect a complicated edge, especially when it lies in a textured or noisy region. Besides, since the high-resolution image is not available, the detection has to be based on the available low-resolution images and hence may not be accurate. To solve this problem, our third proposed solution adds an additional element to the covariance model that handles the edge characteristics implicitly, so that no explicit edge detection is required. In general, the more two pixels' intensity values differ, the more likely it is that the two pixels are separated by an edge or corrupted by noise. Based on this observation, a new covariance model is defined as a function of both the intensity difference and the geometric distance between two pixels. Without any explicit edge detection, the model automatically de-emphasizes the correlation between two pixels that lie on different sides of an edge. Accordingly, the weights of the pixels for estimating the high-resolution pixels can be determined more reliably with this covariance model, and a better super-resolution image reconstruction result is obtained in the end.
dcterms.accessRights: open access
dcterms.educationLevel: M.Phil.
dcterms.extent: viii, 100 leaves : ill. (some col.) ; 30 cm.
dcterms.issued: 2010
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
dcterms.LCSH: Image processing
dcterms.LCSH: Filters (Mathematics)
Appears in Collections: Thesis
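
The abstract describes the core of Hardie's AWF scheme: each high-resolution pixel is estimated as a weighted sum of nearby observed low-resolution pixels, with the weights derived from a covariance model driven by geometric distance and the local sample variance. The Python sketch below illustrates that weighting step only; the exponential-decay covariance form `sample_var * rho**d`, the function names, and the parameter values are illustrative assumptions, not the exact model used in the thesis.

```python
import numpy as np

def awf_weights(sample_coords, hr_coord, sample_var, rho=0.75, noise_var=1e-3):
    """Compute Wiener weights for one HR pixel from nearby LR samples.

    Assumes an isotropic, exponentially decaying covariance model
    r(d) = sample_var * rho**d, where d is Euclidean distance; the exact
    parametric form used in the thesis may differ.
    """
    # Pairwise distances among LR samples, and distances to the HR location.
    d_ss = np.linalg.norm(sample_coords[:, None, :] - sample_coords[None, :, :], axis=-1)
    d_sz = np.linalg.norm(sample_coords - hr_coord, axis=-1)
    R = sample_var * rho ** d_ss + noise_var * np.eye(len(sample_coords))  # auto-covariance of LR samples
    p = sample_var * rho ** d_sz                                           # cross-covariance with the HR pixel
    return np.linalg.solve(R, p)                                           # w = R^{-1} p

def estimate_hr_pixel(lr_values, weights):
    """The HR pixel estimate is simply a weighted sum of the observed LR pixels."""
    return float(weights @ lr_values)

# Usage: four LR samples surrounding the HR location (0.5, 0.5).
coords = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
values = np.array([10.0, 12.0, 11.0, 13.0])
w = awf_weights(coords, np.array([0.5, 0.5]), sample_var=np.var(values))
print(estimate_hr_pixel(values, w))
```

Because the covariance here depends only on distance and the local variance, the weights are isotropic, which is exactly the limitation in edge and texture regions that the thesis sets out to address.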
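The third improvement described in the abstract folds the intensity difference between two pixels into the covariance model, so that samples lying across an edge are automatically de-emphasized without any explicit edge detection. A minimal sketch of one such model follows; the multiplicative, bilateral-style form and the parameters `rho` and `h` are assumptions for illustration, and the thesis defines its own covariance function. For the cross-covariance with the unknown high-resolution pixel, one hypothetical choice (not taken from the thesis) is to substitute an initial interpolated estimate of its intensity.

```python
import numpy as np

def joint_covariance(coords_a, values_a, coords_b, values_b,
                     sample_var, rho=0.75, h=10.0):
    """Covariance model driven by both geometric distance and intensity
    difference (illustrative bilateral-style form, not the thesis's exact
    definition):

        cov(i, j) = sample_var * rho**d_ij * exp(-(g_i - g_j)**2 / h**2)
    """
    d = np.linalg.norm(coords_a[:, None, :] - coords_b[None, :, :], axis=-1)
    dg = values_a[:, None] - values_b[None, :]
    return sample_var * rho ** d * np.exp(-(dg ** 2) / h ** 2)

# Usage: LR samples straddling an edge; the intensity term shrinks the
# covariance between the bright and dark samples automatically.
coords = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
values = np.array([20.0, 22.0, 120.0, 118.0])
R = joint_covariance(coords, values, coords, values, sample_var=np.var(values))
print(np.round(R, 1))
```

With such an R matrix, the same Wiener solve w = R^{-1} p as in the previous sketch yields weights that favour samples on the same side of the edge as the pixel being estimated, which is the behaviour the abstract attributes to the third proposed covariance model.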