Title: Image reflection removal based on image gradient regeneration
Authors: Li, Tingtian
Advisors: Lun, P. K. Daniel (EIE)
Keywords: Image processing -- Digital techniques
Issue Date: 2019
Publisher: The Hong Kong Polytechnic University

Abstract: In daily photography, it is common to take pictures through a semi-reflective material (such as glass) and obtain images containing a reflection of another, unwanted scene. Reflection not only degrades the visual quality of the captured images but also hampers their subsequent use in applications such as recognition. Methods for removing reflections from images have therefore attracted much attention, from hobbyists to professionals in photography. However, reflection removal is a challenging and severely ill-posed problem: two unknowns (the background and the reflection) must be recovered from only one observation (the captured image with reflection). Due to this ill-posedness, traditional reflection removal methods often introduce different priors on the background and reflection to constrain the problem. Since the background and reflection images have very similar morphological properties, these priors are valid only in specific situations. Whenever a prior does not hold, residues of the reflection appear in the resulting image and degrade its quality. In this thesis, we propose a novel strategy for reflection removal. Rather than following the existing approaches in searching for a perfect prior that can accurately distinguish the background from the reflection in all situations, we believe it is more realistic and effective to provide a remedial strategy for when the separation is unsuccessful. The proposed background gradient regeneration strategy first removes the reflection components aggressively, even at the expense of losing some of the background components.
The missing background components are then regenerated from the remaining ones using different estimation methods. As our experiments show, this strategy leads to fewer reflection residues in the reconstructed background image, and the resulting algorithms are more robust in general imaging environments. Based on this strategy, three reflection removal algorithms are proposed in this thesis. The first algorithm targets the situation where light field (LF) images are available. It first estimates the depths along the image edges using the LF epipolar plane image (EPI). Based on these edge depths, we identify the background edges on the condition that they have a distinct depth difference from the reflection edges. Edges that cannot be confidently classified are ignored and later regenerated iteratively using a Markov random field (MRF) method. Once all background edges are regenerated, the final background image is reconstructed using another iterative optimization process.
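To illustrate the edge-classification step described above, here is a minimal toy sketch: edge pixels whose estimated depth is clearly closer to the background depth than to the reflection depth are kept as background edges, clearly closer to the reflection depth are marked as reflection edges, and ambiguous ones are deferred for regeneration. The function name, the single-depth model, and the margin threshold are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def classify_edges(edge_depth, bg_depth, refl_depth, margin=0.5):
    """Label each edge sample: 1 = background, -1 = reflection, 0 = uncertain.

    A label is assigned only when the depth is closer to one layer's depth
    than to the other's by at least `margin`; otherwise the edge is left
    as uncertain, to be regenerated later (here by the MRF step).
    """
    d_bg = np.abs(edge_depth - bg_depth)      # distance to background depth
    d_rf = np.abs(edge_depth - refl_depth)    # distance to reflection depth
    labels = np.zeros_like(edge_depth, dtype=int)
    labels[d_bg + margin < d_rf] = 1          # confidently background
    labels[d_rf + margin < d_bg] = -1         # confidently reflection
    return labels

# Toy per-edge depth estimates: two near the background (1.0), one near the
# reflection (3.0), and one ambiguous sample halfway between.
depths = np.array([1.0, 1.1, 3.0, 2.0])
print(classify_edges(depths, bg_depth=1.0, refl_depth=3.0))  # [ 1  1 -1  0]
```

The uncertain label (0) is the key design point: rather than forcing a decision on ambiguous edges (and risking reflection residues), they are dropped and regenerated from the confidently classified ones.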
Although this method is effective, the required iterative optimization processes are time-consuming. To improve the computation speed, we propose a second, deep neural network (DNN) based method using multi-view images. The second algorithm has a framework similar to the first; the major difference is the implementation backbone. The proposed DNN-based method first estimates the edge depths using a convolutional neural network (CNN). The background edges are identified following an approach similar to the first method. Then, a generative adversarial network (GAN) is used to regenerate the missing background edges. Finally, the background image is reconstructed from the estimated background edges using another CNN. Compared with the first approach, the deep learning-based method runs over 1,000 times faster on a graphics processing unit (GPU) without sacrificing image quality. In practice, we often need to remove reflections given only a single image of the scene. Therefore, we also propose a third method that requires only a single input image. With a single image, an accurate estimation of the edge depths is more difficult to achieve. To solve this problem, we make use of a prior adopted by many traditional approaches: the reflection image is often blurry. This prior is valid in many practical situations, since the background and reflection components often reside in different depth ranges; a camera focused on the background is likely to leave the reflection out of focus, producing a blurry reflection image. Following the background gradient regeneration strategy, we first train a CNN to aggressively remove the blurry components of the image, which are likely the reflection components. This aggressive strategy also removes some background edges. Then, based on the resulting image, we derive a reflection edge confidence map.
We use this map to obtain the background edges with high confidence and regenerate the missing ones using a GAN; the background image is reconstructed at the same time. The proposed algorithm gives state-of-the-art performance compared with existing single-image DNN-based approaches. Similar to the second approach, it needs only a couple of seconds to complete reflection removal when implemented on a GPU. The algorithm is particularly suitable for images with a blurry reflection, which is not uncommon in practice. Overall, we show in this thesis that the proposed reflection removal methods, built on the background gradient regeneration strategy, are more robust and perform better than traditional reflection removal methods. In particular, the proposed deep learning-based algorithms provide real-time performance due to their high computational efficiency when implemented on GPUs. We believe that the results of this work contribute significantly to the field and will attract great interest from the digital imaging industry.
Description: xx, 128 pages : color illustrations
PolyU Library Call No.: [THS] LG51 .H577P EIE 2019 LiT
URI: http://hdl.handle.net/10397/81957
Rights: All rights reserved.
Appears in Collections: Thesis
Files in This Item:
991022347059803411_link.htm (For PolyU Users, 168 B, HTML)
991022347059803411.pdf (For All Users, 3.26 MB, Adobe PDF)
Citations as of May 6, 2020
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.