Title: Face image analysis and its applications
Authors: Xie, Xudong
Keywords: Hong Kong Polytechnic University -- Dissertations; Human face recognition (Computer science)
Issue Date: 2006
Publisher: The Hong Kong Polytechnic University
Abstract: The aim of this research is to develop efficient algorithms for facial image analysis. Our research focuses on three areas: face recognition, illumination modeling and compensation, and facial expression recognition. We also review well-known face recognition techniques, as well as recent developments in face recognition under varying illumination and in facial expression recognition. We propose two methods for face recognition under various conditions: Elastic Shape-Texture Matching (ESTM) and Doubly nonlinear mapping Kernel Principal Component Analysis (DKPCA). ESTM uses both shape and texture information to compare two faces without establishing any precise pixel-wise correspondence. Because elastic matching is carried out within the neighborhood of each edge pixel, the method is robust to small, local distortions of feature points, such as those caused by variations in facial expression. DKPCA is a Gabor-based method that uses Gabor wavelets to extract facial features; a doubly nonlinear mapping kernel PCA is then proposed to perform feature transformation and face recognition. The proposed nonlinear mapping not only considers the statistical properties of the input features, but also adopts an eigenmask to emphasize important facial feature points. After this mapping, the transformed features therefore have higher discriminating power, and the relative importance of the features adapts to the spatial importance of the face images. This new nonlinear mapping is combined with conventional kernel PCA for face recognition.
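The doubly nonlinear mapping itself is specific to this thesis, but the conventional kernel PCA it builds on can be sketched as follows. This is a minimal NumPy sketch assuming an RBF kernel and a toy feature matrix; the function names and parameters are illustrative, not taken from the thesis.

```python
import numpy as np

def rbf_kernel(X, gamma=0.1):
    # Pairwise squared Euclidean distances -> Gaussian (RBF) kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=0.1):
    """Project training samples onto the top kernel principal components."""
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    # Center the kernel matrix in feature space
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    eigvals, eigvecs = np.linalg.eigh(Kc)          # ascending order
    idx = np.argsort(eigvals)[::-1][:n_components]  # take the largest
    # Normalize eigenvectors so projections have unit-variance scaling
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas

# Toy demo: 20 feature vectors (e.g. Gabor responses) of dimension 5
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
Z = kernel_pca(X, n_components=2)
print(Z.shape)  # (20, 2)
```

In a face recognition pipeline of the kind described above, `X` would hold Gabor-derived feature vectors, and recognition would compare the projected features `Z` (e.g. by nearest neighbor); the thesis's contribution is the additional nonlinear mapping applied before this step.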
Lighting conditions have a serious impact on the performance of face recognition methods, most of which perform poorly under uneven or varying illumination. We therefore investigate and propose two model-based methods for modeling illumination on the human face, so that the effect of uneven lighting can be reduced or compensated for. Based on the illumination model and the human face model used, we model illumination as a series of multiplicative and additive factors, which can be determined from the illumination model concerned and the shape of the human face. The first method compensates for uneven illumination on human faces and reconstructs face images under normal lighting conditions, using a 2D face shape model to obtain a shape-free texture image. Instead of computing the multiplicative and additive factors, the second illumination compensation method proposed in this thesis aims to reduce or even remove their effect. In this method, a local normalization technique is applied to the image, which effectively and efficiently eliminates the effect of uneven illumination while keeping the local statistical properties of the processed image the same as those of the corresponding image under normal lighting conditions. After processing, an image captured under varying illumination has pixel values similar to those of the corresponding image under normal lighting, and the processed images can then be used for face recognition. We also present an efficient method for facial expression recognition. We first propose a representation model for facial expressions, the Spatially Maximum Occurrence Model (SMOM), which is based on the statistical characteristics of the training facial images and has a powerful representation capability. The ESTM algorithm is then used to measure the similarity between images for facial expression recognition.
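A common form of local normalization, which matches the description above in spirit, subtracts the local mean and divides by the local standard deviation within a sliding window. The following is a minimal sketch, not the thesis's exact algorithm; the window size and epsilon are assumed parameters.

```python
import numpy as np

def local_normalize(img, win=7, eps=1e-6):
    """Zero the local mean and scale to unit local std within a sliding
    window, suppressing slowly varying (uneven) illumination."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    H, W = img.shape
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + win, j:j + win]
            out[i, j] = (img[i, j] - patch.mean()) / (patch.std() + eps)
    return out

# Toy check: a horizontal intensity ramp (pure "shading") is flattened;
# in the interior, each pixel equals its local mean, so the output is ~0.
img = np.tile(np.linspace(0, 255, 32), (32, 1))
norm = local_normalize(img)
print(norm.shape)  # (32, 32)
```

Because the normalization is purely local, a face image shaded by a smooth illumination gradient ends up with roughly the same local statistics as an evenly lit one, which is the property the abstract relies on before recognition.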
Combining SMOM and ESTM yields the SMOM-ESTM algorithm, which achieves a higher level of recognition performance. To reduce computational complexity when face recognition is applied to a large-scale database, it is necessary to filter the large database down to a smaller one containing face images similar to the query input. We therefore propose an efficient indexing structure for searching for a human face in a large database, which produces a condensed database that includes the target image and thereby reduces search time. All the methods proposed in this thesis have been evaluated and compared with existing methods. Experimental results show that our algorithms achieve convincing and consistent performance.
Description: xxi, 176 p. : ill. ; 30 cm.
PolyU Library Call No.: [THS] LG51 .H577P EIE 2006 Xie
URI: http://hdl.handle.net/10397/3885
Rights: All rights reserved.
Appears in Collections: Thesis
Files in This Item:
b19579792_link.htm (For PolyU Users, 162 B, HTML)
b19579792_ir.pdf (For All Users, Non-printable, 3.51 MB, Adobe PDF)
Citations as of Mar 19, 2018