Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/83591
dc.contributor: Department of Electronic and Information Engineering
dc.creator: Zhou, Huiling
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/8878
dc.language.iso: English
dc.title: Facial image analysis and its applications to face recognition
dc.type: Thesis
dcterms.abstract: The human face, which carries various modalities such as identity, expression, age, ethnicity, and gender, has been studied more intensely in the past few decades than any other part of the human body. Great advances have been made in face-related research areas, among them face recognition: the task of identifying or verifying a person from images and videos. With the growth of big data and computational power, general face recognition methods that handle real-life variations in pose, expression, lighting, etc. have achieved superior performance. The latest recognition rates on the most challenging face dataset at present, the Labeled Faces in the Wild (LFW) dataset, exceed 99.5%, which is reported to be even better than human performance. However, face recognition on low-resolution (LR) and aging face images remains among the most challenging problems. On the other hand, only a few studies have investigated high-resolution (HR) face recognition, where more distinctive information on faces (e.g. mark-scale features like moles and scars, and pore-scale features like pores) can be extracted to improve recognition accuracy. In addition, an efficient scheme for facial feature localization is useful for better face alignment, a preprocessing step for all face-related applications. Therefore, the objective of this thesis is to develop efficient methods for facial image analysis and face recognition from low resolution to high resolution, and across ages. In this thesis, we first propose a shape-appearance-correlated Active Appearance Model for facial feature localization, which aims to overcome the poor performance of current methods in generic situations, where the query images do not appear in the training set.
We first introduce a fast initialization scheme, which retrieves the faces most similar to a test face in terms of both pose and texture. Based on the idea of locality constraint, these nearest neighbors form a locally linear subspace. The shapes and appearances of the selected images are then analyzed, and their correlation is maximized by applying Canonical Correlation Analysis (CCA). Our approach increases the correlation between the principal components learned for face appearances and shapes, as well as between the respective projection coefficients. This improves both the convergence speed and the fitting accuracy, at almost no additional computational cost. Once the face images to be compared have been aligned based on the facial feature points, more reliable and accurate face recognition can be carried out. In this thesis, we investigate efficient face recognition algorithms for low-resolution, high-resolution, and aging face images. It has been found that, when the resolution of a face image is lower than 24 × 24 pixels, the performance of existing face recognition methods degrades significantly. One way to solve this problem is image super-resolution, which increases image resolution by recovering the missing high-frequency information in the input LR face images. We propose a two-step face super-resolution (also known as face-hallucination) framework based on orthogonal Canonical Correlation Analysis (OCCA) and linear mapping. In the global face reconstruction step, OCCA is applied to increase the correlation between the PCA coefficients of the LR and HR face pairs. In the residual face compensation step, a linear-mapping method is proposed to incorporate both the inter- and intra-information about the manifolds of different resolutions. Both contributions improve the visual quality of the super-resolved face images. When these super-resolved face images are used for face recognition, a higher recognition rate can be obtained than with the original LR face images.
dcterms.abstract: When the resolution of face images increases, it has also been found that the performance of existing face recognition methods improves only slightly. To exploit more distinctive information from HR faces, we propose using pore-scale facial features for HR face recognition. To compare two HR face images, the pore-scale facial features are detected and extracted from both images, and matched keypoints between them are established. In our proposed algorithm, the matched keypoints are converted to block matches, which are further aggregated to eliminate outliers. During face verification, the decision is based on the maximum density of the local, aggregated, matched blocks on the face images. In this way, face verification based on pore-scale facial features copes well with expression and pose variations, and it has also been shown to be more robust to alignment errors. In the last part of this thesis, we focus on age-invariant face recognition, a very challenging task for which the performance of existing algorithms remains unsatisfactory. We propose two methods for face recognition across ages. In our first work, we predict facial features at different ages by linear mapping on the local features. As the facial features of a person at two different ages should be correlated with each other, CCA is applied to the two sets of features, at different ages, to generate more coherent features for face recognition. In our second work, we consider the fact that age progression is quite personalized, so one person may look younger or older than another of the same age. In our algorithm, we propose using appearance-age labels, from which computers can learn more effectively and consistently than from real-age labels. In our method, an identity inference model is designed based on an age subspace learned from appearance ages. We first model human identity and aging variables simultaneously using probabilistic Linear Discriminant Analysis (PLDA). The aging subspace is learned independently using the appearance-age labels from a recently proposed aging dataset. Then, the identity subspace is determined iteratively with the Expectation-Maximization (EM) algorithm. In this way, face recognition becomes simpler, as identity inference no longer needs age labels. Moreover, different identity features are further combined using CCA, where their correlations are maximized for face recognition. All the methods proposed in this thesis have been evaluated and compared with existing state-of-the-art methods. Experimental results and analyses show that our algorithms achieve convincing and consistent performance.
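The block-match density scoring used for pore-scale verification can be sketched as follows. The block size, the 3 × 3 aggregation window, and the synthetic keypoint coordinates are illustrative assumptions, not values taken from the thesis.

```python
# Toy sketch: score a face pair by the maximum local density of
# aggregated block matches. Genuine pairs produce spatially clustered
# keypoint matches; impostor pairs produce scattered (outlier) matches.
import numpy as np

def max_block_density(matches, img_size=128, block=16):
    """matches: (N, 2) array of matched-keypoint (x, y) positions."""
    n_blocks = img_size // block
    grid = np.zeros((n_blocks, n_blocks), dtype=int)
    for x, y in matches:
        i = max(0, min(int(y) // block, n_blocks - 1))
        j = max(0, min(int(x) // block, n_blocks - 1))
        grid[i, j] += 1
    # Aggregate each block with its 8 neighbours, so isolated (likely
    # outlier) matches contribute little to the densest region.
    padded = np.pad(grid, 1)
    agg = sum(padded[i:i + n_blocks, j:j + n_blocks]
              for i in range(3) for j in range(3))
    return int(agg.max())

rng = np.random.default_rng(1)
genuine = rng.normal(loc=40, scale=6, size=(80, 2))  # clustered matches
impostor = rng.uniform(0, 128, size=(10, 2))         # scattered matches
print(max_block_density(genuine) > max_block_density(impostor))  # True
```

Thresholding this density score would give an accept/reject decision; because the score depends on local clusters rather than exact keypoint positions, it tolerates moderate alignment error.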
dcterms.accessRights: open access
dcterms.educationLevel: Ph.D.
dcterms.extent: xx, 165 pages : color illustrations
dcterms.issued: 2017
dcterms.LCSH: Human face recognition (Computer science)
dcterms.LCSH: Image analysis -- Data processing.
dcterms.LCSH: Image processing -- Digital techniques.
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
Appears in Collections: Thesis


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.