Title: Facial image analysis and recognition in the wild
Authors: Shakeel, Muhammad Saad
Advisors: Lam, Kin-man (EIE)
Keywords: Human face recognition (Computer science); Image analysis -- Data processing; Image processing -- Digital techniques
Issue Date: 2019
Publisher: The Hong Kong Polytechnic University
Abstract: The human face is the most widely used biometric for recognition and verification of one's identity. It has been widely studied and analyzed in the past few decades due to its various advantages over other biometrics. In recent years, face recognition research has reached many milestones, thanks to the availability of large amounts of training data and high computational power. Researchers have already achieved more than 99% recognition accuracy on one of the most challenging face datasets, namely Labeled Faces in the Wild (LFW). In spite of this, challenges still remain in the areas of low-resolution (LR) and age-invariant face recognition. Moreover, no previous work has investigated the problem of noise variations in cross-age face images. The major objective of this thesis is to develop efficient algorithms that can overcome these challenges. In this thesis, we first propose a sparse-coding-based method that aims to recognize low-resolution face images as small as 8×8 pixels, captured under controlled and uncontrolled environments. We first down-sample gallery faces to the same resolution as a query image, and then extract effective local features, namely Gabor wavelets and the local binary pattern difference feature. Extracted features are then decomposed into a low-rank feature matrix and a sparse error matrix. After that, a sparse-coding-based objective function is proposed that projects gallery and query face images onto a discriminant low-dimensional sparse feature subspace for recognition. Our method preserves structural information while projecting samples onto the new feature subspace, which results in accurate classification. Our method provides state-of-the-art performance in recognizing very LR images and outperforms both conventional and deep-learning-based face recognition methods.
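The low-resolution matching pipeline described above (down-sample the gallery to the query resolution, extract features, project onto a low-dimensional subspace, classify) can be sketched as follows. This is an illustrative toy, not the thesis implementation: raw pixels stand in for the Gabor/LBP features, plain PCA stands in for the learned discriminant sparse subspace, and the "gallery faces" are random arrays.

```python
import numpy as np

def downsample(img, size=8):
    """Block-mean downsample a square image to size x size
    (matching the gallery to the 8x8 query resolution)."""
    h = img.shape[0] // size
    return img.reshape(size, h, size, h).mean(axis=(1, 3))

rng = np.random.default_rng(0)
# Synthetic 32x32 "gallery faces" for 5 identities (random stand-ins).
gallery = [rng.random((32, 32)) for _ in range(5)]

# Step 1: down-sample the gallery to the query resolution and flatten.
X = np.stack([downsample(g).ravel() for g in gallery])   # shape (5, 64)

# Step 2: project onto a low-dimensional subspace via PCA
# (an illustrative stand-in for the learned discriminant sparse subspace).
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:4]                       # top-4 principal directions
G = Xc @ W.T                     # gallery codes in the subspace

# Step 3: the query is a noisy low-resolution capture of identity 2.
query = downsample(gallery[2]).ravel() + 0.01 * rng.standard_normal(64)
q = (query - mean) @ W.T

# Nearest neighbour in the subspace recovers the identity.
pred = int(np.argmin(np.linalg.norm(G - q, axis=1)))
print(pred)  # -> 2
```

With only 5 centred samples, the rank-4 PCA basis preserves all pairwise distances, so nearest-neighbour matching in the subspace behaves exactly as in the original feature space; the thesis instead learns a subspace that is additionally discriminative and sparse.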
In the second part of this thesis, we investigate existing work on the age-invariant face recognition problem. A typical approach to the aging problem is to synthesize a test image to the same age as a gallery image, and then perform recognition. However, developing an accurate aging model requires strong parametric assumptions and a large amount of training data, which makes this approach unsuitable for real-world applications. Another approach, based on discriminative models, aims to learn high-level facial features invariant to age progression. In this thesis, we propose a robust deep-feature-encoding-based discriminative model for aging face recognition. First, deep features are learned using a pre-trained deep convolutional neural network (AlexNet), and are then encoded using our proposed locality-constrained feature-encoding framework. By incorporating locality information, the correlation between features of the same identity is captured by sharing the local bases of the learned codebook. To make the codebook discriminative with respect to age progression, canonical correlation analysis (CCA) is used to fuse pairs of training-set features with large age gaps. Encoded features are then passed to a linear-regression-based classifier for recognition. Our proposed method does not require any age-label information for recognition.
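The CCA fusion step above can be illustrated with a minimal linear CCA on synthetic data. The `cca` helper and the two synthetic feature views (a shared identity factor plus age-specific noise, standing in for deep features of the same identities at two ages) are illustrative assumptions, not the thesis code:

```python
import numpy as np

def cca(X, Y, k=2, reg=1e-6):
    """Linear CCA via whitening + SVD of the cross-covariance.
    Returns projections Wx, Wy onto k maximally correlated dimensions,
    and the corresponding canonical correlations."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc / len(X) + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / len(X)

    def inv_sqrt(C):                       # symmetric inverse square root
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, S, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    return Kx @ U[:, :k], Ky @ Vt[:k].T, S[:k]

rng = np.random.default_rng(1)
# Two views of the same identities with a large age gap (synthetic):
# a shared 2-d identity factor plus age-specific noise in each view.
z = rng.standard_normal((200, 2))
X = z @ rng.standard_normal((2, 6)) + 0.1 * rng.standard_normal((200, 6))
Y = z @ rng.standard_normal((2, 6)) + 0.1 * rng.standard_normal((200, 6))

Wx, Wy, corr = cca(X, Y, k=2)
print(np.round(corr, 3))  # top canonical correlations close to 1:
                          # the shared identity signal is captured
```

Projecting both views with `Wx` and `Wy` aligns the age-separated features in a common space, which is the role CCA plays before codebook learning in the proposed framework.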
Aging variation is a complex non-linear process that affects various facial regions over time. However, the periocular region of the human face contains biometric features, such as the eyebrows, contour, eyeballs, and eyelids, that vary very little with time. Furthermore, the available training and testing data might be corrupted by random noise. Previous methods assume that training data is collected under controlled environments, which degrades their performance when corrupted testing data is presented for recognition. To solve this problem, we propose a manifold-constrained low-rank decomposition algorithm, which recovers the underlying identity information from corrupted data samples to provide a better feature representation. Our method also preserves the local structure of the data samples while removing the sparse errors. The resultant low-rank feature matrix is then encoded with an age-discriminative codebook learned using our proposed feature-encoding framework. Since CCA cannot model the non-linear relationship between two data samples, we use kernel canonical correlation analysis (KCCA) to fuse pairs of training-set features with large age differences, which are then used to learn the age-discriminative codebook. Encoded features are then passed to a nearest-neighbor classifier for recognition. The performance of our method is evaluated on both the whole face region and the periocular region, with different levels of corrupted pixels in both training and testing data. Our method proves highly robust to different levels of noise and achieves a superior recognition rate. All methods proposed in this thesis are evaluated through extensive experiments on challenging face datasets and compared with other state-of-the-art face recognition methods.
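The low-rank-plus-sparse recovery underlying this step can be sketched with a basic principal component pursuit (robust PCA) iteration. This is a generic sketch, not the thesis algorithm: the manifold constraint and codebook learning are omitted, the penalty parameters follow common defaults, and all data here are synthetic.

```python
import numpy as np

def robust_pca(D, lam=None, mu=None, n_iter=500):
    """Decompose D into low-rank L + sparse S by alternating
    singular-value thresholding on L and entrywise soft thresholding
    on S (a basic principal component pursuit / ADMM scheme)."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))       # common default weight
    mu = mu or 0.25 * m * n / np.abs(D).sum()   # common default penalty
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                        # dual variable
    for _ in range(n_iter):
        # L-step: singular-value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # S-step: entrywise soft thresholding isolates sparse errors.
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        # Dual update enforces D = L + S.
        Y += mu * (D - L - S)
    return L, S

rng = np.random.default_rng(2)
# Synthetic rank-3 "identity" matrix plus 5% grossly corrupted entries.
L_true = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
S_true = np.zeros((50, 40))
idx = rng.random((50, 40)) < 0.05
S_true[idx] = 10 * rng.standard_normal(idx.sum())

L, S = robust_pca(L_true + S_true)
rel_err = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
print(rel_err)  # small relative error: L_true is recovered despite corruption
```

The recovered low-rank component plays the role of the cleaned feature matrix that is subsequently encoded; the thesis additionally constrains the decomposition to preserve the local manifold structure of the samples.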
Description: xix, 147 pages : color illustrations
PolyU Library Call No.: [THS] LG51 .H577P EIE 2019 Shakeel
URI: http://hdl.handle.net/10397/80586
Rights: All rights reserved.
Appears in Collections: Thesis
Files in This Item:
991022210743903411_link.htm (For PolyU Users, 167 B, HTML)
991022210743903411_pira.pdf (For All Users, Non-printable, 4.35 MB, Adobe PDF)
Citations as of May 21, 2019
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.