|Title:||Towards least-constrained human identification by recognizing iris and periocular at-a-distance|
|Authors:||Zhao, Zijing|
|Degree:||Ph.D.|
|Issue Date:||2018|
|Abstract:||Recognizing humans in least-constrained environments is one of the key research goals for academia and industry. At-a-distance human recognition using iris and periocular information from the eye region has emerged as a promising approach to this problem, owing to the high uniqueness and stability of eye regions even under less-constrained environments. However, image samples acquired under such conditions usually suffer from degrading factors such as noise, occlusion and low resolution. Therefore, advanced algorithms beyond traditional methods are required to fully exploit the useful iris and periocular information in degraded images. This thesis focuses on developing effective and reliable algorithms for at-a-distance iris and periocular recognition under such conditions. The first stage of this thesis investigates accurate iris segmentation under less-constrained environments, a key prerequisite for the iris recognition process. The main challenge comes from undesired factors such as noise, occlusion and light-source reflections in degraded eye images. We built a novel relative total variation model with L1-norm regularization, referred to as RTV-L1, to deal with these obstacles. With this model, noise and texture can be suppressed in the acquired eye images while structures are soundly preserved, providing ideal conditions for preliminary segmentation. We then applied a series of robust post-processing steps to refine the segmentation contours. The proposed approach significantly outperforms other state-of-the-art iris segmentation methods, especially for degraded eye images acquired under less-constrained environments.
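The RTV-L1 model itself is more involved than can be shown here, but the role it plays, smoothing away noise and texture while preserving structure before a preliminary threshold-based segmentation, can be sketched with a simplified total-variation smoother. All function names and parameter values below are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def tv_smooth(img, weight=0.1, n_iter=100, step=0.2):
    """Gradient-descent total-variation smoothing: a simplified stand-in
    for the thesis's RTV-L1 model. Suppresses noise and fine texture
    while preserving large structures such as pupil/iris boundaries."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Forward differences (circular boundary via np.roll).
        gx = np.roll(u, -1, axis=1) - u
        gy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        # Divergence of the normalized gradient drives the TV flow.
        nx, ny = gx / mag, gy / mag
        div = (nx - np.roll(nx, 1, axis=1)) + (ny - np.roll(ny, 1, axis=0))
        # TV smoothing term balanced against an L1-style data fidelity.
        u += step * (weight * div - np.sign(u - img))
    return u

def preliminary_pupil_mask(img, percentile=10):
    """Threshold the smoothed image at a low intensity percentile to get
    a rough dark-region (pupil) mask, to be refined by post-processing."""
    smooth = tv_smooth(img)
    return smooth < np.percentile(smooth, percentile)
```

In the thesis's pipeline this preliminary mask would then be refined by a series of robust post-processing steps to recover accurate segmentation contours.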
Building on the RTV-L1 based iris segmentation framework, we developed a novel deep learning approach for extracting spatially corresponding features from iris images for more accurate and reliable matching. The approach is based on a fully convolutional network (FCN), which retains the critical locality of the deep iris features, and a newly designed extended triplet loss (ETL) function that accommodates non-iris occlusion and spatial translation during learning. The learned features offer superior matching accuracy and outstanding generalizability across different imaging environments, compared with both traditional hand-crafted iris features and convolutional neural network (CNN) based deep features. Another important contribution of this thesis is the development of deep learning based periocular recognition algorithms with improved accuracy and adaptiveness. Inspired by the human inference mechanism, we first investigated incorporating high-level semantic information from the periocular images (e.g., gender, left/right side) into the deep features learned by a CNN. Supplementing such semantic information helps recover more comprehensive and discriminative features and reduces over-fitting, and superior performance over state-of-the-art periocular recognition methods was obtained. Furthermore, we proposed an attention-based deep architecture for periocular recognition that further simulates the human visual classification system. Here we inferred that the eye and eyebrow regions are of critical importance for identifying periocular images and deserve more attention during visual feature extraction. We therefore incorporated such visual attention by emphasizing convolutional responses within detected eye and eyebrow regions in CNNs to enhance feature discriminability. This approach further boosted state-of-the-art performance substantially for periocular recognition under varying less-constrained conditions.
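The extended triplet loss idea, a triplet loss whose underlying distance tolerates horizontal translation and masks out non-iris occlusion, can be sketched as follows. The exact distance and loss forms below are assumptions for illustration, not the published ETL definition:

```python
import numpy as np

def masked_shift_distance(f1, f2, m1, m2, max_shift=4):
    """Distance between two spatial feature maps of shape (H, W, C),
    tolerant to horizontal translation and non-iris occlusion: the
    minimum, over circular horizontal shifts, of the mean squared
    difference computed only on jointly valid (unoccluded) pixels."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        f2s = np.roll(f2, s, axis=1)
        m2s = np.roll(m2, s, axis=1)
        valid = m1 & m2s
        if not valid.any():
            continue
        d = ((f1 - f2s) ** 2).sum(axis=-1)[valid].mean()
        best = min(best, d)
    return best

def extended_triplet_loss(fa, fp, fn, ma, mp, mn, margin=0.5):
    """Triplet loss built on the shift/occlusion-aware distance, in the
    spirit of the thesis's ETL: pull the anchor-positive distance below
    the anchor-negative distance by at least the margin."""
    dp = masked_shift_distance(fa, fp, ma, mp)
    dn = masked_shift_distance(fa, fn, ma, mn)
    return max(0.0, dp - dn + margin)
```

Because the distance scans over shifts and ignores masked pixels, a genuine match remains close even when the probe image is translated or partially occluded, which is the property the FCN features are trained to exploit.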
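The attention mechanism for periocular recognition, amplifying convolutional responses inside detected eye and eyebrow regions before pooling, can be illustrated with a minimal sketch; the multiplicative boost and the global-average-pooling choice are assumptions for illustration only:

```python
import numpy as np

def region_attention_pool(feature_map, region_mask, boost=2.0):
    """Global-average-pool a conv feature map of shape (H, W, C) after
    amplifying responses inside detected eye/eyebrow regions, mimicking
    the visual-attention re-weighting described above."""
    weights = np.where(region_mask, boost, 1.0)[..., None]  # (H, W, 1)
    weighted = feature_map * weights
    return weighted.mean(axis=(0, 1))  # (C,) pooled descriptor
```

With this re-weighting, activations from the eye and eyebrow regions contribute proportionally more to the pooled descriptor, steering the learned features toward the regions the thesis identifies as most discriminative.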
|Subjects:||Hong Kong Polytechnic University -- Dissertations
Pattern recognition systems
Optical pattern recognition
|Pages:||xv, 155 pages : color illustrations|
|Appears in Collections:||Thesis|
View full-text via https://theses.lib.polyu.edu.hk/handle/200/9759
Citations as of May 22, 2022
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.