Title: Accurate iris recognition at-a-distance and under less constrained environments
Authors: Tan, Chun Wei
Degree: Ph.D.
Issue Date: 2014
Abstract: Accurate iris recognition using eye or face images acquired at-a-distance and under less constrained environments requires the development of specialized iris segmentation and recognition strategies. The quality of such distantly acquired eye or face images is usually degraded by multiple commonly observed noise sources, such as occlusions (eyeglasses, hair, eyelashes, eyelids and shadows), reflections, motion or defocus blur, and off-angle or partial eye images. The influence of such noise is even more noticeable in eye images acquired under visible illumination. Performing iris segmentation and recognition on such noisy eye images is highly challenging. The main objective of this thesis is therefore to provide feasible solutions that improve the effectiveness of iris recognition at-a-distance and under less constrained environments. We develop an iris segmentation approach that exploits the random walker algorithm to efficiently estimate a coarsely segmented iris region. This coarse segmentation reduces the search space for further refinement through a set of post-processing operations that effectively improve segmentation accuracy; most of the commonly observed noise sources can be identified and masked by these operations. Segmentation accuracy is evaluated on subsets of distantly acquired images from three publicly available databases (UBIRIS.v2, FRGC and CASIA.v4-distance) by comparing the binary segmented iris mask with the corresponding ground-truth mask.
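The mask-versus-ground-truth comparison described in the abstract can be sketched as an average pixel-wise error rate between the two binary masks. The function below is a hypothetical illustration of that evaluation, not code from the thesis:

```python
def segmentation_error(mask, truth):
    """Average pixel-wise disagreement between a binary segmentation
    mask and its ground-truth mask (equal-shaped lists of 0/1 rows).
    A lower value indicates a more accurate segmentation."""
    total = sum(len(row) for row in truth)
    wrong = sum(m != t
                for mask_row, truth_row in zip(mask, truth)
                for m, t in zip(mask_row, truth_row))
    return wrong / total

# One wrongly classified pixel out of four gives an error rate of 0.25.
predicted = [[1, 0],
             [0, 1]]
ground_truth = [[1, 1],
                [0, 1]]
print(segmentation_error(predicted, ground_truth))  # 0.25
```

Averaging this per-image error over a database subset yields a single segmentation-accuracy figure per database.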
The second contribution of this thesis is the development of two iris encoding strategies: a global iris bits stabilization encoding and a localized Zernike moments phase-based encoding. The global encoding strategy is most effective in less noisy iris regions, while the localized encoding strategy is more tolerant to imaging quality variations (e.g., changes in scale, illumination, rotation and translation) and noise. The complementary matching information from jointly using both the global and the localized iris encodings can provide higher recognition accuracy for iris recognition at-a-distance and under less constrained environments. The recognition performance reported for this joint matching strategy on the UBIRIS.v2, FRGC and CASIA.v4-distance databases is encouraging, but further research is still required to improve recognition accuracy. We therefore present a study of the recent emerging research on periocular recognition. Joint matching of simultaneously acquired iris and periocular features has been shown to achieve even better recognition accuracy than either the iris or the periocular features alone. A final contribution of this thesis is the development of a computationally attractive binary encoding strategy that exploits geometric information for localized iris encoding, which we refer to as geometric key iris encoding. This strategy is intended as an alternative to the global iris bits stabilization encoding, which incurs relatively higher computational complexity. Our experimental results suggest that jointly matching the geometric key encoding and the Zernike moments phase-based encoding achieves better or comparable recognition performance with reduced computational complexity.
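The joint matching idea above combines distances produced by two separate matchers operating on binary iris codes. A minimal sketch, assuming standard masked fractional Hamming distance matching and a simple weighted-sum score fusion (the weight and all names here are placeholders, not values or code from the thesis):

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counted only over bits that both noise masks flag as valid."""
    valid = [i for i in range(len(code_a)) if mask_a[i] and mask_b[i]]
    if not valid:
        return 1.0  # no comparable bits: treat as a worst-case mismatch
    return sum(code_a[i] != code_b[i] for i in valid) / len(valid)

def fused_distance(d_global, d_local, w=0.5):
    """Weighted-sum fusion of the global and localized matcher
    distances; the weight w is a placeholder, not a thesis value."""
    return w * d_global + (1.0 - w) * d_local

a = [0, 1, 1, 0, 1, 0]
b = [0, 1, 0, 0, 1, 1]
m = [1, 1, 1, 1, 1, 0]   # last bit masked out as noise
d = hamming_distance(a, b, m, m)
print(d)                       # 0.2 (1 differing bit out of 5 valid)
print(fused_distance(d, 0.4))  # 0.3
```

The same fusion form extends naturally to a periocular matcher's score as a third weighted term.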
Subjects: Optical pattern recognition; Pattern recognition systems; Hong Kong Polytechnic University -- Dissertations
Pages: xv, 138 leaves : illustrations (some color) ; 30 cm
Appears in Collections: Thesis
View full-text via https://theses.lib.polyu.edu.hk/handle/200/7769
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.