Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/84218
dc.contributor: Department of Computing
dc.creator: Kanhangad, Vivek
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/5424
dc.language.iso: English
dc.title: Biometric identification using contact-free 3D hand scans
dc.type: Thesis
dcterms.abstract: The hand identification problem has been extensively studied in the biometrics literature. Commercially available identification systems based on hand geometry features have gained high user acceptance and found wide-ranging applications for personal verification tasks. Nevertheless, several critical issues remain to be addressed in order to make hand identification systems more robust and user-friendly. Major limitations of current two-dimensional image-based hand identification include its high vulnerability to spoof attacks, the inconvenience caused to the user by the constrained imaging setup (especially to the elderly and to people with limited dexterity), and hygienic concerns among users due to the placement of the hand on the imaging platform. Obviating the need for hand-position-restricting pegs and the imaging platform, however, introduces the highly challenging problem of handling hand pose variations in three-dimensional (3D) space. This dissertation explores the use of 3D contact-free hand scans and the possibility of integrating three-dimensional shape and intensity information in order to overcome the above limitations. A two-step, fully automatic approach for hand matching that handles large changes in pose is developed. In the first step, the acquired 3D hand is used to robustly estimate its orientation based on a single detected point on the hand. The estimated orientation information is then used to normalize the pose of the 3D hand along with its texture. In the second step, multimodal hand features extracted from the pose-corrected range and intensity images are used to perform identification. The extracted palmprint and finger geometry features are combined using a new dynamic fusion strategy. It is shown that the dynamic fusion approach performs significantly better than straightforward fusion using a weighted combination rule.

In order to extract discriminatory features from the palmprint region of the 3D hand, two approaches that exploit local surface details have been developed. The proposed 3D palmprint matcher is shown to be more robust against spoof attacks. For the purpose of 3D finger matching, two representations that characterize the 3D finger surface features are extracted from the range images. The matching metrics proposed for the two finger geometry features effectively handle limited pose variations and perform partial feature matching in order to enhance performance. Finally, an adaptive fusion framework based on hybrid particle swarm optimization (PSO), which chooses the optimal fusion rule and weight parameters for a desired level of security, is developed. Experiments are performed on synthetic as well as real biometric matching scores to demonstrate that the proposed fusion approach consistently outperforms the existing framework based on decision-level fusion.
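The abstract contrasts a fixed weighted-combination rule with a dynamic fusion strategy for combining palmprint and finger geometry match scores. As a rough illustration only (the thesis's actual dynamic rule is not reproduced here), a minimal Python sketch of the two ideas, where the per-sample quality-based weighting heuristic is an assumption:

```python
# Score-level fusion sketch. Scores are assumed to be similarity values
# normalized to [0, 1]; the quality-driven weighting below is illustrative,
# not the dissertation's exact dynamic fusion strategy.

def weighted_sum_fusion(palm_score, finger_score, w_palm=0.6):
    """Fixed weighted combination of two match scores."""
    return w_palm * palm_score + (1.0 - w_palm) * finger_score

def dynamic_fusion(palm_score, finger_score, palm_quality):
    """Adapt the palmprint weight to a per-sample quality estimate in [0, 1]."""
    w_palm = 0.6 * palm_quality  # trust the palmprint less when its quality is low
    return weighted_sum_fusion(palm_score, finger_score, w_palm)
```

With full palmprint quality the dynamic rule reduces to the fixed rule; as quality drops, the fused score leans increasingly on the finger geometry modality.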
dcterms.accessRights: open access
dcterms.educationLevel: Ph.D.
dcterms.extent: xvii, 178 p. : ill. ; 30 cm.
dcterms.issued: 2010
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
dcterms.LCSH: Biometric identification
dcterms.LCSH: Three-dimensional imaging in biology
Appears in Collections: Thesis

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.