Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/83947
Title: Texton encoding based texture classification and its applications to hand-back skin texture analysis
Authors: Xie, Jin
Degree: Ph.D.
Issue Date: 2012
Abstract: Real-world objects exhibit many types of textured surfaces. With the increasing demand for image understanding and object recognition in computer vision applications, texture classification has received considerable attention, and many texture classification methods have been proposed over the past decades. However, how to efficiently represent textures and extract texture features remains a challenging problem in texture image analysis and classification. In this thesis, we investigate this problem and propose new solutions for efficient texture feature extraction, representation and classification. As an application, we also apply the proposed methods to hand-back skin texture analysis.

First, we present a sparse representation (SR) based dictionary learning method to learn a dictionary of textons for texture image representation. In traditional texton learning based texture representation approaches, texton learning is usually implemented with K-means clustering. However, K-means clustering may not characterize well the intrinsic feature space of textons, which is often embedded in a low-dimensional manifold. To improve representation accuracy and capability, we propose to learn a dictionary of textons under the SR framework. The SR coefficients of the texture image over the texton dictionary are then used to construct histograms for classification. The proposed SR based texton dictionary learning method yields better performance than traditional K-means clustering based texture classification methods.
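To make the SR based texton learning pipeline concrete, here is a minimal Python sketch using scikit-learn's DictionaryLearning. The patch size, dictionary size K and sparsity weight alpha are illustrative assumptions, not the thesis's actual settings, and random data stands in for real texture patches.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # Illustrative data: each row is a vectorized texture patch, e.g. 7x7
    # grayscale patches flattened to 49-dim vectors (hypothetical sizes).
    rng = np.random.default_rng(0)
    patches = rng.standard_normal((2000, 49))

    # Learn a dictionary of K textons under the SR framework:
    # minimize ||X - C D||_F^2 + alpha * ||C||_1 jointly over the codes C
    # and the dictionary D (rows of D are the textons).
    K = 64
    learner = DictionaryLearning(n_components=K, alpha=1.0, max_iter=20,
                                 transform_algorithm="lasso_lars")
    codes = learner.fit_transform(patches)   # SR coefficients, (2000, K)
    textons = learner.components_            # texton dictionary, (K, 49)

    # Image descriptor: accumulate the magnitude of the SR coefficients
    # over the textons and normalize, giving a K-bin histogram.
    hist = np.abs(codes).sum(axis=0)
    hist /= hist.sum()
    print(hist.shape)  # (64,)

In this view the image descriptor is the normalized accumulation of sparse coefficients over the textons, playing the same role as the texton-frequency histogram in K-means based methods.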
We further propose an efficient texton encoding based texture classification scheme. The scheme consists of four stages: texton dictionary learning, texton encoding, feature description and classification. In the texton dictionary learning stage, a regularized least squares based texton learning model is proposed. Compared with texton learning based on SR or K-means clustering, the proposed model is much more accurate than K-means clustering while being much more efficient to implement than SR. We also propose a fast texton encoding method to encode texture features over the learned dictionary. Two types of texton encoding induced statistical features, the coefficient histogram and the residual histogram, are then extracted for classification. The proposed method, namely the texton encoding induced statistical feature (TEISF), is validated on three representative benchmark texture datasets: CUReT, KTH-TIPS and UIUC. The experimental results demonstrate that TEISF outperforms state-of-the-art methods, especially when the number of training samples is small.

Finally, we study the hand-back skin texture (HBST) pattern classification problem for personal identification and gender classification. A specially designed HBST imaging system is developed to capture HBST images, and an HBST image dataset is established, consisting of 1920 images from 80 persons (160 hands). The proposed texton learning based texture analysis methods are then applied to this dataset, and the experimental results demonstrate that HBST can effectively aid personal identification and gender classification. As a specific class of texture images, the established HBST dataset is challenging and provides a good platform for evaluating texture classification algorithms.
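The efficiency claim rests on the fact that regularized least squares encoding has a closed-form solution, unlike iterative sparse coding. The Python sketch below illustrates one plausible reading of the encoding and the two histograms; the regularization weight lam and the rule assigning each patch's residual to its dominant texton are assumptions for illustration, not necessarily the thesis's exact formulation.

    import numpy as np

    def teisf_features(patches, D, lam=0.01):
        """Coefficient and residual histograms from ridge-regularized encoding.

        patches : (N, d) vectorized texture patches
        D       : (K, d) learned texton dictionary (one texton per row)
        lam     : regularization weight (assumed name and default value)
        """
        K, _ = D.shape
        # The encoding c = argmin ||y - D^T c||^2 + lam * ||c||^2 has the
        # closed form c = (D D^T + lam I)^{-1} D y, so the projection
        # matrix P is precomputed once and reused for every patch.
        P = np.linalg.solve(D @ D.T + lam * np.eye(K), D)   # (K, d)
        C = patches @ P.T                                   # (N, K) codes

        # Coefficient histogram: accumulated coefficient magnitude per texton.
        coeff_hist = np.abs(C).sum(axis=0)
        coeff_hist /= coeff_hist.sum()

        # Residual histogram (one plausible variant): reconstruct each patch
        # from its dominant texton alone and bin the residual energy there.
        k_star = np.abs(C).argmax(axis=1)                # dominant texton
        c_star = C[np.arange(len(patches)), k_star]      # its coefficient
        recon = c_star[:, None] * D[k_star]              # (N, d)
        residuals = np.linalg.norm(patches - recon, axis=1) ** 2
        resid_hist = np.bincount(k_star, weights=residuals, minlength=K)
        resid_hist /= resid_hist.sum()

        return np.concatenate([coeff_hist, resid_hist])

    # Random stand-ins for real patches and a learned dictionary.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 49))
    D = rng.standard_normal((64, 49))
    print(teisf_features(X, D).shape)   # (128,): both histograms

Because P is computed once from the dictionary, encoding an entire image reduces to a single matrix multiplication, which is what makes this kind of scheme much faster than solving an l1-regularized problem per patch.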
Subjects: Pattern recognition systems.
Biometric identification.
Image processing.
Hong Kong Polytechnic University -- Dissertations
Pages: xiii, 116 p. : ill. ; 30 cm.
Appears in Collections: Thesis
