Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/25157
Title: Using the idea of the sparse representation to perform coarse-to-fine face recognition
Authors: Xu, Y
Zhu, Q
Fan, Z
Zhang, D 
Mi, J
Lai, Z
Keywords: Biometrics
Decision making
Face recognition
Information fusion
Security access
Issue Date: 2013
Publisher: Elsevier
Source: Information Sciences, 2013, v. 238, p. 138-148
Journal: Information Sciences
Abstract: In this paper, we propose a coarse-to-fine face recognition method. This method consists of two stages and works in a similar way to the well-known sparse representation method. The first stage determines a linear combination of all the training samples that is approximately equal to the test sample, and exploits this linear combination to coarsely determine candidate class labels for the test sample. The second stage then determines a weighted sum of the training samples from the candidate classes that is approximately equal to the test sample, and uses this weighted sum to perform classification. The rationale of the proposed method is as follows: the first stage identifies the classes that are "far" from the test sample and removes them from the set of training samples; the method then assigns the test sample to one of the remaining classes, so the classification problem becomes a simpler one with fewer classes. The proposed method not only achieves high accuracy but also can be clearly interpreted.
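The two-stage procedure described in the abstract can be illustrated with a short sketch. The Python code below is a hedged illustration, not the authors' implementation: the function name coarse_to_fine_classify, the ridge-regularized least-squares solver, and the parameters n_candidates and reg are hypothetical choices used only to show a coarse candidate-selection stage followed by a fine classification stage over the retained classes.

import numpy as np

def coarse_to_fine_classify(X_train, y_train, x_test, n_candidates=5, reg=1e-2):
    """Two-stage representation-based classification sketch.

    X_train : (d, n) matrix whose columns are training samples
    y_train : length-n array of class labels
    x_test  : length-d test sample
    n_candidates : number of candidate classes kept after the coarse stage
    reg     : ridge regularization used when solving the linear systems
    """
    classes = np.unique(y_train)

    def solve_coefficients(A, y):
        # Regularized least squares: (A^T A + reg * I) a = A^T y
        G = A.T @ A + reg * np.eye(A.shape[1])
        return np.linalg.solve(G, A.T @ y)

    # Stage 1 (coarse): represent the test sample as a linear combination
    # of all training samples.
    a = solve_coefficients(X_train, x_test)

    # Score each class by how far its own contribution to the representation
    # is from the test sample.
    deviations = {}
    for c in classes:
        idx = (y_train == c)
        contribution = X_train[:, idx] @ a[idx]
        deviations[c] = np.linalg.norm(x_test - contribution)

    # Keep the classes "closest" to the test sample as candidates;
    # the remaining "far" classes are removed from further consideration.
    candidates = sorted(classes, key=lambda c: deviations[c])[:n_candidates]

    # Stage 2 (fine): repeat the representation using only the candidate classes.
    keep = np.isin(y_train, candidates)
    X_cand, y_cand = X_train[:, keep], y_train[keep]
    b = solve_coefficients(X_cand, x_test)

    residuals = {}
    for c in candidates:
        idx = (y_cand == c)
        residuals[c] = np.linalg.norm(x_test - X_cand[:, idx] @ b[idx])

    # Assign the test sample to the candidate class with the smallest residual.
    return min(residuals, key=residuals.get)

# Purely illustrative usage with synthetic data:
# 100-dimensional features, 3 classes with 4 training samples each.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((100, 12))
y_train = np.repeat(np.arange(3), 4)
x_test = X_train[:, 0] + 0.1 * rng.standard_normal(100)
print(coarse_to_fine_classify(X_train, y_train, x_test, n_candidates=2))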
URI: http://hdl.handle.net/10397/25157
ISSN: 0020-0255
EISSN: 1872-6291
DOI: 10.1016/j.ins.2013.02.051
Appears in Collections:Journal/Magazine Article

Scopus citations: 97 (as of Jan 19, 2019)
Web of Science citations: 82 (as of Jan 12, 2019)
Page views: 101 (as of Jan 13, 2019)
