Title: Evaluate dissimilarity of samples in feature space for improving KPCA
Authors: Xu, Y
Zhang, D 
Yang, J
Zhong, J
Yang, J
Keywords: Feature extraction
Kernel methods
Kernel PCA
Issue Date: 2011
Source: International journal of information technology and decision making, 2011, v. 10, no. 3, p. 479-495
Journal: International Journal of Information Technology and Decision Making 
Abstract: Because each eigenvector in the feature space is a linear combination of all the samples in the training set, the computational efficiency of KPCA-based feature extraction degrades as the training set grows. In this paper, we propose a novel KPCA-based feature extraction method that assumes an eigenvector can be approximately expressed as a linear combination of a subset of the training samples ("nodes"). The method selects maximally dissimilar samples as nodes, which allows the eigenvector to capture as much information about the training set as possible. By using the feature-space distance between training samples to evaluate their dissimilarity, we devised a very simple and efficient algorithm to identify the nodes and produce a sparse KPCA. Experimental results show that the proposed method also achieves high classification accuracy.
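The idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the RBF kernel, the greedy farthest-point node selection, and the simplified centering of the projection are all assumptions made for illustration. It uses the standard identity that the squared feature-space distance between two samples is k(x,x) - 2k(x,y) + k(y,y), then restricts the KPCA expansion to the selected nodes.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y (an assumed choice).
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def select_nodes(X, n_nodes, gamma=1.0):
    """Greedily pick maximally dissimilar samples ("nodes") using the
    feature-space metric d(x, y)^2 = k(x,x) - 2 k(x,y) + k(y,y).
    Farthest-point selection is an illustrative stand-in for the
    paper's node-selection rule."""
    K = rbf_kernel(X, X, gamma)
    diag = np.diag(K)
    nodes = [0]  # start from an arbitrary sample
    # squared feature-space distance from every sample to the chosen set
    d2 = diag - 2 * K[:, 0] + diag[0]
    for _ in range(n_nodes - 1):
        j = int(np.argmax(d2))  # most dissimilar remaining sample
        nodes.append(j)
        d2 = np.minimum(d2, diag - 2 * K[:, j] + diag[j])
    return np.array(nodes)

def sparse_kpca(X, n_nodes=10, n_components=2, gamma=1.0):
    """Sparse KPCA sketch: eigenvectors are expanded over the nodes only,
    so kernel evaluations at projection time scale with n_nodes, not
    with the full training-set size."""
    nodes = select_nodes(X, n_nodes, gamma)
    Xn = X[nodes]
    Kn = rbf_kernel(Xn, Xn, gamma)
    # center the node kernel matrix in feature space
    m = Kn.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    w, V = np.linalg.eigh(H @ Kn @ H)
    order = np.argsort(w)[::-1][:n_components]  # leading eigenpairs
    alpha = V[:, order] / np.sqrt(np.maximum(w[order], 1e-12))
    # project all training samples onto the sparse components
    # (column-mean subtraction is a rough centering, for illustration)
    Kxn = rbf_kernel(X, Xn, gamma)
    return (Kxn - Kxn.mean(axis=0, keepdims=True)) @ alpha

# usage: features for 40 samples from only 8 node expansions
Z = sparse_kpca(np.random.RandomState(0).randn(40, 3), n_nodes=8)
```

Because the projection of a new sample needs kernel evaluations against the nodes only, the cost per sample drops from O(n) to O(n_nodes), which is the efficiency gain the abstract describes.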
URI: http://hdl.handle.net/10397/20504
ISSN: 0219-6220
DOI: 10.1142/S0219622011004415
Appears in Collections:Journal/Magazine Article


Scopus™ citations: 33 (as of Oct 17, 2018)

Web of Science™ citations: 21 (as of Oct 18, 2018)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.