Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/20504
Title: Evaluate dissimilarity of samples in feature space for improving KPCA
Authors: Xu, Y
Zhang, D 
Yang, J
Zhong, J
Yang, J
Issue Date: 2011
Source: International journal of information technology and decision making, 2011, v. 10, no. 3, p. 479-495
Abstract: Because an eigenvector in the feature space is a linear combination of all samples in the training set, the computational efficiency of KPCA-based feature extraction declines as the training set grows. In this paper, we propose a novel KPCA-based feature extraction method that assumes an eigenvector can be approximately expressed as a linear combination of a subset of the training samples ("nodes"). The new method selects maximally dissimilar samples as nodes, which allows the eigenvector to retain the maximum amount of information from the training set. By using the feature-space distance between training samples to evaluate their dissimilarity, we devised a very simple and efficient algorithm to identify the nodes and produce the sparse KPCA. Experimental results show that the proposed method also achieves high classification accuracy.
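The node-selection idea described in the abstract can be illustrated with a short sketch. The paper itself does not publish code here; the following is an assumed greedy farthest-point interpretation of "selecting maximally dissimilar samples", using the standard kernel identity ||φ(x) − φ(y)||² = k(x,x) − 2k(x,y) + k(y,y) to measure feature-space distance. The function names (`rbf_kernel`, `select_nodes`) and the greedy strategy are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def select_nodes(X, n_nodes, gamma=1.0):
    """Greedily pick 'nodes' that are maximally dissimilar in feature space.

    Uses ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y), so no explicit
    feature mapping is ever computed (a hypothetical sketch, not the
    authors' published algorithm).
    """
    K = rbf_kernel(X, X, gamma)
    diag = np.diag(K)
    nodes = [0]  # start from an arbitrary sample
    # Squared feature-space distance from every sample to its nearest node.
    d2 = diag - 2.0 * K[:, 0] + diag[0]
    for _ in range(n_nodes - 1):
        nxt = int(np.argmax(d2))       # most dissimilar remaining sample
        nodes.append(nxt)
        d2 = np.minimum(d2, diag - 2.0 * K[:, nxt] + diag[nxt])
    return nodes
```

Once the nodes are chosen, a sparse KPCA would solve the eigenproblem over the node subset only, so each eigenvector is a combination of `n_nodes` samples rather than the whole training set.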
Keywords: Feature extraction
Kernel methods
Kernel PCA
Journal: International Journal of Information Technology and Decision Making 
ISSN: 0219-6220
DOI: 10.1142/S0219622011004415
Appears in Collections:Journal/Magazine Article


SCOPUS™ Citations: 38 (as of Aug 29, 2020)
WEB OF SCIENCE™ Citations: 27 (as of Sep 26, 2020)
Page view(s): 157 (as of Sep 27, 2020)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.