Title: Localized generalization error bound for multiple classifier system
Authors: Chan, Pak-kei Patrick
Keywords: Hong Kong Polytechnic University -- Dissertations; Neural networks (Computer science)
Issue Date: 2009
Publisher: The Hong Kong Polytechnic University
Abstract:
The objective of this thesis is to study the Localized Generalization Error Model (L-GEM) for Multiple Classifier Systems (MCSs). L-GEM for a single classifier was proposed by Yeung (2007). It is based on the observation that it is unreasonable to expect a classifier trained on a set of learning samples to recognize unseen samples that are very different from the training set. The L-GEM provides an upper bound for the mean square error (MSE) of unseen samples in a neighborhood of each training sample. One significant application of the L-GEM is that it can be used as an objective function for base classifier training.

The assumption in the initial version of the L-GEM that all dimensions of a hidden neuron share the same width is now relaxed. The parameters of an RBF network are selected by minimizing its localized generalization error bound. The characteristics of the proposed objective function are compared with those of regularization methods. For the problem of weight selection, RBF networks trained by minimizing the proposed objective function consistently outperform RBF networks trained by techniques such as Training Error Minimization, Tikhonov Regularization, Weight Decay, or Locality Regularization. The proposed objective function is equally effective in the simultaneous selection of three parameters: center, width, and weight. RBF networks trained by minimizing the proposed objective function yield better testing accuracies than those trained by minimizing training error only.

A new dynamic fusion method for the construction of a MCS based on L-GEM is also proposed. This L-GEM based Fusion method (LFM) uses the L-GEM to estimate the local competence of the base classifiers in a MCS.
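The idea of penalizing a model by its behavior in a neighborhood of each training sample can be illustrated with a minimal sketch. This is not the thesis's exact bound; the function name `lgem_objective`, the uniform perturbation scheme, and the omission of the bound's constant terms are simplifying assumptions made here for illustration only.

```python
import math
import random

def lgem_objective(model, X, y, Q, n_perturb=20, seed=0):
    """Illustrative L-GEM-style objective (hypothetical sketch, not the
    thesis's bound): training MSE plus a stochastic sensitivity term
    estimated by perturbing each training input within its Q-neighborhood."""
    rng = random.Random(seed)
    # training mean square error
    mse = sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)
    # sensitivity: average squared output change under input perturbations
    sens = 0.0
    for x in X:
        base = model(x)
        sens += sum((model(x + rng.uniform(-Q, Q)) - base) ** 2
                    for _ in range(n_perturb)) / n_perturb
    sens /= len(X)
    return mse + sens  # the actual bound includes further constant terms

# Two models with (near-)zero training error but different sensitivity
# in the Q-neighborhoods of the training points.
X = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 1.0, 2.0, 3.0]
smooth = lambda v: v
wiggly = lambda v: v + math.sin(2 * math.pi * v)  # ~zero at each training x
```

Both models fit the training data, but the wiggly one varies sharply near the training points, so an L-GEM-style objective prefers the smooth one even though plain training-error minimization cannot distinguish them.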
Unlike current dynamic classifier selection methods, the LFM estimates the performance of the base classifiers using not only the training samples but also points in local neighborhood regions. Experimental results show that a MCS using the LFM performs consistently well in terms of testing classification accuracy and time complexity. The LFM is also compared experimentally with twenty-one current dynamic fusion methods; the results show that the LFM yields better testing accuracies than the other dynamic fusion methods.

L-GEM has been extended from a single classifier system to a MCS, named L-GEMMCS. L-GEMMCS is closely related to the existing error model "Bias Ambiguity Decomposition". L-GEMMCS consists of four terms: base classifier training error, diversity of training error, base classifier sensitivity, and diversity of sensitivity. The two diversity terms, diversity of training error and diversity of sensitivity, are new concepts that can be used to characterize the interactions among the base classifiers in a MCS. The meanings of these four terms and the relationships among them are analyzed and discussed. L-GEMMCS is shown to be useful in evaluating the generalization ability of a MCS, and can be used as a selection criterion for choosing the best set of classifiers from a pool of diverse base classifiers when constructing a MCS.

Description: xviii, 155 p. : ill. ; 30 cm.
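The general shape of dynamic fusion by local competence can be sketched as follows. This is a much-simplified illustration, not the thesis's LFM: it scores each base classifier only by its accuracy on the k nearest training samples, whereas the LFM additionally evaluates points in local neighborhood regions via the L-GEM. The function name, the 1-D distance, and the k-nearest rule are assumptions made here for the example.

```python
def dynamic_fusion(classifiers, X_train, y_train, x, k=3):
    """Hypothetical dynamic-fusion sketch (not the thesis's LFM):
    weight each base classifier's vote by its accuracy on the k
    training samples nearest to the query point x."""
    # indices of the k nearest training samples (1-D distance)
    nearest = sorted(range(len(X_train)), key=lambda i: abs(X_train[i] - x))[:k]
    votes = {}
    for clf in classifiers:
        # local competence: accuracy over the neighborhood
        acc = sum(clf(X_train[i]) == y_train[i] for i in nearest) / k
        label = clf(x)
        votes[label] = votes.get(label, 0.0) + acc
    # return the label with the highest competence-weighted vote
    return max(votes, key=votes.get)

# usage: two base classifiers, one locally competent and one not
X_train = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
y_train = [0, 0, 0, 1, 1, 1]
good = lambda v: 0 if v < 5 else 1   # agrees with the labels everywhere
bad = lambda v: 1 if v < 5 else 0    # disagrees everywhere
```

Because the incompetent classifier scores zero accuracy in every local neighborhood, its vote carries no weight, so the fused decision follows the locally competent classifier.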
PolyU Library Call No.: [THS] LG51 .H577P COMP 2009 Chan
URI: http://hdl.handle.net/10397/3444
Rights: All rights reserved.
Appears in Collections: Thesis
Files in This Item:
b23210230_link.htm (For PolyU Users, 162 B, HTML)
b23210230_ir.pdf (For All Users, Non-printable, 4.2 MB, Adobe PDF)
Citations as of Dec 17, 2018