Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/84002
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Li, Yingjie | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/8424 | -
dc.language.iso | English | -
dc.title | Extended ELM-based architectures and techniques for fast learning of feature interaction and intervals from data | -
dc.type | Thesis | -
dcterms.abstract | This research focuses on the fast learning and extraction of knowledge from data. The particular technique adopted in this research is the extreme learning machine (ELM), a fast learning algorithm for the single-layer feedforward network (SLFN). ELM theory shows that all the hidden nodes can be independent of the training samples and do not need to be tuned. In this case, training an SLFN is simply equivalent to finding a least-squares solution of a linear system, which can be computed quickly and accurately using the generalized inverse technique (an illustrative sketch of this training step follows the metadata record below). In this research, several extended ELM-based architectures and techniques are developed for fast learning from data. The contributions of this work can be summarized in three aspects: (i) ELM mapping and modeling, (ii) ELM architecture selection, and (iii) input data compression for ELM. Focusing on the ELM mapping and modeling aspect, a generalized framework named fuzzy ELM (FELM) is developed for fast learning of feature interaction from data. To address the high complexity of determining the fuzzy measure, FELM extends the original ELM structure based on the subset selection concept of the fuzzy measure. The main contribution is a new subset selection algorithm, which transfers the input samples from the original feature space to a higher-dimensional feature space for fuzzy measure representation. The fuzzy measure can then be obtained using the related fuzzy integral in this high-dimensional feature space. The subset selection scheme in FELM is applicable to many kinds of fuzzy integrals, such as the Choquet integral, the Sugeno integral, the mean-based fuzzy integral, and the order-based fuzzy integral. Compared with the traditional genetic algorithm (GA) and particle swarm optimization (PSO) approaches for determining the fuzzy measure, FELM achieves faster learning and smaller testing error on both simulated data and real data from a computer game. | -
dcterms.abstract | Focusing on the ELM architecture selection aspect, an architecture selection algorithm for ELM is developed. This algorithm uses a multi-criteria decision making (MCDM) model to select the optimal number of hidden neurons; it ranks the alternative architectures by measuring the closeness of their criteria (a simplified ranking sketch follows the metadata record below). The major contribution is the introduction of a tolerance concept to evaluate a model's generalization capability in approximating unseen samples. Two trade-off criteria are used: the training accuracy and the Q-value estimated by the localized generalization error model (LGEM). The training accuracy reflects the generalization ability of the model on the training samples, and the Q-value estimated by LGEM reflects its generalization ability on unseen samples. Compared with k-fold cross validation (CV) and LGEM, our method achieves better testing accuracy on most of the datasets in a shorter time. Focusing on the data compression aspect, a learning model named interval ELM is developed for large-scale data classification. Two contributions are made for selecting representative samples and removing data redundancy. The first is a newly developed discretization method based on uncertainty reduction, inspired by the traditional decision tree (DT) induction algorithm. The second is a new concept named class label fuzzification, which is performed on the class labels of the compressed intervals; the fuzzified class labels can represent the dependency among different classes. Experimental comparisons are conducted among the basic ELM and the interval ELM with four different kinds of discretization methods, and the interval ELM achieves better and more promising results. | -
dcterms.accessRights | open access | -
dcterms.educationLevel | Ph.D. | -
dcterms.extent | xiv, 121 pages : illustrations | -
dcterms.issued | 2015 | -
dcterms.LCSH | Machine learning. | -
dcterms.LCSH | Artificial intelligence. | -
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | -
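
To accompany the first abstract, the sketch below illustrates the basic ELM training step it describes: hidden-node parameters are drawn at random and never tuned, and the output weights are obtained as a least-squares solution via the Moore-Penrose generalized inverse. This is a minimal sketch of the standard ELM, not of the thesis's FELM extension; the function names, the tanh activation, and the toy data are illustrative assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=None):
    """Minimal single-hidden-layer ELM: random, untuned hidden nodes;
    output weights are the least-squares solution via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Hidden-node parameters are random and are never adjusted afterwards.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T      # Moore-Penrose generalized inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression example.
X = np.random.rand(200, 5)
T = np.sin(X.sum(axis=1, keepdims=True))
W, b, beta = elm_train(X, T, n_hidden=50, seed=0)
print(elm_predict(X, W, b, beta).shape)   # (200, 1)
```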
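
For the architecture selection part of the second abstract, the sketch below shows one simple way to rank candidate hidden-layer sizes by the closeness of two trade-off criteria, in the spirit of an MCDM ranking. The closeness-to-ideal scoring, the candidate sizes, and the criterion values are illustrative assumptions; the thesis's tolerance concept and the Q-value estimated by the localized generalization error model (LGEM) are not reproduced here, and a generic generalization-error estimate stands in for the Q-value.

```python
import numpy as np

def rank_by_closeness(candidates, train_acc, gen_error):
    """Rank candidates by closeness to an ideal point: maximal training
    accuracy and minimal estimated generalization error (best first)."""
    acc = np.asarray(train_acc, dtype=float)
    err = np.asarray(gen_error, dtype=float)
    # Normalize both criteria to [0, 1] so they are comparable.
    acc_n = (acc - acc.min()) / (acc.max() - acc.min() + 1e-12)
    err_n = (err - err.min()) / (err.max() - err.min() + 1e-12)
    # Distance to the ideal point (normalized accuracy 1, normalized error 0).
    dist = np.sqrt((1.0 - acc_n) ** 2 + err_n ** 2)
    return [candidates[i] for i in np.argsort(dist)]

# Hypothetical criterion values for four candidate hidden-layer sizes.
sizes = [20, 50, 100, 200]
train_acc = [0.91, 0.95, 0.97, 0.99]
gen_error = [0.20, 0.12, 0.10, 0.18]   # stand-in for the LGEM Q-value
print(rank_by_closeness(sizes, train_acc, gen_error))  # [100, 50, 200, 20]
```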
Appears in Collections: Thesis
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.