Title: Extended ELM-based architectures and techniques for fast learning of feature interaction and intervals from data
Authors: Li, Yingjie
Degree: Ph.D.
Issue Date: 2015
Abstract: This research focuses on the fast learning and extraction of knowledge from data. The particular technique adopted in this research is the extreme learning machine (ELM), a fast learning algorithm for single-layer feedforward networks (SLFNs). ELM theory shows that all the hidden nodes can be independent of the training samples and do not need to be tuned. In this case, training an SLFN is simply equivalent to finding a least-squares solution of a linear system, which can be computed quickly and accurately using the generalized inverse technique. In this research, several extended ELM-based architectures and techniques are developed for fast learning from data. The contributions of this work can be summarized in three aspects: (i) ELM mapping and modeling, (ii) ELM architecture selection, and (iii) input data compression for ELM. Focusing on the ELM mapping and modeling aspect, a generalized framework named fuzzy ELM (FELM) is developed for fast learning of feature interaction from data. To address the high complexity of determining a fuzzy measure, FELM extends the original ELM structure based on the subset selection concept of the fuzzy measure. The main contribution is a new subset selection algorithm, which transfers the input samples from the original feature space to a higher-dimensional feature space for fuzzy measure representation. The fuzzy measure can then be obtained using the related fuzzy integral in this high-dimensional feature space. The subset selection scheme in FELM is applicable to many kinds of fuzzy integrals, such as the Choquet integral, the Sugeno integral, the mean-based fuzzy integral, and the order-based fuzzy integral. Compared with the traditional genetic algorithm (GA) and particle swarm optimization (PSO) algorithm for determining fuzzy measures, FELM achieves faster learning and smaller testing error on both simulated data and real data from a computer game.
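The core ELM training step the abstract describes (random, untuned hidden nodes plus an output layer solved in closed form via the Moore-Penrose generalized inverse) can be sketched in a few lines. This is a minimal illustration only; the activation function, weight ranges, and network sizes below are generic assumptions, not the thesis's exact settings:

```python
import numpy as np

def elm_train(X, T, n_hidden=30, rng=None):
    """ELM training for a single-hidden-layer feedforward network:
    hidden parameters are drawn at random and never tuned; only the
    output weights are solved, as a linear least-squares problem."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.uniform(-4, 4, size=(n_features, n_hidden))  # random input weights
    b = rng.uniform(-4, 4, size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T    # Moore-Penrose generalized inverse solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the fixed hidden layer and learned output layer."""
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is solved for, training reduces to a single pseudoinverse computation, which is what gives ELM its speed advantage over iteratively tuned networks.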
Focusing on the ELM architecture selection aspect, an architecture selection algorithm for ELM is developed. This algorithm uses a multi-criteria decision making (MCDM) model to select the optimal number of hidden neurons, ranking the alternatives by measuring the closeness of their criteria. The major contribution is the introduction of a tolerance concept to evaluate a model's generalization capability in approximating unseen samples. Two trade-off criteria are used: the training accuracy and the Q-value estimated by the localized generalization error model (LGEM). The training accuracy reflects the generalization ability of the model on the training samples, and the Q-value estimated by LGEM reflects its generalization ability on unseen samples. Compared with k-fold cross-validation (CV) and LGEM, the proposed method achieves better testing accuracy on most of the datasets in less time.

Focusing on the input data compression aspect, a learning model named interval ELM is developed for large-scale data classification. Two contributions are made for selecting representative samples and removing data redundancy. The first is a newly developed discretization method based on uncertainty reduction, inspired by the traditional decision tree (DT) induction algorithm. The second is a new concept named class label fuzzification, which is performed on the class labels of the compressed intervals; the fuzzified class labels represent the dependency among different classes. Experimental comparisons are conducted between the basic ELM and the interval ELM with four different discretization methods, and the interval ELM achieves better and more promising results.
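The uncertainty-reduction discretization is described only at a high level in the abstract. As an illustration of the decision-tree-style idea it is inspired by, a single entropy-based cut-point search might look like the following; the function names and the information-gain criterion are generic assumptions, not the thesis's exact method:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a class-label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_cut(values, labels):
    """Find the boundary on one feature that most reduces class
    uncertainty (maximum information gain), as in DT induction."""
    order = np.argsort(values)
    v, y = values[order], labels[order]
    base = entropy(y)
    best_gain, best_threshold = 0.0, None
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:          # no boundary between equal values
            continue
        left, right = y[:i], y[i:]
        # class uncertainty remaining after the split, weighted by size
        cond = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        gain = base - cond
        if gain > best_gain:
            best_gain = gain
            best_threshold = (v[i] + v[i - 1]) / 2
    return best_threshold, best_gain
```

Applied repeatedly per feature, such cut points compress raw values into a small set of intervals, which is the kind of representative-sample selection an interval-based ELM can build on.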
Subjects: Machine learning.
Artificial intelligence.
Hong Kong Polytechnic University -- Dissertations
Pages: xiv, 121 pages : illustrations
Appears in Collections: Thesis
