Title: TDN: Twice-least-square double-parallel neural networks
Authors: Li, GQ 
Chan, KCC 
Keywords: Deep Neural Networks
Extreme Learning Machine
Feedforward Neural Network
Least Square
Issue Date: 2017
Publisher: Institute of Electrical and Electronics Engineers
Source: 2017 IEEE International Conference on Big Knowledge (ICBK 2017), Hefei, China, 9-10 August 2017, article 8023395, pp. 57-64
Abstract: Extreme learning machines (ELMs) have been shown to perform well on a variety of generalization tasks. More recently, Deep Neural Networks (DNNs) have been shown to represent and capture higher-level abstractions, achieving even better generalization performance. When it comes to learning speed, however, DNNs may take a rather long time to adjust and determine their weights and biases, in contrast to the relatively fast learning of ELMs. Motivated by the relative merits of ELMs and deep neural networks, we have developed a novel Twice-least-square Double-parallel Neural Network (TDN). In TDN, the weights that connect the hidden layers are determined by applying the least-squares method twice, while the weights and biases that connect the input and output layers are randomly generated. The output neurons in TDN are connected both to the hidden layer and directly to the input layer, so that they capture not only higher-level abstractions from the last hidden layer but also information contained directly in the input neurons. With these characteristics, TDN can be shown to achieve very good generalization performance on different classification and regression problems.
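The double-parallel idea described in the abstract can be illustrated with a minimal ELM-style sketch: inputs are randomly projected to a hidden layer, and the output weights are then solved in closed form by least squares over both the hidden activations and the raw inputs (the direct input-to-output link). This is an assumption-laden illustration of the general architecture, not the authors' exact TDN algorithm; all function names and hyperparameters here are hypothetical.

```python
import numpy as np

# Hedged sketch of an ELM-like network with a direct input->output
# connection (the "double-parallel" link), NOT the exact TDN method.
rng = np.random.default_rng(0)

def fit(X, Y, n_hidden=64):
    """Random hidden projection + closed-form least-squares readout
    over [hidden activations | raw inputs]."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input->hidden weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    A = np.hstack([H, X])                            # hidden features + direct input features
    beta, *_ = np.linalg.lstsq(A, Y, rcond=None)     # least-squares output weights
    return W, b, beta

def predict(X, W, b, beta):
    A = np.hstack([np.tanh(X @ W + b), X])
    return A @ beta

# Toy regression: y is linear in x, so the direct input link alone
# can represent the target exactly.
X = rng.standard_normal((200, 3))
Y = X.sum(axis=1, keepdims=True)
W, b, beta = fit(X, Y)
Yhat = predict(X, W, b, beta)
```

Because the readout is a single linear solve rather than iterative gradient descent, training is fast in the ELM sense, while the direct input-output connection lets linear structure in the data bypass the hidden layer entirely.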
ISBN: 9781538631195
DOI: 10.1109/ICBK.2017.17
Appears in Collections: Conference Paper

