Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/28847
Title: Enhanced Two-Phase Method in fast learning algorithms
Authors: Cheung, CC
Ng, SC
Lui, AK
Xu, S
Keywords: Backpropagation
Convergence
Multilayer perceptrons
Problem solving
Recurrent neural nets
Issue Date: 2010
Publisher: IEEE
Source: The 2010 International Joint Conference on Neural Networks (IJCNN), 18-23 July 2010, Barcelona, p. 1-7
Abstract: The backpropagation (BP) learning algorithm is the most widely used supervised learning technique and is extensively applied in the training of multi-layer feed-forward neural networks. Many modifications of BP have been proposed to speed up its learning; however, their performance remains unsatisfactory because of the local minimum problem and the error overshooting problem. This paper proposes an Enhanced Two-Phase method that addresses these two problems to improve the performance of existing fast learning algorithms. The proposed method effectively detects the occurrence of these problems and assigns appropriate fast learning algorithms to resolve them. In our investigation, the proposed method significantly improves the performance of different fast learning algorithms in terms of convergence rate and global convergence capability across different problems; the convergence rate can be increased by up to 100 times compared with existing fast learning algorithms.
URI: http://hdl.handle.net/10397/28847
ISBN: 978-1-4244-6916-1
ISSN: 1098-7576
DOI: 10.1109/IJCNN.2010.5596519
Appears in Collections:Conference Paper
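
For readers curious how a two-phase detect-and-switch scheme of the kind the abstract describes might look in code, below is a minimal, hypothetical Python sketch: a plain BP training loop on XOR that monitors the training error to flag overshooting (a sharp error increase) and a suspected local minimum (a prolonged error plateau), and reacts by adjusting the update rule. The thresholds, the learning-rate scaling, and the momentum-based escape response are illustrative stand-ins only; the paper's actual detection criteria and the specific fast learning algorithms it assigns are not reproduced here.

# Hypothetical sketch of two-phase monitoring wrapped around plain
# backpropagation. All detection thresholds and phase responses below are
# illustrative assumptions, not the paper's Enhanced Two-Phase method.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR toy problem for a 2-2-1 multilayer perceptron.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

lr, momentum = 0.5, 0.0
prev_err = np.inf
stall = 0

for epoch in range(20000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    err = 0.5 * np.sum((Y - T) ** 2)

    # --- Phase checks (illustrative rules, not the paper's) ---
    if err > prev_err * 1.05:          # error overshooting: back off the rate
        lr *= 0.5
    elif abs(prev_err - err) < 1e-7:   # flat error: suspected local minimum
        stall += 1
        if stall > 50:                 # escape phase: more aggressive updates
            lr *= 1.5
            momentum = 0.9
            stall = 0
    prev_err = err
    if err < 1e-3:
        break

    # Backward pass (standard BP gradients for sigmoid units + squared error).
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    vW2 = momentum * vW2 - lr * (H.T @ dY); W2 += vW2
    vb2 = momentum * vb2 - lr * dY.sum(0); b2 += vb2
    vW1 = momentum * vW1 - lr * (X.T @ dH); W1 += vW1
    vb1 = momentum * vb1 - lr * dH.sum(0); b1 += vb1

print(f"epoch={epoch}, error={err:.5f}, outputs={Y.ravel().round(3)}")

In a full implementation, the response in each detected phase would be to hand training over to an appropriate existing fast learning algorithm, as the abstract states, rather than the simple learning-rate and momentum adjustments used in this sketch.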
