Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/37740
Title: The multi-phase method in fast learning algorithms
Authors: Cheung, CC
Ng, SC
Keywords: Backpropagation
Feedforward neural nets
Gradient methods
Issue Date: 2009
Source: Proceedings of the 2009 International Joint Conference on Neural Networks (IJCNN'2009), Atlanta, GA, 14-19 June 2009, p. 552-559 (CD)
Abstract: The backpropagation (BP) learning algorithm is the most widely used supervised learning technique for training multi-layer feed-forward neural networks. Many modifications proposed to improve the performance of BP have focused on solving the "flat spot" problem to increase the convergence rate. However, their performance is limited by the error overshooting problem. A novel approach called BP with two-phase magnified gradient function (2P-MGFPROP) was introduced to overcome the error overshooting problem and hence speed up the convergence rate of MGFPROP. In this paper, this approach is further enhanced by dividing the learning process into multiple phases and assigning a different fast learning algorithm to each phase, improving the convergence rate across different adaptive problems. The performance investigation shows that the convergence rate can be up to two times that of existing fast learning algorithms.
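
Note: the abstract only outlines the method. The following is a minimal NumPy sketch of the multi-phase idea under stated assumptions: training is split into phases by an error threshold, an early phase uses a magnified-gradient update (the sigmoid derivative raised to a power 1/S, weakening the flat spot), and a later phase switches to standard BP with momentum. The thresholds, the magnification power S, the momentum coefficient, and the network size are illustrative choices, not details taken from the paper.

# Sketch of multi-phase fast learning on a tiny 2-2-1 network (XOR task).
# Phase boundaries and per-phase algorithms are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1 = rng.normal(0, 0.5, (2, 2)); b1 = np.zeros(2)
W2 = rng.normal(0, 0.5, (2, 1)); b2 = np.zeros(1)

def forward(x):
    h = 1 / (1 + np.exp(-(x @ W1 + b1)))   # hidden sigmoid layer
    y = 1 / (1 + np.exp(-(h @ W2 + b2)))   # output sigmoid layer
    return h, y

lr, S = 0.5, 2.0            # learning rate; assumed magnification power S > 1
vel = [0.0, 0.0, 0.0, 0.0]  # momentum buffers for W1, b1, W2, b2
for epoch in range(20000):
    h, y = forward(X)
    err = np.mean((T - y) ** 2)
    phase = 1 if err > 0.15 else 2          # illustrative phase threshold
    dact = y * (1 - y)                      # sigmoid derivative at the output
    if phase == 1:
        delta2 = (T - y) * dact ** (1.0 / S)  # magnified-gradient update
    else:
        delta2 = (T - y) * dact               # standard BP update
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    grads = [X.T @ delta1, delta1.sum(0), h.T @ delta2, delta2.sum(0)]
    params = [W1, b1, W2, b2]
    mu = 0.9 if phase == 2 else 0.0         # momentum only in the later phase
    for i, (p, g) in enumerate(zip(params, grads)):
        vel[i] = mu * vel[i] + lr * g
        p += vel[i]                          # in-place descent step
    if err < 1e-3:
        break

print(f"stopped at epoch {epoch}, MSE={err:.5f}")

The design point the sketch illustrates is that the phase switch is driven by the current error level, so the magnified gradient acts only while the network is near a flat spot and a conventional fast update takes over once the error is small, avoiding overshooting.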
URI: http://hdl.handle.net/10397/37740
ISBN: 978-1-4244-3548-7
978-1-4244-3553-1 (E-ISBN)
ISSN: 1098-7576
DOI: 10.1109/IJCNN.2009.5178684
Appears in Collections:Conference Paper
