Title: Improving the quickprop algorithm
Authors: Cheung, CC
Ng, SC
Lui, AK
Keywords: Backpropagation
Convergence of numerical methods
Feedforward neural nets
Issue Date: 2012
Source: Proceedings of the International Joint Conference on Neural Networks (IJCNN'2012), Brisbane, Australia, 10-15 June 2012, p. 1-6
Abstract: The backpropagation (BP) algorithm is the most popular supervised learning algorithm for training feed-forward neural networks. Many BP modifications have been proposed to increase the convergence rate of the standard BP algorithm, and Quickprop is one of the most popular fast learning algorithms. The convergence rate of Quickprop is very fast; however, it is easily trapped in a local minimum and thus cannot converge to the global minimum. This paper proposes a new fast learning algorithm modified from Quickprop. By addressing the drawbacks of the Quickprop algorithm, the new algorithm takes a systematic approach to improving both the convergence rate and the global convergence capability of Quickprop. Our performance investigation shows that the proposed algorithm always converges with a faster learning rate than Quickprop. The improvement in global convergence capability is especially large: in one learning problem (application), the global convergence capability increased from 4% to 100%.
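The paper's specific modification is not reproduced in this record. For context, a minimal sketch of the standard Quickprop weight update that the paper builds on, as commonly described in the literature: each weight's error surface is treated as an independent parabola, and the step jumps toward that parabola's minimum, with a growth cap to limit runaway steps. The function name, the `lr` bootstrap rate, and the `max_growth` value of 1.75 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def quickprop_step(grad, prev_grad, prev_step, lr=0.01, max_growth=1.75):
    """One Quickprop-style update per weight (illustrative sketch).

    grad, prev_grad: current and previous error gradients dE/dw.
    prev_step: the weight change applied on the previous iteration.
    Returns the weight change to apply now.
    """
    step = np.zeros_like(grad)
    moving = prev_step != 0.0  # weights that took a step last iteration

    # Parabola-minimum step: dw = g / (g_prev - g) * dw_prev
    denom = prev_grad[moving] - grad[moving]
    denom = np.where(denom == 0.0, 1e-12, denom)  # guard division by zero
    step[moving] = grad[moving] / denom * prev_step[moving]

    # Cap the step at max_growth times the previous step to avoid huge jumps
    cap = max_growth * np.abs(prev_step[moving])
    step[moving] = np.clip(step[moving], -cap, cap)

    # Plain gradient-descent step where no previous step exists (bootstrap)
    step[~moving] = -lr * grad[~moving]
    return step
```

On a simple quadratic error E = w²/2 (so dE/dw = w), the first call falls back to gradient descent, and subsequent calls take the capped parabola step.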
ISBN: 978-1-4673-1488-6
978-1-4673-1489-3 (E-ISBN)
DOI: 10.1109/IJCNN.2012.6252546
Appears in Collections:Conference Paper

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.