Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/75270
Title: Magnified gradient function to improve first-order gradient-based learning algorithms
Authors: Ng, SC
Cheung, CC 
Lui, AKF
Xu, SS
Keywords: Backpropagation
Gradient-based algorithms
Magnified gradient function
Quickprop
Rprop
Issue Date: 2012
Publisher: Springer
Source: Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and lecture notes in bioinformatics), 2012, v. 7367 LNCS, no. PART 1, p. 448-457
Journal: Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and lecture notes in bioinformatics) 
Abstract: In this paper, we propose a new approach for improving the speed and global convergence capability of existing first-order gradient-based fast learning algorithms. The idea is to magnify the gradient terms of the activation function so that faster learning and global convergence can be achieved. The approach can be applied on top of existing gradient-based algorithms. Simulation results show that it significantly speeds up convergence and increases the global convergence capability of popular first-order gradient-based fast learning algorithms for multi-layer feed-forward neural networks.
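
The abstract does not give the exact form of the magnification, so the following is a minimal sketch only, assuming the magnification raises the sigmoid's derivative term f'(x) = f(x)(1 - f(x)) to a power 1/S with S >= 1; the function name magnified_sigmoid_grad and the parameter S are hypothetical illustrations, not the paper's notation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_sigmoid_grad(x, S=2.0):
    """Gradient term of the sigmoid, magnified by the exponent 1/S (S >= 1).

    The plain derivative f'(x) = f(x)(1 - f(x)) never exceeds 0.25, so
    raising it to a power 1/S < 1 enlarges it, counteracting the vanishing
    gradient in the flat regions of the activation function. S = 1 recovers
    the standard derivative used by ordinary backpropagation.
    """
    y = sigmoid(x)
    return (y * (1.0 - y)) ** (1.0 / S)

# Illustrative drop-in use in the output-layer delta of backpropagation
# (variable names are assumptions for the sketch):
# delta_out = (target - output) * magnified_sigmoid_grad(net_out, S=2.0)
# instead of the standard
# delta_out = (target - output) * sigmoid(net_out) * (1.0 - sigmoid(net_out))
```

Because the change is confined to the gradient term, the same substitution could in principle be layered onto other first-order schemes such as Quickprop or Rprop, which is consistent with the abstract's claim that the approach applies to existing gradient-based algorithms.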
Description: 9th International Symposium on Neural Networks, ISNN 2012, Shenyang, 11-14 July 2012
URI: http://hdl.handle.net/10397/75270
ISBN: 9783642313455
ISSN: 0302-9743
EISSN: 1611-3349
DOI: 10.1007/978-3-642-31346-2_51
Appears in Collections: Conference Paper
