Title: Iterative project quasi-Newton algorithm for training RBM
Authors: Mi, S; Zhao, X; Hou, Y; Zhang, P; Li, W; Song, D
Issue Date: 2016
Publisher: AAAI press
Source: 30th AAAI Conference on Artificial Intelligence, AAAI 2016, 2016, p. 4236-4237
Abstract: The restricted Boltzmann machine (RBM) has been used as a building block for many successful deep learning models, e.g., deep belief networks (DBN) and the deep Boltzmann machine (DBM). Training an RBM can be extremely slow in pathological regions. Second-order optimization methods, such as quasi-Newton methods, have been proposed to deal with this problem. However, the non-convexity of the objective raises many obstructions to training RBMs, including the infeasibility of directly applying second-order optimization methods. To overcome this obstruction, we introduce an EM-like iterative project quasi-Newton (IPQN) algorithm. Specifically, we iterate between a sampling procedure, in which parameters need not be updated, and a sub-training procedure that is convex. In the sub-training procedures, we apply quasi-Newton methods to deal with the pathological problem. We further show that Newton's method turns out to be a good approximation of the natural gradient (NG) method in RBM training. We evaluate IPQN in a series of density estimation experiments on an artificial dataset and the MNIST digit dataset. Experimental results indicate that IPQN achieves improved convergence performance over the traditional contrastive divergence (CD) method.
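The EM-like alternation described in the abstract — a sampling step with parameters held fixed, followed by a quasi-Newton sub-training step — can be sketched roughly as below. This is an illustrative reading only, not the authors' implementation: the CD-style surrogate objective (free energy of data minus free energy of fixed negative samples), the use of SciPy's L-BFGS-B as the quasi-Newton solver, and all variable names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 4  # toy RBM sizes (assumption, not from the paper)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unpack(theta):
    # Flat parameter vector -> weights W, visible bias a, hidden bias b
    W = theta[:n_vis * n_hid].reshape(n_vis, n_hid)
    a = theta[n_vis * n_hid:n_vis * n_hid + n_vis]
    b = theta[-n_hid:]
    return W, a, b

def free_energy(V, W, a, b):
    # F(v) = -a.v - sum_j softplus(b_j + (v W)_j), row-wise over V
    pre = V @ W + b
    return -(V @ a) - np.logaddexp(0.0, pre).sum(axis=1)

def gibbs_sample(V0, W, a, b, k=1):
    # Sampling step: draw negative samples with parameters held fixed
    V = V0
    for _ in range(k):
        H = (rng.random((V.shape[0], n_hid)) < sigmoid(V @ W + b)).astype(float)
        V = (rng.random((V.shape[0], n_vis)) < sigmoid(H @ W.T + a)).astype(float)
    return V

def subproblem(theta, Vdata, Vneg):
    # Surrogate objective with the negative samples frozen,
    # so the quasi-Newton solver sees a fixed, smooth sub-problem.
    W, a, b = unpack(theta)
    loss = free_energy(Vdata, W, a, b).mean() - free_energy(Vneg, W, a, b).mean()

    def grads(V):
        # Gradients of the mean free energy over the rows of V
        H = sigmoid(V @ W + b)
        return -(V.T @ H) / len(V), -V.mean(axis=0), -H.mean(axis=0)

    gWd, gad, gbd = grads(Vdata)
    gWn, gan, gbn = grads(Vneg)
    g = np.concatenate([(gWd - gWn).ravel(), gad - gan, gbd - gbn])
    return loss, g

# Toy data: two repeated binary patterns
Vdata = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 20, dtype=float)
theta = 0.01 * rng.standard_normal(n_vis * n_hid + n_vis + n_hid)

for it in range(5):
    W, a, b = unpack(theta)
    Vneg = gibbs_sample(Vdata, W, a, b, k=2)      # sampling: no parameter update
    res = minimize(subproblem, theta, args=(Vdata, Vneg),
                   jac=True, method="L-BFGS-B",   # quasi-Newton sub-training
                   options={"maxiter": 10})
    theta = res.x
```

The point of the alternation is that freezing the negative samples turns each sub-training step into a well-behaved optimization over the parameters alone, to which a curvature-aware solver can be applied; the outer loop then refreshes the samples under the new parameters.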
Description: 30th AAAI Conference on Artificial Intelligence, AAAI 2016, Phoenix, US, 12-17 February 2016
ISBN: 9781577357605
Appears in Collections: Conference Paper



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.