Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/12106
Title: A new gradient method with an optimal stepsize property
Authors: Dai, YH
Yang, XQ 
Keywords: (Shifted) power method
Gradient method
Linear system
Steepest descent method
Issue Date: 2006
Publisher: Springer
Source: Computational optimization and applications, 2006, v. 33, no. 1, p. 73-88
Journal: Computational optimization and applications 
Abstract: The gradient method for the symmetric positive definite linear system Ax = b is given by x_{k+1} = x_k − α_k g_k, where g_k = Ax_k − b is the residual of the system at x_k and α_k is the stepsize. The stepsize α_k = 2/(λ_1 + λ_n) is optimal in the sense that it minimizes the norm ‖I − αA‖_2, where λ_1 and λ_n are the minimal and maximal eigenvalues of A, respectively. Since λ_1 and λ_n are unknown to users, the gradient method with this optimal stepsize is usually only of theoretical interest. In this paper, we propose a new stepsize formula which tends to the optimal stepsize as k → ∞. At the same time, the minimal and maximal eigenvalues λ_1 and λ_n of A and their corresponding eigenvectors can be obtained.
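For illustration, the classical iteration described in the abstract can be sketched in a few lines of Python/NumPy. This is only a minimal sketch of the gradient method with the fixed stepsize α = 2/(λ_1 + λ_n); it is not the new stepsize formula proposed in the paper, and it assumes the eigenvalues of A can be computed directly, which is exactly the assumption the paper seeks to avoid. The example matrix, tolerance and iteration limit are hypothetical.

import numpy as np

def gradient_method_optimal_step(A, b, x0, tol=1e-10, max_iter=10000):
    # Gradient method x_{k+1} = x_k - alpha * g_k for a symmetric positive
    # definite A, using the fixed "optimal" stepsize alpha = 2/(lambda_1 + lambda_n).
    # Illustrative only: in practice the eigenvalues of A are not known.
    eigvals = np.linalg.eigvalsh(A)           # eigenvalues in ascending order
    alpha = 2.0 / (eigvals[0] + eigvals[-1])  # minimizes ||I - alpha*A||_2
    x = x0.astype(float)
    for _ in range(max_iter):
        g = A @ x - b                         # residual g_k = A x_k - b
        if np.linalg.norm(g) < tol:
            break
        x = x - alpha * g                     # x_{k+1} = x_k - alpha * g_k
    return x

# Hypothetical usage on a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gradient_method_optimal_step(A, b, np.zeros(2))
print(x, np.allclose(A @ x, b))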
URI: http://hdl.handle.net/10397/12106
ISSN: 0926-6003
EISSN: 1573-2894
DOI: 10.1007/s10589-005-5959-2
Appears in Collections: Conference Paper
