On the complexity of steepest descent, Newton's and regularized
Newton's methods for nonconvex unconstrained optimization
C. Cartis, N. I. M. Gould and Ph. L. Toint
Report 09/14
It is shown that the steepest descent and Newton's methods for unconstrained
nonconvex optimization under standard assumptions may both require a number
of iterations and function evaluations arbitrarily close to O(epsilon^{-2}) to
drive the norm of the gradient below epsilon. This shows that the upper bound
of O(epsilon^{-2}) evaluations known for steepest descent is tight, and
that Newton's method may be as slow as steepest descent in the worst case. The
improved evaluation complexity bound of O(epsilon^{-3/2}) evaluations known
for cubically regularized Newton methods is also shown to be tight.
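As an illustrative sketch only (not from the report), the termination criterion discussed above can be seen in a basic fixed-step steepest descent loop that counts gradient evaluations until the gradient norm falls below epsilon; the test function f(x) = x^4/4 - x^2/2 is a hypothetical nonconvex example chosen for illustration:

```python
import numpy as np

def steepest_descent(grad, x0, eps, alpha=1e-2, max_iter=10**6):
    """Fixed-step steepest descent.

    Iterates x <- x - alpha * grad(x) until ||grad(x)|| <= eps,
    returning the final iterate and the number of gradient evaluations.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x, k
        x = x - alpha * g
    return x, max_iter

# Hypothetical nonconvex test problem: f(x) = x**4/4 - x**2/2,
# whose gradient is x**3 - x (minimizers at x = +/-1).
grad = lambda x: x**3 - x
x, evals = steepest_descent(grad, np.array([2.0]), eps=1e-3)
```

The worst-case bounds in the report concern how `evals` can grow as `eps` shrinks: for steepest descent (and, perhaps surprisingly, Newton's method) the count may approach O(epsilon^{-2}) on adversarial problems, even though on benign examples like this one convergence is much faster.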