Consider the classical gradient descent method:
\[ x^{k+1} = x^k - \gamma \nabla f(x^k). \]
It's a thing of beauty, isn't it? While it's not used directly in practice any more,
the proof techniques used in its analysis are the building blocks behind the theory of
more advanced optimization methods. I know of 8 different ways of proving its
convergence rate. Each of the proof techniques is interesting in its own right, but
most books on convex optimization give just a single proof of convergence, then move
on to greater things. But to do research in modern convex optimization you should
know them all.
The purpose of this series of posts is to detail each of these proof techniques and
what applications they have to more advanced methods. This post will cover the
proofs under strong convexity assumptions, and the next post will cover the
non-strongly convex case. Unlike most proofs in the literature, we will go through
every step in detail, so that these proofs can be used as a reference (don't cite this post
directly though; preferably cite the original source, or the technical notes version). If
you are aware of any methods I’ve not covered, please leave a comment with a
reference so I can update this post.
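For concreteness, here is a minimal sketch of the method under analysis, written in plain Python/NumPy. The quadratic objective, its constants, and the step size 1/L are illustrative assumptions for demonstration only, not part of the proofs themselves.

```python
import numpy as np

# Minimal gradient descent sketch on an illustrative strongly convex quadratic
# f(x) = 0.5 * x^T A x - b^T x, with mu = lambda_min(A) and L = lambda_max(A).
A = np.diag([1.0, 10.0])          # eigenvalues give mu = 1, L = 10
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b        # gradient of f

L = 10.0                          # Lipschitz smoothness constant
gamma = 1.0 / L                   # the classical 1/L step size
x = np.zeros(2)
for k in range(100):
    x = x - gamma * grad(x)       # x^{k+1} = x^k - gamma * grad f(x^k)

print(x, np.linalg.solve(A, b))   # iterate vs. the true minimizer x^*
```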
For most of the proofs we end with a statement like
\[ \Delta_{k+1} \leq c\,\Delta_k, \quad \text{with } c < 1, \]
where $\Delta_k$ is some quantity of interest, like distance to solution or function value
sub-optimality. A full proof requires chaining these inequalities for each $k$, giving something
of the form
\[ \Delta_k \leq c^{k}\,\Delta_0. \]
We leave this step as a given.
Basic lemmas
These hold for any $x$ and $y$. Here $\mu$ is the strong convexity
constant and $L$ the Lipschitz smoothness constant. These are completely
standard; see Nesterov's book [7] for proofs. We use the notation $x^*$ for the unique
minimizer of $f$ (for strongly convex problems).
\[ f(y) \leq f(x) + \langle \nabla f(x),\, y - x \rangle + \frac{L}{2}\|y - x\|^2 \tag{1} \]
\[ f(y) \geq f(x) + \langle \nabla f(x),\, y - x \rangle + \frac{\mu}{2}\|y - x\|^2 \tag{2} \]
| (3) |
\[ \|\nabla f(x)\|^2 \geq 2\mu\left(f(x) - f(x^*)\right) \tag{4} \]
\[ \langle \nabla f(x) - \nabla f(y),\, x - y \rangle \geq \mu\|x - y\|^2 \tag{5} \]
\[ \langle \nabla f(x) - \nabla f(y),\, x - y \rangle \geq \frac{1}{L}\|\nabla f(x) - \nabla f(y)\|^2 \tag{6} \]
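As an aside on how these fit together, (4) can be recovered from (2) by the standard argument of minimizing the right hand side of (2) over $y$. The minimizer is $y = x - \frac{1}{\mu}\nabla f(x)$, giving
\[ f(y) \geq f(x) - \frac{1}{2\mu}\|\nabla f(x)\|^2 \quad \text{for all } y, \]
and taking $y = x^*$ then rearranging yields (4).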
1 Function Value Descent
There is a very simple proof involving just the function values. We start
by showing that the function value descent is controlled by the gradient
norm:
Lemma 1. For any given $\gamma$,
the change in function value between steps can be bounded as follows:
\[ f(x^{k+1}) - f(x^k) \leq -\gamma\left(1 - \frac{\gamma L}{2}\right)\|\nabla f(x^k)\|^2; \]
in particular, if $\gamma = \frac{1}{L}$ we have
\[ f(x^{k+1}) - f(x^k) \leq -\frac{1}{2L}\|\nabla f(x^k)\|^2. \]
Proof. We start with (1), the Lipschitz upper bound about $x^k$:
\[ f(x^{k+1}) \leq f(x^k) + \langle \nabla f(x^k),\, x^{k+1} - x^k \rangle + \frac{L}{2}\|x^{k+1} - x^k\|^2. \]
Now we plug in the step equation $x^{k+1} - x^k = -\gamma\nabla f(x^k)$:
\[ f(x^{k+1}) \leq f(x^k) - \gamma\|\nabla f(x^k)\|^2 + \frac{\gamma^2 L}{2}\|\nabla f(x^k)\|^2. \]
Negating and rearranging gives:
\[ f(x^k) - f(x^{k+1}) \geq \gamma\left(1 - \frac{\gamma L}{2}\right)\|\nabla f(x^k)\|^2. \qquad \Box \]
Now since we are considering strongly convex problems, we actually have
a lower bound on the gradient norm in terms of the function value. We apply (4) at $x = x^k$:
\[ \|\nabla f(x^k)\|^2 \geq 2\mu\left(f(x^k) - f(x^*)\right), \]
together with Lemma 1 using $\gamma = \frac{1}{L}$:
\[ f(x^k) - f(x^{k+1}) \geq \frac{1}{2L}\|\nabla f(x^k)\|^2. \]
So combining these two results:
\[ f(x^k) - f(x^{k+1}) \geq \frac{\mu}{L}\left(f(x^k) - f(x^*)\right). \]
We then negate, add and subtract $f(x^*)$,
then rearrange:
\[ f(x^{k+1}) - f(x^*) \leq \left(1 - \frac{\mu}{L}\right)\left(f(x^k) - f(x^*)\right). \]
Note that this function value style proof requires the step size $\gamma = \frac{1}{L}$ or smaller,
instead of $\gamma = \frac{2}{\mu + L}$,
which we shall see gives the fastest convergence when using some of the other proof
techniques below.
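This per-step bound is easy to sanity check numerically. The following illustrative script (the random strongly convex quadratic and its constants are assumptions for demonstration only) verifies the contraction factor $1 - \mu/L$ at every step:

```python
import numpy as np

# Check that (f(x^{k+1}) - f*) / (f(x^k) - f*) <= 1 - mu/L for gradient descent
# with step size 1/L, on an illustrative strongly convex quadratic.
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
A = Q.T @ Q + np.eye(5)                   # symmetric positive definite Hessian
b = rng.standard_normal(5)
mu, L = np.linalg.eigvalsh(A)[[0, -1]]    # strong convexity / smoothness constants
x_star = np.linalg.solve(A, b)
f = lambda x: 0.5 * x @ A @ x - b @ x
f_star = f(x_star)

x = np.zeros(5)
for k in range(20):
    x_new = x - (1.0 / L) * (A @ x - b)
    ratio = (f(x_new) - f_star) / (f(x) - f_star)
    assert ratio <= 1 - mu / L + 1e-12    # the per-step bound derived above
    x = x_new
print("per-step bound 1 - mu/L =", 1 - mu / L, "held for all steps")
```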
Comments
This proof (when $\gamma = \frac{1}{L}$
is used) treats gradient descent as an upper bound minimization scheme. Such
methods, sometimes known under the majorization-minimization (MM) nomenclature [3],
are quite widespread in optimization. They can even be applied to non-convex problems,
although the convergence rates in that case are necessarily weak. Likewise this
proof gives the weakest convergence rate of the proof techniques presented in this
post, but it is perhaps the simplest. Upper bound minimization techniques
have recently seen interesting applications in second order optimization, in the
form of Nesterov's cubically regularized Newton method [9]. For stochastic
optimization, the MISO method is also an upper bound minimization scheme [6]. For
non-smooth problems, an interesting application of the MM approach is
in minimizing convex problems with non-convex regularizers, in
the form of reweighted L1 regularization [5].
2 Iterate Descent
There is also a simple proof involving just the distance of the iterates
to the solution. Using
the definition of the step $x^{k+1} = x^k - \gamma\nabla f(x^k)$:
\[ \|x^{k+1} - x^*\|^2 = \|x^k - x^*\|^2 - 2\gamma\langle \nabla f(x^k),\, x^k - x^* \rangle + \gamma^2\|\nabla f(x^k)\|^2. \]
We now apply both the inner product bounds (5) and (6), in the following
negated forms, using $y = x^*$ and $\nabla f(x^*) = 0$:
\[ -\langle \nabla f(x^k),\, x^k - x^* \rangle \leq -\mu\|x^k - x^*\|^2, \]
\[ -\langle \nabla f(x^k),\, x^k - x^* \rangle \leq -\frac{1}{L}\|\nabla f(x^k)\|^2. \]
The inner product term has a weight $2\gamma$, and we apply each
of these bounds to half of it (weight $\gamma$ each),
giving:
\[ \|x^{k+1} - x^*\|^2 \leq (1 - \gamma\mu)\|x^k - x^*\|^2 + \gamma\left(\gamma - \frac{1}{L}\right)\|\nabla f(x^k)\|^2. \]
Now if we take $\gamma = \frac{1}{L}$
then the last term cancels and we have:
\[ \|x^{k+1} - x^*\|^2 \leq \left(1 - \frac{\mu}{L}\right)\|x^k - x^*\|^2. \]
This proof is not as tight as possible. Instead of splitting the inner product term
and applying both bounds (5) and (6), we can apply the following stronger combined
bound from Nesterov's book [7]:
\[ \langle \nabla f(x) - \nabla f(y),\, x - y \rangle \geq \frac{\mu L}{\mu + L}\|x - y\|^2 + \frac{1}{\mu + L}\|\nabla f(x) - \nabla f(y)\|^2. \tag{7} \]
Doing so yields:
\[ \|x^{k+1} - x^*\|^2 \leq \left(1 - \frac{2\gamma\mu L}{\mu + L}\right)\|x^k - x^*\|^2 + \gamma\left(\gamma - \frac{2}{\mu + L}\right)\|\nabla f(x^k)\|^2. \]
Now clearly to cancel out the gradient norm term we can take $\gamma = \frac{2}{\mu + L}$,
which yields the convergence rate:
\[ \|x^{k+1} - x^*\|^2 \leq \left(1 - \frac{4\mu L}{(\mu + L)^2}\right)\|x^k - x^*\|^2 = \left(\frac{L - \mu}{L + \mu}\right)^2\|x^k - x^*\|^2. \]
Comments
This proof technique is the building block of the standard stochastic
gradient descent (SGD) proof. The above proof is mostly based on
Nesterov's book; I'm not sure what the original citation is. It has a
nice geometric interpretation, as the bound on the inner product term
$\langle \nabla f(x^k),\, x^k - x^* \rangle$ can
easily be illustrated in 2 dimensions, say on a whiteboard. It's effectively a statement
about the angles that gradients in convex problems can take. To get the strongest
bound using this technique, the more complex bound in Equation 7 has to be
used. That stronger bound is not really straightforward, and perhaps too
technical (in my opinion) to use in a textbook proof of the convergence
rate.
3 Using the Second Fundamental Theorem of Calculus
Recall the second fundamental theorem of calculus:
\[ g(b) - g(a) = \int_{a}^{b} g'(t)\, dt. \]
This can be applied along intervals in higher dimensions.
The case we care about is applying it to the first derivatives of $f$,
giving an integral involving the Hessian:
\[ \nabla f(x) - \nabla f(y) = \int_{0}^{1} \left\langle \nabla^2 f\big(y + \tau(x - y)\big),\, x - y \right\rangle d\tau. \]
We abuse the angle bracket notation here to apply to matrix-vector products as
well as the usual dot-product. Using this result gives an interesting proof of
convergence of gradient descent that doesn't rely on the usual convexity
lemmas. This proof bounds the distance to solution, just like the previous
proof.
Lemma 2. For any positive $\gamma$ and any points $x$ and $y$:
\[ \left\|x - y - \gamma\big(\nabla f(x) - \nabla f(y)\big)\right\| \leq \max\left\{|1 - \gamma\mu|,\, |1 - \gamma L|\right\}\|x - y\|. \]
Proof. We start by applying the second fundamental theorem of calculus in the above
form:
\[ x - y - \gamma\big(\nabla f(x) - \nabla f(y)\big) = \left(I - \gamma\int_{0}^{1} \nabla^2 f\big(y + \tau(x - y)\big)\, d\tau\right)(x - y). \]
Now we examine the eigenvalues of
$H = \int_{0}^{1} \nabla^2 f\big(y + \tau(x - y)\big)\, d\tau$: the minimum one
is at least $\mu$ and the
maximum at most $L$.
An examination of the possible range of the eigenvalues of $I - \gamma H$
gives the factor $\max\left\{|1 - \gamma\mu|,\, |1 - \gamma L|\right\}$. $\Box$
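To spell out the eigenvalue step (a standard argument, included here for completeness): strong convexity and smoothness give $\mu I \preceq \nabla^2 f(z) \preceq L I$ for every $z$, and these bounds are preserved by averaging, so
\[ \mu I \preceq H = \int_{0}^{1} \nabla^2 f\big(y + \tau(x - y)\big)\, d\tau \preceq L I. \]
The eigenvalues of $I - \gamma H$ therefore lie in the interval $[1 - \gamma L,\; 1 - \gamma\mu]$, and so for any vector $v$:
\[ \|(I - \gamma H)v\| \leq \max\left\{|1 - \gamma\mu|,\, |1 - \gamma L|\right\}\|v\|. \]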
Using this lemma gives a simple proof along the lines of the iterate descent
proof.
First, note that $x^{k+1} - x^*$
is in the right form for direct application of this lemma after substituting in the step
equation:
\[ x^{k+1} - x^* = x^k - x^* - \gamma\big(\nabla f(x^k) - \nabla f(x^*)\big), \]
so Lemma 2 gives
\[ \|x^{k+1} - x^*\| \leq \max\left\{|1 - \gamma\mu|,\, |1 - \gamma L|\right\}\|x^k - x^*\|. \]
Note we introduced $\gamma\nabla f(x^*)$
for "free", as it's of course equal to zero. The next step is to optimize this bound in terms of
$\gamma$. Note that $1 - \gamma\mu$ is always
larger than $1 - \gamma L$, so
we take the absolute value $|1 - \gamma L|$ as negative, the other as positive, and match their magnitudes:
\[ 1 - \gamma\mu = \gamma L - 1 \quad\Longrightarrow\quad \gamma = \frac{2}{\mu + L}. \]
Which gives the convergence rate:
\[ \|x^{k+1} - x^*\| \leq \left(1 - \frac{2\mu}{\mu + L}\right)\|x^k - x^*\| = \frac{L - \mu}{L + \mu}\|x^k - x^*\|. \]
Note that this rate is in terms of the distance to solution directly, rather than its
square like in the previous proof. Converting to squared norm gives the same rate as
before.
Comments
This proof technique has a linear-algebra feel to it, and is perhaps most comfortable
to people with that background. The absolute values make it ugly in my opinion
though. This proof technique is the building block used in the standard proof of the
convergence of the heavy ball method for strongly convex problems [10]. It doesn’t
appear to have many other applications, and so is probably the least seen of
the techniques in this document. The main use of this kind of argument
is in lower complexity bounds, where we often do some sort of eigenvalue
analysis.
4 Lyapunov Style
The above results prove convergence of either the iterates or the function value
separately. There is an interesting proof involving a weighted sum of the two quantities. First
we start with the expansion of the iterate distance used in the iterate descent proof:
\[ \|x^{k+1} - x^*\|^2 = \|x^k - x^*\|^2 - 2\gamma\langle \nabla f(x^k),\, x^k - x^* \rangle + \gamma^2\|\nabla f(x^k)\|^2. \]
Now we use the function descent amount equation (Lemma 1) to bound the gradient norm
term: $\gamma^2\|\nabla f(x^k)\|^2 \leq c\left(f(x^k) - f(x^{k+1})\right)$, where we
have defined $c = \frac{\gamma}{1 - \gamma L/2}$:
\[ \|x^{k+1} - x^*\|^2 \leq \|x^k - x^*\|^2 - 2\gamma\langle \nabla f(x^k),\, x^k - x^* \rangle + c\left(f(x^k) - f(x^{k+1})\right). \]
Now we use the strong convexity lower bound (2) in a rearranged form:
\[ -\langle \nabla f(x^k),\, x^k - x^* \rangle \leq -\left(f(x^k) - f(x^*)\right) - \frac{\mu}{2}\|x^k - x^*\|^2, \]
to simplify:
\[ \|x^{k+1} - x^*\|^2 \leq (1 - \gamma\mu)\|x^k - x^*\|^2 - 2\gamma\left(f(x^k) - f(x^*)\right) + c\left(f(x^k) - f(x^{k+1})\right). \]
Now rearranging further, writing $f(x^k) - f(x^{k+1}) = \left(f(x^k) - f(x^*)\right) - \left(f(x^{k+1}) - f(x^*)\right)$:
\[ \|x^{k+1} - x^*\|^2 + c\left(f(x^{k+1}) - f(x^*)\right) \leq (1 - \gamma\mu)\|x^k - x^*\|^2 + (c - 2\gamma)\left(f(x^k) - f(x^*)\right). \]
Now this equation gives a descent rate for the weighted sum of $\|x^k - x^*\|^2$
and $f(x^k) - f(x^*)$. The
best rate is given by matching the two convergence rates, that of the iterate distance
terms:
\[ 1 - \gamma\mu, \]
and that of the function value terms, which changes from $c$
to $c - 2\gamma$:
\[ \frac{c - 2\gamma}{c} = 1 - \frac{2\gamma}{c}. \]
Matching these two rates:
\[ 1 - \gamma\mu = 1 - \frac{2\gamma}{c} \quad\Longrightarrow\quad c = \frac{2}{\mu} \quad\Longrightarrow\quad \frac{\gamma}{1 - \gamma L/2} = \frac{2}{\mu} \quad\Longrightarrow\quad \gamma = \frac{2}{\mu + L}. \]
Using this derived value for $\gamma$
gives a convergence rate of $1 - \frac{2\mu}{\mu + L}$.
I.e.
\[ \|x^{k+1} - x^*\|^2 + \frac{2}{\mu}\left(f(x^{k+1}) - f(x^*)\right) \leq \left(1 - \frac{2\mu}{\mu + L}\right)\left[\|x^k - x^*\|^2 + \frac{2}{\mu}\left(f(x^k) - f(x^*)\right)\right], \]
and therefore after $k$
steps:
\[ \|x^{k} - x^*\|^2 + \frac{2}{\mu}\left(f(x^{k}) - f(x^*)\right) \leq \left(1 - \frac{2\mu}{\mu + L}\right)^{k}\left[\|x^0 - x^*\|^2 + \frac{2}{\mu}\left(f(x^0) - f(x^*)\right)\right]. \]
The geometric constant can be simplified using $1 - \frac{2\mu}{\mu + L} = \frac{L - \mu}{L + \mu}$.
Now we use the Lipschitz upper bound $f(x^0) - f(x^*) \leq \frac{L}{2}\|x^0 - x^*\|^2$
on the right, and we just drop the function value term altogether on the
left:
\[ \|x^{k} - x^*\|^2 \leq \left(\frac{L - \mu}{L + \mu}\right)^{k}\left(1 + \frac{L}{\mu}\right)\|x^0 - x^*\|^2. \]
If we instead use the more robust step size $\gamma = \frac{1}{L}$, which doesn't
require knowledge of $\mu$,
then a simple calculation ($c = \frac{2}{L}$, so $c - 2\gamma = 0$) shows that we instead get
\[ \|x^{k+1} - x^*\|^2 + \frac{2}{L}\left(f(x^{k+1}) - f(x^*)\right) \leq \left(1 - \frac{\mu}{L}\right)\|x^k - x^*\|^2, \]
and so:
\[ \|x^{k} - x^*\|^2 + \frac{2}{L}\left(f(x^{k}) - f(x^*)\right) \leq \left(1 - \frac{\mu}{L}\right)^{k}\|x^0 - x^*\|^2. \]
The right hand side is obviously a much tighter bound than when $\gamma = \frac{2}{\mu + L}$ is
used, but the geometric rate is roughly twice as slow.
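The Lyapunov function is easy to monitor numerically. The following illustrative script (the random strongly convex quadratic and step size $1/L$ are assumptions for demonstration only) checks the per-step bound derived above for $T^k = \|x^k - x^*\|^2 + \frac{2}{L}(f(x^k) - f(x^*))$:

```python
import numpy as np

# Track the Lyapunov function T^k = ||x^k - x*||^2 + (2/L)(f(x^k) - f*)
# for gradient descent with step size 1/L on an illustrative quadratic.
rng = np.random.default_rng(1)
Q = rng.standard_normal((6, 6))
A = Q.T @ Q + 0.5 * np.eye(6)             # symmetric positive definite Hessian
b = rng.standard_normal(6)
mu, L = np.linalg.eigvalsh(A)[[0, -1]]    # strong convexity / smoothness constants
x_star = np.linalg.solve(A, b)
f = lambda x: 0.5 * x @ A @ x - b @ x
T = lambda x: np.sum((x - x_star) ** 2) + (2.0 / L) * (f(x) - f(x_star))

x = rng.standard_normal(6)
for k in range(30):
    x_new = x - (1.0 / L) * (A @ x - b)
    # Per-step bound from the Lyapunov argument: T^{k+1} <= (1 - mu/L) ||x^k - x*||^2
    assert T(x_new) <= (1 - mu / L) * np.sum((x - x_star) ** 2) + 1e-10
    x = x_new
print("Lyapunov bound held for every step; final T =", T(x))
```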
Comments
This proof technique has seen a lot of application lately. It is used for the SAGA
[2] and SVRG [4] methods, and can even be applied to accelerated methods, such as
the accelerated coordinate descent theory [8]. The Lyapunov function analysis
technique is of great general utility, and so it is worth studying carefully. It is covered
perhaps best in Polyak's book [10].
5 Gradient Norm Descent
In the strongly convex case, it is actually possible to show that the gradient norm decreases
at least linearly, as well as the function value and iterates. This requires a fixed step size
of $\gamma = \frac{1}{L}$, as
it is not true when line searches are used.
Lemma 3. For $\gamma = \frac{1}{L}$:
\[ \|\nabla f(x^{k+1})\|^2 \leq \left(1 - \frac{\mu}{L}\right)\|\nabla f(x^k)\|^2. \]
Note that $\nabla f(x^*) = 0$, and that by (4) the gradient norm also controls the function value
sub-optimality, so this is a meaningful measure of convergence.
Proof. We start by expanding $\|\nabla f(x^{k+1})\|^2$ about $\nabla f(x^k)$, using the step equation
$x^{k+1} - x^k = -\gamma\nabla f(x^k)$ to rewrite the cross term:
\begin{align*}
\|\nabla f(x^{k+1})\|^2 &= \|\nabla f(x^k)\|^2 + 2\langle \nabla f(x^{k+1}) - \nabla f(x^k),\, \nabla f(x^k) \rangle + \|\nabla f(x^{k+1}) - \nabla f(x^k)\|^2 \\
&= \|\nabla f(x^k)\|^2 - \frac{2}{\gamma}\langle \nabla f(x^{k+1}) - \nabla f(x^k),\, x^{k+1} - x^k \rangle + \|\nabla f(x^{k+1}) - \nabla f(x^k)\|^2.
\end{align*}
Now applying both inner product bounds (5) and (6), each to half of the inner product term:
\[ \|\nabla f(x^{k+1})\|^2 \leq (1 - \gamma\mu)\|\nabla f(x^k)\|^2 + \left(1 - \frac{1}{\gamma L}\right)\|\nabla f(x^{k+1}) - \nabla f(x^k)\|^2. \]
So for $\gamma = \frac{1}{L}$ the last term cancels and
this simplifies to:
\[ \|\nabla f(x^{k+1})\|^2 \leq \left(1 - \frac{\mu}{L}\right)\|\nabla f(x^k)\|^2. \qquad \Box \]
Chaining this result (Lemma 3) over $k$
gives:
\[ \|\nabla f(x^{k})\|^2 \leq \left(1 - \frac{\mu}{L}\right)^{k}\|\nabla f(x^0)\|^2. \]
We can then use (4) to convert this into a bound on the function value sub-optimality if desired:
\[ f(x^k) - f(x^*) \leq \frac{1}{2\mu}\left(1 - \frac{\mu}{L}\right)^{k}\|\nabla f(x^0)\|^2. \]
Comments
This technique is probably the weirdest of those listed here. It has seen
application in proving the convergence rate of MISO under some different
stochastic orderings [1]. While clearly a primal result, this proof has some
components normally seen in the proof for a dual method. The gradient
$\nabla f(x^k)$ is
effectively the dual iterate. Another interesting property is that the portion of the
proof concerning the gradient's convergence uses the strong convexity between
$x^{k+1}$ and $x^{k}$,
whereas the other proofs considered all use the degree of strong convexity between
$x^{k}$ and $x^{*}$.
This proof technique can't work when line searches are used, as bounding the
inner product
\[ \langle \nabla f(x^{k+1}) - \nabla f(x^k),\, x^{k+1} - x^k \rangle \]
would fail if $\gamma$
changed between steps: the step equation substitution would then mix the step sizes
from two different iterations, which is a weird expression to work with.
References
[1] Aaron Defazio. New Optimization Methods for Machine Learning. PhD
thesis, Australian National University, 2014.
[2] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A
fast incremental gradient method with support for non-strongly convex
composite objectives. Advances in Neural Information Processing Systems
27 (NIPS 2014), 2014.
[3] David R. Hunter and Kenneth Lange. Quantile regression via an MM
algorithm. Journal of Computational and Graphical Statistics, 9, 2000.
[4] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent
using predictive variance reduction. NIPS, 2013.
[5] Qiang Liu and Alexander Ihler. Learning scale free networks by
reweighted L1 regularization. AISTATS, 2011.
[6] Julien Mairal. Incremental majorization-minimization optimization
with application to large-scale machine learning. Technical report, INRIA
Grenoble Rhône-Alpes / LJK Laboratoire Jean Kuntzmann, 2014.
[7] Yu. Nesterov. Introductory Lectures On Convex Programming. Springer,
1998.
[8] Yu. Nesterov. Efficiency of coordinate descent methods on huge-scale
optimization problems. Technical report, CORE, 2010.
[9] Yu. Nesterov and B.T. Polyak. Cubic regularization of Newton method
and its global performance. Mathematical Programming, 108(1):177–205,
2006.
[10] Boris Polyak. Introduction to Optimization. Optimization Software,
Inc., Publications Division., 1987.