Learning Rate Annealing Can Provably Help Generalization,
Even for Convex Problems
Abstract
The learning rate schedule can significantly affect generalization performance in modern neural networks, but the reasons for this are not yet understood. Li et al. (2019) recently proved that this behavior can occur in a simplified non-convex neural-network setting. In this note, we show that this phenomenon can exist even for convex learning problems – in particular, linear regression in 2 dimensions.
We give a toy convex problem where learning rate annealing (a large initial learning rate, followed by a small learning rate) can lead gradient descent to minima with provably better generalization than using a small learning rate throughout. In our case, this occurs due to a combination of early-stopping and the mismatch between the train and test loss landscapes.
1 Introduction
The learning rate schedule of stochastic gradient descent is known to strongly affect the generalization of modern neural networks, in ways not explained by optimization considerations alone. In particular, training with a large initial learning rate followed by a smaller annealed learning rate can vastly outperform training with the smaller learning rate throughout – even when both are allowed to reach the same train loss. The recent work of Li et al. (2019) sheds some light on this, by exhibiting a simplified neural-network setting in which this behavior provably occurs.
[Figure 1: the train and test loss landscapes of the convex example, with the trajectories of gradient flow vs. gradient descent with a large initial learning rate.]
It may be conjectured that non-convexity is crucial for the learning-rate schedule to affect generalization (for example, because strongly-convex objectives have unique global minima). However, we show this behavior can appear even in strongly-convex learning problems. The key insight is that although a strongly-convex problem has a unique minimizer, early-stopping breaks this uniqueness: there is an entire level set of points with the same train loss $\epsilon$. These points can have different generalization properties when the train loss surface does not closely approximate the test loss surface. In our setting, a large learning rate prevents gradient descent from optimizing along high-curvature directions, and thus effectively regularizes against directions that are high-curvature on the train set, but low-curvature on the true distribution (see Figure 1).
Technically, we model a small learning rate by considering gradient flow. Then we compare the following optimizers:
- (A) Gradient descent with an infinitesimally small step size (i.e. gradient flow).
- (B) Gradient descent with a large step size, followed by gradient flow.
We show that for a particular linear regression problem, if we run both (A) and (B) until they reach the same train loss, then with probability $3/4$ over the train samples, the test loss of (A) is (asymptotically) twice that of (B). That is, starting with a large learning rate and then annealing to an infinitesimally small one helps generalization significantly.
Our Contribution. We show that non-convexity is not required to reproduce an effect of learning rate on generalization that is observed in deep learning models. We give a simple model, sharing some features with deep learning models in practice, in which the mechanisms behind this effect can be understood theoretically. We hope such simple examples can yield insight and intuition that may eventually lead to a better understanding of the effect of learning rate in deep learning.
Organization. In Section 2 we describe the example. In Section 3 we discuss features of the example, and its potential implications (and non-implications) in deep learning. In Appendix A, we include formal proofs and provide a mild generalization of the example in Section 2.
1.1 Related Work
This work was inspired by Li et al. (2019), which gives a certain simplified neural-network setting in which the learning-rate schedule provably affects generalization, despite performing identically with respect to optimization. The mechanisms and intuitions in Li et al. (2019) depend crucially on the non-convexity of the objective. In contrast, in this work we provide complementary results, by showing that this behavior can occur even in convex models, due to the interaction between early-stopping and estimation error. Our work is not incompatible with Li et al. (2019); indeed, the true mechanisms behind generalization in real neural networks may involve both of these factors (among others).
There are many empirical and theoretical works attempting to understand the effect of learning-rate in deep learning, which we briefly describe here. Several recent works study the effect of learning rate by considering properties (e.g. curvature) of the loss landscape, finding that different learning rate schedules can lead SGD to regions in parameter space with different local geometry (Jastrzebski et al., 2020; Lewkowycz et al., 2020). This fits more broadly into the study of the implicit bias of SGD. For example, there is debate on whether generalization is connected to the “sharpness” of minima (e.g. Hochreiter and Schmidhuber, 1997; Keskar et al., 2016; Dinh et al., 2017). The effect of learning-rate is also often studied alongside the effect of batch-size, since the optimal choices of these two parameters are coupled in practice (Krizhevsky, 2014; Goyal et al., 2017; Smith et al., 2017). Several works suggest that the stochasticity of SGD is crucial to the effect of learning-rate, in that different learning rates lead to different stochastic dynamics, with different generalization behavior (Smith and Le, 2017; Mandt et al., 2017; Hoffer et al., 2017; McCandlish et al., 2018; Welling and Teh, 2011; Li et al., 2019). In particular, it may be the case that the stochasticity in the gradient estimates of SGD acts as an effective “regularizer” at high learning rates. This aspect is not explicitly present in our work, and is an interesting area for future theoretical and empirical study.
2 Convex Example
The main idea is illustrated by the following linear regression problem, as visualized in Figure 1. Consider the data distribution $\mathcal{D}$ over $(x, y) \in \mathbb{R}^2 \times \mathbb{R}$ defined as:

$$x \sim \mathrm{Uniform}\{e_1, e_2\}, \qquad y = \langle \beta^*, x \rangle$$

for some ground-truth $\beta^* \in \mathbb{R}^2$. We want to learn a linear model $x \mapsto \langle \beta, x \rangle$ with small mean-squared-error on the population distribution. Specifically, for model parameters $\beta \in \mathbb{R}^2$, the population loss is

$$L(\beta) := \mathbb{E}_{(x,y) \sim \mathcal{D}}\left[(\langle \beta, x \rangle - y)^2\right].$$
We want to approximate $\beta^* = \operatorname{argmin}_\beta L(\beta)$. Suppose we try to find this minimum by drawing $n$ samples $\{(x_i, y_i)\}_{i=1}^n$ from $\mathcal{D}$, and performing gradient descent on the empirical loss:

$$\hat{L}(\beta) := \frac{1}{n} \sum_{i=1}^n (\langle \beta, x_i \rangle - y_i)^2,$$

starting at $\beta = 0$, and stopping when $\hat{L}(\beta) \leq \epsilon$ for some small $\epsilon > 0$.
Now for simplicity let $n = 3$, and let the ground-truth model be $\beta^* = (1, 1)$. With probability $3/4$, two of the samples will have the same value of $x$ – say this value is $x = e_1$, so the samples are $\{(e_1, 1), (e_1, 1), (e_2, 1)\}$. In this case the empirical loss is

$$\hat{L}(\beta) = \underbrace{\frac{2}{3}(\beta_1 - 1)^2}_{(A)} + \underbrace{\frac{1}{3}(\beta_2 - 1)^2}_{(B)} \tag{1}$$
which is distorted compared to the population loss, since we have few samples relative to the dimension. The population loss is:

$$L(\beta) = \frac{1}{2}(\beta_1 - 1)^2 + \frac{1}{2}(\beta_2 - 1)^2 \tag{2}$$
The key point is that although $L$ and $\hat{L}$ have the same global minimizer $\beta^*$, their level sets are not the same: not all points of small train loss $\hat{L}$ have identical test loss $L$. To see this, refer to Equation 1, and consider two different ways that train loss $\epsilon$ could be achieved. In the “good” situation, term (A) in Equation 1 is $\epsilon$, and term (B) is $0$. In the “bad” situation, term (A) is $0$ and term (B) is $\epsilon$. These two cases have test losses which differ by a factor of two, since the terms are re-weighted differently in the test loss (Equation 2). This is summarized in the table below.
| | Residual $(\beta - \beta^*)$ | Train Loss $\hat{L}(\beta)$ | Test Loss $L(\beta)$ |
|---|---|---|---|
| Good: | $(\sqrt{3\epsilon/2},\; 0)$ | $\epsilon$ | $\frac{3}{4}\epsilon$ |
| Bad: | $(0,\; \sqrt{3\epsilon})$ | $\epsilon$ | $\frac{3}{2}\epsilon$ |
Now, if our optimizer stops at train loss $\epsilon$, it will pick one of the points in the level set $\{\beta : \hat{L}(\beta) = \epsilon\}$. We see from the above that some of these points are twice as bad as others.
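Concretely, the table entries follow by plugging each case into Equations 1 and 2:

$$\text{Good:} \quad \tfrac{2}{3}(\beta_1 - 1)^2 = \epsilon \;\implies\; (\beta_1 - 1)^2 = \tfrac{3}{2}\epsilon \;\implies\; L(\beta) = \tfrac{1}{2} \cdot \tfrac{3}{2}\epsilon = \tfrac{3}{4}\epsilon,$$

$$\text{Bad:} \quad \tfrac{1}{3}(\beta_2 - 1)^2 = \epsilon \;\implies\; (\beta_2 - 1)^2 = 3\epsilon \;\implies\; L(\beta) = \tfrac{1}{2} \cdot 3\epsilon = \tfrac{3}{2}\epsilon.$$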
Notice that gradient flow on $\hat{L}$ (Equation 1) will optimize twice as fast along the $\beta_1$ coordinate compared to $\beta_2$. The gradient flow dynamics of the residual $r := \beta - \beta^*$ are:

$$\frac{dr_1}{dt} = -\frac{4}{3} r_1, \qquad \frac{dr_2}{dt} = -\frac{2}{3} r_2 \qquad\implies\qquad r_1(t) = r_1(0)\, e^{-4t/3}, \quad r_2(t) = r_2(0)\, e^{-2t/3}.$$

This will tend to find solutions closer to the “Bad” solution from the above table, where $r_1 \approx 0$ and the train loss is carried almost entirely by $r_2$.
However, gradient descent with a large step size can oscillate along the $\beta_1$ coordinate, and achieve low train loss by optimizing $\beta_2$ instead. Then in the second stage, once the learning rate is annealed, gradient descent will optimize along $\beta_1$ while keeping the $\beta_2$ coordinate small. This leads to a minimum closer to the “Good” solution, where $r_2 \approx 0$. These dynamics are visualized in Figure 1.
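As a sanity check, the dynamics above can be simulated directly. The following sketch is illustrative, not the formal construction of Claim 1: gradient flow is approximated by descent with a tiny step size, and the large step size $\eta = 3/2$, horizon $T = 10$, and threshold $\epsilon = 10^{-4}$ are convenient choices for this instance.

```python
import numpy as np

# Section 2 example: beta* = (1,1), samples {e1, e1, e2},
# train loss as in Equation (1), test loss as in Equation (2).
beta_star = np.array([1.0, 1.0])

def train_loss(beta):
    r = beta - beta_star
    return (2/3) * r[0]**2 + (1/3) * r[1]**2   # Equation (1)

def train_grad(beta):
    r = beta - beta_star
    return np.array([(4/3) * r[0], (2/3) * r[1]])

def test_loss(beta):
    r = beta - beta_star
    return 0.5 * r[0]**2 + 0.5 * r[1]**2       # Equation (2)

def descend(beta, lr, eps, max_steps=10**7):
    """Gradient descent on the train loss, early-stopped at train loss eps."""
    for _ in range(max_steps):
        if train_loss(beta) <= eps:
            break
        beta = beta - lr * train_grad(beta)
    return beta

eps, beta0 = 1e-4, np.zeros(2)

# (A) "Gradient flow": a tiny step size throughout.
beta_A = descend(beta0, lr=1e-3, eps=eps)

# (B) Annealed: large step size eta = 3/2 for T = 10 steps, then tiny steps.
beta_B = descend(beta0, lr=1.5, eps=0.0, max_steps=10)
beta_B = descend(beta_B, lr=1e-3, eps=eps)

print(test_loss(beta_A) / eps)  # ~1.5: the "Bad" minimum
print(test_loss(beta_B) / eps)  # ~0.75: the "Good" minimum
```

Both runs stop at the same train loss $\epsilon$, but (A) ends near the “Bad” minimum and (B) near the “Good” one, with test losses differing by a factor of two.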
This example is formalized in the following claim. The proof is straightforward, and included in Appendix A.
Claim 1.
For all $\delta > 0$, there exists a distribution $\mathcal{D}$ over $\mathbb{R}^2 \times \mathbb{R}$ and a learning-rate $\eta$ (the “large” learning-rate) such that for all sufficiently small $\epsilon > 0$, the following holds. With probability $3/4$ over $n = 3$ samples:
1. Gradient flow from $0$-initialization, early-stopped at train loss $\epsilon$, achieves population loss
$$L(\beta_{\mathrm{GF}}) \geq \frac{3}{2}(1 - \delta)\,\epsilon.$$
2. Annealed gradient descent from $0$-initialization (i.e. gradient descent with stepsize $\eta$ for $T$ steps, followed by gradient flow), stopped at train loss $\epsilon$, achieves population loss
$$L(\beta_{\mathrm{AGD}}) \leq \frac{3}{4}\,\epsilon + c\, e^{-T/c}$$
for a constant $c > 0$ depending on the distribution.
In particular, since $\delta$ can be taken arbitrarily small, and $T$ can be taken arbitrarily large, this implies that gradient flow achieves a population loss (asymptotically) twice as high as gradient descent with a careful step size, followed by gradient flow.
Moreover, with the remaining probability $1/4$, the samples will be such that gradient flow and annealed gradient descent behave identically: $L(\beta_{\mathrm{GF}}) = L(\beta_{\mathrm{AGD}})$.
3 Discussion
Similarities. This example, though stylized, shares several features with deep neural networks in practice.
- Neural nets trained with cross-entropy loss cannot be trained to global minima of the empirical risk, and are instead early-stopped at some small value of the train loss.
- There is a mismatch between the train and test loss landscapes (in deep learning, this is due to overparameterization/undersampling).
- Learning rate annealing typically generalizes better than using a small constant learning rate (Goodfellow et al., 2016).
- The “large” learning rates used in practice are far larger than what optimization theory alone would prescribe. In particular, consecutive iterates of SGD are often negatively correlated with each other in the later stages of training (Xing et al., 2018), suggesting that the iterates are oscillating around a sharp valley. Jastrzebski et al. (2018) also find that the typical SGD step is too large to optimize along the steepest directions of the loss.
Limitations. Our example is nevertheless a toy example, and does not exhibit some features of real networks. For example, our setting has only one basin of attraction, whereas real networks have many basins of attraction, and there is evidence that a high initial learning rate influences the choice of basin (e.g. Li et al., 2019; Jastrzebski et al., 2020). It remains an important open question to understand the effect of learning rate in deep learning.
Acknowledgements
We thank John Schulman for a discussion around learning rates that led us to wonder whether this behavior can occur in convex problems. We thank Aditya Ramesh, Ilya Sutskever, and Gal Kaplun for helpful comments on presentation, and Jacob Steinhardt for technical discussions refining these results.
Work supported in part by the Simons Investigator Awards of Boaz Barak and Madhu Sudan, and NSF Awards under grants CCF 1565264, CCF 1715187, and CNS 1618026.
References
- Dinh, L., Pascanu, R., Bengio, S., and Bengio, Y. (2017). Sharp minima can generalize for deep nets. In Proceedings of the 34th International Conference on Machine Learning, pages 1019–1028. JMLR.org.
- Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. MIT Press.
- Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. (2017). Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677.
- Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735–1780.
- Hoffer, E., Hubara, I., and Soudry, D. (2017). Train longer, generalize better: closing the generalization gap in large batch training of neural networks. In Advances in Neural Information Processing Systems, pages 1731–1741.
- Jastrzebski, S., Kenton, Z., Ballas, N., Fischer, A., Bengio, Y., and Storkey, A. (2018). On the relation between the sharpest directions of DNN loss and the SGD step length. arXiv preprint arXiv:1807.05031.
- Jastrzebski, S., Szymczak, M., Fort, S., Arpit, D., Tabor, J., Cho, K., and Geras, K. (2020). The break-even point on optimization trajectories of deep neural networks. arXiv preprint arXiv:2002.09572.
- Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., and Tang, P. T. P. (2016). On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836.
- Krizhevsky, A. (2014). One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997.
- Lewkowycz, A., Bahri, Y., Dyer, E., Sohl-Dickstein, J., and Gur-Ari, G. (2020). The large learning rate phase of deep learning: the catapult mechanism. arXiv preprint arXiv:2003.02218.
- Li, Y., Wei, C., and Ma, T. (2019). Towards explaining the regularization effect of initial large learning rate in training neural networks. In Advances in Neural Information Processing Systems.
- Mandt, S., Hoffman, M. D., and Blei, D. M. (2017). Stochastic gradient descent as approximate Bayesian inference. The Journal of Machine Learning Research, 18(1):4873–4907.
- McCandlish, S., Kaplan, J., Amodei, D., and Team, O. D. (2018). An empirical model of large-batch training. arXiv preprint arXiv:1812.06162.
- Smith, S. L., Kindermans, P.-J., Ying, C., and Le, Q. V. (2017). Don’t decay the learning rate, increase the batch size. arXiv preprint arXiv:1711.00489.
- Smith, S. L. and Le, Q. V. (2017). A Bayesian perspective on generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451.
- Welling, M. and Teh, Y. W. (2011). Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning, pages 681–688.
- Xing, C., Arpit, D., Tsirigotis, C., and Bengio, Y. (2018). A walk with SGD. arXiv preprint arXiv:1802.08770.
Appendix A Formal Statements
In this section, we state and prove Claim 1, along with a mild generalization of the setting via Lemma 1.
A.1 Notation and Preliminaries
For a distribution $\mathcal{D}$ over $(x, y) \in \mathbb{R}^d \times \mathbb{R}$, let $L(\beta) := \mathbb{E}_{(x,y) \sim \mathcal{D}}[(\langle \beta, x \rangle - y)^2]$ be the population loss for parameters $\beta \in \mathbb{R}^d$. Let $\hat{L}(\beta) := \frac{1}{n} \sum_{i=1}^n (\langle \beta, x_i \rangle - y_i)^2$ be the train loss for samples $\{(x_i, y_i)\}_{i=1}^n$.
Let $\mathrm{GF}_\epsilon(\beta_0)$ be the function which optimizes the train loss via gradient flow from initial point $\beta_0$, until the train loss reaches the early-stopping threshold $\epsilon$, and then outputs the resulting parameters.
Let $\mathrm{AGD}_{\eta, T, \epsilon}(\beta_0)$ be the function which first optimizes the train loss with gradient descent starting at initial point $\beta_0$, with step-size $\eta$, for $T$ steps, and an early-stopping threshold $\epsilon$, and then continues with gradient flow, again with early-stopping threshold $\epsilon$.
For $n$ samples, let $X \in \mathbb{R}^{n \times d}$ be the design matrix of samples. Let $\Sigma := \mathbb{E}_{x \sim \mathcal{D}}[x x^\top]$ be the population covariance and let $\hat{\Sigma} := \frac{1}{n} X^\top X$ be the empirical covariance.
We assume throughout that $\Sigma$ and $\hat{\Sigma}$ are simultaneously diagonalizable:

$$\Sigma = U \Lambda U^\top, \qquad \hat{\Sigma} = U \hat{\Lambda} U^\top$$

for diagonal $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_d)$, $\hat{\Lambda} = \mathrm{diag}(\hat{\lambda}_1, \dots, \hat{\lambda}_d)$ and orthonormal $U$. Further, assume without loss of generality that $\hat{\lambda}_i > 0$ for all $i$ (if $\hat{\Sigma}$ is not full rank, we can restrict attention to its image).
We consider optimizing the train loss, which (in this noiseless setting, where $y = \langle \beta^*, x \rangle$) can be written as:

$$\hat{L}(\beta) = (\beta - \beta^*)^\top \hat{\Sigma}\, (\beta - \beta^*).$$
Observe that the optimal (smallest) population loss for a point with train loss $\epsilon$ is:

$$\min_{\beta :\, \hat{L}(\beta) = \epsilon} L(\beta) = \left( \min_i \frac{\lambda_i}{\hat{\lambda}_i} \right) \epsilon. \tag{3}$$

Similarly, the worst population loss for a point with train loss $\epsilon$ is:

$$\max_{\beta :\, \hat{L}(\beta) = \epsilon} L(\beta) = \left( \max_i \frac{\lambda_i}{\hat{\lambda}_i} \right) \epsilon. \tag{4}$$
Now, the gradient flow dynamics on the train loss are:

$$\frac{d\beta_t}{dt} = -\nabla \hat{L}(\beta_t) = -2 \hat{\Sigma} (\beta_t - \beta^*).$$

Switching to coordinates $v := U^\top(\beta - \beta^*)$, these dynamics are equivalent to:

$$\frac{d(v_t)_i}{dt} = -2 \hat{\lambda}_i (v_t)_i.$$

The empirical and population losses in these coordinates can be written as:

$$\hat{L}(v) = \sum_i \hat{\lambda}_i v_i^2, \qquad L(v) = \sum_i \lambda_i v_i^2.$$

And the trajectory of gradient flow, from initialization $v_0 = U^\top(\beta_0 - \beta^*)$, is

$$(v_t)_i = e^{-2 \hat{\lambda}_i t}\, (v_0)_i.$$
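For concreteness, this closed-form trajectory is easy to evaluate numerically. A minimal sketch, instantiated (as one example) with the spectra of the Section 2 problem in the coordinates above:

```python
import numpy as np

# Gradient-flow trajectory in eigencoordinates: (v_t)_i = exp(-2 * lam_hat_i * t) * (v_0)_i.
lam_hat = np.array([2/3, 1/3])    # empirical spectrum (Section 2 example)
lam     = np.array([1/2, 1/2])    # population spectrum
v0      = np.array([-1.0, -1.0])  # v_0 = U^T (beta_0 - beta*) for beta_0 = 0, beta* = (1,1), U = I

def v(t):
    return np.exp(-2 * lam_hat * t) * v0

def train_loss(t):
    return np.sum(lam_hat * v(t)**2)  # L-hat(v_t)

def test_loss(t):
    return np.sum(lam * v(t)**2)      # L(v_t)

# Late in training, the slow (small-eigenvalue) coordinate dominates both losses,
# so the test loss approaches (lam_2 / lam_hat_2) = 3/2 times the train loss:
print(test_loss(6.0) / train_loss(6.0))  # ~1.5
```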
With this setup, we now state and prove the following main lemma, characterizing the behavior of gradient flow and gradient descent for a given sample covariance.
A.2 Main Lemma
Lemma 1.
Let $\Sigma = U \Lambda U^\top$ and $\hat{\Sigma} = U \hat{\Lambda} U^\top$ be as above. Let the eigenvalues be ordered according to $\hat{\Lambda}$: $\hat{\lambda}_1 \geq \hat{\lambda}_2 \geq \dots \geq \hat{\lambda}_d > 0$.

Let $S = \{k, k+1, \dots, d\}$ for some $k \geq 2$. That is, let $S$ index a block of “small” eigenvalues. Let $\hat{L}_S(v) := \sum_{i \in S} \hat{\lambda}_i v_i^2$ denote their contribution to the train loss. Assume there is an eigenvalue gap between the “large” and “small” eigenvalues, where $\hat{\lambda}_{k-1} \geq g\, \hat{\lambda}_k$ for some $g > 1$.
For all $\delta, \epsilon > 0$, if the initialization $\beta_0$, with $v_0 := U^\top(\beta_0 - \beta^*)$, satisfies:

1. At initialization, the contribution of the largest eigenspace to the train loss is at least $\epsilon$:
$$\hat{\lambda}_1 (v_0)_1^2 \geq \epsilon.$$
2. The eigenvalue gap is large enough to ensure that eventually only the “small” eigenspace is significant, specifically:
$$\hat{L}(v_0) \left( \frac{\epsilon}{\hat{L}_S(v_0)} \right)^{g} \leq \delta\, \epsilon.$$
Then there exists a learning-rate $\eta$ (the “large” learning-rate) such that:

1. Gradient flow achieves population loss
$$L(\mathrm{GF}_\epsilon(\beta_0)) \geq (1 - \delta) \left( \min_{i \in S} \frac{\lambda_i}{\hat{\lambda}_i} \right) \epsilon.$$
2. Annealed gradient descent run for $T$ steps has loss
$$L(\mathrm{AGD}_{\eta, T, \epsilon}(\beta_0)) \leq \left( \max_{i :\, \hat{\lambda}_i = \hat{\lambda}_1} \frac{\lambda_i}{\hat{\lambda}_i} \right) \epsilon + L(v_0)\, e^{-2T/c}.$$

In particular, if the top eigenvalue of $\hat{\Sigma}$ is unique, then
$$L(\mathrm{AGD}_{\eta, T, \epsilon}(\beta_0)) \leq \frac{\lambda_1}{\hat{\lambda}_1}\, \epsilon + L(v_0)\, e^{-2T/c}.$$
The constant $c > 0$ depends on the spectrum of $\hat{\Sigma}$, and taking $\eta = 1/\hat{\lambda}_1$ suffices.
Proof.
Gradient Flow.
The proof idea is to show that gradient flow must run for a long time in order to reach train loss $\epsilon$ – and since coordinates with higher eigenvalues optimize faster, by the stopping time most of the train loss will be due to contributions from the “small” coordinates in $S$.
Let $T^*$ be the stopping-time of gradient flow, such that $\hat{L}(v_{T^*}) = \epsilon$. We can lower-bound the time required to reach train loss $\epsilon$ as:

$$\epsilon = \hat{L}(v_{T^*}) \geq \hat{L}_S(v_{T^*}) = \sum_{i \in S} \hat{\lambda}_i (v_0)_i^2\, e^{-4 \hat{\lambda}_i T^*} \geq \hat{L}_S(v_0)\, e^{-4 \hat{\lambda}_k T^*} \tag{5}$$

$$\implies \quad T^* \geq \frac{1}{4 \hat{\lambda}_k} \ln\left( \frac{\hat{L}_S(v_0)}{\epsilon} \right). \tag{6}$$
Decompose the train loss as:

$$\hat{L}(v) = \hat{L}_{\bar{S}}(v) + \hat{L}_S(v),$$

where $\hat{L}_{\bar{S}}(v) := \sum_{i \notin S} \hat{\lambda}_i v_i^2$ and $\hat{L}_S(v) := \sum_{i \in S} \hat{\lambda}_i v_i^2$. Now, we can bound $\hat{L}_{\bar{S}}(v_{T^*})$ as:

$$\hat{L}_{\bar{S}}(v_{T^*}) = \sum_{i \notin S} \hat{\lambda}_i (v_0)_i^2\, e^{-4 \hat{\lambda}_i T^*} \leq \hat{L}(v_0)\, e^{-4 \hat{\lambda}_{k-1} T^*} \leq \hat{L}(v_0) \left( \frac{\epsilon}{\hat{L}_S(v_0)} \right)^{\hat{\lambda}_{k-1} / \hat{\lambda}_k} \quad \text{(by Equation 6)},$$

which is at most $\delta \epsilon$, where the last step is due to Condition 2 of the Lemma (using $\hat{\lambda}_{k-1} / \hat{\lambda}_k \geq g$ and $\epsilon \leq \hat{L}_S(v_0)$). Now, since $\hat{L}(v_{T^*}) = \epsilon$ and $\hat{L}_{\bar{S}}(v_{T^*}) \leq \delta \epsilon$, we have $\hat{L}_S(v_{T^*}) \geq (1 - \delta)\epsilon$, and since $L(v) \geq \left( \min_{i \in S} \frac{\lambda_i}{\hat{\lambda}_i} \right) \hat{L}_S(v)$, we must have

$$L(v_{T^*}) \geq \left( \min_{i \in S} \frac{\lambda_i}{\hat{\lambda}_i} \right) (1 - \delta)\, \epsilon.$$
This lower-bounds the population loss as desired.
Annealed Gradient Descent.
For annealed gradient descent, the proof idea is: set the learning-rate such that in the gradient descent stage, the optimizer oscillates on the first (highest-curvature) coordinate, and optimizes until the remaining coordinates are exponentially small. Then, in the gradient flow stage, it optimizes on the first coordinate until hitting the target train loss of $\epsilon$. Thus, at completion most of the train loss will be due to contributions from the “large” first coordinate.
Specifically, the gradient descent dynamics with stepsize $\eta$ are:

$$(v_{t+1})_i = (1 - 2 \eta \hat{\lambda}_i)(v_t)_i$$

for discrete $t \in \{0, 1, 2, \dots\}$. We set $\eta = 1 / \hat{\lambda}_1$, so the first coordinate simply oscillates: $(v_{t+1})_1 = -(v_t)_1$. This is the case for the entire top eigenspace, i.e. all coordinates $i$ where $\hat{\lambda}_i = \hat{\lambda}_1$. The remaining coordinates decay exponentially. That is, let $j$ index the remaining coordinates (corresponding to smaller eigenspaces, with $\hat{\lambda}_j < \hat{\lambda}_1$). Then we have:

$$|(v_t)_j| = \left| 1 - \frac{2 \hat{\lambda}_j}{\hat{\lambda}_1} \right|^t |(v_0)_j| \leq e^{-t/c}\, |(v_0)_j| \tag{7}$$

for a constant $c$ depending on the spectrum of $\hat{\Sigma}$.
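As a quick numerical check of this choice of step size (using the Section 2 spectrum as an example):

```python
import numpy as np

# Per-coordinate GD multipliers (1 - 2 * eta * lam_hat_i) at eta = 1 / lam_hat_1.
lam_hat = np.array([2/3, 1/3])     # Section 2 spectrum
eta = 1 / lam_hat[0]               # the "large" learning rate
print(1 - 2 * eta * lam_hat)       # [-1.  0.]: the top coordinate oscillates,
                                   # the smaller one contracts (here, in one step)
```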
Recall that the population and empirical losses are:

$$L(v) = \sum_i \lambda_i v_i^2, \qquad \hat{L}(v) = \sum_i \hat{\lambda}_i v_i^2.$$
By Equation 7, after $T$ steps of gradient descent, the contribution to the population loss from the coordinates $j$ with $\hat{\lambda}_j < \hat{\lambda}_1$ is small:

$$\sum_{j :\, \hat{\lambda}_j < \hat{\lambda}_1} \lambda_j (v_T)_j^2 \leq e^{-2T/c} \sum_j \lambda_j (v_0)_j^2 \leq L(v_0)\, e^{-2T/c}.$$
By Condition 1 of the Lemma, the train loss is still not below $\epsilon$ after the gradient descent stage, since the first coordinate did not optimize: $\hat{L}(v_T) \geq \hat{\lambda}_1 (v_T)_1^2 = \hat{\lambda}_1 (v_0)_1^2 \geq \epsilon$. We now run gradient flow, stopping when the train loss reaches $\epsilon$; call the resulting point $v_{\mathrm{end}}$. At this point, the remaining coordinates have only shrunk further, and the top eigenspace carries at most the entire train loss:

$$\sum_{i :\, \hat{\lambda}_i = \hat{\lambda}_1} \hat{\lambda}_1 (v_{\mathrm{end}})_i^2 \leq \hat{L}(v_{\mathrm{end}}) = \epsilon.$$
And thus,

$$L(v_{\mathrm{end}}) \leq \left( \max_{i :\, \hat{\lambda}_i = \hat{\lambda}_1} \frac{\lambda_i}{\hat{\lambda}_i} \right) \epsilon + L(v_0)\, e^{-2T/c},$$

as desired. ∎
A.3 Proof of Claim 1
Proof sketch of Claim 1.
Consider the distribution $\mathcal{D}$ over $(x, y) \in \mathbb{R}^2 \times \mathbb{R}$ defined as: Sample $x \in \{e_1, e_2\}$ uniformly at random, and let $y = \langle \beta^*, x \rangle$ for ground-truth $\beta^* = (1, 1)$. The population covariance is simply

$$\Sigma = \mathbb{E}[x x^\top] = \tfrac{1}{2} I.$$
With probability $3/4$, two of the three samples will have the same value of $x$. In this case, the sample covariance will be equal (up to reordering of coordinates) to

$$\hat{\Sigma} = \begin{pmatrix} 2/3 & 0 \\ 0 & 1/3 \end{pmatrix}.$$
This satisfies the conditions of Lemma 1 for all small enough $\epsilon$, taking the “small” set of eigenvalue indices to be $S = \{2\}$. Thus, the conclusion follows by Lemma 1.
With probability $1/4$, the samples will all be identical, and in particular share the same value of $x$. Here, the optimization trajectory is 1-dimensional, and it is easy to see that gradient flow and annealed gradient descent reach minima with identical population loss. ∎