
On progressive sharpening, flat minima and generalisation

Lachlan E. MacDonald
Mathematical Institute for Data Science
Johns Hopkins University
Baltimore, MD 21218, USA
[email protected]
   Jack Valmadre & Simon Lucey
Australian Institute for Machine Learning
University of Adelaide
Adelaide, SA 5000, Australia
Abstract

We present a new approach to understanding the relationship between loss curvature and input-output model behaviour in deep learning. Specifically, we use existing empirical analyses of the spectrum of deep network loss Hessians to ground an ansatz tying together the loss Hessian and the input-output Jacobian over training samples during the training of deep neural networks. We then prove a series of theoretical results which quantify the degree to which the input-output Jacobian of a model approximates its Lipschitz norm over a data distribution, and deduce a novel generalisation bound in terms of the empirical Jacobian. We use our ansatz, together with our theoretical results, to give a new account of the recently observed progressive sharpening phenomenon, as well as the generalisation properties of flat minima. Experimental evidence is provided to validate our claims.

1 Introduction

In this paper, we attempt to clarify how the curvature of the loss landscape of a deep neural network is related to the input-output behaviour of the model. Via a single mechanism, our Ansatz 3.1, we offer new explanations of both the as-yet unexplained progressive sharpening phenomenon Cohen et al. [2021] observed in the training of deep neural networks, and the long-speculative relationship between loss curvature and generalisation Hochreiter and Schmidhuber [1997], Keskar et al. [2017], Chaudhari et al. [2017], Foret et al. [2021].

The mechanism we propose in Ansatz 3.1 to mediate the relationship between loss curvature and input-output model behaviour is the input-output Jacobian of the model over the training sample, which is already understood to play a role in determining generalisation Drucker and Le Cun [1992], Bartlett et al. [2017], Neyshabur et al. [2017], Hoffman et al. [2019], Bubeck and Sellke [2021], Gouk et al. [2021], Novak et al. [2018], Ma and Ying [2021]. Our proposal is based on empirical observations made of the eigenspectrum of the Hessian of deep neural networks Papyan [2018, 2019], Ghorbani et al. [2019], whose outliers can be attributed to a summand of the Hessian known as the Gauss-Newton matrix. Crucially, the Gauss-Newton matrix is second-order only in the cost function, and solely first-order in the network layers.

The Gauss-Newton matrix is a Gram matrix (i.e. a product $A^{T}A$ for some matrix $A$), and its conjugate $AA^{T}$, closely related to the tangent kernel identified in Jacot et al. [2018], has the same nonzero eigenvalues. Expanding $AA^{T}$ reveals that it is determined in part by composite input-output layer Jacobians. Thus, insofar as the outlying Hessian eigenvalues are determined by those of the Gauss-Newton matrix, and insofar as input-output layer Jacobians determine the spectrum of the Gauss-Newton matrix via its isospectrality to its conjugate, one expects the largest singular values of the loss Hessian to be closely related to those of the model’s input-output Jacobian.

Our contributions in this paper are as follows.

  1.

    Based on previous empirical work identifying outlying loss Hessian eigenvalues with those of the Gauss-Newton matrix, we propose an ansatz: that under certain conditions, the largest eigenvalues of the loss Hessian control the growth of the largest singular values of the model’s input-output Jacobian. We focus in particular on the largest eigenvalue of the Hessian (the sharpness) and the largest singular value of the Jacobian (its spectral norm, which we abbreviate to simply norm in what follows).

  2.

    In Section 4 we provide theorems which quantify the extent to which the maximum input-output Jacobian spectral norm of a model over a training set will approximate the Lipschitz norm of the model over the underlying data distribution during training for datasets of practical relevance, including those generated by a generative adversarial network (GAN) or implicit neural function.

  3.

    In Section 5, we provide a theorem which gives a data-dependent, high probability lower bound on the empirical Jacobian norm of a model that grows during any effective training procedure. We combine this result with Ansatz 3.1 to give a new account of progressive sharpening, which enables us to change the severity of sharpening by scaling inputs and outputs. We report on classification experiments that validate our account.

  4.

    In Section 6, we provide a novel bound on the generalisation gap of the model in terms of the empirical Jacobian norm. Synthesising this result with Ansatz 3.1, we argue that low loss curvature implies good generalisation only insofar as low loss curvature implies small empirical Jacobian norm. We report on experiments measuring the effect of hyperparameters such as learning rate and batch size on loss sharpness, Jacobian norm and generalisation, revealing the validity of our explanation. Finally, we present the results of experiments measuring the loss sharpness, Jacobian norm and generalisation gap of networks trained with a variety of regularisation measures, revealing in all cases the superior correlation of Jacobian norm and generalisation when compared with loss sharpness, whether or not the regularisation technique targets loss sharpness.

2 Related work

Flatness, Jacobians and generalisation: The position that flatter minima generalise better dates back to Hochreiter and Schmidhuber [1997]. It has since become a staple concept in the toolbox of deep learning practitioners Keskar et al. [2017], Chaudhari et al. [2017], Foret et al. [2021], Chen et al. [2020], with its incorporation into training schemes yielding state-of-the-art performance in many tasks. Its effectiveness has motivated the study of the effect that learning hyperparameters such as learning rate and batch size have on the sharpness of minima to which the algorithm will converge Wu et al. [2018], Zhu et al. [2019], Mulayoff et al. [2021]. The hypothesis has received substantial criticism. In Dinh et al. [2017] it is shown that flatness is not necessary for good generalisation in ReLU models, since such models are invariant to scaling symmetries of the parameters which arbitrarily sharpen their loss landscapes. This observation has motivated the consideration of scale-invariant measures of loss sharpness Lyu et al. [2022], Jang et al. [2022]. In Granziol [2020], the sufficiency of loss flatness for good generalisation is disputed by showing that models trained with the cross-entropy cost generalise better with weight decay than without, despite the former ending up in sharper minima. Sufficiency of flatness for generalisation is further challenged for Gaussian-activated networks on regression tasks in Ramasinghe et al. [2023].

Relatively little work has looked into the mechanism underlying the relationship between loss curvature and generalisation. PAC-Bayes bounds Dziugaite and Roy [2017], Foret et al. [2021], F. He and T. Liu and D. Tao [2019] provide some theoretical support for the hypothesis that loss flatness is sufficient for good generalisation; however, this hypothesis, taken unconditionally, is known to be false empirically Granziol [2020], Ramasinghe et al. [2023]. The same is true of Petzka et al. [2021], which cleverly bounds the generalisation gap by a rescaled loss Hessian trace. Despite advocating for the empirical Jacobian as the link between loss curvature and generalisation as we do, the insightful papers Ma and Ying [2021], Gamba et al. [2023] consider the relationship only at a critical point of the square cost, and in particular cannot offer explanations for progressive sharpening or the generalisation benefits of training with an initially large learning rate that is gradually decayed, as our account can. Lee et al. [2023] advocates empirically for the parameter-output Jacobian of the model as the mediating link between loss sharpness and generalisation, but provides no theoretical indication of why this should be the case. We provide evidence in Appendix D indicating that our proposal may be able to fill this gap in Lee et al. [2023].

Finally, we note that among other generalisation studies in the literature, ours is most closely related to Ma and Ying [2021] which exploits properties of the data distribution to prove a generalisation bound. Our bound is also related to Bartlett et al. [2017], Bubeck and Sellke [2021], Muthukumar and Sulam [2023], which recognise sensitivity to input perturbations as key to generalisation. Like Wei and Ma [2019], our work in addition formalises the intuitive idea that the empirical Jacobian norm regulates generalisation Drucker and Le Cun [1992], Hoffman et al. [2019], Novak et al. [2018]. Further study of our ansatz in relation to the edge of stability phenomenon Cohen et al. [2021] may be of utility to algorithmic stability approaches to generalisation Bousquet and Elisseeff [2002], Hardt et al. [2016], Charles and Papailiopoulos [2018], Kuzborskij and Lampert [2018], Bassily et al. [2020].

Progressive sharpening and edge of stability: The effect of learning rate $\eta$ on loss sharpness has been understood to some extent for several years Wu et al. [2018]. In Cohen et al. [2021], this relationship was examined for deep networks in a rigorous empirical study which showed that, after an initial period of sharpness increase during training called progressive sharpening, the sharpness halted at around $2/\eta$ and oscillated there while the loss continued to decrease non-monotonically, a phase called edge of stability. These phenomena show that the typical assumptions made in theoretical work, namely that the learning rate is always smaller than twice the reciprocal of the sharpness Allen-Zhu et al. [2019], Du et al. [2019a, b], MacDonald et al. [2022], do not hold in practice. Significant work has since been conducted to understand these phenomena Lyu et al. [2022], Lee and Jang [2023], Arora et al. [2022], Wang et al. [2022], Zhu et al. [2023], Ahn et al. [2022], with Damian et al. [2023] showing in particular that edge of stability is a universal phenomenon arising from a third-order stabilising effect that must be taken into account when the sharpness grows above $2/\eta$. In contrast, progressive sharpening is not universal; it is primarily observed only in genuinely deep neural networks. Although it has been correlated to growth in the norm of the output layer Wang et al. [2022], the cause of progressive sharpening has so far remained mysterious. The mechanism we propose is the first account of which we are aware for a cause of progressive sharpening.

3 Background and paper outline

3.1 Outliers in the spectrum

We follow the formal framework introduced in MacDonald et al. [2022], which allows us to treat all neural network layers on a common footing. Specifically, we consider a multilayer parameterised system $\{f_{i}:\mathbb{R}^{p_{i}}\times\mathbb{R}^{d_{i-1}}\rightarrow\mathbb{R}^{d_{i}}\}_{i=1}^{L}$ (the layers), with a data matrix $X\in\mathbb{R}^{d_{0}\times N}$ consisting of $N$ data vectors in $\mathbb{R}^{d_{0}}$. We denote by $F:\mathbb{R}^{p_{1}+\dots+p_{L}}\rightarrow\mathbb{R}^{d_{L}\times N}$ the associated parameter-function map defined by

F_{X}(\theta_{1},\dots,\theta_{L})_{i}:=f_{L}(\theta_{L})\circ\cdots\circ f_{1}(\theta_{1})(X)_{i}\in\mathbb{R}^{d_{L}},\qquad i=1,\dots,N. (1)

Given a convex cost function $c:\mathbb{R}^{d_{L}}\times\mathbb{R}^{d_{L}}\rightarrow\mathbb{R}$ and a target matrix $Y\in\mathbb{R}^{d_{L}\times N}$, we consider the associated loss

\ell(\vec{\theta}):=\gamma_{Y}\circ F_{X}(\vec{\theta})=\frac{1}{N}\sum_{i=1}^{N}c\big(F_{X}(\vec{\theta})_{i},Y_{i}\big), (2)

where $\gamma_{Y}:\mathbb{R}^{d_{L}\times N}\rightarrow\mathbb{R}$ is defined by $\gamma_{Y}(Z):=N^{-1}\sum_{i=1}^{N}c(Z_{i},Y_{i})$.

Using the chain rule and the product rule, one observes that the Hessian $D^{2}\ell$ of $\ell$ admits the decomposition

D^{2}\ell=DF_{X}^{T}\,D^{2}\gamma_{Y}\,DF_{X}+D\gamma_{Y}\,D^{2}F_{X}. (3)

The first of these terms, often called the Gauss-Newton matrix, is positive-semidefinite by convexity of $\gamma_{Y}$. It has been demonstrated empirically in a vast number of practical settings that the largest, outlying eigenvalues of the Hessian throughout training correlate closely with those of the Gauss-Newton matrix Papyan [2018, 2019], Cohen et al. [2021]. In attempting to understand the relationship between loss sharpness and model behaviour, therefore, empirical evidence invites us to devote special attention to the Gauss-Newton matrix. In what follows we assume based on this evidence that the outlying eigenvalues of the Gauss-Newton matrix determine those of the loss Hessian.
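To make the decomposition (3) concrete, the following minimal sketch (ours, not taken from the paper's experiments; it assumes PyTorch and a toy two-layer system with square cost, with all names and dimensions chosen for illustration) forms the full loss Hessian and its Gauss-Newton term explicitly and compares their top eigenvalues.

```python
import torch
from torch.autograd.functional import hessian, jacobian

torch.manual_seed(0)
d0, d1, dL, N = 3, 4, 2, 5
X = torch.randn(d0, N)                      # data matrix
Y = torch.randn(dL, N)                      # target matrix

def unpack(theta):
    W1 = theta[: d1 * d0].reshape(d1, d0)
    W2 = theta[d1 * d0 :].reshape(dL, d1)
    return W1, W2

def F(theta):                               # parameter-function map F_X(theta), flattened
    W1, W2 = unpack(theta)
    return (W2 @ torch.tanh(W1 @ X)).reshape(-1)

def gamma(Z):                               # gamma_Y(Z) = (1/N) sum_i ||Z_i - Y_i||^2 (square cost)
    return ((Z.reshape(dL, N) - Y) ** 2).sum() / N

theta = torch.randn(d1 * d0 + dL * d1)
H = hessian(lambda t: gamma(F(t)), theta)   # full loss Hessian D^2 ell
J = jacobian(F, theta)                      # DF_X, shape (dL*N, p)
C2 = hessian(gamma, F(theta).detach())      # D^2 gamma_Y evaluated at F_X(theta)
GN = J.T @ C2 @ J                           # Gauss-Newton term of Eq. (3)

print(torch.linalg.eigvalsh(H)[-1].item(), torch.linalg.eigvalsh(GN)[-1].item())
```

The gap between the two printed values reflects the residual term $D\gamma_{Y}\,D^{2}F_{X}$ in Equation (3), which the empirical literature cited above finds to be subdominant in practice.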

3.2 The Gauss-Newton matrix and input-output Jacobians

Letting $C$ denote the square root $(D^{2}\gamma_{Y})^{\frac{1}{2}}$, we see that the Gauss-Newton matrix $DF_{X}^{T}\,C^{2}\,DF_{X}$ has the same nonzero eigenvalues as its conjugate matrix $C\,DF_{X}\,DF_{X}^{T}\,C^{T}$, which is closely related to the tangent kernel $DF_{X}\,DF_{X}^{T}$ identified in Jacot et al. [2018]. Letting $Jf_{l}$ and $Df_{l}$ denote the input-output and parameter derivatives of a layer $f_{l}:\mathbb{R}^{p}\times\mathbb{R}^{d_{l-1}}\rightarrow\mathbb{R}^{d_{l}}$ respectively, one sees that

C\,DF_{X}\,DF_{X}^{T}\,C^{T}=C\,\bigg(\sum_{l=1}^{L}\bigg(Jf_{L}\cdots Jf_{l+1}\,Df_{l}\,Df_{l}^{T}\,Jf_{l+1}^{T}\cdots Jf_{L}^{T}\bigg)\bigg)C^{T} (4)

is a sum of positive-semidefinite matrices. Each summand is determined in large part by the composites of input-output layer Jacobians. Note, however, that the input-output Jacobian of the first layer does not appear; only its parameter derivative does.
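The isospectrality invoked here is elementary but worth seeing numerically; the following tiny sketch (ours, using random stand-in matrices rather than genuine network derivatives) checks that $DF_{X}^{T}\,C^{2}\,DF_{X}$ and $C\,DF_{X}\,DF_{X}^{T}\,C^{T}$ share their nonzero eigenvalues.

```python
import torch

torch.manual_seed(0)
p, m = 12, 5                                        # parameter count, output entries (m = dL * N)
DF = torch.randn(m, p)                              # stand-in for DF_X
A = torch.randn(m, m)
C2 = A @ A.T                                        # stand-in for D^2 gamma_Y (PSD)
evals, evecs = torch.linalg.eigh(C2)
C = evecs @ torch.diag(evals.clamp(min=0).sqrt()) @ evecs.T   # symmetric square root of C2

GN = DF.T @ C2 @ DF                                 # Gauss-Newton matrix, p x p, rank <= m
conj = C @ DF @ DF.T @ C                            # its conjugate, m x m

print(torch.linalg.eigvalsh(GN)[-m:])               # top m eigenvalues (the nonzero ones)
print(torch.linalg.eigvalsh(conj))                  # coincide with the above
```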

It is clear then that insofar as the largest eigenvalues of the Hessian are determined by the Gauss-Newton matrix, these eigenvalues are determined in part by the input-output Jacobians of all layers following the first. We judge the following ansatz to be intuitively clear from careful consideration of Equation (4).

Ansatz 3.1.

Under certain conditions, an increase in the magnitude of the input-output Jacobian of a deep neural network will cause an increase in the loss sharpness. Conversely, a decrease in the sharpness will cause a decrease in the magnitude of the input-output Jacobian.

Note that Ma and Ying [2021], Gamba et al. [2023] already contain rigorous proofs of Ansatz 3.1 under special conditions, namely at critical points of the square cost. This restricted setting is not sufficient for our purposes. To understand the relationship between loss curvature and model behaviour adequately, we must invoke Ansatz 3.1 throughout training and most frequently with the cross-entropy cost function, where Ma and Ying [2021], Gamba et al. [2023] do not apply.

Importantly, we make no claim that Ansatz 3.1 holds unconditionally. Generally, the relationship proposed in Ansatz 3.1 is necessarily mediated by the following factors present in Equation (4): (1) the square root $C$ of the second derivative of the cost function; (2) the presence of the parameter derivatives $Df_{l}$; (3) the complete absence of the Jacobian of the first layer. These mediating factors prevent a simple upper bound on the input-output Jacobian in terms of the loss Hessian, and we spent a significant amount of time and computational power exposing the importance of these mediating factors empirically (Figure 2, Appendix C). In all cases where Ansatz 3.1 appears not to apply, we are able to account for its inadequacy in terms of these mediating factors using Equation (4) and further measurements. Moreover, we are able to explain all known exceptions to the “rules” of the superior generalisation of flat minima Granziol [2020], Ramasinghe et al. [2023] and of progressive sharpening Cohen et al. [2021] as consequences of unusual behaviour in these mediating factors. Ours is the only account we are aware of for these phenomena which can make this claim.

4 General theory

All of what follows is motivated by the following idea: the Lipschitz norm of a differentiable function is upper-bounded by the supremum, over the relevant data distribution, of the spectral norm of its Jacobian. Intuitively, this supremum will itself be approximated by the maximum Jacobian norm over a finite sample of points from the distribution, which by Ansatz 3.1 can be expected to relate to the loss Hessian of the model. It is this intuition that we seek to formalise, and whose practical relevance we seek to justify, in this section.

In what follows, we will use $\mathbb{R}_{>0}$ to denote the positive real numbers. We will use $\mathbb{P}$ to denote a probability measure on $\mathbb{R}^{d}$, whose support $\text{supp}(\mathbb{P})$ we assume to be a metric space with metric inherited from $\mathbb{R}^{d}$. Given a Lipschitz function $g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}$, we use $\|g\|_{Lip,\mathbb{P}}$ to denote the Lipschitz norm of $g|_{\text{supp}(\mathbb{P})}$. Note the dual meaning of $\|g\|_{2}$: when $g$ is vector-valued, it refers to the Euclidean norm, while when $g$ is operator-valued (e.g., when $g=Jf$ is a Jacobian), $\|g\|_{2}$ refers to the spectral norm. We will frequently invoke pointwise evaluation of the derivative of Lipschitz functions, and in doing so always rely on the fact that our evaluations are probabilistic and Lipschitz functions are differentiable almost everywhere. We will use $B(x,\delta)\subset\mathbb{R}^{d}$ to denote the closed Euclidean ball of radius $\delta$ centered at $x$. Finally, given a locally bounded (but not necessarily continuous) function $g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}$ and a compact set $S\subset\mathbb{R}^{d}$, we define the local variation of $g$ over $S$ by

V_{S}(g):=\sup_{x,y\in S}\|g(x)-g(y)\|_{2}. (5)

Thus, for instance, if $g$ is Lipschitz and $S$ is a ball of radius $\delta$, then $V_{S}(g)\leq 2\delta\|g|_{S}\|_{Lip}$. All proofs are deferred to the appendix.
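The local variation of a model's Jacobian over a small ball will reappear in Theorems 4.3 and 6.1; a crude way to estimate it is by sampling. The following sketch (ours, assuming PyTorch; the model and radius are placeholders) returns a Monte-Carlo lower estimate of $V_{B(x,\delta)}(Jf)$ as defined in Equation (5).

```python
import itertools
import torch
from torch.autograd.functional import jacobian

def local_variation(f, x, delta, n_samples=32):
    # Sample points of the closed ball B(x, delta) and take the largest pairwise
    # spectral-norm gap between Jacobians: a lower estimate of V_{B(x,delta)}(Jf).
    pts = [x]
    for _ in range(n_samples):
        u = torch.randn_like(x)
        pts.append(x + u / u.norm() * delta * torch.rand(1))
    jacs = [jacobian(f, p) for p in pts]
    return max(torch.linalg.matrix_norm(Ji - Jj, ord=2)
               for Ji, Jj in itertools.combinations(jacs, 2))

model = torch.nn.Sequential(torch.nn.Linear(3, 8), torch.nn.Tanh(), torch.nn.Linear(8, 2))
x = torch.randn(3)
print(local_variation(lambda z: model(z), x, delta=0.1))
```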

We begin by specifying the data distributions with which we will be concerned.

Definition 4.1.

Let $\delta:\mathbb{N}\times(0,1)\rightarrow\mathbb{R}_{>0}$ be a function such that for each $\epsilon\in(0,1)$, the function $N\mapsto\delta(N,\epsilon)$ is decreasing and vanishes as $N\rightarrow\infty$. We say that a distribution $\mathbb{P}$ is $\delta$-good if for every $N\in\mathbb{N}$ and $\epsilon\in(0,1)$, with probability at least $1-\epsilon$ over i.i.d. samples $(x_{1},\dots,x_{N})$ from $\mathbb{P}$, one has $\text{supp}(\mathbb{P})\subset\bigcup_{i=1}^{N}B(x_{i},\delta(N,\epsilon))$.

A similar assumption on the data distribution is adopted in Ma and Ying [2021]; however, no proof is given there that such an assumption is characteristic of data distributions of practical interest. Our first theorem, derived from [Reznikov and Saff, 2016, Theorem 2.1], is that many data distributions of practical interest in deep learning are indeed examples of Definition 4.1.

Theorem 4.2.

Suppose that $\mathbb{P}$ is the normalised Riemannian volume measure of a compact, connected, embedded, $d$-dimensional submanifold of Euclidean space (possibly with boundary and corners). Then there exists $\delta:\mathbb{N}\times(0,1)\rightarrow\mathbb{R}_{>0}$, with $\delta(N,\epsilon)=O\big((\log(N\epsilon^{-1})N^{-1})^{\frac{1}{d}}\big)$, such that $\mathbb{P}$ is $\delta$-good.

In particular, the uniform distribution on the unit hypercube $[0,1]^{d}$ in $\mathbb{R}^{d}$ is $\delta_{[0,1]^{d}}$-good with

\delta_{[0,1]^{d}}(N,\epsilon)=\frac{4\,\Gamma(\frac{d}{2}+1)^{\frac{1}{d}}}{\sqrt{\pi}}\bigg(\frac{\log(N\epsilon^{-1})}{N}\bigg)^{\frac{1}{d}}, (6)

and, for $0<\gamma<1$ and $N$ sufficiently large, the uniform distribution on the $d$-dimensional unit hypersphere $S^{d}\subset\mathbb{R}^{d+1}$ is $\delta_{S^{d}}$-good with

\delta_{S^{d}}(N,\epsilon)=2^{1+\frac{1}{d}}(1-\gamma)^{-\frac{1}{2}}\bigg(\frac{\log(N\epsilon^{-1})}{N}\bigg)^{\frac{1}{d}}. (7)

Moreover, if $\mathbb{P}$ is any $\delta$-good distribution, then the pushforward of $\mathbb{P}$ by any Lipschitz function $g$ is $\|g\|_{Lip,\mathbb{P}}\delta$-good.
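As a sanity check on Definition 4.1 and Equation (6), the short numerical sketch below (ours, assuming NumPy and SciPy) draws a uniform sample from the unit square, approximates its covering radius with dense random probes, and compares it against $\delta_{[0,1]^{d}}(N,\epsilon)$; the sample size and probe count are arbitrary choices.

```python
import numpy as np
from math import gamma, log, pi, sqrt
from scipy.spatial import cKDTree

def delta_hypercube(N, eps, d):
    # Equation (6): covering-radius guarantee for the uniform distribution on [0,1]^d
    return 4 * gamma(d / 2 + 1) ** (1 / d) / sqrt(pi) * (log(N / eps) / N) ** (1 / d)

rng = np.random.default_rng(0)
d, N, eps = 2, 2000, 0.05
sample = rng.uniform(size=(N, d))
probes = rng.uniform(size=(100_000, d))                   # dense probes of supp(P)
covering_radius = cKDTree(sample).query(probes)[0].max()  # max over probes of nearest-sample distance
print(covering_radius, delta_hypercube(N, eps, d))        # covering_radius should fall below delta
```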

Theorem 4.2 says in particular that the distribution generated by any GAN whose latent space is the uniform distribution on a hypercube or on a sphere is an example of a good distribution, as is any distribution generated by an implicit neural function Sitzmann et al. [2020]. Since a high-dimensional isotropic Gaussian is close to a uniform distribution on a sphere, which is good by Theorem 4.2, we believe it likely that our theory can be strengthened to include Gaussian distributions; however, we have not attempted to prove this and leave it to future work.

Our next theorem formalises our intuition that the maximum Jacobian norm over the sample is an approximate upper bound on the Lipschitz constant of a model throughout training. Its proof is implicit in the proofs of both of our following theorems, which characterise progressive sharpening and generalisation in terms of the empirical Jacobian norm.

Theorem 4.3.

Suppose that $\mathbb{P}$ is $\delta$-good, and let $N\in\mathbb{N}$ and $0<\epsilon<1$. Then with probability at least $1-\epsilon$ over i.i.d. samples $(x_{1},\dots,x_{N})$ from $\mathbb{P}$, for any Lipschitz function $f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}$, one has:

\|f\|_{Lip,\mathbb{P}}\leq\max_{i}\big(\|Jf(x_{i})\|_{2}+V_{B(x_{i},\delta(N,\epsilon))}(Jf)\big). (8)
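The dominant term on the right of Equation (8) is the sample-maximum Jacobian spectral norm, which is also the quantity estimated in our experiments via the power method (Appendix A.1). A minimal sketch of such an estimator is given below (ours, assuming PyTorch; the model, data and iteration count are placeholders), using Jacobian-vector and vector-Jacobian products so that the full Jacobian is never materialised.

```python
import torch
from torch.autograd.functional import jvp, vjp

def jac_spectral_norm(f, x, iters=20):
    # Power iteration on J^T J to estimate ||Jf(x)||_2, the largest singular value of Jf(x).
    v = torch.randn_like(x)
    v = v / v.norm()
    for _ in range(iters):
        _, Jv = jvp(f, x, v)        # J v
        _, JTJv = vjp(f, x, Jv)     # J^T (J v)
        v = JTJv / JTJv.norm()
    _, Jv = jvp(f, x, v)
    return Jv.norm()

model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
xs = torch.randn(50, 10)
print(max(jac_spectral_norm(lambda z: model(z), x) for x in xs))   # max_i ||Jf(x_i)||_2
```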

5 Progressive sharpening

In this section we give our account of progressive sharpening. We will assume that training of a model $f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}$ is undertaken using a cost function $c:\mathbb{R}^{d^{\prime}}\times\mathbb{R}^{d^{\prime}}\rightarrow\mathbb{R}$, whose global minimum we assume to be zero, and for which we assume that there is $\alpha>0$ such that $c(z_{1},z_{2})\geq\alpha\|z_{1}-z_{2}\|^{2}_{2}$ for all $z_{1},z_{2}\in\mathbb{R}^{d^{\prime}}$. This is clearly always the case for the square cost; by Pinsker’s inequality, it also holds for the Kullback-Leibler divergence on softmax outputs, and hence also for the cross-entropy cost provided one subtracts label entropy.
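For concreteness, here is a short derivation (ours) of one admissible constant in the Kullback-Leibler case, assuming $z_{1},z_{2}$ are probability vectors (softmax outputs). Pinsker’s inequality and the comparison $\|\cdot\|_{2}\leq\|\cdot\|_{1}$ give

\mathrm{KL}(z_{2}\,\|\,z_{1})\;\geq\;2\,\mathrm{TV}(z_{1},z_{2})^{2}\;=\;\tfrac{1}{2}\|z_{1}-z_{2}\|_{1}^{2}\;\geq\;\tfrac{1}{2}\|z_{1}-z_{2}\|_{2}^{2},

so that $\alpha=\tfrac{1}{2}$ suffices in this case.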

Theorem 5.1.

Suppose that $\mathbb{P}$ is $\delta$-good, and let $N\in\mathbb{N}$ and $0<\epsilon<1$. Let $f^{*}:\text{supp}(\mathbb{P})\rightarrow\mathbb{R}^{d^{\prime}}$ be a target function and fix a real number $\ell>0$. Then with probability at least $1-\epsilon$ over i.i.d. samples $(x_{1},\dots,x_{N})$ drawn from $\mathbb{P}$, for any Lipschitz function $f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}$ satisfying $c(f(x_{i}),f^{*}(x_{i}))\leq\ell^{2}\alpha$ for all $i$, one has

\max_{i}\|Jf(x_{i})\|_{2}\geq\max_{i\neq j}\frac{\|f^{*}(x_{i})-f^{*}(x_{j})\|_{2}-2\ell}{\|x_{i}-x_{j}\|_{2}}-\max_{i}V_{B(x_{i},\delta(N,\epsilon))}(Jf). (9)
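The first term on the right of Equation (9) depends only on the training data and on the loss level reached; the sketch below (ours, assuming PyTorch, with random placeholder data) evaluates it directly. As $\ell$ shrinks with the loss, this quantity grows towards the largest difference quotient of the target function over the sample.

```python
import torch

def jacobian_lower_bound(X, Ystar, ell):
    # First term of Eq. (9): X has shape (N, d), Ystar has shape (N, d'), ell is the loss-derived slack.
    dy = torch.cdist(Ystar, Ystar) - 2 * ell        # ||f*(x_i) - f*(x_j)||_2 - 2*ell
    dx = torch.cdist(X, X)                          # ||x_i - x_j||_2
    off_diag = ~torch.eye(len(X), dtype=torch.bool)
    return (dy[off_diag] / dx[off_diag]).max()

torch.manual_seed(0)
X, Ystar = torch.randn(100, 16), torch.randn(100, 10)
print(jacobian_lower_bound(X, Ystar, ell=0.1))      # grows as ell (and hence the loss) shrinks
```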

Progressive sharpening: Theorem 5.1 tells us that any training procedure that reduces the loss over all data points will thus also increase the sample-maximum Jacobian norm from a low starting point. Invoking Ansatz 3.1, this increase in the sample-maximum Jacobian norm can be expected to cause a corresponding increase in the magnitude of the loss Hessian (see Appendix A.1): progressive sharpening.

Theorem 5.1 thus tells us that sharpening can be made more or less severe by scaling the distances between target values: indeed this is what we observe (Figure 1, see also Appendix B.1). One might also expect that for non-batch-normalised networks, scaling the inputs closer together would increase sharpening too. However, this is not the case, due to the mediating factors in Ansatz 3.1. Specifically, while it is true that such scaling increases the growth rate of the Jacobian norm, this increase in Jacobian norm does not necessarily increase sharpening, since it coincides with a decrease in the magnitude of the parameter derivatives, which are key factors in relating the model Jacobian to the loss Hessian (see Appendix C.2).

Edge of stability: Although the edge of stability mechanism explicated in Damian et al. [2023] can be expected to put some downward pressure on the model Jacobian, due to the presence of the mediating factors discussed following Ansatz 3.1 it need not cause the Jacobian to plateau in the same way that loss sharpness does. Nonetheless, this downward pressure on Jacobian is important: it is to this that we attribute the better generalisation of models trained with large learning rate, even if that learning rate is decayed towards the end of training. We validate this empirically in the next section and Appendix D.

Wide networks and linear models: It has been observed empirically that wide networks exhibit less severe sharpening, and linear (kernel) models exhibit none at all Cohen et al. [2021]. It might be thought, since these models nonetheless must increase in Jacobian norm during training by Theorem 5.1, that such models are therefore counterexamples to our explanation. They are in fact consistent: Theorem 5.1 only implies sharpening insofar as Ansatz 3.1 holds. Importantly, since the Jacobian of the first linear layer does not appear in Equation (4), an increase in the input-output Jacobian norm of a linear (kernel) model will not cause an increase in sharpness according to Ansatz 3.1. The same is true of wide networks, which approximate kernel models increasingly well as width is sent to infinity Lee et al. [2019]; thus wide networks will also be expected to sharpen less severely during training, as has been observed empirically.

Figure 1 (panels: (a) Loss; (b) Sharpness; (c) Jacobian norm, log $y$ axis): Plots of cross-entropy loss (with label entropies subtracted), loss sharpness and softmaxed-model Jacobian norm with varying degrees of label smoothing, for VGG11 trained with gradient descent on CIFAR10 with a learning rate of 0.08. Increasing label smoothing (indicated by line style) brings targets closer together, meaning less growth is necessary in the Jacobian norm to reduce loss. This coincides with less sharpening of the Hessian during training, in line with Ansatz 3.1. Similar plots at different learning rates, and with ResNet18, are in Appendix B.1. Note the log $y$ axis on the Jacobian norm panel for ease of distinction.

6 Flat minima and generalisation

We argue in this section that loss flatness implies good generalisation only via encouraging smaller Jacobian norm, through Ansatz 3.1. Indeed, our final theorem, whose proof is inspired by that of [Ma and Ying, 2021, Theorem 6], assures us rigorously that on models that fit training data, sufficiently small empirical Jacobian norm and sufficiently large sample size suffice to guarantee good generalisation, as has long been suspected in the literature Drucker and Le Cun [1992].

Theorem 6.1.

Let $\mathbb{P}$ be $\delta$-good and let $f^{*}:\text{supp}(\mathbb{P})\rightarrow\mathbb{R}^{d^{\prime}}$ be a target function, which we assume to be Lipschitz. Let $N\in\mathbb{N}$ and let $0<\epsilon<1$. Then with probability at least $1-\epsilon$ over all i.i.d. samples $(x_{1},\dots,x_{N})$ from $\mathbb{P}$, any Lipschitz function $f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}$ which coincides with $f^{*}$ on $(x_{1},\dots,x_{N})$ satisfies:

\mathbb{E}_{x\sim\mathbb{P}}\|f(x)-f^{*}(x)\|_{2}\leq\delta(N,\epsilon)\big(\|f^{*}\|_{Lip,\mathbb{P}}+\max_{i}\big(\|Jf(x_{i})\|_{2}+V_{B(x_{i},\delta(N,\epsilon))}(Jf)\big)\big). (10)
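To indicate how the bound could be instantiated, the sketch below (ours, assuming PyTorch) assembles the right-hand side of Equation (10) for a toy model, treating the covering radius $\delta$, the target Lipschitz constant and the local-variation term as externally supplied estimates; for small models the Jacobian spectral norms can be computed exactly from full Jacobians as here, while larger models would use the power-method estimator of Appendix A.1.

```python
import torch
from torch.autograd.functional import jacobian

def generalisation_bound(f, xs, delta, lip_target, local_var=0.0):
    # delta * (lip_target + max_i ||Jf(x_i)||_2 + local_var): an upper bound on the
    # right-hand side of Eq. (10) whenever local_var dominates each V_{B_i}(Jf).
    emp = max(torch.linalg.matrix_norm(jacobian(f, x), ord=2) for x in xs)
    return delta * (lip_target + emp + local_var)

model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))
xs = torch.randn(20, 8)
print(generalisation_bound(lambda z: model(z), xs, delta=0.05, lip_target=1.0))
```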

Our proof of Theorem 6.1 is similar to that of [Ma and Ying, 2021, Theorem 6]. Our bound is an improvement on that of Ma and Ying [2021] in having better decay in $N$. Additionally, in bounding in terms of the empirical Jacobian norm at convergence instead of training hyperparameters at convergence, our bound is sensitive to implicit Jacobian regularisation throughout training in a way that the bound of Ma and Ying [2021] is not. Our bound is thus in principle sensitive to the superior generalisation of networks trained with an initially high learning rate that is gradually decayed to a small learning rate (such networks experience more implicit Jacobian regularisation at the edge of stability), over networks trained with a small learning rate from initialisation. In contrast, the bound of Ma and Ying [2021] is not sensitive to this distinction.

Like [Ma and Ying, 2021, Theorem 6], however, our bound exhibits inferior $O\big((\log(N)N^{-1})^{\frac{1}{d}}\big)$ decay in $N$, where $d$ is the intrinsic dimension of the data distribution, when compared with the more common $O(N^{-\frac{1}{2}})$ rates in the literature. The reason for this is that all bounds of which we are aware in the literature, with the exception of [Ma and Ying, 2021, Theorem 6], derive from consideration of hypothesis complexity, rather than data complexity as we and Ma and Ying [2021] consider. Although the method we adopt implies this worse decay in $N$, our method has the advantage of not needing to invoke the large hypothesis complexity terms, such as Rademacher complexity and KL-divergence, that make bounds based on such terms so loose when applied to deep learning Zhang et al. [2017]. Moreover, since standard datasets in deep learning are intrinsically low dimensional Miyato et al. [2018], the $O\big((\log(N)N^{-1})^{\frac{1}{d}}\big)$ rate in our bounds can still be expected to be nontrivial in practice.

In what follows we present experimental results measuring the generalisation gap (absolute value of train loss minus test loss), sharpness and Jacobian norm at the end of training with varying degrees of various regularisation techniques. Our results confirm that while loss sharpness is neither necessary nor sufficient for good generalisation, empirical Jacobian norm consistently correlates better with generalisation than does sharpness for all regularisation measures we studied. Moreover, whenever loss sharpness does correlate with generalisation, this correlation tends to go hand-in-hand with smaller Jacobian norm, as predicted by Ansatz 3.1. Note that the Jacobians we plot were all computed in evaluation mode, meaning all BN layers had their statistics fixed: that the relationship with the train mode loss Hessian can still be expected to hold is justified in Appendix A.1.
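Sharpness here means the largest eigenvalue of the training-mode loss Hessian which, as noted in Appendix A.1, we estimate with the power method and Hessian-vector products in PyTorch. A minimal sketch of such an estimator follows (ours; the model, data and loss are placeholders, and the returned Rayleigh quotient is only an approximation of the top eigenvalue).

```python
import torch

def sharpness(model, loss_fn, X, Y, iters=20):
    # Power iteration on the loss Hessian in parameter space via Hessian-vector products.
    params = [p for p in model.parameters() if p.requires_grad]
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        loss = loss_fn(model(X), Y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        Hv = torch.autograd.grad(grads, params, grad_outputs=v)    # Hessian-vector product
        norm = torch.sqrt(sum((h ** 2).sum() for h in Hv))
        v = [h / norm for h in Hv]
    loss = loss_fn(model(X), Y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    Hv = torch.autograd.grad(grads, params, grad_outputs=v)
    return sum((h * u).sum() for h, u in zip(Hv, v))               # Rayleigh quotient

model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
X, Y = torch.randn(64, 10), torch.randint(0, 3, (64,))
print(sharpness(model, torch.nn.CrossEntropyLoss(), X, Y))
```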

Figure 2: The impact of weight decay ($x$ axis) on Jacobian norm, sharpness and generalisation gap when training ResNet18 with cross-entropy loss, measured at the end of training (90 epochs). Line style indicates learning rate. Without weight decay, SGD finds a solution with near-zero loss and vanishing sharpness, but for which Jacobian norm and generalisation gap are relatively large.

Cross-entropy and weight decay: It was observed in Granziol [2020] that when training with the cross-entropy cost, networks trained with weight decay generalised better than those trained without, despite converging to sharper minima. We confirm this observation in Figure 2. Using Ansatz 3.1 and Theorem 6.1, we are able to provide a new explanation for this fact. For the cross-entropy cost, both terms $D^{2}\gamma_{Y}$ and $D\gamma_{Y}$ appearing in the Hessian (Equation (3)) vanish at infinity in parameter space. By preventing convergence of the parameter vector to infinity in parameter space, weight decay therefore encourages convergence to a sharper minimum than would otherwise be the case. However, since the Frobenius norm of a matrix is an upper bound on the spectral norm, weight decay also implicitly regularises the Jacobian of the model, leading to better generalisation by Theorem 6.1.

Since training with weight decay is the norm in practice, and since any correlations between vanishing Hessian and generalisation will likely be unreliable, we used weight decay in all of what follows.

Learning rate and batch size: It is commonly understood that larger learning rates and smaller batch sizes serve to encourage convergence to flatter minima Keskar et al. [2017], Wu et al. [2018]. By Theorem 6.1 and Ansatz 3.1, such hyperparameter choice can be expected to lead to better generalisation. We report on experiments testing this idea in Figures 3, 4 (ResNet18 on CIFAR10) and Appendix B.2 (ResNet18 and VGG11 on CIFAR10 and CIFAR100). The results are mostly consistent with expectations, with the exception of Jacobian norm being reduced with larger batch size at the highest learning rate while generalisation gap increases. The phenomenon appears to be data-agnostic (the same occurs with ResNet18 on CIFAR100), but architecture-dependent (we do not observe this problem with VGG11 on either CIFAR10 or CIFAR100). This is not necessarily inconsistent with Theorem 6.1, which allows for the empirical Jacobian norm to be a poor estimator of the Lipschitz constant of the model via the local variation in the model’s Jacobian. Our results indicate that this local variation term is increased during (impractical) large batch training of skip-connected architectures using a constant high learning rate.

Figure 3: The impact of differing (constant) learning rates ($x$ axis) at the end of training ResNet18 on CIFAR10 with SGD (five trials). Line style indicates batch size. In line with conventional wisdom, larger learning rates typically yield smaller sharpness and generalise better. The downward pressure on sharpness and generalisation gap correlates with smaller Jacobian norms, as anticipated by Ansatz 3.1 and Theorem 6.1. Models trained with weight decay (0.0001).
Figure 4: The impact of differing batch sizes ($x$ axis) at the end of training ResNet18 on CIFAR10 with SGD (five trials). Line style indicates (constant) learning rate. At least for the smaller learning rates, increasing batch size typically increases sharpness, Jacobian norm and generalisation gap, in line with Theorem 6.1. At the largest learning rate, the relationship between Jacobian norm and generalisation gap is the inverse of what is expected, suggesting that larger learning rates tend to increase the local variation of the model’s Jacobian, causing it to underestimate the Lipschitz constant of the model (Theorem 6.1). Note the log scale on the $y$ axis of the sharpness plot for ease of distinction.

Other regularisation techniques: We also test the extent to which other common regularisation techniques, including label smoothing, data augmentation, mixup Zhang et al. [2018] and sharpness-aware minimisation (SAM) Foret et al. [2021], regularise the Hessian and Jacobian so as to result in better generalisation. We find that while the Jacobian norm is regularised at least initially across all techniques, only in SAM is sharpness regularised. This validates our proposal that Jacobian norm is a key mediating factor between sharpness and generalisation, as well as the position of Dinh et al. [2017] that flatness is not necessary for good generalisation. Note that at least some of the gradual increase in Jacobian norm for mixup and data augmentation as the $x$-axis parameter is increased is to be expected due to our measurement scheme: see the caption of Figure 5. See Appendix B.3 for experimental details and similar results for VGG11 and CIFAR100.

Figure 5: The impact of different regularisation strategies ($x$ axis) when training a ResNet18 model with batch norm on CIFAR10 (five trials). The norm of the input-output Jacobian is much better correlated with the generalisation gap than is the sharpness of the Hessian; all regularisation strategies were effective in controlling the Jacobian norm without necessarily controlling the sharpness. Line style indicates initial learning rate. Besides the regularisation strategy being studied, all experiments include weight decay (0.0005). Note that the decorrelation between Jacobian norm and generalisation gap for mixup and data augmentation as the $x$-axis parameter is increased is to be expected: in both cases, we measure the Jacobian norm over only the pure training set, which makes up a smaller percentage of the total training set and is therefore subject to less implicit regularisation during training as the $x$-axis parameter is increased. Refer to the appendix for extended results.

7 Discussion

One limitation of our work is that Ansatz 3.1 is not mathematically rigorous. A close theoretical analysis would ideally provide rigorous conditions under which one can upper bound the model Jacobian norm in terms of loss sharpness throughout training. These conditions would need to be compatible with the empirical counterexamples we give in Figure 2 and Appendix C.

A second limitation of our work is that we do not numerically evaluate our generalisation bound. Numerical evaluation of the bound would require both an estimate of the Lipschitz constant of a GAN which has generated the data, as well as the local variation of the Jacobian of the model defined in Equation (5). While existing techniques may be able to aid the first of these Fazlyab et al. [2019], we are unaware of any work studying the second, which is outside the scope of this paper. We leave this important evaluation to future work.

8 Conclusion

We proposed a new relationship between the loss Hessian of a deep neural network and the input-output Jacobian of the model. We proved several theorems allowing us to leverage this relationship in providing new explanations for progressive sharpening and the link between loss geometry and generalisation. An experimental study was conducted which validates our proposal.

References

  • Cohen et al. [2021] J. Cohen, S. Kaur, Y. Li, J. Zico Kolter, and A. Talwalkar. Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability. In ICLR, 2021.
  • Hochreiter and Schmidhuber [1997] S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
  • Keskar et al. [2017] N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, P. Tak, and P. Tang. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. In ICLR, 2017.
  • Chaudhari et al. [2017] P. Chaudhari, A. Choromanska, S. Soatto, Y. LeCun, C. Baldassi, C. Borgs, J. Chayes, L. Sagun, and R. Zecchina. Entropy-SGD: Biasing Gradient Descent into Wide Valleys. In ICLR, 2017.
  • Foret et al. [2021] P. Foret, A. Kleiner, H. Mobahi, and B. Neyshabur. Sharpness-Aware Minimization for Efficiently Improving Generalization. In ICLR, 2021.
  • Drucker and Le Cun [1992] Harris Drucker and Yann Le Cun. Improving generalization performance using double backpropagation. IEEE Transactions on Neural Networks, 3(6):991–997, 1992.
  • Bartlett et al. [2017] P. Bartlett, D. J. Foster, and M. Telgarsky. Spectrally-normalized margin bounds for neural networks. In NeurIPS, 2017.
  • Neyshabur et al. [2017] B. Neyshabur, S. Bhojanapalli, D. Mcallester, and N. Srebro. Exploring Generalization in Deep Learning. In NeurIPS, 2017.
  • Hoffman et al. [2019] J. Hoffman, D. A. Roberts, and S. Yaida. Robust Learning with Jacobian Regularization. arXiv:1908.02729, 2019.
  • Bubeck and Sellke [2021] S. Bubeck and M. Sellke. A Universal Law of Robustness via Isoperimetry. In NeurIPS, 2021.
  • Gouk et al. [2021] H. Gouk, E. Frank, B. Pfahringer, and M. J. Cree. Regularisation of neural networks by enforcing Lipschitz continuity. Machine Learning, 110:393–416, 2021.
  • Novak et al. [2018] Roman Novak, Yasaman Bahri, Daniel A Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. Sensitivity and generalization in neural networks: an empirical study. In International Conference on Learning Representations, 2018.
  • Ma and Ying [2021] C. Ma and L. Ying. On linear stability of sgd and input-smoothness of neural networks. NeurIPS, 2021.
  • Papyan [2018] V. Papyan. The Full Spectrum of Deepnet Hessians at Scale: Dynamics with SGD Training and Sample Size. arXiv:1811.07062, 2018.
  • Papyan [2019] V. Papyan. Measurements of three-level hierarchical structure in the outliers in the spectrum of deepnet hessians. In ICML, 2019.
  • Ghorbani et al. [2019] B. Ghorbani, S. Krishnan, and Y. Xiao. An Investigation into Neural Net Optimization via Hessian Eigenvalue Density. In ICML, 2019.
  • Jacot et al. [2018] A. Jacot, F. Gabriel, and C. Hongler. Neural Tangent Kernel: Convergence and Generalization in Neural Networks. In NeurIPS, pages 8571–8580, 2018.
  • Chen et al. [2020] X. Chen, C.-J. Hsieh, and B. Gong. When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations. In ICLR, 2020.
  • Wu et al. [2018] L. Wu, C. Ma, and W. E. How SGD Selects the Global Minima in Over-parameterized Learning: A Dynamical Stability Perspective. In NeurIPS, 2018.
  • Zhu et al. [2019] Z. Zhu, J. Wu, B. Yu, L. Wu, and J. Ma. The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Sharp Minima and Regularization Effects. In ICML, 2019.
  • Mulayoff et al. [2021] R. Mulayoff, T. Michaeli, and D. Soudry. The Implicit Bias of Minima Stability: A View from Function Space. In NeurIPS, 2021.
  • Dinh et al. [2017] L. Dinh, R. Pascanu, S. Bengio, and Y. Bengio. Sharp minima can generalize for deep nets. In ICML, pages 1019 –1028, 2017.
  • Lyu et al. [2022] K. Lyu, Z. Li, and S. Arora. Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction. In NeurIPS, 2022.
  • Jang et al. [2022] C. Jang, S. Lee, F. Park, and Y.-K. Noh. A reparametrization-invariant sharpness measure based on information geometry. In NeurIPS, 2022.
  • Granziol [2020] D. Granziol. Flatness is a False Friend. arXiv:2006.09091, 2020.
  • Ramasinghe et al. [2023] S. Ramasinghe, L. E. MacDonald, M. Farazi, H. Saratchandran, and S. Lucey. How much does Initialization Affect Generalization? In ICML, 2023.
  • Dziugaite and Roy [2017] G. K. Dziugaite and D. M. Roy. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data. In UAI, 2017.
  • F. He and T. Liu and D. Tao [2019] F. He and T. Liu and D. Tao. Control Batch Size and Learning Rate to Generalize Well: Theoretical and Empirical Evidence. In NeurIPS, 2019.
  • Petzka et al. [2021] Henning Petzka, Michael Kamp, Linara Adilova, Cristian Sminchisescu, and Mario Boley. Relative flatness and generalization. Advances in Neural Information Processing Systems, 34:18420–18432, 2021.
  • Gamba et al. [2023] M. Gamba, H. Azizpour, and M. Bjorkman. On the lipschitz constant of deep networks and double descent. arXiv:2301.12309, 2023.
  • Lee et al. [2023] S. Lee, J. Park, and J. Lee. Implicit jacobian regularization weighted with impurity of probability output. ICML, 2023.
  • Muthukumar and Sulam [2023] R. Muthukumar and J. Sulam. Sparsity-aware generalization theory for deep neural networks. PMLR, 195:1–31, 2023.
  • Wei and Ma [2019] C. Wei and T. Ma. Data-dependent sample complexity of deep neural networks via lipschitz augmentation. NeurIPS, 2019.
  • Bousquet and Elisseeff [2002] O. Bousquet and A. Elisseeff. Stability and Generalization. Journal of Machine Learning Research, 2:499–526, 2002.
  • Hardt et al. [2016] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In ICML, 2016.
  • Charles and Papailiopoulos [2018] Z. Charles and D. Papailiopoulos. Stability and Generalization of Learning Algorithms that Converge to Global Optima. In ICML, 2018.
  • Kuzborskij and Lampert [2018] I. Kuzborskij and C. H. Lampert. Data-Dependent Stability of Stochastic Gradient Descent. In ICML, 2018.
  • Bassily et al. [2020] R. Bassily, V. Feldman, C. Guzmán, and K. Talwar. Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses. In NeurIPS, 2020.
  • Allen-Zhu et al. [2019] Z. Allen-Zhu, Y. Li, and Z. Song. A Convergence Theory for Deep Learning via Over-Parameterization. In ICML, pages 242–252, 2019.
  • Du et al. [2019a] S. S. Du, X. Zhai, B. Poczos, and A. Singh. Gradient Descent Provably Optimizes Over-parameterized Neural Networks. In ICLR, 2019a.
  • Du et al. [2019b] S. S. Du, J. Lee, H. Li, L. Wang, and X. Zhai. Gradient Descent Finds Global Minima of Deep Neural Networks. In ICML, pages 1675–1685, 2019b.
  • MacDonald et al. [2022] L. E. MacDonald, H. Saratchandran, J. Valmadre, and S. Lucey. A global analysis of global optimisation. arXiv:2210.05371, 2022.
  • Lee and Jang [2023] S. Lee and C. Jang. A new characterization of the edge of stability based on a sharpness measure aware of batch gradient distribution. In ICLR, 2023.
  • Arora et al. [2022] S. Arora, Z. Li, and A. Panigrahi. Understanding Gradient Descent on Edge of Stability in Deep Learning. In ICML, 2022.
  • Wang et al. [2022] Z. Wang, Z. Li, and J. Li. Analyzing Sharpness along GD Trajectory: Progressive Sharpening and Edge of Stability. In NeurIPS, 2022.
  • Zhu et al. [2023] X. Zhu, Z. Wang, X. Wang, M. Zhou, and R. Ge. Understanding Edge-of-Stability Training Dynamics with a Minimalist Example. In ICLR, 2023.
  • Ahn et al. [2022] K. Ahn, J. Zhang, and S. Sra. Understanding the unstable convergence of gradient descent. In ICML, 2022.
  • Damian et al. [2023] A. Damian, E. Nichani, and J. Lee. Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability. In ICLR, 2023.
  • Reznikov and Saff [2016] A. Reznikov and E. B. Saff. The covering radius of randomly distributed points on a manifold. International Mathematics Research Notices, 2016(19):6065–6094, 2016.
  • Sitzmann et al. [2020] V. Sitzmann, J. Martel, A. Bergman, D. Lindell, and G. Wetzstein. Implicit neural representations with periodic activation functions. In NeurIPS, 2020.
  • Lee et al. [2019] J. Lee, L. Xiao, S. Schoenholtz, Y. Bahri, R. Novak, J. Sohl-Dickstein, and J. Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. NeurIPS, 2019.
  • Zhang et al. [2017] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. In ICLR, 2017.
  • Miyato et al. [2018] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral Normalization for Generative Adversarial Networks. In ICML, 2018.
  • Zhang et al. [2018] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz. mixup: Beyond Empirical Risk Minimization. In ICLR, 2018.
  • Fazlyab et al. [2019] M. Fazlyab, A. Robey, H. Hassani, M. Morari, and G. Pappas. Efficient and accurate estimation of lipschitz constants for deep neural networks. NeurIPS, 2019.
  • Li [2011] S. Li. Concise formulas for the area and volume of a hyperspherical cap. Asian Journal of Mathematics and Statistics, 4(1):66–70, 2011.

Computational resources: Exploratory experiments were conducted using a desktop machine with two Nvidia RTX A6000 GPUs. The extended experimental results in the appendix were obtained on a shared HPC system, where a total of 3500 GPU hours were used across the lifetime of the project (Nvidia A100 GPUs).

Appendix A Proofs

Here we give proofs of the theorems stated in the main body of the paper. To prove our first theorem, Theorem 4.2, we require the following differential-geometric lemma.

Lemma A.1.

Let $M$ be any compact, embedded, $d$-dimensional submanifold of $\mathbb{R}^{n}$, possibly with boundary and corners. Equip $M$ with the Riemannian structure inherited from this embedding and let $\mathbb{P}$ be the normalised volume measure of $M$ induced by its Riemannian structure. Then there exist $\kappa>0$ and $C>0$ such that

\mathbb{P}[B(x,r)]\geq C\,r^{d}

for all $r\leq\kappa$ and $x\in M$, where $B(x,r)$ denotes the closed Euclidean ball around $x$ of radius $r$.

Proof.

Since the Riemannian structure on $M$ is inherited from its embedding into $\mathbb{R}^{n}$, the embedding is bi-Lipschitz and hence there exists a constant $c$ such that

\|x-y\|_{2}\leq c^{-1}g(x,y)

for all $x,y\in M$, where $g(x,y)$ denotes the geodesic distance induced on $M$ by the Riemannian structure. It follows that for all $r>0$, one has $B_{g}(x,cr)\subset B(x,r)$, where $B_{g}(x,r)$ is the closed geodesic ball of radius $r$ about $x$. Hence

\mathbb{P}[B(x,r)]\geq\mathbb{P}(B_{g}(x,cr)) (11)

and it remains only to lower-bound $\mathbb{P}(B_{g}(x,cr))$ for $r$ sufficiently small. This we achieve by considering the geometry of $M$.

Given $x\in M$, let $\exp_{x}:T_{x}M\rightarrow M$ be the Riemannian exponential map and let $(x,0)\in T_{x}M$ denote the zero tangent vector. Recall that the injectivity radius of $M$ is the largest number $R$ for which the restriction of $\exp_{x}$ to the $R$-ball about zero in $T_{x}M$ is a diffeomorphism onto its image for all $x\in M$. The injectivity radius exists and is strictly positive by compactness of $M$. Let $\kappa>0$ be any number that is strictly smaller than $c^{-1}R$.

Let us now fix $x\in M$. Since $\exp_{x}$ is a radial isometry, it maps $B((x,0),r)$ onto $B_{g}(x,r)$ for any $r\leq\kappa$. Let $d\xi$ denote the standard Euclidean volume element in $B((x,0),r)$ and $\omega$ the volume form on $M$. Since $\exp_{x}|_{B((x,0),r)}$ is a diffeomorphism, there exists a nonvanishing, smooth function $f_{x}$ on $B((x,0),r)$ for which $\exp_{x}^{*}\omega=f_{x}\,d\xi$. Letting $c^{\prime}_{x}$ denote the minimum of $f_{x}$, we then have

\mathbb{P}[B_{g}(x,r)]=\int_{B_{g}(x,r)}\omega=\int_{B((x,0),r)}\exp_{x}^{*}\omega\geq c^{\prime}_{x}\int_{B((x,0),r)}d\xi=c^{\prime}_{x}\,c^{\prime\prime}r^{d}. (12)

Here $c^{\prime\prime}$ is the usual scaling factor one sees in the volume of a Euclidean $d$-ball. Using compactness of $M$, we define $c^{\prime}:=\inf_{x\in M}c^{\prime}_{x}>0$ and $C:=c^{\prime}\,c^{\prime\prime}\,c^{d}$. It then follows from the estimates (11) and (12) that

\mathbb{P}[B(x,r)]\geq C\,r^{d}

for all $x\in M$ and $0<r\leq\kappa$ as claimed. ∎

Proof of Theorem 4.2.

The proof we give is derived from [Reznikov and Saff, 2016, Theorem 2.1]; however, we wish to be more precise with our constants than is the case in Reznikov and Saff [2016]. We thus give a full proof here.

First, using the fact that $\text{supp}(\mathbb{P})$ satisfies the hypotheses of Lemma A.1, fix $C>0$ and $\kappa>0$ for which $\mathbb{P}[B(x,r)]\geq C\,r^{d}$ for all $(x,r)\in M\times[0,\kappa]$, and define $\Phi(r):=C\,r^{d}$. This function $\Phi$ will play a key role in the proof.

Now fix $\epsilon>0$, let $X_{N}:=(x_{1},\dots,x_{N})$ be a set of i.i.d. points sampled from $\mathbb{P}$, and let $\rho(X_{N}):=\sup_{x}\min_{i}\|x-x_{i}\|$ denote the covering radius of $X_{N}$. Suppose that $\rho(X_{N})>2t$ for some real number $t$, and let $E_{t}$ be any maximal set of points in $\text{supp}(\mathbb{P})$ whose distinct pairwise distances are all bounded below by $t$. Then we can find $x\in E_{t}$ such that $B(x,t)\cap X_{N}=\emptyset$. Indeed, by hypothesis on $\rho(X_{N})$, there exists $y\in\text{supp}(\mathbb{P})$ such that $X_{N}\cap B(y,2t)=\emptyset$. There also exists $x\in E_{t}$ such that $\|x-y\|<t$, since otherwise we could add $y$ to $E_{t}$ and contradict the maximality of $E_{t}$. One then has $B(x,t)\subset B(y,2t)$, so that $B(x,t)$ does not intersect $X_{N}$.

It has thus been shown that

\mathbb{P}[\rho(X_{N})>2t]\leq\mathbb{P}[\exists x\in E_{t}:B(x,t)\cap X_{N}=\emptyset]. (13)

Since $\mathbb{P}(B(x,t))\geq\Phi(t)$ for all $x$, $(1-\Phi(t))$ is an upper bound on the probability that a randomly selected $x^{\prime}$ will not lie within $t$ of a given $x$. Letting $\#(E_{t})$ denote the cardinality of $E_{t}$, the independence of the $x_{i}$ therefore permits an upper bound

\mathbb{P}[\rho(X_{N})>2t]\leq\#(E_{t})\,(1-\Phi(t))^{N}. (14)

The cardinality of $E_{t}$ can be further bounded via

1\geq\sum_{x\in E_{t}}\mathbb{P}[B(x,t)]\geq\#(E_{t})\Phi(t), (15)

so that one has

\mathbb{P}[\rho(X_{N})>2t]\leq\Phi(t)^{-1}(1-\Phi(t))^{N}. (16)

Using the invertibility of $\Phi$, one now sets $\Phi(t)=\alpha\log(N)N^{-1}$, with $\alpha=(1-\log_{N}(\epsilon))$. The Maclaurin series for $\log(1-y)$ with $y=\alpha\log(N)N^{-1}$ reveals that

(1-\alpha\log(N)N^{-1})^{N}\leq N^{-\alpha}, (17)

so that the bound becomes

\mathbb{P}[\rho(X_{N})>2\Phi^{-1}(\alpha\log(N)N^{-1})]\leq\frac{1}{\alpha}\frac{N}{\log(N)}N^{-\alpha}\leq N^{1-\alpha}. (18)

Substituting $\alpha=(1-\log_{N}(\epsilon))$ yields

\mathbb{P}\bigg[\rho(X_{N})>2\Phi^{-1}\bigg(\frac{\log(N\epsilon^{-1})}{N}\bigg)\bigg]\leq\epsilon, (19)

from which the general claim follows.

The result is specialised to the unit hypercube $[0,1]^{d}$ simply by observing that in this case, one can take

\Phi(r):=\frac{\pi^{d/2}}{2^{d}\Gamma(d/2+1)}r^{d}, (20)

which is the volume of the intersection of the ball of radius $r$, centered at a corner, with the cube.

For the unit hypersphere $S^{d}\subset\mathbb{R}^{d+1}$, fix $x\in S^{d}$ and consider the Euclidean ball $B(x,r)\subset\mathbb{R}^{d+1}$ centered at $x$. It is clear that the intersection $B_{r}:=S^{d}\cap B(x,r)$ is a geodesic ball for the inherited Riemannian metric on $S^{d}$. In particular, for $r<2$, $B_{r}$ is a hyperspherical cap. Elementary trigonometry shows that the angle subtended by the line connecting $x$ to $0$ and the line connecting any point on the boundary of $B_{r}$ to $0$ is given by $2\sin^{-1}(r/2)$. It then follows from [Li, 2011, p. 67] and the elementary trigonometric identity $\sin(2\theta)=2\sin(\theta)\cos(\theta)$ that one has

\mathbb{P}[B(x,r)]=C^{-1}\frac{\pi^{\frac{d+1}{2}}}{\Gamma\big(\frac{d+1}{2}\big)}I_{r^{2}-r^{4}/4}\bigg(\frac{d}{2},\frac{1}{2}\bigg),

where

I_{x}(a,b)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\int_{0}^{x}t^{a-1}(1-t)^{b-1}\,dt

is the regularised incomplete beta function and $C=2\pi^{\frac{d}{2}}\Gamma(d/2)^{-1}$ is the volume of the unit $d$-sphere. For any $0<t<1$ one has $(1-t)^{\frac{1}{2}-1}\geq 1$, thus one has the estimate

I_{r^{2}-r^{4}/4}\bigg(\frac{d}{2},\frac{1}{2}\bigg)\geq\frac{\Gamma\big(\frac{d+1}{2}\big)}{\Gamma\big(\frac{d}{2}\big)\Gamma\big(\frac{1}{2}\big)}(r^{2}-r^{4}/4)^{\frac{d}{2}}.

Now, fixing $0<\gamma<1$ and assuming $r^{2}/4<\gamma$, we therefore have

\mathbb{P}(B(x,r))\geq\frac{1}{2}(1-\gamma)^{\frac{d}{2}}r^{d}.

We can then use $\Phi(r)=\frac{1}{2}(1-\gamma)^{\frac{d}{2}}r^{d}$ in defining $\delta(N,\epsilon)=2\Phi^{-1}(\log(N\epsilon^{-1})N^{-1})$ as in Equation (19), provided only that the assumption $\delta^{2}/4<\gamma$ is satisfied. Setting $\delta=2^{1+\frac{1}{d}}(1-\gamma)^{-\frac{1}{2}}(\log(N\epsilon^{-1})N^{-1})^{\frac{1}{d}}$, this requirement reduces to having $N$ sufficiently large that $2^{\frac{2}{d}}(\log(N\epsilon^{-1})N^{-1})^{\frac{2}{d}}<\gamma-\gamma^{2}$.

For the final result, when pushing forward a good distribution by a Lipschitz function, the stated result follows from the Lipschitz identity. ∎

Proof of Theorem 4.3.

Fix NN\in\mathbb{N} and 0<ϵ<10<\epsilon<1. By δ\delta-goodness of the data distribution, with probability at least 1ϵ1-\epsilon over i.i.d. samples (x1,,xN)(x_{1},\dots,x_{N}), the balls {B(xi,δ(N,ϵ))}i=1N\{B(x_{i},\delta(N,\epsilon))\}_{i=1}^{N} cover supp()\text{supp}(\mathbb{P}). For notational convenience, denote B(xi,δ(N,ϵ))B(x_{i},\delta(N,\epsilon)) by simply BiB_{i}. Conditioning on the event that the BiB_{i} cover supp()\text{supp}(\mathbb{P}), one has

\begin{aligned}
\|f\|_{Lip,\mathbb{P}} &\leq \sup_{x\in\bigcup_{i=1}^{N}B_{i}}\|Jf(x)\|_{2}=\max_{i}\,\sup_{x\in B_{i}}\|Jf(x)\|_{2}\\
&\leq \max_{i}\,\sup_{x\in B_{i}}\|Jf(x_{i})-Jf(x_{i})+Jf(x)\|_{2}\\
&\leq \max_{i}\bigg(\|Jf(x_{i})\|_{2}+\sup_{x\in B_{i}}\|Jf(x)-Jf(x_{i})\|_{2}\bigg)\\
&\leq \max_{i}\big(\|Jf(x_{i})\|_{2}+V_{B_{i}}(Jf)\big).
\end{aligned}

The result follows. ∎

Proof of Theorem 5.1.

Recall that by hypothesis on $c$, one has $c(z_{1},z_{2})\geq\alpha\|z_{1}-z_{2}\|_{2}^{2}$ for all $z_{1},z_{2}$ in the domain of $c$. Thus, since $c(f(x_{i}),f^{*}(x_{i}))<\ell^{2}\alpha$ for all $i$, we have

\|f(x_{i})-f^{*}(x_{i})\|_{2}\leq\ell (21)

for all $i$. That is, for each $i$, $f(x_{i})$ is contained in the $\ell$-ball centred on the target value $f^{*}(x_{i})$.

Since the smallest distance between any two $f(x_{i})$ and $f(x_{j})$ is realised when these points lie on the straight line between $f^{*}(x_{i})$ and $f^{*}(x_{j})$, one has the lower bound

\|f(x_{i})-f(x_{j})\|_{2}\geq\|f^{*}(x_{i})-f^{*}(x_{j})\|_{2}-2\ell (22)

for all $i,j$. Invoking the Lipschitz identity, and using the fact that $x_{i}=x_{j}$ with probability zero for $i\neq j$, then gives

\|f\|_{Lip,\mathbb{P}}\geq\max_{i\neq j}\frac{\|f^{*}(x_{i})-f^{*}(x_{j})\|_{2}-2\ell}{\|x_{i}-x_{j}\|_{2}}. (23)

Conditioning now on the event that $\text{supp}(\mathbb{P})\subset\bigcup_{i=1}^{N}B(x_{i},\delta(N,\epsilon))$, which occurs with probability at least $1-\epsilon$ by $\delta$-goodness of $\mathbb{P}$, and invoking Theorem 4.3 together with the subadditivity of the component-wise maximum function, gives the result. ∎

Proof of Theorem 6.1.

Fix $N\in\mathbb{N}$ and $0<\epsilon<1$. Since $\mathbb{P}$ is $\delta$-good, with probability at least $1-\epsilon$ over i.i.d. samples $(x_{1},\dots,x_{N})$, for each $x\in\text{supp}(\mathbb{P})$ there exists $j$ such that $\|x-x_{j}\|_{2}\leq\delta(N,\epsilon)$. Conditioning on this event and fixing $x\in\text{supp}(\mathbb{P})$ with corresponding nearby $x_{j}$, the triangle inequality yields

\|f(x)-f^{*}(x)\|_{2}\leq\|f(x)-f(x_{j})\|_{2}+\|f(x_{j})-f^{*}(x_{j})\|_{2}+\|f^{*}(x_{j})-f^{*}(x)\|_{2}, (24)

which gives

\|f(x)-f^{*}(x)\|_{2}\leq\delta(N,\epsilon)\big(\|f\|_{Lip,\mathbb{P}}+\|f^{*}\|_{Lip}\big) (25)

after applying the Lipschitz identities and invoking the hypothesis that $f$ and $f^{*}$ agree on the training data. The proof of Theorem 4.3 now applies to show that, conditioned on this same event, one has

\|f(x)-f^{*}(x)\|_{2}\leq\delta(N,\epsilon)\big(\|f^{*}\|_{Lip}+\max_{i}\big(\|Jf(x_{i})\|_{2}+V_{B_{i}}(Jf)\big)\big). (26)

Taking the $\mathbb{P}$-expectation of both sides yields the result. ∎

A.1 A note on Jacobian computations

In our experiments, the Jacobian norm and sharpness are estimated using the power method and the Hessian/Jacobian-vector product functions in PyTorch. Recalling that we use $\mathbb{R}^{d\times N}$ to denote the set of data matrices, consisting of a batch of data column vectors concatenated together, there are two different kinds of functions $f:\mathbb{R}^{d_{0}\times N}\rightarrow\mathbb{R}^{d_{1}\times N}$ whose Jacobians we would like to compute. The first of these are functions defined columnwise by some other map $g:\mathbb{R}^{d_{0}}\rightarrow\mathbb{R}^{d_{1}}$: that is,

f(X)_{j}=g(X_{j}), (27)

where lower indices index columns, so that $X_{j}\in\mathbb{R}^{d_{0}}$ for all $j$. In this case, flattening (columns first) shows that $Jf(X)$ is just a block-diagonal matrix whose $j^{th}$ block is $Jg(X_{j})$. Hence $\|Jf(X)\|_{2}=\max_{j}\|Jg(X_{j})\|_{2}$. For a network in evaluation mode, every layer Jacobian is of this form.
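As an illustration of this estimation procedure, the following is a minimal sketch (ours, not the supplementary code) of power iteration on $J^{T}J$ using PyTorch's Jacobian-vector and vector-Jacobian products:

```python
import torch
from torch.autograd.functional import jvp, vjp

def jacobian_spectral_norm(f, x, iters=20):
    """Estimate the largest singular value of Jf(x) by power iteration on
    J^T J, using only Jacobian-vector and vector-Jacobian products."""
    v = torch.randn_like(x)
    v = v / v.norm()
    sigma = 0.0
    for _ in range(iters):
        _, u = jvp(f, x, v)             # u = Jf(x) v
        _, w = vjp(f, x, u)             # w = Jf(x)^T u = (J^T J) v
        sigma = w.norm().sqrt().item()  # ||(J^T J) v||^{1/2} -> sigma_max as v converges
        v = w / w.norm()
    return sigma
```

For a layer acting columnwise as above, the block-diagonal structure means this can equivalently be applied per column, taking the maximum over columns.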

On the other hand, for a network in train mode (as is the case when evaluating the Hessian of the loss), batch normalisation (BN) layers do not have this form. BN layers in train mode are nonlinear transformations defined by

bn(X):=\frac{X-\mathbb{E}X}{(\epsilon+\sigma^{2}X)^{\frac{1}{2}}}, (28)

where $\mathbb{E}$ and $\sigma^{2}$ denote mean and variance computed across the column index (again we omit the parameters as they are not essential to the discussion). The skeptical reader would rightly question whether the Jacobian of BN in train mode (to which Equation (4) applies) is sufficiently close to the Jacobian of BN in evaluation mode (to which Theorem 6.1 applies) for Ansatz 3.1 to be valid for BN networks. Our next and final theorem says that for sufficiently large datasets, this is not a problem.

Theorem A.2.

Let $bn_{T}$ and $bn_{V}$ denote a batch normalisation layer in train and evaluation mode respectively. Given a data matrix $X\in\mathbb{R}^{d\times N}$, assume the coordinates of $X$ are $O(1)$. Then $Jbn_{T}=Jbn_{V}+O(N^{-1})$.

Proof of Theorem A.2.

Given a data matrix $X$, let its row-wise mean and variance, thought of as constant vectors, be denoted by $\mathbb{E}$ and $\sigma^{2}$. The evaluation mode BN map $bn_{V}:\mathbb{R}^{d\times N}\rightarrow\mathbb{R}^{d\times N}$ is simply an affine transformation, with Jacobian coordinates given by

\frac{\partial(bn_{V})^{i}_{j}}{\partial x^{k}_{l}}=\frac{\delta^{i}_{k}\delta^{j}_{l}}{(\epsilon+\sigma^{2})^{\frac{1}{2}}}. (29)

By [MacDonald et al., 2022, Lemma 8.3], the train mode BN map $bn_{T}:\mathbb{R}^{d\times N}\rightarrow\mathbb{R}^{d\times N}$ can be thought of as the composite $v\circ m$, where $m,v:\mathbb{R}^{d\times N}\rightarrow\mathbb{R}^{d\times N}$ are defined by

m(X):=X-\frac{1}{N}X1_{N\times N} (30)

and

v(Y):=(\epsilon+N^{-1}\|Y\|^{2}_{row})^{-\frac{1}{2}}Y (31)

respectively. Here $1_{N\times N}$ denotes the $N\times N$ matrix of 1s, and $\|Y\|_{row}$ denotes the vector of row-wise norms of $Y$.

As in [MacDonald et al., 2022, Lemma 8.3], one has

\frac{\partial v^{i}_{j}}{\partial y^{k}_{l}}(Y)=\frac{\delta^{i}_{k}}{(\epsilon+N^{-1}\|Y^{i}\|^{2})^{\frac{1}{2}}}\bigg(\delta^{j}_{l}-\frac{N^{-1}y^{i}_{l}y^{i}_{j}}{(\epsilon+N^{-1}\|Y^{i}\|^{2})}\bigg),\qquad\frac{\partial m^{i}_{j}}{\partial x^{k}_{l}}(Y)=\delta^{i}_{k}(\delta^{j}_{l}-N^{-1}) (32)

for any data matrix $Y$. Thus, invoking the chain rule and simplifying, we get

\frac{\partial(bn_{T})^{i}_{j}}{\partial x^{k}_{l}}(X)=\frac{\delta^{i}_{k}\delta^{j}_{l}}{(\epsilon+\sigma^{2})^{\frac{1}{2}}}-\frac{1}{N}\frac{\delta^{i}_{k}(x^{i}_{j}-\mathbb{E}^{i})(x^{i}_{l}-\mathbb{E}^{i})}{(\epsilon+\sigma^{2})^{\frac{3}{2}}}+O(N^{-1}). (33)

Since the coordinates of $X$ are $O(1)$ by hypothesis, the result follows. ∎
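The statement can also be sanity-checked numerically; the following is a minimal sketch (ours, using torch.autograd.functional.jacobian on a parameter-free BN map, not the paper's code) in which the largest entrywise difference between the two Jacobians is observed to shrink roughly like $1/N$:

```python
import torch
from torch.autograd.functional import jacobian

def bn_train(X, eps=1e-5):
    # train-mode BN without affine parameters: batch statistics depend on X
    mu = X.mean(dim=1, keepdim=True)
    var = X.var(dim=1, unbiased=False, keepdim=True)
    return (X - mu) / torch.sqrt(var + eps)

def bn_eval(X, mu, var, eps=1e-5):
    # eval-mode BN: the statistics are treated as constants
    return (X - mu) / torch.sqrt(var + eps)

torch.manual_seed(0)
for N in [10, 50, 250]:
    X = torch.randn(3, N)  # data matrix with d = 3 rows and N columns
    mu = X.mean(dim=1, keepdim=True)
    var = X.var(dim=1, unbiased=False, keepdim=True)
    JT = jacobian(bn_train, X)
    JV = jacobian(lambda Z: bn_eval(Z, mu, var), X)
    # the maximum entrywise discrepancy shrinks as N grows
    print(N, (JT - JV).abs().max().item())
```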

Appendix B Additional experimental results and details

B.1 Progressive sharpening

We trained ResNet18 and VGG11 (superficially modifying https://github.com/kuangliu/pytorch-cifar/blob/master/models/resnet.py and https://github.com/chengyangfu/pytorch-vgg-cifar10/blob/master/vgg.py respectively, with VGG11 in addition having its dropout layers removed, but BN layers retained) with full batch gradient descent on CIFAR10 with learning rates 0.08, 0.04 and 0.02, using label smoothing values of 0.0, 0.5 and 0.75. The models with no label smoothing were trained to 99 percent accuracy, and the number of iterations required to do this was then used as the number of iterations to train the models with nonzero label smoothing. Sharpness and Jacobian norm were computed every 5, 10, or 20 iterations, depending on the number of iterations required for convergence of the non-label-smoothed model. Five trials were conducted in each case, with mean and standard deviation plotted. Line style indicates degree of label smoothing.

The dataset was standardised so that each vector component has zero mean and unit standard deviation. Full batch gradient descent was approximated by averaging the gradients over 10 “ghost batches” of size 5000 each, as in Cohen et al. [2021]; this is justified for BN networks by Theorem A.2. The Jacobian norm we plot is for the Jacobian of the softmaxed model, as required by our derivation of Equation (9). The sharpness and Jacobian norms were estimated using a randomly chosen subset of 2500 data points. In all cases, the smoother labels are associated with less severe increases in Jacobian norm and sharpness during training, as predicted by Equation (9). Refer to fullbatch.py in the supplementary material for the code we used to run these experiments.
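For concreteness, a minimal sketch of one ghost-batch approximation to a full-batch gradient descent step (ours, not the fullbatch.py code; it assumes equally sized ghost batches) is as follows. Note that BN statistics are computed per ghost batch, which Theorem A.2 suggests is benign at this batch size.

```python
import torch

def full_batch_step(model, loss_fn, ghost_batches, lr):
    """One (approximate) full-batch GD step: average gradients over equally
    sized ghost batches, then apply a single parameter update."""
    model.zero_grad()
    for X, y in ghost_batches:                      # e.g. 10 ghost batches of 5000 examples
        loss = loss_fn(model(X), y) / len(ghost_batches)
        loss.backward()                             # gradients accumulate in .grad
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad
```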

Figure 6: VGG11, learning rate 0.08. Panels: (a) loss, (b) sharpness, (c) Jacobian norm.
Figure 7: VGG11, learning rate 0.04. Panels: (a) loss, (b) sharpness, (c) Jacobian norm.
Figure 8: VGG11, learning rate 0.02. Panels: (a) loss, (b) sharpness, (c) Jacobian norm.
Figure 9: ResNet18, learning rate 0.08. Panels: (a) loss, (b) sharpness, (c) Jacobian norm.
Figure 10: ResNet18, learning rate 0.04. Panels: (a) loss, (b) sharpness, (c) Jacobian norm.
Figure 11: ResNet18, learning rate 0.02. Panels: (a) loss, (b) sharpness, (c) Jacobian norm.

B.2 Batch size and learning rate

We trained a VGG11 and ResNet18 (superficially modifying https://github.com/chengyangfu/pytorch-vgg-cifar10/blob/master/vgg.py and https://github.com/kuangliu/pytorch-cifar/blob/master/models/resnet.py respectively, additionally removing dropout and retaining BN in the VGG11) on CIFAR10 and CIFAR100 at constant learning rates 0.1, 0.01, and 0.001, with batch sizes of 64, 128, 256, 512 and 1024. The data were normalised by their RGB-channelwise mean and standard deviation, computed across all pixel coordinates and all elements of the training set. Five trials of each were conducted. Training loss on the full data set was computed every 100 iterations, and training was terminated when the average of the most recent 10 such losses was smaller than 0.01 for CIFAR10 and 0.02 for CIFAR100. Weight decay regularisation of 0.0001 was used throughout. Refer to batch_lr.py in the supplementary material.
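A minimal sketch of the termination rule (ours; the exact logic is in batch_lr.py):

```python
from collections import deque

# the full-dataset training loss is evaluated every 100 iterations and appended here
recent_losses = deque(maxlen=10)

def should_stop(recent_losses, threshold):
    # stop once the average of the 10 most recent full-dataset losses falls
    # below the threshold (0.01 for CIFAR10, 0.02 for CIFAR100)
    return (len(recent_losses) == recent_losses.maxlen
            and sum(recent_losses) / len(recent_losses) < threshold)
```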

At the termination of training, training loss and test loss were computed, with estimates of the Jacobian norm and sharpness computed using a randomly selected subset of size 2000 from the training set. The same seed was used to generate the parameters for a given trial as was used to generate the random subset of the training set.

We see that, on the whole, the claim that smaller batch sizes and larger learning rates lead to flatter minima and better generalisation is supported by our experiments, with Jacobian norm in particular being well-correlated with the generalisation gap, as anticipated by Theorem 6.1 and Ansatz 3.1. As noted in the main body of the paper, the notable exception to this is ResNet models trained with the largest learning rate, where the Jacobian norm appears to underestimate the Lipschitz constant of the model and hence the generalisation gap. We speculate that this is because large learning rate training of skip-connected models leads to a larger Lipschitz constant of the Jacobian itself, making the maximum Jacobian norm over training samples a poorer estimate of the Lipschitz constant of the model, and hence of generalisation (Theorems 4.3 and 6.1).

Figure 12: ResNet18 on CIFAR10, learning rate on the $x$ axis. Line style indicates batch size.
Figure 13: ResNet18 on CIFAR100, learning rate on the $x$ axis. Line style indicates batch size.
Figure 14: VGG11 on CIFAR10, learning rate on the $x$ axis. Line style indicates batch size.
Figure 15: VGG11 on CIFAR100, learning rate on the $x$ axis. Line style indicates batch size.
Figure 16: ResNet18 on CIFAR10, batch size on the $x$ axis. Line style indicates learning rate.
Figure 17: ResNet18 on CIFAR100, batch size on the $x$ axis. Line style indicates learning rate.
Figure 18: VGG11 on CIFAR10, batch size on the $x$ axis. Line style indicates learning rate.
Figure 19: VGG11 on CIFAR100, batch size on the $x$ axis. Line style indicates learning rate.

B.3 Extended practical results

Here we provide the experimental details and additional experimental results for Section 6. In addition to the generalisation gap, sharpness, and (input-output) Jacobian norm, these plots include the operator norm of the Gauss-Newton matrix, the loss on the training set, and the accuracy on the validation set. Note that the figures in this section present the final metrics at the conclusion of training as a function of the regularisation parameter, and hence progressive sharpening (as a function of time) will not be observed directly. Rather, the purpose is to examine the impact of different regularisation strategies on the Hessian and model Jacobian.

In all cases, the model Jacobian is correlated with the generalisation gap, whereas the Hessian is often uncorrelated. Moreover, the norm of the Gauss-Newton matrix is (almost) always very similar to that of the Hessian, in line with previous empirical observations Papyan [2018, 2019], Cohen et al. [2021]. The results support the arguments that (i) the model Jacobian is strongly related to the generalisation gap, and (ii) while the Hessian is connected to the model Jacobian, and forcing the Hessian to be sufficiently small likewise appears to force the Jacobian to be smaller (see the SAM plots), the Hessian is also influenced by other factors and can be large even when the Jacobian is not (see the other plots). All of these phenomena are consistent with our ansatz.

The figures in this section present results for {CIFAR10, CIFAR100} $\times$ {VGG11, ResNet18} $\times$ {batch-norm, no batch-norm}. Line colour and style denote different initial learning rates. The models were trained for 90 epochs with a minibatch size of 128 examples using Polyak momentum of 0.9. The learning rate was decayed by a factor of 10 after 50 and 80 epochs. Models were trained using the softmax cross-entropy loss with weight decay of 0.0005. (If weight decay were disabled, we would often observe the Hessian to collapse to zero as the training loss went to zero, due to the vanishing second derivative of the cost function in this region [Cohen et al., 2021, Appendix C]. This would make the correlation between sharpness and generalisation gap even worse.) Refer to train_jax.py and slurm/launch_wd_sweep.sh for the default configuration and experiment configuration, respectively.

For each regularisation strategy, the degree of regularisation is parametrised as follows. Label smoothing considers smoothed labels $(1-\alpha)y+\alpha(1/n)$ with $\alpha\in[0,1]$. Mixup takes a convex combination of two examples $(1-\theta)(x,y)+\theta(x^{\prime},y^{\prime})$. The coefficient $\theta$ is drawn from a symmetric beta distribution $\theta\sim\operatorname{Beta}(\beta,\beta)$ with $\beta\in[0,\infty)$, which corresponds to a Bernoulli distribution when $\beta=0$ and a uniform distribution when $\beta=1$. Data augmentation modifies the inputs $x$ with probability $p\in[0,1]$; the image transforms which we adopt are the standard choices for CIFAR (four-pixel padding, random crop, random horizontal flip). Sharpness Aware Minimisation (SAM) restricts the distance $\rho$ between the current parameter vector and that which is used to compute the update, which simplifies to SGD when $\rho=0$.
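To make these parametrisations concrete, a minimal sketch of the label smoothing and mixup transformations (ours, with hypothetical helper names; not the experiment code) is:

```python
import numpy as np

def smooth_labels(y_onehot, alpha):
    # label smoothing: (1 - alpha) * y + alpha * (1 / n), with n the number of classes
    n = y_onehot.shape[-1]
    return (1.0 - alpha) * y_onehot + alpha / n

def mixup(x1, y1, x2, y2, beta):
    # mixup: convex combination (1 - theta) * (x, y) + theta * (x', y'),
    # with theta ~ Beta(beta, beta); beta = 0 degenerates to a fair coin flip
    theta = np.random.beta(beta, beta) if beta > 0 else float(np.random.randint(2))
    return (1.0 - theta) * x1 + theta * x2, (1.0 - theta) * y1 + theta * y2
```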

For all results presented in this section, the loss and generalisation gap were computed using “clean” training examples; that is, without mixup or data augmentation in the respective experiments (despite this requiring an additional evaluation of the model for each step). The Hessian, Gauss-Newton and input-output Jacobian norms were similarly computed using a random batch of 1000 clean training examples. The losses and matrix norms in this section similarly exclude weight decay. Label smoothing was interpreted as a modification of the loss function rather than the training distribution, and therefore was included in the calculation of the losses and matrix norms. Batch-norm layers were used in “train mode” (statistical moments computed from batch) for the Hessian and Gauss-Newton matrix, and in “eval mode” (statistical moments are constants) for the Jacobian matrix, although this difference was observed to have negligible effect on the Jacobian norm.

B.3.1 CIFAR10

Figure 20: VGG11 without batch-norm on CIFAR10.
Figure 21: VGG11 with batch-norm on CIFAR10.
Figure 22: ResNet18 without batch-norm on CIFAR10.
Figure 23: ResNet18 with batch-norm on CIFAR10.

B.3.2 CIFAR100

Figure 24: VGG11 without batch-norm on CIFAR100.
Figure 25: VGG11 with batch-norm on CIFAR100.
Figure 26: ResNet18 without batch-norm on CIFAR100.
Figure 27: ResNet18 with batch-norm on CIFAR100.

Appendix C On mediating factors in the ansatz

In this section we provide classification and regression experiments to show that the relationship between loss curvature and input-output Jacobians is not always simple. Recall again Equation (4):

C\,DF_{X}\,DF_{X}^{T}\,C^{T}=C\,\bigg(\sum_{l=1}^{L}\big(Jf_{L}\cdots Jf_{l+1}\,Df_{l}\,Df_{l}^{T}\,Jf_{l+1}^{T}\cdots Jf_{L}^{T}\big)\bigg)C^{T}. (34)

Due to the presence of the parameter derivatives $Df_{l}$ and the square root $C$ of the Hessian of the cost function, as well as the absence of the first layer Jacobian in Equation (4), it is not always true that the magnitude of the Hessian and that of the model’s input-output Jacobian are correlated.

C.1 The cost function

We trained a VGG11, again using https://github.com/chengyangfu/pytorch-vgg-cifar10/blob/master/vgg.py with dropout layers removed and BN layers retained, on CIFAR10 using SGD with a batch size of 128 and differing degrees of label smoothing. The data were standardised so that each RGB channel has zero mean and unit standard deviation over all pixel coordinates and training samples. In Figure 28 we plot the sharpness and Jacobian norm every five iterations in the first epoch, observing exactly the same relationship between label smoothing, Jacobian norm and sharpness as predicted in Section 5. Refer to vgg_sgd.py in the supplementary material for the code to run these experiments.

Figure 28: Effect of label smoothing on sharpness, Jacobian norm and output Frobenius norm during the first epoch of SGD. Panels: (a) sharpness, (b) Jacobian norm, (c) output norm. Line style indicates label smoothing. Exactly as in the full batch GD case, the smoother labels are associated with less severe increase of the Jacobian and less severe progressive sharpening.

Zooming out, however, to the full training run over 90 epochs, Figure 29 shows that the Hessian ultimately behaves markedly differently.

Figure 29: Effect of label smoothing on sharpness, Jacobian norm and output Frobenius norm during 90 epochs of SGD. Panels: (a) sharpness, (b) Jacobian norm, (c) output norm. Line style indicates label smoothing. When the output norm grows too large, the decay of the $C$ terms in Equation (4) begins to overtake the growth in Jacobian norm, ultimately causing the sharpness with unsmoothed labels to collapse to zero, whereas the sharpness values for smoothed labels plateau at nonzero values. Note also that the Jacobian norm presented is that of the softmaxed model: the growth in output norm therefore also drives the Jacobian norm to zero.

The reason for this is that the norm of the network output grows to infinity in the unsmoothed case (Figure 29), and this growth corresponds to a vanishing of the $C$ term in Equation (4) associated with the cross-entropy cost [Cohen et al., 2021, Appendix C]. Note that this growth is also to blame for the eventual collapse of the norm of the Jacobian of the softmaxed model, since the derivative of the softmax also vanishes at infinity. This vanishing of the Jacobian should not be interpreted as saying that the Lipschitz constant of the model has collapsed (if this were the case the model would not be able to fit the data), but rather that the maximum Jacobian norm over training samples has ceased to be a good approximation of the Lipschitz constant of the model, due to the Jacobian itself having a large Lipschitz constant (Theorem 4.3).
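This vanishing of the $C$ term can be seen directly: the Hessian of the softmax cross-entropy cost with respect to the logits is $\operatorname{diag}(p)-pp^{T}$, where $p$ is the softmax output, and this matrix decays as the logits grow and $p$ saturates. A minimal numerical illustration (ours):

```python
import torch

# Hessian of softmax cross-entropy w.r.t. the logits z is diag(p) - p p^T,
# where p = softmax(z); its spectral norm shrinks as the logits grow.
for scale in [1.0, 5.0, 25.0]:
    z = scale * torch.tensor([3.0, -1.0, -2.0])
    p = torch.softmax(z, dim=0)
    H = torch.diag(p) - torch.outer(p, p)
    print(scale, torch.linalg.matrix_norm(H, ord=2).item())
```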

C.2 The parameter derivatives

Recall Theorem 5.1. We have already shown that decreasing the distance between labels decreases the severity of Jacobian growth and, in line with Ansatz 3.1, also reduces the severity of progressive sharpening. It seems that one could also manipulate this severity by increasing the distance between input data points, for instance by scaling all data by a constant. We test this by training simple, fully-connected, three-layer networks of width 200 with gradient descent on the first 5000 data points of CIFAR10. The data are standardised to have componentwise zero mean and unit standard deviation (measured across the whole dataset). The networks are initialised using the uniform distribution on the interval with endpoints $\pm 1/\sqrt{\text{in features}}$ (default PyTorch initialisation) and trained with a learning rate of $0.2$ for 300 iterations using the cross-entropy cost. Both ReLU and tanh activations are considered. Refer to small_network.py in the supplementary material for the code we used to run these experiments.
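A minimal sketch of this setup (ours; the exact code is small_network.py) is:

```python
import torch
import torch.nn as nn

def make_mlp(activation, width=200, in_dim=3 * 32 * 32, n_classes=10):
    # three-layer fully-connected network with default PyTorch initialisation
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(in_dim, width), activation(),
        nn.Linear(width, width), activation(),
        nn.Linear(width, n_classes),
    )

def train_scaled(X, y, scale, activation=nn.ReLU, lr=0.2, iters=300):
    # full-batch gradient descent on inputs scaled by a constant factor;
    # X is a float tensor of shape (5000, 3, 32, 32) and y a tensor of class labels
    model = make_mlp(activation)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(iters):
        opt.zero_grad()
        loss_fn(model(scale * X), y).backward()
        opt.step()
    return model
```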

We scale the inputs by factors of 0.5, 1.0 and 1.5, bringing the data points closer together at 0.5 and further apart at 1.5. From Theorem 5.1, we anticipate the Jacobian growth to go from most to least severe across these scalings respectively, with sharpness growth behaving similarly according to Ansatz 3.1. Surprisingly, we find that while Jacobian growth is indeed more severe for the smaller scalings, the sharpness growth is less severe.

The reason for this is the effect of data scaling on the parameter derivatives $Df_{l}$ in Equation (4). It is easily computed (cf. MacDonald et al. [2022]) that the parameter derivative $Df_{l}$ is determined by the matrix $f_{l-1}(X)$ of features from the previous layer. In the experimental settings we examined, these matrices are larger for the larger scaling values, which pushes the sharpness upwards despite the Jacobian norm being smaller. Figures 30 and 31 show this phenomenon for ReLU and tanh activated networks respectively.

Figure 30: Effect of input scaling on sharpness and Jacobian norm for a ReLU network (5 trials). Panels: (a) sharpness, (b) Jacobian norm, (c) layer 1 feature norm, (d) layer 2 feature norm. Line style denotes input scaling factor. Scaling the data closer together does cause a more severe increase in Jacobian norm, but corresponds to a less severe increase in sharpness. This is because the larger data scalings increase the spectral norm of the feature maps.
Figure 31: Effect of input scaling on sharpness and Jacobian norm for a tanh network (5 trials). Panels: (a) sharpness, (b) Jacobian norm, (c) layer 1 feature norm, (d) layer 2 feature norm. Line style denotes input scaling factor. Scaling the data closer together does cause a more severe increase in Jacobian norm, but corresponds to a less severe increase in sharpness. This is because the larger data scalings increase the spectral norm of the feature maps.

C.3 The absence of the first layer Jacobian

We replicate the experiments of Section 8 of Ramasinghe et al. [2023], where it is demonstrated that flatness of the minimum is not correlated with model smoothness in a simple regression task, and point to a possible explanation of this within our theory.

In detail, a four-layer MLP with either Gaussian or ReLU activations is trained to regress 8 points in $\mathbb{R}^{2}$. Each network is trained from a high frequency initialisation (obtained from the default PyTorch initialisation for the Gaussian network, and by pretraining on $\sin(6\pi x)$ for the ReLU network) to yield a non-smooth fit of the target data, and from a low frequency initialisation (a wide Gaussian distribution for the Gaussian-activated network, and the default PyTorch initialisation for the ReLU network) to achieve a smooth fit of the data (see Figure 32). Refer to coordinate_network.py in the supplementary material for the code we used to run this experiment.
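A minimal sketch of such a coordinate network (ours; the width and Gaussian bandwidth below are illustrative assumptions, and the exact code is coordinate_network.py) is:

```python
import torch
import torch.nn as nn

class Gaussian(nn.Module):
    # Gaussian activation exp(-x^2 / (2 sigma^2)); sigma = 0.1 is an assumed value
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        return torch.exp(-x.pow(2) / (2 * self.sigma ** 2))

def coordinate_mlp(act=Gaussian, width=256, depth=4):
    # four-layer MLP regressing a 1D coordinate to a 1D value
    layers, d_in = [], 1
    for _ in range(depth - 1):
        layers += [nn.Linear(d_in, width), act()]
        d_in = width
    layers += [nn.Linear(d_in, 1)]
    return nn.Sequential(*layers)
```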

Figure 32: Interpolations of 8 points by networks started from high frequency and low frequency initialisations. Panels: (a) Gaussian, high freq.; (b) Gaussian, low freq.; (c) ReLU, high freq.; (d) ReLU, low freq. The smoother interpolations have lower Jacobian norm over the training data.

The models were trained with gradient descent using a learning rate of 1e-4 and momentum of 0.9, for 10000 epochs for the Gaussian networks and 100000 epochs for the ReLU networks. The pretraining for the high frequency ReLU initialisation was achieved using Adam with a learning rate of 1e-4 and default PyTorch settings.

Table 1: Norm values for regression networks, replicating the result of Ramasinghe et al. [2023] (Gaussian activated network).

Norm | High freq. | Low freq.
Jacobian norm | 326.80 ± 182.51 | 84.07 ± 49.30
Sharpness | 13517.61 ± 3871.97 | 18353.54 ± 5751.98
First layer weight norm | 9.09 ± 0.34 | 0.56 ± 0.02

Table 2: Norm values for regression networks, replicating the result of Ramasinghe et al. [2023] (ReLU activated network).

Norm | High freq. | Low freq.
Jacobian norm | 121.59 ± 80.79 | 42.96 ± 7.82
Sharpness | 14211.05 ± 4767.26 | 9922.45 ± 3330.37
First layer weight norm | 9.61 ± 0.28 | 9.56 ± 0.16

Tables 1 and 2 record the means and standard deviations, over 10 trials, of the Jacobian norm, sharpness and first layer weight norm for the Gaussian and ReLU networks. As in Ramasinghe et al. [2023], we find that while the smoothly interpolating ReLU models on average land in flatter minima than the non-smoothly interpolating models, the opposite is true for the Gaussian networks.

Keeping in mind that the high variances in these numbers make it difficult to come to a conclusion about the trend, we propose a possible explanation for why the Gaussian activated networks do not appear to behave according to Ansatz 3.1. From Equation (4), the Hessian cannot be expected to relate to the Jacobian of the first layer. Note now the discrepancy between the first layer weight norms in the low frequency versus high frequency fits with the Gaussian networks, which does not occur for the ReLU networks. We believe this to be the cause of the larger Hessian for the low-frequency Gaussian models: with such a small weight matrix in the first layer, all higher layers must be relied upon to fit the data, making their Jacobians higher than they would otherwise need to be. These larger Jacobians would then push the Hessian magnitude higher by Equation (4).

Appendix D Relationship to the parameter-output Jacobian

In Lee et al. [2023], the parameter-output Jacobian is advocated as the mediating link between loss sharpness and generalisation. Extensive experiments are provided to support this claim. In particular, it is shown that regularising by the Frobenius norm of the parameter-output Jacobian during training is sufficient to alleviate the poor generalisation of (toy) networks trained using a small learning rate. However, no theoretical reasons are given for why the parameter-output Jacobian should be related to generalisation.

We expect that our Theorem 6.1 could provide the theoretical link missing in Lee et al. [2023]. Indeed, the term in brackets in our Equation (4) is precisely the Gram matrix of the parameter-output Jacobian. Thus, our Ansatz 3.1 states that regularisation of the parameter-output Jacobian will also regularise the input-output Jacobian. If this is true, then training with a small learning rate using an input-output Jacobian regulariser should yield improvements in network performance similar to those seen in Lee et al. [2023] with the parameter-output Jacobian regulariser. Moreover, these improvements should be correlated with the input-output Jacobian norm in all cases. Figure 33 demonstrates that this is indeed the case, using the same data and model settings as were used for the regulariser experiments in Section 6.
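A minimal sketch of one way such an input-output Jacobian penalty can be implemented (ours, not the code used for Figure 33), using the random-projection estimate $\|J\|_{F}\approx\|u^{T}J\|_{2}$ described in the caption of Figure 33 below; the $NC$ rescaling is one convention for making the squared estimate unbiased:

```python
import torch

def jacobian_frobenius_penalty(model, x):
    """Stochastic estimate of the squared Frobenius norm of the minibatch
    input-output Jacobian via a single random projection u^T J."""
    x = x.clone().requires_grad_(True)
    out = model(x)                        # shape (N, C)
    u = torch.randn_like(out)
    u = u / u.norm()                      # uniform direction on the (NC-1)-sphere
    # u^T J is the gradient of <u, model(x)> with respect to the input x
    (g,) = torch.autograd.grad((u * out).sum(), x, create_graph=True)
    return out.numel() * g.pow(2).sum()   # rescale by N*C for the squared norm

# usage in a training step (sketch): loss = task_loss + lam * jacobian_frobenius_penalty(model, X)
```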

Figure 33: Panels: (a) CIFAR10, (b) CIFAR100. All models trained for 90 epochs. The $x$-axis denotes the regularisation coefficient, and “Jacobian norm” refers to the spectral norm. “ref 1e-1” and “ref 1e-3” are means over 5 trials of unregularised networks (regularisation coefficient set to zero) trained at learning rates of 1e-1 and 1e-3 respectively, shown for reference. The unregularised models trained with learning rate 1e-1 had their learning rate decayed by a factor of 10 every 30 epochs, so that their final learning rates were 1e-3. Learning rate scheduling was not used for any other trials. “Param.” is trained with the parameter-output regulariser of Lee et al., while “Input” is trained by regularising the minibatch Jacobian Frobenius norm using a similar approximation to Lee et al., namely $\|J\|_{F}\approx\|u^{T}J\|_{2}$ for $u$ sampled uniformly from the $(NC-1)$-sphere, where $N$ is the batch size and $C$ is the number of channels. 5 trials are shown for the regularised runs, with means and 1 standard deviation shaded. Note the consistent downwards trend of both Jacobian norm and generalisation gap in all cases, consistent with our theory. A small Jacobian norm with a large generalisation gap suggests, according to our theory, that the Jacobian of the model has ceased to be a good approximation of the Lipschitz constant of the model.