G-TRACER: Expected Sharpness Optimization
Abstract
We propose a new regularization scheme for the optimization of deep learning architectures, G-TRACER ("Geometric TRACE Ratio"), which promotes generalization by seeking flat minima, and has a sound theoretical basis as an approximation to a natural-gradient-descent-based optimization of a generalized Bayes objective. By augmenting the loss function with a TRACER, curvature-regularized optimizers (e.g., SGD-TRACER and Adam-TRACER) are simple to implement as modifications to existing optimizers and don't require extensive tuning. We show that the method converges to a neighborhood (depending on the regularization strength) of a local minimum of the unregularized objective, and demonstrate competitive performance on a number of benchmark computer vision and NLP datasets, with a particular focus on challenging low signal-to-noise ratio problems.
1 Introduction
1.1 Problem setting
The connection between generalization performance and the loss-surface geometry of deep-learning architectures in the neighborhood of local minima has long been the subject of interest and speculation, dating back to the MDL-based arguments of Hinton & van Camp (1993) and Hochreiter & Schmidhuber (1997). The connection is an intuitively appealing one, in that the sharp local minima of the highly nonlinear, non-convex optimization problems associated with modern overparameterized deep learning architectures are more likely to be brittle and sensitive to perturbations in the parameters and training data, and thus lead to worse performance on unseen data. We can build some intuition for this from a probabilistic modelling perspective, given a dataset $\mathcal{D}$ consisting of independent input random variables $x$ with distribution $p(x)$ and corresponding targets (or labels) $y$ with distribution $p(y \mid x)$, and treating the parameters $\theta$ of a deep neural network (DNN) as a random variable. Given a loss function $\ell(y, f(x; \theta))$, our goal is to find a $\theta$ that minimizes the expected loss $\mathbb{E}_{p(x,y)}[\ell(y, f(x; \theta))]$. Writing the finite-sample version of this expected loss as $\bar{L}(\theta)$, we can form a generalized posterior distribution (Bissiri et al., 2016) $\pi(\theta \mid \mathcal{D}) = \frac{1}{Z}\exp(-n\bar{L}(\theta))\,\pi(\theta)$ (with normalizer $Z$) over the weights, which coincides with the Bayesian posterior in the special case that the loss is the negative log-likelihood. Then, together with an output (conditional predictive) probability distribution $p(y \mid x, \theta)$, we can form the predictive distribution by marginalization:
$p(y \mid x, \mathcal{D}) = \int p(y \mid x, \theta)\,\pi(\theta \mid \mathcal{D})\,d\theta$    (1)
At a local maximum (or mode) $\theta^*$ of $\pi(\theta \mid \mathcal{D})$ we can form the Laplace approximation (valid asymptotically, for large $n$):
$\pi(\theta \mid \mathcal{D}) \approx \frac{1}{Z}\exp\!\big(-n\bar{L}(\theta^*) - \tfrac{n}{2}(\theta - \theta^*)^\top \nabla^2\bar{L}(\theta^*)\,(\theta - \theta^*)\big)$    (2)
where we have assumed, for simplicity, a flat prior, and the normalizer $Z$ (which for the negative log-likelihood loss is the evidence, or marginal likelihood, and which we will also refer to as the pseudo-marginal likelihood) is given by $Z = \int \exp(-n\bar{L}(\theta))\,d\theta$. Thus, in the neighborhood of each local maximum $\theta_k^*$ of $\pi(\theta \mid \mathcal{D})$, we approximate the posterior by a multivariate Gaussian with covariance given by the inverse Hessian of the (scaled) negative loss: $\Sigma_k = \big(n\nabla^2\bar{L}(\theta_k^*)\big)^{-1}$. Modern DNNs are characterized by multimodal losses (Wilson & Izmailov, 2020), and so, informally, we can decompose the posterior predictive distribution into disjoint contributions from each of the local maxima, the sum of which dominates the overall integral:
(3) |
where , which is an expectation with respect to a probability measure with density given by: , and which, by writing:
(4) |
can be viewed as a Gaussian mixture model (GMM) with mixing coefficients:
$\pi_k = \frac{Z_k}{\sum_j Z_j}$    (5)
Thus the relative contribution of each component is given by the relative size of the pseudo-marginal likelihoods $Z_k$. For very high-dimensional $\theta$ (modern DNN architectures often have billions or even trillions of parameters) even small differences in the width of the Gaussian approximation will have exponentially large effects on the magnitude of $Z_k$ (which can be thought of as the volume associated with the local maximum). How does all this relate to flatness? The Gaussian curvature $K$, which provides an intrinsic (and thus coordinate-free) measure of curvature, is given by:
$K = \det(H)$    (6)
and contributions to the mixture thus scale inversely with $\sqrt{K}$. In other words, the flatter the solution, the more it contributes to the mixture model against which the output probability distribution is integrated in order to form the posterior predictive distribution. In a typical high-dimensional setting, the effect of small differences in curvature (or flatness) is exponentially magnified. To see this, we can consider two local minima: i) $\theta_1^*$ with Hessian $H$, and ii) an $\alpha$-flattened minimum $\theta_2^*$ ($0 < \alpha < 1$) with Hessian $\alpha H$ (so that each eigenvalue of $H$ is simply shrunk by a constant factor $\alpha$). We have $\det(\alpha H) = \alpha^d \det(H)$, so that the $\alpha$-flattened minimum, with curvature $\alpha^d K$, has exponentially lower Gaussian curvature. The corresponding Gaussian approximations have covariances proportional to $H^{-1}$ and $(\alpha H)^{-1}$, and the ratio of the corresponding pseudo-marginal likelihoods scales as:
$\frac{Z_\alpha}{Z} \propto \frac{\det(\alpha H)^{-1/2}}{\det(H)^{-1/2}} = \alpha^{-d/2}$    (7)
Thus we can see that the contribution from the flatter minimum dominates in the high-dimensional limit. In an empirical study, Huang et al. (2019) train a ResNet18 architecture on the Street View House Number (SVHN) dataset and estimate the volume around local minima using Monte-Carlo integration, finding that the volumes of basins surrounding minima that generalize well are at least 10,000 orders of magnitude larger than those of minima that generalize poorly.
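To give a sense of scale, the following is a minimal numerical sketch (our own illustration, with an assumed parameter count and flattening factor) of how the ratio in Equation (7) behaves:

```python
# Log-ratio of Laplace normalizers when every Hessian eigenvalue is shrunk by alpha:
# log(Z_flat / Z_sharp) = -(d/2) * log(alpha).
import math

d = 10_000_000        # assumed parameter count of a modest modern DNN
alpha = 0.99          # assumed flattening: each eigenvalue shrunk by just 1%

log_ratio_nats = -0.5 * d * math.log(alpha)
print(f"Z_flat / Z_sharp ~ 10^{log_ratio_nats / math.log(10):.0f}")
# ~10^21824: even a 1% uniform flattening dominates the mixture, on the same
# scale as the >10,000 orders of magnitude reported by Huang et al. (2019).
```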
A complementary approach is to characterize the loss-surface Hessian since, at such a local minimum $\theta^*$ of the loss, for a perturbation $\delta$, we have:
$\bar{L}(\theta^* + \delta) \approx \bar{L}(\theta^*) + \tfrac{1}{2}\,\delta^\top H\,\delta$    (8)
There is therefore a large literature attempting to characterize the loss-surface Hessian and to relate these characteristics to generalization. In many practically relevant cases, multiple minima are associated with zero (or close to zero) training error, and explicit or implicit regularization is needed to find solutions with the best generalization error. Overparameterization is associated with the bulk of the Hessian spectrum lying close to zero and thus with highly degenerate minima (Sagun et al., 2017). Wei & Schwab (2020) further show that, given a degenerate valley in the loss surface, SGD on average decreases the trace of the Hessian, which is strongly suggestive of a connection between locally flat minima, overparameterization and generalization.
1.2 Sharpness-Aware Minimization
Despite the intuitive appeal and plausible justifications for flat solutions to be a goal of DNN optimization algorithms, there have been few practical unqualified successes in exploiting this connection to improve generalization performance. A notable exception is a recent algorithm, Sharpness-Aware Minimization (SAM) (Foret et al., 2020), which seeks to improve generalization by optimizing a saddle-point problem of the form:
$\min_{w}\;\max_{\|\epsilon\|_2 \le \rho} L(w + \epsilon)$    (9)
An approximate solution to this problem is obtained by differentiating through the inner maximization, so that, given an approximate solution to the inner maximization (dual norm) problem:
$\hat{\epsilon}(w) = \rho\,\frac{\nabla_w L(w)}{\|\nabla_w L(w)\|_2}$    (10)
the gradient of the SAM objective is approximated as follows:
$\nabla_w\Big(\max_{\|\epsilon\|_2 \le \rho} L(w + \epsilon)\Big) \approx \nabla_w L(w)\big|_{w + \hat{\epsilon}(w)}$    (11)
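For concreteness, the following is a minimal PyTorch-style sketch (ours, not the reference implementation) of the resulting two-step SAM update, with `model`, `loss_fn`, `batch` and `base_opt` assumed to be supplied by the surrounding training loop:

```python
import torch

def sam_step(model, loss_fn, batch, base_opt, rho=0.05):
    x, y = batch
    base_opt.zero_grad()
    # 1) gradient at the current weights w
    loss = loss_fn(model(x), y)
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    # 2) move to w + eps_hat, with eps_hat = rho * g / ||g||_2   (Equation (10))
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(rho * g / grad_norm)
    # 3) SAM gradient: gradient of the loss evaluated at the perturbed weights (Equation (11))
    base_opt.zero_grad()
    loss_fn(model(x), y).backward()
    # 4) restore w and take the base-optimizer step using the SAM gradient
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(rho * g / grad_norm)
    base_opt.step()
    return loss.item()
```

The extra forward-backward pass at the perturbed point is what roughly doubles SAM's per-step cost relative to the base optimizer.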
While the method has gained widespread attention, and state-of-the-art performance has been demonstrated on several benchmark datasets, it remains relatively poorly understood, and the motivation and connection to sharpness is questionable given that the Euclidean norm ball isn't invariant to changes in coordinates. Given a 1-1 mapping $\phi$, we can reparameterize our DNN using the "pullback" $L \circ \phi$, under which, crucially, the underlying prediction function (and therefore the loss) itself is invariant, since, for $\tilde{w} = \phi^{-1}(w)$, we have $(L \circ \phi)(\tilde{w}) = L(w)$. Under this coordinate transformation, however, the Hessian at a critical point transforms as (Dinh et al., 2017):
$\nabla^2_{\tilde{w}}\,(L \circ \phi)(\tilde{w}) = \big(\nabla\phi(\tilde{w})\big)^\top\,\nabla^2_w L(w)\,\nabla\phi(\tilde{w})$    (12)
In particular, Dinh et al. (2017) explicitly show, using layer-wise rescaling transformations, that deep rectifier feedforward networks possess large numbers of symmetries which can be exploited to control sharpness without changing the network output. The existence of these symmetries in the loss function, under which the geometry of the local loss can be substantially modified (and in particular, the spectral norm and trace of the Hessian) means that the relationship between the local flatness of the loss landscape and generalization is a subtle one.
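The following small numerical illustration (ours) makes the layer-wise rescaling symmetry concrete for a two-layer ReLU network; the scale factor `a` is arbitrary:

```python
import torch

torch.manual_seed(0)
x = torch.randn(8, 4)
W1, W2 = torch.randn(16, 4), torch.randn(1, 16)

def f(W1, W2, x):
    # two-layer ReLU network: x -> relu(x W1^T) W2^T
    return torch.relu(x @ W1.T) @ W2.T

a = 10.0
print(torch.allclose(f(W1, W2, x), f(a * W1, W2 / a, x), atol=1e-4))  # True: same function
# The reparameterized network computes exactly the same outputs, but its
# layer-wise gradients (and hence the Hessian trace and spectral norm) are
# rescaled by 1/a and a, so non-invariant sharpness measures can be made
# arbitrarily large or small without changing the loss.
```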
It's instructive to consider the PAC-Bayesian generalization bound that motivates SAM, the derivation of which starts from the following bound (McAllester, 1999; Dziugaite & Roy, 2017):
Theorem 1.
For any data distribution, any prior $P$ over the parameters $w$, with probability $1-\delta$ over the choice of the training set $S$, and for any posterior $Q$ over the parameters:
$\mathbb{E}_{w \sim Q}\big[L(w)\big] \le \mathbb{E}_{w \sim Q}\big[L_S(w)\big] + \sqrt{\dfrac{\mathrm{KL}(Q\,\|\,P) + \log\frac{n}{\delta}}{2(n-1)}}$    (13)
where the KL divergence:
$\mathrm{KL}(Q\,\|\,P) = \int q(w)\,\log\dfrac{q(w)}{p(w)}\,dw$    (14)
defines a statistical distance (though not a metric, as it's symmetric only to second order) on the space of probability distributions. Assuming an isotropic prior $P = \mathcal{N}(0, \sigma_P^2 I)$ for some $\sigma_P$, an isotropic posterior $Q = \mathcal{N}(w, \sigma^2 I)$, so that the KL term is available in closed form, and applying the covering approach of Langford & Caruana (2001) to select the best prior (closest to $Q$ in the sense of KL divergence) from a set of pre-defined, data-independent prior distributions satisfying the PAC generalization bound, Foret et al. (2020) show that the bound in Theorem 1 can be written in the following form:
$\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\sigma^2 I)}\big[L(w + \epsilon)\big] \le \mathbb{E}_{\epsilon \sim \mathcal{N}(0,\sigma^2 I)}\big[L_S(w + \epsilon)\big] + h\!\left(\dfrac{\|w\|_2^2}{\sigma^2}\right)$    (15)
(for a monotone function $h$) and then, crucially, apply a well-known tail bound for a chi-square random variable to bound $\|\epsilon\|_2^2$, thus bounding the expectation over $\epsilon$, with high probability, by the maximum value over a Euclidean norm ball, and deriving the following generalization bound:
Theorem 2.
For any $\rho > 0$ and any data distribution, with probability $1-\delta$ over the choice of the training set $S$,
$L(w) \le \max_{\|\epsilon\|_2 \le \rho} L_S(w + \epsilon) + h\!\left(\dfrac{\|w\|_2^2}{\rho^2}\right)$    (16)
where $h : \mathbb{R}_+ \to \mathbb{R}_+$ is a strictly increasing function, $\rho$ is determined by the posterior scale $\sigma$ and the dimension, and $k$ is the number of parameters.
This bound justifies and motivates the SAM objective:
$\min_{w}\;\max_{\|\epsilon\|_2 \le \rho} L_S(w + \epsilon) + \lambda\|w\|_2^2$    (17)
and resulting algorithm. While the bound in Theorem 2 suggests that the ridge-penalty should vary with the radius of the perturbation, in practice (Foret et al., 2020) the penalty term is fixed (or simply set to zero) even when different perturbation radii are searched over. Subsequent refinements of SAM (Kim et al., 2022) ignore the ridge penalty term altogether, and the choice of an optimal perturbation radius is what drives the success of the method. It is not clear, however, why this adversarial parameter-space perturbation should help generalization more than evaluating (and approximating) the expectation in the very bound which motivates the SAM procedure in the first place, which would lead instead to an objective (ignoring, for now, the ridge penalty term) of the following form:
$\min_{w}\;\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\sigma^2 I)}\big[L_S(w + \epsilon)\big]$    (18)
Moreover, the worst-case adversarial perturbation used by SAM is likely to be noisier, and naturally yields a significantly looser bound than the expectation-based bound.
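A toy comparison (our own sketch, on an assumed diagonal quadratic loss) illustrates the gap between the two penalties: sitting at a minimum, the expected perturbed loss scales with the mean Hessian eigenvalue, while the worst-case perturbation over a comparable ball scales with the largest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rho = 1000, 0.05
h = rng.uniform(0.0, 2.0, size=d)              # assumed Hessian eigenvalues (diagonal H)

def loss(w):
    return 0.5 * np.sum(h * w ** 2)

w = np.zeros(d)                                # sit exactly at the minimum, L(w) = 0
sigma = rho / np.sqrt(d)                       # Gaussian perturbation with E||eps||_2^2 = rho^2
eps = sigma * rng.standard_normal((256, d))
expected_gap = np.mean([loss(w + e) for e in eps])   # ~ (rho^2/2) * mean eigenvalue
worst_case_gap = 0.5 * rho ** 2 * h.max()            # max over the rho-ball: (rho^2/2) * largest eigenvalue

print(f"expected perturbed loss : {expected_gap:.5f}")
print(f"worst-case perturbed loss: {worst_case_gap:.5f}")
```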
2 Generalized variational posterior
Our starting point is a similar, but more general, optimization objective, which arises in the variational optimization of a generalized posterior distribution $q(\theta)$ over the space of probability measures $\mathcal{P}(\Theta)$ on the parameter space (Bissiri et al., 2016), given by:
$q^* = \operatorname*{arg\,min}_{q \in \mathcal{P}(\Theta)}\Big\{\mathbb{E}_{q(\theta)}\big[n\bar{L}(\theta)\big] + \mathrm{KL}\big(q\,\|\,\pi\big)\Big\}$    (19)
to which, when the optimization is over all probability measures, the solution is given by the generalized posterior:
$q^*(\theta) = \dfrac{\exp\!\big(-n\bar{L}(\theta)\big)\,\pi(\theta)}{\int \exp\!\big(-n\bar{L}(\theta)\big)\,\pi(\theta)\,d\theta}$    (20)
The exponentiated loss terms are to be interpreted as quasi-likelihoods, and for the particular choice of the negative log-likelihood loss, we recover the standard Bayesian posterior. As this infinite-dimensional optimization is, in general, intractable, it is usual to assume that the posterior belongs to a parametric family $\mathcal{Q}$:
$q_\phi^* = \operatorname*{arg\,min}_{q_\phi \in \mathcal{Q}}\Big\{\mathbb{E}_{q_\phi(\theta)}\big[n\bar{L}(\theta)\big] + \mathrm{KL}\big(q_\phi\,\|\,\pi\big)\Big\}$    (21)
which, for the choice of the negative log-likelihood loss, is the same objective (up to a constant factor) as the evidence lower bound (ELBO) used in variational Bayes.
In practice, it is often found that tempering the KL divergence term by a positive factor $\tau$ produces optimal performance, giving rise to:
$q_\phi^* = \operatorname*{arg\,min}_{q_\phi \in \mathcal{Q}}\Big\{\mathbb{E}_{q_\phi(\theta)}\big[n\bar{L}(\theta)\big] + \tau\,\mathrm{KL}\big(q_\phi\,\|\,\pi\big)\Big\}$    (22)
2.1 TRACER: flatness-inducing regularization
Ignoring for simplicity the contribution from the prior term (which would correspond to a ridge-regularization term under the assumption of an isotropic Gaussian prior) leads to the following objective, which we seek to minimize over the variational parameters:
$\mathbb{E}_{q_\phi(\theta)}\big[n\bar{L}(\theta)\big] - \tau\,\mathcal{H}(q_\phi)$    (23)
where $\mathcal{H}(q_\phi)$ is the entropy of $q_\phi$. For the choice of an isotropic Gaussian $q_\phi$, the optimization problem associated with the variational objective becomes (absorbing some constants into the regularization parameter):
(24) |
so that we can see that the regularization parameter determines the variance of the Gaussian perturbation over which the loss is averaged. More generally, choosing a full-covariance Gaussian $q_\phi = \mathcal{N}(\mu, \Sigma)$ leads to the following variational objective:
(25) |
so that large values of the regularization parameter will correspond to distributions with larger volume, since for $\theta \sim \mathcal{N}(\mu, \Sigma)$, $\theta$ lies within the ellipsoid $\{\theta : (\theta-\mu)^\top\Sigma^{-1}(\theta-\mu) \le c\}$ with probability determined by $c$, and the volume of the ellipsoid is proportional to $(\det\Sigma)^{1/2}$ (Anderson, 2003). We show in Section 2.3 that, expanding the expectation under $q_\phi$ to second order, we have:
$\mathbb{E}_{\theta \sim \mathcal{N}(\mu,\Sigma)}\big[\bar{L}(\theta)\big] \approx \bar{L}(\mu) + \tfrac{1}{2}\operatorname{tr}\!\big(\Sigma\,\nabla^2\bar{L}(\mu)\big)$    (26)
so that, in the neighborhood of a local minimum, where the Hessian is positive-definite, the curvature of the loss surface is penalized over a region whose volume is determined by $\Sigma$. While intuitively appealing, this flatness-inducing penalty is not invariant to coordinate transformations, so that scale changes (such as occur, for example, when applying batch-normalization or weight-normalization), which have no effect on the output of the learned probability distribution, can nevertheless still result in arbitrary changes to the penalty. More generally, as discussed above, any geometric notion of loss-surface flatness must be independent of arbitrary rescaling of the network parameters. Motivated by these considerations, we apply steepest descent in the KL metric (also known as natural gradient descent (Amari, 1998)) to our variational objective, under the assumption of a multivariate Gaussian variational posterior, in order to find solutions to:
(27) |
where $\tau$ is a positive real-valued regularization parameter.
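As a quick sanity check of the second-order approximation in Equation (26), the following sketch (ours) compares a Monte Carlo estimate of the Gaussian-blurred loss with $\bar{L}(\mu) + \tfrac{\sigma^2}{2}\operatorname{tr}(H)$ on a quadratic loss, where the approximation is exact:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 50, 0.1
A = rng.standard_normal((d, d))
H = A @ A.T / d                                  # an assumed positive semi-definite Hessian
mu = rng.standard_normal(d)

def L(theta):                                    # exactly quadratic, so Equation (26) is exact
    return 0.5 * theta @ H @ theta

thetas = mu + sigma * rng.standard_normal((200_000, d))
mc = np.mean(0.5 * np.einsum("ij,jk,ik->i", thetas, H, thetas))
second_order = L(mu) + 0.5 * sigma ** 2 * np.trace(H)
print(f"Monte Carlo: {mc:.3f}   second-order: {second_order:.3f}")  # agree to within MC error
```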
We show in the sequel that, assuming an isotropic Gaussian prior, performing gradient descent w.r.t. the natural gradient leads to the following iterative update equations:
(28) |
(29) |
where and are the learning rates for the mean and precision updates, respectively, and is the precision matrix. Approximating the expectations to second order and further simplifying leads to the following update equations (see below for a detailed derivation):
(30) |
(31) |
where is the Hessian. The update rule for is an exponential smoothing and the update for the mean consists of a preconditioned (by the inverse smoothed Hessian) gradient, together with, crucially, a penalty term proportional to the (affine-invariant) ratio of the Hessian and the smoothed Hessian. Finally, via an empirical Fisher (diagonal) Hessian approximation (see below for details and a discussion of alternatives) and dropping the preconditioner, we arrive at a modified SGD-type update which we call SGD-TRACER.
2.2 SGD-TRACER
SGD-TRACER is given by Algorithm 1, in which the usual stochastic gradient update is modified with a term which penalizes the trace of the ratio between the diagonal of the empirical FIM and an exponentially weighted average of the empirical FIM diagonal. By augmenting the loss with a TRACER term and maintaining a smoothed squared-gradient estimate, any optimization scheme can, in principle, be modified in the same way. In our experiments we use SGD-TRACER (built on SGD with momentum) for vision tasks and Adam-TRACER for NLP tasks, based on standard practice in each problem domain.
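Since Algorithm 1 itself is not reproduced here, the following is a hedged PyTorch sketch of one way an SGD-TRACER-style step could be implemented from the description above; the names `lam` (regularization strength), `beta` (smoothing) and `eps` (damping) are our own placeholders, and the penalty is differentiated by double backpropagation:

```python
import torch

class SGDTracer:
    """Sketch of an SGD-TRACER-style step (not the authors' reference implementation)."""

    def __init__(self, params, lr=0.1, momentum=0.9, lam=0.1, beta=0.9, eps=1e-8):
        self.params = list(params)
        self.lr, self.momentum, self.lam, self.beta, self.eps = lr, momentum, lam, beta, eps
        self.buf = [torch.zeros_like(p) for p in self.params]  # momentum buffers
        self.f = [torch.zeros_like(p) for p in self.params]    # smoothed squared gradients (empirical Fisher diagonal)

    def step(self, loss):
        # Gradients of the unaugmented loss, kept in the graph so that the
        # TRACER penalty (a function of these gradients) can itself be differentiated.
        grads = torch.autograd.grad(loss, self.params, create_graph=True)
        # TRACER term: trace of the ratio of the diagonal empirical Fisher to its smoothed estimate.
        penalty = sum(((g ** 2) / (f + self.eps)).sum() for g, f in zip(grads, self.f))
        penalty_grads = torch.autograd.grad(self.lam * penalty, self.params)
        with torch.no_grad():
            for p, g, pg, b, f in zip(self.params, grads, penalty_grads, self.buf, self.f):
                f.mul_(self.beta).add_((1.0 - self.beta) * g.detach() ** 2)  # exponential smoothing of squared gradients
                b.mul_(self.momentum).add_(g.detach() + pg)                  # momentum on the augmented gradient
                p.sub_(self.lr * b)
```

A training step would then compute `loss = loss_fn(model(x), y)` and call `opt.step(loss)` directly, with no separate `loss.backward()`; in practice the smoothed squared gradients would be warmed up before the penalty is applied.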
2.3 Derivation of the TRACER flatness-inducing regularizer
Following Khan & Rue (2021) and Zhang et al. (2017), we make the assumption that the variational posterior is a multivariate Gaussian and seek to optimize the variational objective in Equation (22) w.r.t. the variational parameters using natural gradient descent, which allows us to derive an algorithm that respects the intrinsic geometry of the parameter space and thus seeks flat minima in an approximately coordinate-independent way.
Thus we aim to minimize:
(32) |
where $\tau$ is a positive real-valued regularization parameter. Denoting this objective by $\mathcal{L}(\phi)$, the negative gradient $-\nabla_\phi \mathcal{L}(\phi)$ corresponds to the steepest-descent direction in the Euclidean metric:
(33) |
and thus depends on the chosen coordinates $\phi$. In contrast, the so-called natural-gradient update corresponds to steepest descent in the KL-divergence metric:
$\tilde{\nabla}_\phi\,\mathcal{L}(\phi) = F(\phi)^{-1}\,\nabla_\phi\,\mathcal{L}(\phi)$    (34)
where $F(\phi)$ is the Fisher Information Matrix (FIM):
$F(\phi) = \mathbb{E}_{q_\phi(\theta)}\big[\nabla_\phi \log q_\phi(\theta)\,\nabla_\phi \log q_\phi(\theta)^\top\big]$    (35)
which defines a Riemannian metric on the parameter manifold. Expanding the KL divergence to second order in a small neighbourhood of $\phi$ we have:
$\mathrm{KL}\big(q_\phi\,\|\,q_{\phi+\delta}\big) \approx \delta^\top\,\nabla_{\phi'}\,\mathrm{KL}\big(q_\phi\,\|\,q_{\phi'}\big)\big|_{\phi'=\phi} + \tfrac{1}{2}\,\delta^\top\,\nabla^2_{\phi'}\,\mathrm{KL}\big(q_\phi\,\|\,q_{\phi'}\big)\big|_{\phi'=\phi}\,\delta$    (36)
and since:
$\nabla_{\phi'}\,\mathrm{KL}\big(q_\phi\,\|\,q_{\phi'}\big)\big|_{\phi'=\phi} = -\,\mathbb{E}_{q_\phi(\theta)}\big[\nabla_\phi \log q_\phi(\theta)\big] = 0$    (37)
the FIM (under certain regularity conditions) can be seen to be the Hessian (or curvature) of the KL divergence:
$F(\phi) = \nabla^2_{\phi'}\,\mathrm{KL}\big(q_\phi\,\|\,q_{\phi'}\big)\big|_{\phi'=\phi}, \qquad \mathrm{KL}\big(q_\phi\,\|\,q_{\phi+\delta}\big) \approx \tfrac{1}{2}\,\delta^\top F(\phi)\,\delta$    (38)
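To make the relation in Equation (38) concrete, here is a one-dimensional worked example (ours) for a Gaussian location family, where both sides can be computed in closed form:

```latex
% Gaussian location family p_mu = N(mu, sigma^2) with sigma fixed:
\mathrm{KL}\!\left(p_\mu \,\|\, p_{\mu+\delta}\right) = \frac{\delta^2}{2\sigma^2},
\qquad
F(\mu) = \mathbb{E}_{p_\mu}\!\left[\left(\partial_\mu \log p_\mu(x)\right)^{2}\right]
       = \mathbb{E}_{p_\mu}\!\left[\frac{(x-\mu)^{2}}{\sigma^{4}}\right]
       = \frac{1}{\sigma^{2}},
\quad\text{so}\quad
\mathrm{KL}\!\left(p_\mu \,\|\, p_{\mu+\delta}\right) = \tfrac{1}{2}\,F(\mu)\,\delta^{2}.
```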
The following proposition gives an expression for the natural gradient vector (for proof see Appendix A.5):
Proposition 1.
For a probability distribution with pdf with the parameterization , the natural gradient of is given by:
(39) |
where
(40) |
(41) |
Assuming an isotropic Gaussian prior, performing gradient descent w.r.t. this natural gradient then leads to the following iterative update equations:
(42) |
(43) |
where the learning rates for the mean and precision updates appear explicitly. We work with each of these update equations in turn. Starting with the update equation for the mean, the key observation is that the expectation is taken with respect to a distribution whose precision is an exponential moving average of the expected Hessian. This updating happens naturally as a consequence of taking natural-gradient steps, and is what leads to an approximately coordinate-free algorithm in the sequel. Applying Bonnet's theorem (Khan & Rue, 2021) and forming the second-order approximation to the loss:
(44) |
Writing where we have:
(45) |
where we used the fact that , and where is the Hessian. We therefore have that:
(46) |
Choosing the prior variance to be infinite, and thus ignoring the corresponding prior terms (an improper prior, consistent with the discussion above), leads to the following update for the mean:
(47) |
Thus, in order to blur the loss with multivariate Gaussian noise in a way that aligns with the intrinsic geometry of the parameter space, we can (to second order) augment the loss with a term involving the trace of the Hessian. Taking now the update equation for the covariance, we can simplify using Price's theorem (Khan & Rue, 2021) together with a Taylor expansion to get, to second order (see Appendix for details):
(48) |
We next substitute, as is common in the literature on approximate second-order optimization (Martens, 2020), the Generalized Gauss-Newton (GGN) matrix for the Hessian, given by:
$G(\theta) = \frac{1}{n}\sum_{i=1}^{n} J_i^\top\,H_{\ell,i}\,J_i$    (49)
where $J_i$ is the Jacobian of the output function $f(x_i;\theta)$ w.r.t. $\theta$ and $H_{\ell,i}$ is the Hessian of the loss w.r.t. the output. The GGN is a positive semi-definite approximation to the Hessian which converges to the Hessian as the fitted residuals go to zero (Kunstner et al., 2019). The most practically relevant losses, cross-entropy (classification) and squared error (regression), correspond to exponential-family output distributions with natural parameters given by the network outputs, paired with the log-loss; for these choices the GGN is equivalent to the Fisher Information Matrix. While the evaluation of the GGN matrix, in particular the matrix multiplies involving the Jacobians $J_i$, can be relatively costly, the FIM can be expressed as an expectation of outer products of gradients w.r.t. the output distribution $p(y \mid x, \theta)$:
$F(\theta) = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{\hat{y} \sim p(y \mid x_i, \theta)}\big[\nabla_\theta \log p(\hat{y} \mid x_i, \theta)\,\nabla_\theta \log p(\hat{y} \mid x_i, \theta)^\top\big]$    (50)
which, following Martens (2020), can be estimated using a single Monte Carlo sample $\hat{y}_i \sim p(y \mid x_i, \theta)$ from the output distribution. Using this Fisher approximation in our setting thus requires gradients to be calculated through an expectation approximated using a Monte Carlo sample from the model's output distribution. Since the expectation is taken w.r.t. a distribution which depends on $\theta$, it is necessary to reparameterize, so that the discrete Monte Carlo sample is expressed as a deterministic transformation (depending on $\theta$) of a sample from a distribution not depending on $\theta$. In the discrete case (corresponding to classification), since the argmax function is non-differentiable, the standard approach is the Gumbel-Softmax reparameterization (Jang et al., 2016), which uses the softmax function as a continuous relaxation of the argmax, together with i.i.d. samples distributed as Gumbel(0,1).
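A hedged sketch (ours) of this one-sample Monte Carlo Fisher with the Gumbel-Softmax relaxation is given below; the temperature `tau` is our choice, and the exact estimator used in the paper may differ:

```python
import torch
import torch.nn.functional as F

def mc_fisher_diag(model, x, tau=1.0):
    logits = model(x)                                      # (batch, num_classes)
    # Relaxed one-hot "sample" from the model's own predictive distribution,
    # reparameterized with Gumbel noise so it remains differentiable w.r.t. the parameters.
    y_soft = F.gumbel_softmax(logits, tau=tau, hard=False)
    # Negative log-likelihood of the sampled (soft) labels under the model.
    nll = -(y_soft * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    grads = torch.autograd.grad(nll, list(model.parameters()), create_graph=True)
    return [g ** 2 for g in grads]                         # one-sample diagonal Fisher estimate
```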
It's important to note that this is different from simply evaluating the gradient outer products on the training labels $y_i$, a widely used approximation known as the empirical Fisher:
$\tilde{F}(\theta) = \frac{1}{n}\sum_{i=1}^{n}\nabla_\theta \log p(y_i \mid x_i, \theta)\,\nabla_\theta \log p(y_i \mid x_i, \theta)^\top$    (51)
which, despite lacking the same convergence guarantees, performs competitively in many settings (Kunstner et al., 2019). We find empirically in our experiments that the empirical Fisher performs competitively with the MC approximation to the GGN (Khan et al., 2018; Kingma & Ba, 2014) and has the advantage of being straightforward and cheap to compute from the already-computed gradients (in the case of Adam-TRACER, the smoothed squared gradients are already computed and maintained for use as a preconditioner). Given the conceptual and computational simplicity of this approach, and despite its known suboptimality, in the following we substitute the empirical Fisher for the Hessian. Recent advances in approximate second-order methods in optimization, notably Yao et al. (2020), suggest avenues for improvement, and we leave investigation of alternatives, such as the smoothed (Hessian-free) Hessian diagonal sketch used in AdaHessian, for future work.
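As a concrete illustration of that remark, the following hedged sketch (ours, assuming a single parameter group whose order matches `grads`) shows how Adam-TRACER could reuse Adam's existing `exp_avg_sq` state as the smoothed empirical-Fisher diagonal, so that the penalty adds no new statistics to maintain:

```python
import torch

def tracer_penalty_from_adam(adam: torch.optim.Adam, grads, lam=0.1, eps=1e-8):
    """grads: gradients of the training loss, computed with create_graph=True
    so that the penalty itself can be differentiated."""
    params = adam.param_groups[0]["params"]          # assumes one param group, ordered like grads
    penalty = 0.0
    for p, g in zip(params, grads):
        state = adam.state[p]
        # Adam's exponentially smoothed squared gradients double as the smoothed
        # empirical-Fisher diagonal; fall back to zeros before the first step.
        f = state.get("exp_avg_sq", torch.zeros_like(p))
        penalty = penalty + ((g ** 2) / (f + eps)).sum()
    return lam * penalty
```

The returned penalty can simply be added to the training loss before calling `backward()` and `adam.step()`.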
Substituting the empirical Fisher approximation for the Hessian in the update equation for the precision, we have the following update:
(52) |
Rewriting the update equations in terms of this exponentially smoothed FIM, absorbing a constant factor into the regularization parameter, and writing the iteration in terms of the parameter $\theta$, we obtain:
(53) |
(54) |
Crucially, the penalty term can be seen to be invariant to affine coordinate transformations, since it is the trace of the ratio of two (0,2)-tensors which transform in the same way. Indeed, under an affine coordinate transformation with Jacobian $J$ we have $\tilde{F} = J^\top F J$ and $\tilde{\bar{F}} = J^\top \bar{F} J$ (where $\bar{F}$ denotes the exponentially smoothed FIM), so that:
$\operatorname{tr}\!\big(\tilde{\bar{F}}^{-1}\tilde{F}\big) = \operatorname{tr}\!\big((J^\top\bar{F}J)^{-1}J^\top F J\big) = \operatorname{tr}\!\big(J^{-1}\bar{F}^{-1}FJ\big) = \operatorname{tr}\!\big(\bar{F}^{-1}F\big)$    (55)
By penalizing the ratio of the (squared) gradients and the exponentially smoothed (squared) gradients, the trace-ratio penalty in effect penalizes the change in (squared) gradient in a coordinate-free way. More generally, given a coordinate change given by a diffeomorphism with Jacobian $J$, then, given the exponential decay in the update equation for the Fisher, subject to the transformation having sufficient regularity, and for sufficiently small step sizes, the penalty term is approximately coordinate-free under general smooth diffeomorphisms (see SM for details).
We now make two simplifications. First, we use a mean-field approximation of the FIM by its diagonal, as is done in Adam (Kingma & Ba, 2014) and Adagrad (Duchi et al., 2011), thus:
(56) |
Secondly, it is standard practice to add Tikhonov regularization, or damping, by a small positive real constant when using second-order optimization methods, giving in this case a damped preconditioner. This is justified by recognizing that the local quadratic model from which the second-order update is ultimately derived is a second-order approximation to the KL divergence and is thus only valid locally. For directions corresponding to small eigenvalues, parameter updates can lie outside the region where the approximation is reasonable (Martens, 2020). This is true, a fortiori, when diagonal approximations are used, as is the case here. As our emphasis here is on geometric regularization, we drop the preconditioner entirely by choosing the damping constant to be sufficiently large that the preconditioner is equal to the identity (up to a constant, which is absorbed into the learning rate).
Finally, as most current deep-learning frameworks don't straightforwardly support access to per-example gradients, which can in principle be obtained at negligible additional cost (see, for example, the BackPACK second-order PyTorch extension, Dangel et al. (2020)), for simplicity and efficiency we use the gradient-magnitude (GM) approximation (Bottou et al., 2016), as used in the standard optimizers Adam and RMSProp, replacing the sum of per-example squared gradients with the square of the summed gradients:
$\frac{1}{|B|}\sum_{i \in B} g_i \odot g_i \;\longrightarrow\; \bar{g} \odot \bar{g}, \qquad \bar{g} = \frac{1}{|B|}\sum_{i \in B} g_i$    (57)
where $g_i$ denotes the gradient of the loss on example $i$ of the minibatch $B$. Writing the resulting FIM diagonal estimate in terms of these squared minibatch gradients, we finally end up with the following simple update equations:
(58) |
which are summarized in Algorithm 1. We show in Appendix A.3 that the algorithm converges to a neighborhood of a local minimum of the loss, with the size of the neighborhood controlled by the regularization strength. We note in passing that, in this simplest form (after applying the gradient-magnitude approximation), the update equations amount to regularizing with a (scale-adjusted) gradient norm. In principle (particularly in the large-batch case) we would expect to see significant improvements by moving to per-example gradient calculations (which are in principle no more expensive, but require additional work in most current ML frameworks).
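The following small illustration (ours, on synthetic per-example gradients) shows what the gradient-magnitude approximation of Equation (57) discards, and why per-example gradients become more attractive as the batch grows:

```python
import numpy as np

rng = np.random.default_rng(0)
per_example_grads = rng.standard_normal((32, 10)) + 0.5    # 32 examples, 10 parameters (synthetic)

fisher_diag = np.mean(per_example_grads ** 2, axis=0)      # mean of per-example squared gradients
gm_approx = np.mean(per_example_grads, axis=0) ** 2        # square of the minibatch mean gradient

print(np.round(fisher_diag[:3], 3), np.round(gm_approx[:3], 3))
# mean(g_i^2) = mean(g_i)^2 + var_batch(g_i): the GM approximation drops the
# within-batch variance of the per-example gradients, which is why per-example
# gradients would be expected to help, particularly for large batches.
```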
2.4 Results
We first examine a challenging variant of a standard benchmark in computer vision, CIFAR-100. We compare SGD, SAM and SGD-TRACER using none of the standard regularizations (no data augmentation, no weight decay) and a standard training protocol (200 epochs, fixed initial learning rate, cosine learning-rate decay). Further, we randomly flip 50% of the labels, so that 50% of examples are incorrectly labeled. The results in Table 3 show that G-TRACER significantly improves on SAM in this challenging setting. In Figure 1 we highlight results for the same problem over different values of the regularization parameter.
[Figure 1: results over different values of the regularization parameter]
In Figure 2 we compare the training curves on this problem.
[Figure 2: training curves]
Table 3: Test accuracy (standard error in parentheses) on CIFAR-100 with 50% label noise and no augmentation.
| | No aug |
| SGD | 17.5% (2.41) |
| SAM | 34.63% (1.85) |
| SGD-TRACER | 47.55% (1.51) |
We next run SGD-TRACER on CIFAR-100 with and without label noise (random label flipping), with and without augmentation, and with a standard ridge penalty. The results in Table 2 show that SGD-TRACER performs consistently well, with a particularly strong advantage in the presence of noise and/or without additional regularization in the form of data augmentation.
Table 2: Test accuracy (standard error in parentheses) on CIFAR-100.
| | no aug | with aug | 50% noise & no aug |
| SGD | 51.43% (0.41) | 70.02% (0.36) | 21.96% (0.36) |
| SAM | 58.98% (0.52) | 70.33% (0.22) | 49.89% (0.32) |
| SGD-TRACER | 63.47% (0.32) | 70.71% (0.36) | 51.62% (0.18) |
For NLP tasks we use the Huggingface bert-base-uncased checkpoint together with Adam-TRACER. We fine-tune using a standard protocol of 5 epochs with a fixed initial learning rate. Each run is repeated 20 times. Performance is uniformly strong across the 3 benchmark tasks (taken from the challenging SuperGLUE benchmark), and Adam-TRACER has the additional property of producing more stable results across runs (as reflected in the standard errors). See the SM for details of the (standard) experiment hyperparameters.
Test accuracy (standard error in parentheses) on SuperGLUE tasks.
| | BOOLQ | WIC | RTE |
| Adam | 73.84% (0.14) | 69.36% (0.08) | 69.18% (0.33) |
| SAM | 73.95% (0.13) | 69.06% (0.07) | 69.54% (0.28) |
| Adam-TRACER | 75.09% (0.04) | 70.01% (0.06) | 70.13% (0.18) |
3 Conclusion
Motivated by the notable empirical success of SAM, a prior that flat (in expectation, and in an intrinsic, geometric sense) minima should generalize better than sharp minima, and noting the connections between the generalized Bayes objective and SAM, we have derived a new algorithm that is simple to implement and understand, cheap to evaluate, provably convergent, naturally scale-independent (and approximately coordinate-free) and which is competitive with SAM on key benchmark problems. Performance is particularly strong for challenging low signal-to-noise ratio and large batch problems. Crucially the algorithm is straightforwardly derived from an approximate natural gradient optimization of an ELBO-type objective and doesn’t rely on "m-sharpness" (Foret et al., 2020) or other poorly understood (and expensive to compute) heuristics.
Broader Impact Statement
We present a novel method with sound theoretical motivation which delivers competitive results on challenging benchmark and low signal-to-noise-ratio problems in vision and NLP.
Appendix A Appendix
A.1 Multivariate Gaussian Fisher Information Matrix, parameterization
For a probability distribution with density $p_\phi(x)$ with parameters $\phi$, the Fisher Information Matrix (FIM) can be written as the expected negative log-likelihood Hessian:
$F(\phi) = -\,\mathbb{E}_{p_\phi(x)}\big[\nabla^2_\phi \log p_\phi(x)\big]$    (59)
In particular, for a multivariate Gaussian with pdf $p(x) = \mathcal{N}(x; \mu, \Sigma)$, parameterized by $(\mu, \Sigma)$, the negative log-likelihood is, up to constant terms:
$-\log p(x) = \tfrac{1}{2}\log\det\Sigma + \tfrac{1}{2}(x-\mu)^\top\Sigma^{-1}(x-\mu) + \text{const}$    (60)
Taking gradients w.r.t. , we have: and therefore . Taking gradients w.r.t. the covariance, and using since and we have:
(61) |
Finally, writing as and we have:
(62) |
so that the FIM is given by:
$F(\mu,\Sigma) = \begin{pmatrix}\Sigma^{-1} & 0\\ 0 & \tfrac{1}{2}\,\Sigma^{-1}\otimes\Sigma^{-1}\end{pmatrix}$    (63)
A.2 Approximate expected Hessian
Lemma 1.
To second order, we can approximate the expected Hessian w.r.t. a multivariate Gaussian $\mathcal{N}(\mu, \Sigma)$ by its value at the mean:
$\mathbb{E}_{\theta \sim \mathcal{N}(\mu,\Sigma)}\big[\nabla^2_\theta\bar{L}(\theta)\big] \approx \nabla^2_\theta\bar{L}(\mu)$    (64)
Proof.
Following Khan & Rue (2021), by Price’s theorem, we have:
(65) |
which is equal to, expanding the r.h.s. to second order using a Taylor series:
(66) |
Finally, noting that , we have, to second order:
(67) |
∎
A.3 Convergence analysis
With the regularization strength set to zero, the iterates will converge to those of SGD. For positive regularization strength, the algorithm is biased away from a pure descent direction, and convergence then depends on the magnitude of the bias. The key assumption in the following convergence proof is a bound on this bias, which follows from the standard assumptions of twice-differentiability of the loss and Lipschitz continuity of its gradient, which imply that the Hessian has a bounded spectral norm:
(68) |
so that the bound depends on the Lipschitz constant and the corresponding ratio of algorithm constants.
Theorem 3.
Assume the objective (loss) is Lipschitz continuous, twice differentiable, and has a Lipschitz-continuous gradient. Let us assume, following Bottou et al. (2016) and Ajalloeian & Stich (2021), that we have a stochastic direction with the following properties:
(69) |
and further assuming that there exist , such that, ,
(70) |
and the following bound on the bias:
(71) |
then the iteration:
(72) |
converges to a neighbourhood of a stationary point with .
Proof.
By the Lipschitz continuity of the objective function we have the quadratic bound:
(73) |
By the quadratic upper bound, the iterates generated by the algorithm satisfy:
(74) |
Taking expectations and applying the variance bound we have:
(75) |
So that, choosing and applying the bound on we have:
(76) |
Taking the total expectation, for a fixed , we then have:
(77) |
Finally giving:
(78) |
∎
A.4 Objective function gradient
Lemma 2.
Proof.
Taking the negative gradient of the objective w.r.t. $\mu$, applying Bonnet's theorem (Khan & Rue, 2021), and using the fact that the expectation of the score is 0, we have:
(81) |
Taking the gradient w.r.t. $\Sigma$ and applying Price's theorem, we have:
(82) |
and since:
(83) |
We obtain
(84) |
(85) |
∎
A.5 Objective function natural gradient
Proposition 2.
(86) |
(87) |
Proof.
By Lemma 2, the gradients of the objective w.r.t. are given by:
(88) |
and
(89) |
The gradient then follows immediately from the definition of the natural gradient operator. Using the chain rule for matrix derivatives we can also show that:
(90) |
So that
(91) |
The gradients are thus given by:
(92) |
The Fisher Information Matrix is given by Equation (63):
(93) |
Therefore
(94) |
where we used the identities and . Since , we have the required updates. ∎
References
- Ajalloeian & Stich (2021) Ahmad Ajalloeian and Sebastian U. Stich. On the convergence of sgd with biased gradients, 2021.
- Amari (1998) S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
- Anderson (2003) T. W. Anderson. An introduction to multivariate statistical analysis. Wiley Series in Probability and Statistics, 2003.
- Bissiri et al. (2016) P. G. Bissiri, C. C. Holmes, and S. G. Walker. A general framework for updating belief distributions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(5):1103–1130, feb 2016. doi: 10.1111/rssb.12158. URL https://doi.org/10.1111%2Frssb.12158.
- Bottou et al. (2016) Léon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. 2016. doi: 10.48550/ARXIV.1606.04838. URL https://arxiv.org/abs/1606.04838.
- Dangel et al. (2020) Felix Dangel, Frederik Kunstner, and Philipp Hennig. Backpack: Packing more into backprop, 2020.
- Dinh et al. (2017) Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. CoRR, abs/1703.04933, 2017. URL http://arxiv.org/abs/1703.04933.
- Duchi et al. (2011) John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(61):2121–2159, 2011. URL http://jmlr.org/papers/v12/duchi11a.html.
- Dziugaite & Roy (2017) Gintare Karolina Dziugaite and Daniel M. Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. 2017. doi: 10.48550/ARXIV.1703.11008. URL https://arxiv.org/abs/1703.11008.
- Foret et al. (2020) Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. CoRR, abs/2010.01412, 2020. URL https://arxiv.org/abs/2010.01412.
- Hinton & van Camp (1993) Geoffrey E. Hinton and Drew van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, COLT ’93, pp. 5–13, New York, NY, USA, 1993. Association for Computing Machinery. ISBN 0897916115. doi: 10.1145/168304.168306. URL https://doi.org/10.1145/168304.168306.
- Hochreiter & Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Flat Minima. Neural Computation, 9(1):1–42, 01 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.1.1. URL https://doi.org/10.1162/neco.1997.9.1.1.
- Huang et al. (2019) W. Ronny Huang, Zeyad Emam, Micah Goldblum, Liam Fowl, Justin K. Terry, Furong Huang, and Tom Goldstein. Understanding generalization through visualizations. CoRR, abs/1906.03291, 2019. URL http://arxiv.org/abs/1906.03291.
- Jang et al. (2016) Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax, 2016. URL https://arxiv.org/abs/1611.01144.
- Khan & Rue (2021) Mohammad Emtiyaz Khan and Håvard Rue. The Bayesian learning rule, 2021. URL https://arxiv.org/abs/2107.04562.
- Khan et al. (2018) Mohammad Emtiyaz Khan, Didrik Nielsen, Voot Tangkaratt, Wu Lin, Yarin Gal, and Akash Srivastava. Fast and scalable Bayesian deep learning by weight-perturbation in adam. 2018. doi: 10.48550/ARXIV.1806.04854. URL https://arxiv.org/abs/1806.04854.
- Kim et al. (2022) Minyoung Kim, Da Li, Shell Xu Hu, and Timothy M. Hospedales. Fisher SAM: Information geometry and sharpness aware minimisation. 2022. doi: 10.48550/ARXIV.2206.04920. URL https://arxiv.org/abs/2206.04920.
- Kingma & Ba (2014) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014. URL https://arxiv.org/abs/1412.6980.
- Kunstner et al. (2019) Frederik Kunstner, Lukas Balles, and Philipp Hennig. Limitations of the empirical fisher approximation. CoRR, abs/1905.12558, 2019. URL http://arxiv.org/abs/1905.12558.
- Langford & Caruana (2001) John Langford and Rich Caruana. (not) bounding the true error. In T. Dietterich, S. Becker, and Z. Ghahramani (eds.), Advances in Neural Information Processing Systems, volume 14. MIT Press, 2001. URL https://proceedings.neurips.cc/paper/2001/file/98c7242894844ecd6ec94af67ac8247d-Paper.pdf.
- Martens (2020) James Martens. New insights and perspectives on the natural gradient method. Journal of Machine Learning Research, 21(146):1–76, 2020. URL http://jmlr.org/papers/v21/17-678.html.
- McAllester (1999) David A. McAllester. Some PAC-Bayesian theorems. 1999. URL https://doi.org/10.1023/A:1007618624809.
- Sagun et al. (2017) Levent Sagun, Utku Evci, V. Ugur Güney, Yann N. Dauphin, and Léon Bottou. Empirical analysis of the Hessian of over-parameterized neural networks. CoRR, abs/1706.04454, 2017. URL http://arxiv.org/abs/1706.04454.
- Wei & Schwab (2020) Ming-Wei Wei and David Schwab. Implicit regularization of SGD via thermophoresis. In Advances in Neural Information Processing Systems, Machine Learning for Physical Sciences Workshop, 2020.
- Wilson & Izmailov (2020) Andrew Gordon Wilson and Pavel Izmailov. Bayesian deep learning and a probabilistic perspective of generalization. CoRR, abs/2002.08791, 2020. URL https://arxiv.org/abs/2002.08791.
- Yao et al. (2020) Zhewei Yao, Amir Gholami, Sheng Shen, Kurt Keutzer, and Michael W. Mahoney. ADAHESSIAN: an adaptive second order optimizer for machine learning. CoRR, abs/2006.00719, 2020. URL https://arxiv.org/abs/2006.00719.
- Zhang et al. (2017) Guodong Zhang, Shengyang Sun, David Duvenaud, and Roger B. Grosse. Noisy natural gradient as variational inference. CoRR, abs/1712.02390, 2017. URL http://arxiv.org/abs/1712.02390.