
Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes

Sebastian W. Ober    Laurence Aitchison
Abstract

We consider the optimal approximate posterior over the top-layer weights in a Bayesian neural network for regression, and show that it exhibits strong dependencies on the lower-layer weights. We adapt this result to develop a correlated approximate posterior over the weights at all layers in a Bayesian neural network. We extend this approach to deep Gaussian processes, unifying inference in the two model classes. Our approximate posterior uses learned “global” inducing points, which are defined only at the input layer and propagated through the network to obtain inducing inputs at subsequent layers. By contrast, standard, “local”, inducing point methods from the deep Gaussian process literature optimise a separate set of inducing inputs at every layer, and thus do not model correlations across layers. Our method gives state-of-the-art performance for a variational Bayesian method, without data augmentation or tempering, on CIFAR-10 of 86.7%, which is comparable to SGMCMC without tempering but with data augmentation (88% in Wenzel et al., 2020). Reference implementation at: github.com/LaurenceA/bayesfunc


1 Introduction

Deep models, formed by stacking together many simple layers, give rise to extremely powerful machine learning algorithms, from deep neural networks (DNNs) to deep Gaussian processes (DGPs) (Damianou & Lawrence, 2013). One approach to reason about uncertainty in these models is to use variational inference (VI) (Jordan et al., 1999). VI in Bayesian neural networks (BNNs) requires the user to specify a family of approximate posteriors over the weights, with the classical approach being independent Gaussian distributions over each individual weight (Hinton & Van Camp, 1993; Graves, 2011; Blundell et al., 2015). Later work has considered more complex approximate posteriors, for instance using a Matrix-Normal distribution as the approximate posterior for a full weight-matrix (Louizos & Welling, 2016; Ritter et al., 2018) and hierarchical variational inference (Louizos & Welling, 2017; Dusenberry et al., 2020). By contrast, DGPs use an approximate posterior defined over functions – the standard approach is to specify the inputs and outputs at a finite number of “inducing” points (Damianou & Lawrence, 2013; Salimbeni & Deisenroth, 2017).

Critically, these classical BNN and DGP approaches define approximate posteriors over functions that are independent across layers. An approximate posterior that factorises across layers is problematic, because what matters for a deep model is the overall input-output transformation for the full model, not the input-output transformation for individual layers. This raises the question of what family of approximate posteriors should be used to capture correlations across layers. One approach for BNNs would be to introduce a flexible “hypernetwork”, used to generate the weights (Krueger et al., 2017; Pawlowski et al., 2017). However, this approach is likely to be suboptimal as it does not sufficiently exploit the rich structure in the underlying neural network. For guidance, we consider the optimal approximate posterior over the top-layer units in a deep network for regression. Remarkably, the optimal approximate posterior for the last-layer weights given the earlier weights can be obtained in closed form without choosing a restrictive family of distributions. In particular, the optimal approximate posterior is given by propagating the training inputs through lower layers to compute the top-layer representation, then using Bayesian linear regression to map from the top-layer representation to the outputs.

Inspired by this result, we use Bayesian linear regression to define a generic family of approximate posteriors for BNNs. In particular, we introduce learned “pseudo-outputs” at every layer, and compute the posterior over the weights by performing linear regression from the inputs (propagated from lower layers) onto the pseudo-outputs. We reduce the burden of working with many training inputs by summarising the posterior using a small number of “inducing” points. We find that these approximate posteriors give excellent performance in the non-tempered, no-data-augmentation regime, with performance on datasets such as CIFAR-10 reaching $86.7\%$, comparable to SGMCMC without tempering but with data augmentation (88%; Wenzel et al., 2020). Our approach can be extended to DGPs, and we explore connections to the inducing point GP literature, showing that inference in the two classes of models can be unified.

Concretely, our contributions are:

  • We propose an approximate posterior for BNNs based on Bayesian linear regression that naturally induces correlations between layers (Sec. 2.1).

  • We provide an efficient implementation of this posterior for convolutional layers (Sec. 2.2).

  • We introduce new BNN priors that allow for more flexibility with inferred hyperparameters (Sec. 2.3).

  • We show how our approximate posterior can be naturally extended to DGPs, resulting in a unified approach for inference in BNNs and DGPs (Sec. 2.4).

2 Methods

To motivate our approximate posterior, we first consider the optimal top-layer posterior for a Bayesian neural network in the regression case. We consider neural networks with lower-layer weights $\{\mathbf{W}_\ell\}_{\ell=1}^{L}$, $\mathbf{W}_\ell \in \mathbb{R}^{N_{\ell-1}\times N_\ell}$, and output weights, $\mathbf{W}_{L+1} \in \mathbb{R}^{N_L\times N_{L+1}}$, where the activity, $\mathbf{F}_\ell$, at layer $\ell$ is given by,

$\mathbf{F}_1 = \mathbf{X}\mathbf{W}_1,$
$\mathbf{F}_\ell = \phi\left(\mathbf{F}_{\ell-1}\right)\mathbf{W}_\ell \quad \text{for } \ell \in \{2,\dotsc,L+1\},$   (1)

where $\phi(\cdot)$ is an elementwise nonlinearity. The targets, $\mathbf{Y}$, depend on the neural-network outputs, $\mathbf{F}_{L+1} \in \mathbb{R}^{P\times N_{L+1}}$, according to a likelihood, $\operatorname{P}\left(\mathbf{Y}|\mathbf{F}_{L+1}\right)$. In the following derivations, we will focus on $\ell > 1$; corresponding expressions for the input layer can be obtained by replacing $\phi(\mathbf{F}_0)$ with the inputs, $\mathbf{X} \in \mathbb{R}^{P\times N_0}$. The prior over weights is independent across layers and output units (see Sec. 2.3 for the form of $\mathbf{S}_\ell$),

$\operatorname{P}\left(\mathbf{W}_\ell\right) = \prod_{\lambda=1}^{N_\ell} \operatorname{P}\left(\mathbf{w}_\lambda^\ell\right),$
$\operatorname{P}\left(\mathbf{w}_\lambda^\ell\right) = \mathcal{N}\left(\mathbf{w}_\lambda^\ell \,\middle|\, \mathbf{0}, \tfrac{1}{N_{\ell-1}}\mathbf{S}_\ell\right),$   (2)

where $\mathbf{w}_\lambda^\ell$ is a column of $\mathbf{W}_\ell$ representing all the input weights to unit $\lambda$ in layer $\ell$. To fit the parameters of the approximate posterior, $\operatorname{Q}\left(\{\mathbf{W}_\ell\}_{\ell=1}^{L+1}\right)$, we maximise the evidence lower bound (ELBO),

$\mathcal{L} = \mathbb{E}_{\operatorname{Q}\left(\{\mathbf{W}_\ell\}_{\ell=1}^{L+1}\right)}\big[\log\operatorname{P}\left(\mathbf{Y}|\mathbf{X},\{\mathbf{W}_\ell\}_{\ell=1}^{L+1}\right) + \log\operatorname{P}\left(\{\mathbf{W}_\ell\}_{\ell=1}^{L+1}\right) - \log\operatorname{Q}\left(\{\mathbf{W}_\ell\}_{\ell=1}^{L+1}\right)\big].$   (3)

To build intuition about how to parameterise $\operatorname{Q}\left(\{\mathbf{W}_\ell\}_{\ell=1}^{L+1}\right)$, we consider the optimal $\operatorname{Q}\left(\mathbf{W}_{L+1}|\{\mathbf{W}_\ell\}_{\ell=1}^{L}\right)$ for any given $\operatorname{Q}\left(\{\mathbf{W}_\ell\}_{\ell=1}^{L}\right)$, i.e. the optimal top-layer posterior conditioned on the lower layers. We begin by simplifying the ELBO by incorporating terms that do not depend on $\mathbf{W}_{L+1}$ into $c(\{\mathbf{W}_\ell\}_{\ell=1}^{L})$,

$\mathcal{L} = \mathbb{E}_{\operatorname{Q}\left(\{\mathbf{W}_\ell\}_{\ell=1}^{L+1}\right)}\big[\log\operatorname{P}\left(\mathbf{Y},\mathbf{W}_{L+1}|\,\mathbf{X},\{\mathbf{W}_\ell\}_{\ell=1}^{L}\right) - \log\operatorname{Q}\left(\mathbf{W}_{L+1}|\,\{\mathbf{W}_\ell\}_{\ell=1}^{L}\right) + c(\{\mathbf{W}_\ell\}_{\ell=1}^{L})\big].$   (4)

Rearranging these terms (App. A), we find that all $\mathbf{W}_{L+1}$ dependence can be written in terms of the KL divergence between the approximate posterior of interest and the true posterior,

$\mathcal{L} = \mathbb{E}_{\operatorname{Q}\left(\{\mathbf{W}_\ell\}_{\ell=1}^{L}\right)}\big[-\operatorname{D}_{\text{KL}}\big(\operatorname{Q}(\mathbf{W}_{L+1}|\{\mathbf{W}_\ell\}_{\ell=1}^{L})\,\big\|\,\operatorname{P}(\mathbf{W}_{L+1}|\mathbf{Y},\mathbf{X},\{\mathbf{W}_\ell\}_{\ell=1}^{L})\big) + c(\{\mathbf{W}_\ell\}_{\ell=1}^{L})\big].$   (5)

Thus, the optimal approximate posterior is the true last-layer posterior conditioned on the previous layers’ weights,

$\operatorname{Q}\left(\mathbf{W}_{L+1}|\{\mathbf{W}_\ell\}_{\ell=1}^{L}\right) = \operatorname{P}\left(\mathbf{W}_{L+1}|\,\mathbf{Y},\mathbf{X},\{\mathbf{W}_\ell\}_{\ell=1}^{L}\right)$
$\qquad\qquad \propto \operatorname{P}\left(\mathbf{Y}|\mathbf{W}_{L+1},\mathbf{F}_L\right)\operatorname{P}\left(\mathbf{W}_{L+1}\right),$   (6)

where the final proportionality follows from applying Bayes' theorem and exploiting the model's conditional independencies. For regression, the likelihood is Gaussian,

$\operatorname{P}\left(\mathbf{Y}|\mathbf{W}_{L+1},\mathbf{F}_L\right) = \prod_{\lambda=1}^{N_{L+1}} \mathcal{N}\left(\mathbf{y}_\lambda \,\middle|\, \phi\left(\mathbf{F}_L\right)\mathbf{w}_\lambda^{L+1}, \mathbf{\Lambda}_{L+1}^{-1}\right),$   (7)

where $\mathbf{y}_\lambda$ is the value of a single output channel for all training inputs, and $\mathbf{\Lambda}_{L+1}$ is a precision matrix. Thus, the posterior is given in closed form by Bayesian linear regression (Rasmussen & Williams, 2006):

$\operatorname{Q}\left(\mathbf{W}_{L+1}|\{\mathbf{W}_\ell\}_{\ell=1}^{L}\right) = \prod_{\lambda=1}^{N_{L+1}} \mathcal{N}\left(\mathbf{w}_\lambda^{L+1} \,\middle|\, \mathbf{\Sigma}\,\phi\left(\mathbf{F}_L\right)^T \mathbf{\Lambda}_{L+1}\mathbf{y}_\lambda, \mathbf{\Sigma}\right),$   (8)

where

$\mathbf{\Sigma} = \left(N_L \mathbf{S}_{L+1}^{-1} + \phi\left(\mathbf{F}_L\right)^T \mathbf{\Lambda}_{L+1}\phi\left(\mathbf{F}_L\right)\right)^{-1}.$   (9)

While this result may be neither particularly surprising nor novel, it neatly highlights our motivation for the rest of the paper. In particular, it shows that for regression we can always obtain the optimal conditional top-layer posterior in closed form, which to the best of our knowledge has not been used before in BNN inference. Moreover, performing top-layer Bayesian linear regression on the features propagated from the previous layers naturally introduces correlations between layers.
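To make this concrete, the following is a minimal PyTorch sketch of the optimal conditional top-layer posterior in Eqs. (8)–(9), assuming an isotropic noise precision and $\mathbf{S}_{L+1} = \mathbf{I}$; the function and variable names are ours, not those of the bayesfunc reference implementation.

```python
import torch
from torch.distributions import MultivariateNormal

def optimal_top_layer_posterior(phi_FL, Y, noise_prec=1.0):
    """Bayesian linear regression for the last layer (Eqs. 8-9), assuming
    Lambda_{L+1} = noise_prec * I and S_{L+1} = I.

    phi_FL: (P, N_L) top-layer features phi(F_L) for all training inputs.
    Y:      (P, N_{L+1}) training targets.
    Returns the posterior mean (N_L, N_{L+1}) and shared covariance (N_L, N_L).
    """
    P, N_L = phi_FL.shape
    # Sigma^{-1} = N_L S^{-1} + phi(F_L)^T Lambda phi(F_L)
    prec = N_L * torch.eye(N_L) + noise_prec * phi_FL.T @ phi_FL
    Sigma = torch.linalg.inv(prec)
    Sigma = 0.5 * (Sigma + Sigma.T)          # symmetrise for numerical safety
    # Posterior mean for every output column: Sigma phi(F_L)^T Lambda y_lambda
    mean = Sigma @ phi_FL.T @ (noise_prec * Y)
    return mean, Sigma

# Usage: sample last-layer weights, one independent Gaussian column per output unit.
phi_FL = torch.randn(100, 50)                # e.g. 100 inputs, 50 top-layer units
Y = torch.randn(100, 1)                      # single regression output
mean, Sigma = optimal_top_layer_posterior(phi_FL, Y)
W_last = MultivariateNormal(mean.T, Sigma).sample().T
```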

2.1 Defining the full approximate posterior with global inducing points and pseudo-outputs

We adapt the optimal top-layer approximate posterior above to give a scalable approximate posterior over the weights at all layers. To avoid propagating all training inputs through the network, which is intractable for large datasets, we instead propagate $M$ global inducing locations, $\mathbf{U}_0 \in \mathbb{R}^{M\times N_0}$,

$\mathbf{U}_1 = \mathbf{U}_0\mathbf{W}_1,$
$\mathbf{U}_\ell = \phi\left(\mathbf{U}_{\ell-1}\right)\mathbf{W}_\ell \quad \text{for } \ell = 2,\dotsc,L+1.$   (10)

Next, the optimal posterior requires outputs, $\mathbf{Y}$. However, no outputs are available at inducing locations for the output layer, let alone for intermediate layers. We thus introduce learned variational parameters to mimic the form of the optimal posterior. In particular, we use the product of the prior over weights and an “inducing-likelihood”, $\mathcal{N}\left(\mathbf{v}_\lambda^\ell; \mathbf{u}_\lambda^\ell, \mathbf{\Lambda}_\ell^{-1}\right)$, representing noisy “pseudo-outputs” for the outputs of the linear layer at the inducing locations, $\mathbf{u}_\lambda^\ell = \phi\left(\mathbf{U}_{\ell-1}\right)\mathbf{w}_\lambda^\ell$. Substituting $\mathbf{u}_\lambda^\ell$ into the inducing-likelihood, the approximate posterior becomes,

$\operatorname{Q}\left(\mathbf{W}_\ell \,\middle|\, \{\mathbf{W}_{\ell'}\}_{\ell'=1}^{\ell-1}\right) \propto \prod_{\lambda=1}^{N_\ell} \mathcal{N}\left(\mathbf{v}_\lambda^\ell; \phi\left(\mathbf{U}_{\ell-1}\right)\mathbf{w}_\lambda^\ell, \mathbf{\Lambda}_\ell^{-1}\right) \operatorname{P}\left(\mathbf{w}_\lambda^\ell\right),$
$\operatorname{Q}\left(\mathbf{W}_\ell \,\middle|\, \{\mathbf{W}_{\ell'}\}_{\ell'=1}^{\ell-1}\right) = \prod_{\lambda=1}^{N_\ell} \mathcal{N}\left(\mathbf{w}_\lambda^\ell \,\middle|\, \mathbf{\Sigma}_\ell^{\mathbf{w}} \phi\left(\mathbf{U}_{\ell-1}\right)^T \mathbf{\Lambda}_\ell \mathbf{v}_\lambda^\ell, \mathbf{\Sigma}_\ell^{\mathbf{w}}\right),$
$\mathbf{\Sigma}_\ell^{\mathbf{w}} = \left(N_{\ell-1}\mathbf{S}_\ell^{-1} + \phi\left(\mathbf{U}_{\ell-1}\right)^T \mathbf{\Lambda}_\ell \phi\left(\mathbf{U}_{\ell-1}\right)\right)^{-1},$   (11)

where $\mathbf{v}_\lambda^\ell$ and $\mathbf{\Lambda}_\ell$ are variational parameters. For clarity, we have $\mathbf{u}_\lambda^\ell, \mathbf{v}_\lambda^\ell \in \mathbb{R}^M$, so that $\mathbf{U}_{\ell-1} \in \mathbb{R}^{M\times N_{\ell-1}}$ and $\mathbf{V}_\ell \in \mathbb{R}^{M\times N_\ell}$ are formed by stacking these vectors, and $\mathbf{w}_\lambda^\ell \in \mathbb{R}^{N_{\ell-1}}$, with $\mathbf{S}_\ell, \mathbf{\Sigma}_\ell^{\mathbf{w}} \in \mathbb{R}^{N_{\ell-1}\times N_{\ell-1}}$ and $\mathbf{\Lambda}_\ell \in \mathbb{R}^{M\times M}$. Therefore, our full approximate posterior factorises as

$\operatorname{Q}\left(\{\mathbf{W}_\ell\}_{\ell=1}^{L+1}\right) = \prod_{\ell=1}^{L+1} \operatorname{Q}\left(\mathbf{W}_\ell \,\middle|\, \{\mathbf{W}_{\ell'}\}_{\ell'=1}^{\ell-1}\right).$   (12)

Substituting this approximate posterior and the factorised prior into the ELBO (Eq. 3), the full ELBO can be written,

$\mathcal{L} = \mathbb{E}_{\operatorname{Q}\left(\{\mathbf{W}_\ell\}_{\ell=1}^{L+1}\right)}\bigg[\log\operatorname{P}\left(\mathbf{Y}|\mathbf{X},\{\mathbf{W}_\ell\}_{\ell=1}^{L+1}\right) + \sum_{\ell=1}^{L+1}\log\frac{\operatorname{P}\left(\mathbf{W}_\ell\right)}{\operatorname{Q}\left(\mathbf{W}_\ell|\{\mathbf{W}_{\ell'}\}_{\ell'=1}^{\ell-1}\right)}\bigg],$   (13)

where $\operatorname{P}\left(\mathbf{W}_\ell\right)$ is given by Eq. (2) and $\operatorname{Q}\left(\mathbf{W}_\ell|\{\mathbf{W}_{\ell'}\}_{\ell'=1}^{\ell-1}\right)$ is given by Eq. (11). The forms of the ELBO and approximate posterior suggest a sequential procedure to evaluate and subsequently optimise it: we alternate between sampling the weights using Eq. (11) and propagating the data and inducing points (Eqs. 1 and 10; see Alg. 1). In summary, the parameters of the approximate posterior are the global inducing inputs, $\mathbf{U}_0$, and the pseudo-outputs and precisions at all layers, $\{\mathbf{V}_\ell, \mathbf{\Lambda}_\ell\}_{\ell=1}^{L+1}$. As each factor $\operatorname{Q}\left(\mathbf{W}_\ell|\{\mathbf{W}_{\ell'}\}_{\ell'=1}^{\ell-1}\right)$ is Gaussian, these parameters can be optimised using standard reparameterised variational inference (Kingma & Welling, 2013; Rezende et al., 2014) in combination with the Adam optimiser (Kingma & Ba, 2014) (Appendix B). Importantly, by placing the inducing inputs on the training data (i.e. $\mathbf{U}_0 = \mathbf{X}$) and setting $\mathbf{v}_\lambda^{L+1} = \mathbf{y}_\lambda$, this approximate posterior matches the optimal top-layer posterior (Eq. 6). Finally, we note that while this posterior is conditionally Gaussian, the full posterior over all $\{\mathbf{W}_\ell\}_{\ell=1}^{L+1}$ is non-Gaussian, and is thus potentially more flexible than a full-covariance Gaussian defined jointly over all weights at all layers.

Algorithm 1 Global inducing points for neural networks
  Parameters: $\mathbf{U}_0$, $\{\mathbf{V}_\ell, \mathbf{\Lambda}_\ell\}_{\ell=1}^{L+1}$.
  Neural network inputs: $\mathbf{F}_0$
  Neural network outputs: $\mathbf{F}_{L+1}$
  $\mathcal{L} \leftarrow 0$
  for $\ell$ in $\{1,\dotsc,L+1\}$ do
     Compute the mean and cov. for weights at this layer
     $\mathbf{\Sigma}_\ell^{\mathbf{w}} = \left(N_{\ell-1}\mathbf{S}_\ell^{-1} + \phi\left(\mathbf{U}_{\ell-1}\right)^T \mathbf{\Lambda}_\ell \phi\left(\mathbf{U}_{\ell-1}\right)\right)^{-1}$
     $\mathbf{M}_\ell = \mathbf{\Sigma}_\ell^{\mathbf{w}} \phi\left(\mathbf{U}_{\ell-1}\right)^T \mathbf{\Lambda}_\ell \mathbf{V}_\ell$
     Sample the weights and compute the ELBO
     $\mathbf{W}_\ell \sim \mathcal{N}\left(\mathbf{M}_\ell, \mathbf{\Sigma}_\ell^{\mathbf{w}}\right) = \operatorname{Q}\left(\mathbf{W}_\ell \,\middle|\, \{\mathbf{W}_{\ell'}\}_{\ell'=1}^{\ell-1}\right)$
     $\mathcal{L} \leftarrow \mathcal{L} + \log\operatorname{P}\left(\mathbf{W}_\ell\right) - \log\mathcal{N}\left(\mathbf{W}_\ell | \mathbf{M}_\ell, \mathbf{\Sigma}_\ell^{\mathbf{w}}\right)$
     Propagate the inputs and inducing points using the sampled weights
     $\mathbf{U}_\ell = \phi\left(\mathbf{U}_{\ell-1}\right)\mathbf{W}_\ell$
     $\mathbf{F}_\ell = \phi\left(\mathbf{F}_{\ell-1}\right)\mathbf{W}_\ell$
  end for
  $\mathcal{L} \leftarrow \mathcal{L} + \log\operatorname{P}\left(\mathbf{Y}|\mathbf{F}_{L+1}\right)$
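The following is a minimal PyTorch sketch of Algorithm 1 for a fully-connected network, with the prior fixed to NealPrior ($\mathbf{S}_\ell = \mathbf{I}$) and a ReLU applied to all but the input layer; the function and variable names are ours and this is not the bayesfunc reference implementation.

```python
import torch
import torch.nn.functional as F
from torch.distributions import MultivariateNormal

def global_inducing_elbo(X, Y_loglik, U0, Vs, Lambdas):
    """One ELBO sample for a fully-connected BNN with global inducing points.

    X:        (P, N_0) minibatch of inputs.
    Y_loglik: function F_out -> log P(Y | F_out), summed over the minibatch.
    U0:       (M, N_0) learned global inducing inputs.
    Vs:       list of (M, N_l) pseudo-outputs, one per layer l = 1..L+1.
    Lambdas:  list of (M, M) positive-definite precisions, one per layer.
    """
    U, Fx, elbo = U0, X, 0.0
    for l, (V, Lam) in enumerate(zip(Vs, Lambdas), start=1):
        phiU = U if l == 1 else F.relu(U)       # no nonlinearity at the input layer
        phiF = Fx if l == 1 else F.relu(Fx)
        N_in = phiU.shape[1]
        # Sigma^w = (N_{l-1} S^{-1} + phi(U)^T Lambda phi(U))^{-1}, with S = I
        prec = N_in * torch.eye(N_in) + phiU.T @ Lam @ phiU
        Sigma = torch.linalg.inv(prec)
        Sigma = 0.5 * (Sigma + Sigma.T)
        M_l = Sigma @ phiU.T @ Lam @ V          # (N_in, N_out) posterior mean
        q = MultivariateNormal(M_l.T, Sigma)    # one Gaussian per output column
        W = q.rsample().T                       # reparameterised sample of W_l
        # Prior: each column ~ N(0, S / N_{l-1}) with S = I (NealPrior)
        p = MultivariateNormal(torch.zeros(N_in), torch.eye(N_in) / N_in)
        elbo = elbo + p.log_prob(W.T).sum() - q.log_prob(W.T).sum()
        U, Fx = phiU @ W, phiF @ W              # propagate inducing points and data
    return elbo + Y_loglik(Fx)
```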

2.2 Efficient convolutional Bayesian linear regression

The previous sections were valid for a fully connected network. The extension to convolutional networks is straightforward in principle: we transform the convolution into a matrix multiplication by treating each patch as a separate input feature-vector, flattening the spatial and channel dimensions together into a single vector. Thus, the feature-vectors have length in_channels $\times$ kernel_width $\times$ kernel_height, and the matrix $\mathbf{U}_\ell$ contains patches_per_image $\times$ minibatch patches. Likewise, we now have inducing outputs, $\mathbf{v}_\lambda^\ell$, at each location in all the inducing images, so this again has length patches_per_image $\times$ minibatch. After explicitly extracting the patches, we can straightforwardly apply standard Bayesian linear regression.

However, explicitly extracting image patches is very memory intensive in a DNN. If we consider a standard convolution with a $3\times 3$ kernel, then there is a $3\times 3$ patch centred at each pixel in the input image, meaning a factor of $9$ increase in memory consumption. Instead, we note that computing the matrices required for linear regression, $\phi\left(\mathbf{U}_{\ell-1}\right)^T \mathbf{\Lambda}_\ell \phi\left(\mathbf{U}_{\ell-1}\right)$ and $\phi\left(\mathbf{U}_{\ell-1}\right)^T \mathbf{\Lambda}_\ell \mathbf{V}_\ell$, does not require explicit extraction of image patches: these matrices can be computed by taking the autocorrelation of the image/feature map, i.e. a convolution operation in which we treat the image/feature map as both the inputs and the weights (see Appendix C for details).
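As a (memory-hungry) reference point, the patch-extraction view can be written directly with torch.nn.functional.unfold, as in the sketch below; the efficient autocorrelation trick of Appendix C replaces the explicit unfold with a convolution, which we do not reproduce here. The names are ours, and we assume a diagonal $\mathbf{\Lambda}_\ell$ to keep the sketch simple.

```python
import torch
import torch.nn.functional as F

def patch_regression_matrices(phi_U, Lam_diag, V, kernel_size=3, padding=1):
    """Compute phi(U)^T Lambda phi(U) and phi(U)^T Lambda V by explicitly
    extracting patches (the straightforward but memory-intensive route).

    phi_U:    (M, C_in, H, W) inducing feature maps after the nonlinearity.
    Lam_diag: (M * H * W,) diagonal of the precision over all patch locations.
    V:        (M, C_out, H, W) pseudo-outputs, one per patch location.
    """
    M, C_in, H, W = phi_U.shape
    # (M, C_in * k * k, H * W) -> (M * H * W, C_in * k * k): one row per patch
    patches = F.unfold(phi_U, kernel_size, padding=padding)
    patches = patches.permute(0, 2, 1).reshape(-1, C_in * kernel_size**2)
    V_flat = V.permute(0, 2, 3, 1).reshape(-1, V.shape[1])    # (M * H * W, C_out)
    weighted = patches * Lam_diag[:, None]                    # apply Lambda row-wise
    UTLU = patches.T @ weighted                               # (C_in k^2, C_in k^2)
    UTLV = weighted.T @ V_flat                                # (C_in k^2, C_out)
    return UTLU, UTLV
```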

2.3 Priors

We consider four priors in this work, which we refer to using the class names in the bayesfunc library published alongside this paper. We are careful to ensure that all parameters in the model have a prior and approximate posterior, which is necessary to ensure that ELBOs are comparable across models.

First, we consider a Gaussian prior with fixed scale, NealPrior, so named because it is necessary to obtain meaningful results when considering infinite networks (Neal, 1996),

$\mathbf{S}_\ell = \mathbf{I},$   (14)

though it bears strong similarities to the “He” initialisation (He et al., 2015). NealPrior is defined so as to ensure that the activations retain a sensible scale as they propagate through the network. We compare this to the standard $\mathcal{N}(0,1)$ prior (StandardPrior), which causes the activations to increase exponentially as they propagate through network layers (see Eq. 2):

$\mathbf{S}_\ell = N_{\ell-1}\mathbf{I}.$   (15)

Next, we consider ScalePrior, which defines a prior and approximate posterior over the scale,

$\mathbf{S}_\ell = \tfrac{1}{s_\ell}\mathbf{I},$   (16)
$\operatorname{P}\left(s_\ell\right) = \text{Gamma}\left(s_\ell; 2, 2\right),$   (17)
$\operatorname{Q}\left(s_\ell\right) = \text{Gamma}\left(s_\ell; 2+\alpha_\ell, 2+\beta_\ell\right),$   (18)

where we parameterise the Gamma distribution by its shape and rate, and $\alpha_\ell$ and $\beta_\ell$ are non-negative learned parameters of the approximate posterior over $s_\ell$. Finally, we consider SpatialIWPrior, which allows for spatial correlations in the weights (e.g. see Fortuin et al., 2021, for a more restrictive spatial prior over weights). In particular, we take the covariance to be the Kronecker product of an identity matrix over the channel dimensions and a Wishart-distributed matrix, $\mathbf{L}_\ell^{-1}$, over the spatial dimensions,

$\mathbf{S}_\ell = \mathbf{I} \otimes \mathbf{L}_\ell^{-1},$
$\operatorname{P}\left(\mathbf{L}_\ell\right) = \mathcal{W}^{-1}\left(\mathbf{L}_\ell; \left(N_{\ell-1}+1\right)\mathbf{I}, N_{\ell-1}+1\right),$
$\operatorname{Q}\left(\mathbf{L}_\ell\right) = \mathcal{W}^{-1}\left(\mathbf{L}_\ell; \left(N_{\ell-1}+1\right)\mathbf{I} + \mathbf{\Psi}, N_{\ell-1}+1+\nu\right),$   (19)

where $\mathcal{W}^{-1}$ is the inverse-Wishart distribution, and the non-negative real number, $\nu$, and the positive definite matrix, $\mathbf{\Psi}$, are learned parameters of the approximate posterior (see Appendix D).
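To illustrate how a hyperparameter prior such as ScalePrior enters the ELBO, the sketch below (our own code and naming, not bayesfunc's) draws a reparameterised sample of the per-layer scale $s_\ell$ and returns its KL contribution; the inverse-Wishart term for SpatialIWPrior is handled analogously (Appendix D).

```python
import torch
import torch.nn.functional as F
from torch.distributions import Gamma, kl_divergence

class ScalePriorSketch(torch.nn.Module):
    """Per-layer prior/approximate posterior over the weight-scale s_l (Eqs. 16-18)."""
    def __init__(self):
        super().__init__()
        # Unconstrained parameters; softplus keeps alpha_l, beta_l non-negative.
        self.raw_alpha = torch.nn.Parameter(torch.zeros(()))
        self.raw_beta = torch.nn.Parameter(torch.zeros(()))

    def forward(self):
        q = Gamma(2.0 + F.softplus(self.raw_alpha), 2.0 + F.softplus(self.raw_beta))
        p = Gamma(2.0, 2.0)
        s = q.rsample()                         # reparameterised sample of s_l
        # S_l = I / s_l, so the weight-prior precision scale is s_l itself.
        return s, -kl_divergence(q, p)          # contribution to the ELBO
```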

Figure 1: Predictive distributions on the toy dataset. Shaded regions represent one standard deviation.

2.4 Extension to DGPs

It is a remarkable but underappreciated fact that BNNs are special cases of DGPs, with a particular choice of kernel (Louizos & Welling, 2016; Aitchison, 2019). Combining Eqs. (1) and (2),

$\operatorname{P}\left(\mathbf{F}_\ell \,\middle|\, \mathbf{F}_{\ell-1}\right) = \prod_{\lambda=1}^{N_\ell} \mathcal{N}\left(\mathbf{f}_\lambda^\ell \,\middle|\, \mathbf{0}, \mathbf{K}\left(\mathbf{F}_{\ell-1}\right)\right),$
$\mathbf{K}\left(\mathbf{F}_{\ell-1}\right) = \tfrac{1}{N_{\ell-1}} \phi\left(\mathbf{F}_{\ell-1}\right) \mathbf{S}_\ell \phi\left(\mathbf{F}_{\ell-1}\right)^T.$   (20)
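To spell out the equivalence, a small sketch of the BNN-equivalent kernel of Eq. (20), assuming a ReLU nonlinearity and $\mathbf{S}_\ell = \mathbf{I}$ (the code and names are our own); integrating out $\mathbf{W}_\ell$ in Eqs. (1)–(2) gives exactly this Gram matrix as the covariance of each output unit.

```python
import torch

def bnn_kernel(F_prev, S=None):
    """BNN-equivalent GP kernel K(F_{l-1}) = phi(F_{l-1}) S phi(F_{l-1})^T / N_{l-1}
    (Eq. 20), here with a ReLU nonlinearity and S = I unless given."""
    phi = torch.relu(F_prev)                     # (P, N_{l-1})
    N_prev = phi.shape[1]
    S = torch.eye(N_prev) if S is None else S
    return phi @ S @ phi.T / N_prev              # (P, P) Gram matrix

# Each column f_lambda^l of F_l is then a priori N(0, K(F_{l-1})):
F_prev = torch.randn(10, 32)
K = bnn_kernel(F_prev)
```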

Here, we generalise our approximate posterior to the DGP case and link to the DGP literature. In a DGP there are no weights; instead we work directly with the inducing outputs, $\{\mathbf{U}_\ell\}_{\ell=1}^{L+1}$,

$\operatorname{P}\left(\mathbf{U}_\ell \,\middle|\, \mathbf{U}_{\ell-1}\right) = \prod_{\lambda=1}^{N_\ell} \mathcal{N}\left(\mathbf{u}_\lambda^\ell \,\middle|\, \mathbf{0}, \mathbf{K}\left(\mathbf{U}_{\ell-1}\right)\right).$   (21)

Note that here we take the “global” inducing approach of using the inducing outputs from the previous layer, $\mathbf{U}_{\ell-1}$, as the inducing inputs for the next layer. In this case, we need only learn the original inducing inputs, $\mathbf{U}_0$. This contrasts with the standard “local” inducing formulation (as in Salimbeni & Deisenroth, 2017), which learns separate inducing inputs at every layer, $\mathbf{Z}_{\ell-1}$, giving $\operatorname{P}\left(\mathbf{U}_\ell \,\middle|\, \mathbf{Z}_{\ell-1}\right) = \prod_{\lambda=1}^{N_\ell} \mathcal{N}\left(\mathbf{u}_\lambda^\ell \,\middle|\, \mathbf{0}, \mathbf{K}\left(\mathbf{Z}_{\ell-1}\right)\right)$.

As usual in DGPs (Salimbeni & Deisenroth, 2017), the approximate posterior over $\mathbf{U}_\ell$ induces an approximate posterior on $\mathbf{F}_\ell$ through the prior correlations. However, it is important to remember that underneath the tractable distributions in Eqs. (20) and (21), there is an infinite-dimensional GP-distributed function, $\mathcal{F}_\ell$, such that $\mathbf{F}_\ell = \mathcal{F}_\ell(\mathbf{F}_{\ell-1})$. Standard local inducing point methods specify a factorised approximate posterior over $\mathcal{F}_\ell$ by specifying the function's inducing outputs, $\mathbf{U}_\ell = \mathcal{F}_\ell\left(\mathbf{Z}_{\ell-1}\right)$, at a finite number of inducing input locations, $\mathbf{Z}_{\ell-1}$. Importantly, the approximate posterior over a function, $\mathcal{F}_\ell$, depends only on $\mathbf{Z}_{\ell-1}$ and $\mathbf{U}_\ell$. Thus, standard, local inducing DGP approaches (e.g. Salimbeni & Deisenroth, 2017) give a layerwise-independent approximate posterior over $\{\mathcal{F}_\ell\}_{\ell=1}^{L+1}$, as they treat the inducing inputs, $\{\mathbf{Z}_{\ell-1}\}_{\ell=1}^{L+1}$, as fixed, learned parameters and use a layerwise-independent approximate posterior over $\{\mathbf{U}_\ell\}_{\ell=1}^{L+1}$ (Appendix G).

Next, we need to choose the approximate posterior on $\{\mathbf{U}_\ell\}_{\ell=1}^{L+1}$. However, if our goal is to introduce dependence across layers, it seems inappropriate to use the standard layerwise-independent approximate posterior over $\{\mathbf{U}_\ell\}_{\ell=1}^{L+1}$. Indeed, in Appendix G, we show that such a posterior implies functions in non-adjacent layers (e.g. $\mathcal{F}_\ell$ and $\mathcal{F}_{\ell+2}$) are marginally independent, even with global inducing points.

To obtain more appropriate approximate posteriors, we consider the optimal top-layer posterior for DGPs, which involves GP regression from activations propagated from lower layers onto the output data (Appendix F). Inspired by the form of the optimal posterior, we again define an approximate posterior by taking the product of the prior and an “inducing-likelihood”,

$\operatorname{Q}\left(\mathbf{U}_\ell \,\middle|\, \mathbf{U}_{\ell-1}\right) \propto \prod_{\lambda=1}^{N_\ell} \operatorname{P}\left(\mathbf{u}_\lambda^\ell \,\middle|\, \mathbf{U}_{\ell-1}\right) \mathcal{N}\left(\mathbf{v}_\lambda^\ell \,\middle|\, \mathbf{u}_\lambda^\ell, \mathbf{\Lambda}_\ell^{-1}\right),$
$\operatorname{Q}\left(\mathbf{U}_\ell \,\middle|\, \mathbf{U}_{\ell-1}\right) = \prod_{\lambda=1}^{N_\ell} \mathcal{N}\left(\mathbf{u}_\lambda^\ell \,\middle|\, \mathbf{\Sigma}_\ell^{\mathbf{u}} \mathbf{\Lambda}_\ell \mathbf{v}_\lambda^\ell, \mathbf{\Sigma}_\ell^{\mathbf{u}}\right),$
$\mathbf{\Sigma}_\ell^{\mathbf{u}} = \left(\mathbf{K}^{-1}\left(\mathbf{U}_{\ell-1}\right) + \mathbf{\Lambda}_\ell\right)^{-1},$   (22)

where $\mathbf{v}_\lambda^\ell$ and $\mathbf{\Lambda}_\ell^{-1}$ are learned parameters, and in our global inducing method, the inducing inputs, $\mathbf{U}_{\ell-1}$, are propagated from lower layers (Eq. 21). Importantly, setting the inducing inputs to the training data and $\mathbf{v}_\lambda^{L+1} = \mathbf{y}_\lambda$, the approximate posterior captures the optimal top-layer posterior for regression (Appendix F). Under this approximate posterior, dependencies in $\mathbf{U}_\ell$ naturally arise across all layers, and hence there are dependencies between functions $\mathcal{F}_\ell$ at all layers (Appendix G).
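A minimal sketch of the per-layer computation in Eq. (22), using our own names and a small jitter for numerical stability: given the kernel at the propagated inducing inputs, it returns the mean and covariance of the Gaussian over each column of $\mathbf{U}_\ell$.

```python
import torch

def dgp_global_inducing_posterior(K_Uprev, Lam, V, jitter=1e-6):
    """Approximate posterior over inducing outputs U_l (Eq. 22).

    K_Uprev: (M, M) prior kernel K(U_{l-1}) at the propagated inducing inputs.
    Lam:     (M, M) learned precision Lambda_l.
    V:       (M, N_l) learned pseudo-outputs.
    Returns the (M, N_l) mean and the shared (M, M) covariance Sigma^u_l.
    """
    M = K_Uprev.shape[0]
    K = K_Uprev + jitter * torch.eye(M)
    # Sigma^u = (K^{-1} + Lambda)^{-1}; kept in the direct form for readability.
    Sigma_u = torch.linalg.inv(torch.linalg.inv(K) + Lam)
    Sigma_u = 0.5 * (Sigma_u + Sigma_u.T)
    mean = Sigma_u @ Lam @ V
    return mean, Sigma_u
```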

In summary, we propose an approximate posterior over inducing outputs that takes the form

$\operatorname{Q}\left(\{\mathbf{U}_\ell\}_{\ell=1}^{L+1}\right) = \prod_{\ell=1}^{L+1} \operatorname{Q}\left(\mathbf{U}_\ell \,\middle|\, \mathbf{U}_{\ell-1}\right).$   (23)

As before, the parameters of this approximate posterior are the global inducing inputs, $\mathbf{U}_0$, and the pseudo-outputs and precisions at all layers, $\{\mathbf{V}_\ell, \mathbf{\Lambda}_\ell\}_{\ell=1}^{L+1}$. The full ELBO (see App. E for further details) takes the form

$\mathcal{L} = \mathbb{E}_{\operatorname{Q}\left(\{\mathbf{F}_\ell, \mathbf{U}_\ell\}_{\ell=1}^{L+1} \,\middle|\, \mathbf{X}, \mathbf{U}_0\right)}\bigg[\log\operatorname{P}\left(\mathbf{Y}\,\middle|\,\mathbf{F}_{L+1}\right) + \sum_{\ell=1}^{L+1}\log\frac{\operatorname{P}\left(\mathbf{U}_\ell \,\middle|\, \mathbf{U}_{\ell-1}\right)}{\operatorname{Q}\left(\mathbf{U}_\ell \,\middle|\, \mathbf{U}_{\ell-1}\right)}\bigg],$   (24)

where $\operatorname{P}\left(\mathbf{U}_\ell \,\middle|\, \mathbf{U}_{\ell-1}\right)$ is given by Eq. (21) and $\operatorname{Q}\left(\mathbf{U}_\ell \,\middle|\, \mathbf{U}_{\ell-1}\right)$ is given by Eq. (22).

We provide a full description of our method as applied to DGPs in App. E.

2.5 Asymptotic complexity

Figure 2: ELBO for different approximate posteriors as we change network depth/width on a dataset generated using a linear Gaussian model. The rand \rightarrow gi line lies behind the global inducing line at width $50$ and width $250$.

In the deep GP case, the complexity for global inducing is exactly that of standard inducing point Gaussian processes, i.e. $\mathcal{O}(M^3 + PM^2)$, where $M$ is the number of inducing points, and $P$ can be taken to be the number of training inputs, or the size of a minibatch, as appropriate. The first term, $M^3$, comes from computing and sampling the posterior over $\mathbf{U}_\ell$ based on the inducing points (e.g. inverting the covariance). The second term, $PM^2$, comes from computing the implied distribution over $\mathbf{F}_\ell$.

In the fully-connected BNN case, we have three terms, $\mathcal{O}(N^3 + MN^2 + PN^2)$. The first term, $N^3$, where $N$ corresponds to the width of the network, arises from taking the inverse of the covariance matrix in Eq. (11), but is also the complexity of, e.g., propagating the inducing points from one layer to the next (Eq. 10). The second term, $MN^2$, comes from computing that covariance in Eq. (11), by taking the product of the input features with themselves. The third term, $PN^2$, comes from multiplying the training inputs/minibatch by the sampled weights (Eq. 1).

3 Results

We describe our experiments and results to assess the performance of global inducing points (‘gi’) against local inducing points (‘li’) and the fully factorised (‘fac’) approximation family. We additionally consider models where we use one method up to the last layer and another for the last layer, which may have computational advantages; we denote such models ‘method1 \rightarrow method2’.

3.1 Uncertainty in 1D regression

We demonstrate the use of local and global inducing point methods on a toy 1D regression problem, comparing them with fully factorised VI and Hamiltonian Monte Carlo (HMC; Neal et al., 2011). Following Hernández-Lobato & Adams (2015), we generate 40 input-output pairs $(x, y)$ with the inputs $x$ sampled i.i.d. from $\mathcal{U}([-4,-2]\cup[2,4])$ and the outputs generated by $y = x^3 + \epsilon$, where $\epsilon \sim \mathcal{N}(0, 3^2)$. We then normalise the inputs and outputs. Note that we have introduced a ‘gap’ in the inputs, following recent work (Foong et al., 2019b; Yao et al., 2019; Foong et al., 2019a) that identifies the ability to express ‘in-between’ uncertainty as an important quality of approximate inference algorithms. We evaluated the inference algorithms using fully-connected BNNs with 2 hidden layers of 50 ReLU hidden units, using NealPrior. For the inducing point methods, we used 100 inducing points per layer.
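For reference, a sketch of the data-generating process described above (our own code; the seed and the exact normalisation used in our experiments are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

# 40 inputs drawn i.i.d. from U([-4, -2] U [2, 4]), leaving a gap in the middle.
sign = rng.choice([-1.0, 1.0], size=40)
x = sign * rng.uniform(2.0, 4.0, size=40)
y = x**3 + rng.normal(0.0, 3.0, size=40)   # y = x^3 + eps, eps ~ N(0, 3^2)

# Normalise inputs and outputs to zero mean and unit standard deviation.
x_norm = (x - x.mean()) / x.std()
y_norm = (y - y.mean()) / y.std()
```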

The predictive distributions for the toy experiment can be seen in Fig. 1. We observe that of the variational methods, the global inducing method produces predictive distributions closest to HMC, with good uncertainty in the gap. Meanwhile, factorised and local inducing fit the training data, but do not produce reasonable error bars, demonstrating an important limitation of methods lacking correlation structure between layers.

We provide additional experiments looking at the effect of the number of inducing points in Appendix I, and experiments looking at compositional uncertainty (Ustyuzhaninov et al., 2020) in both BNNs and DGPs for 1D regression in Appendix J.

3.2 Depth-dependence in deep linear networks

Figure 3: Average test log likelihoods for BNNs on the UCI datasets (in nats). Error bars represent one standard error. Shading represents different priors. We connect the factorised models with the fac \rightarrow gi models with a thin grey line as an aid for easier comparison. Further to the right is better.

The lack of correlations between layers might be expected to become more problematic in deeper networks. To isolate the effect of depth on different approximate posteriors, we considered deep linear networks trained on data generated from a toy linear model: 5 input features are mapped to 1 output feature, where the 1000 training and 100 test inputs are drawn i.i.d. from a standard Gaussian, and the true outputs are generated using a weight-vector drawn i.i.d. from a Gaussian with variance $1/5$, with noise variance $0.1$. We could evaluate the model evidence under the true data-generating process, which forms an upper bound (in expectation) on the model evidence and ELBO for all models.
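Since the generative model is linear-Gaussian, its exact log evidence is available in closed form, which is what makes this upper bound computable; a small sketch under the stated parameters (our own code):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
D, P, noise_var, prior_var = 5, 1000, 0.1, 1.0 / 5.0

X = rng.standard_normal((P, D))                  # training inputs
w = rng.normal(0.0, np.sqrt(prior_var), size=D)  # true weight-vector
y = X @ w + rng.normal(0.0, np.sqrt(noise_var), size=P)

# Marginalising w gives y ~ N(0, prior_var * X X^T + noise_var * I),
# so the exact log evidence is a multivariate-normal log density.
cov = prior_var * X @ X.T + noise_var * np.eye(P)
log_evidence = multivariate_normal(mean=np.zeros(P), cov=cov).logpdf(y)
```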

We found that the ELBO for methods that factorise across layers — factorised and local inducing — drops rapidly as networks get deeper and wider (Fig. 2). This is undesirable behaviour, as we know that wide, deep networks are necessary for good performance on difficult machine learning tasks. In contrast, we found that methods with global inducing points at the last layer decay much more slowly with depth, and perform better as networks get wider. Remarkably, global inducing points gave good performance even with lower-layer weights drawn at random from the prior, which is not possible for any method that factorises across layers. We believe that fac \rightarrow gi performed poorly at width $250$ due to optimisation issues, as rand \rightarrow gi performs better despite being a special case of fac \rightarrow gi.

3.3 Regression benchmark: UCI

We benchmark our methods on the UCI datasets from Hernández-Lobato & Adams (2015), popular benchmark regression datasets for BNNs and DGPs. Following the standard approach (Gal & Ghahramani, 2015), each dataset uses 20 train-test ‘splits’ (except for protein, with 5 splits), and the inputs and outputs are normalised to have zero mean and unit standard deviation. We focus on the five smallest datasets, as we expect Bayesian methods to be most relevant in small-data settings (see App. K and M for all datasets). We consider two-layer fully-connected ReLU networks, using the fully factorised and global inducing approximating families, as well as two- and five-layer DGPs with doubly-stochastic variational inference (DSVI) (Salimbeni & Deisenroth, 2017) and global inducing. For the BNNs, we consider the standard $\mathcal{N}(0,1)$ prior and ScalePrior.

We display ELBOs and average test log likelihoods for the un-normalised data in Fig. 3, where the dots and error bars represent the means and standard errors over the test splits, respectively. We observe that global inducing obtains better ELBOs than factorised and DSVI in almost every case, indicating that it does indeed approximate the true posterior better (since the ELBO is the marginal likelihood minus the KL to the posterior). While this is the case for the ELBOs, it does not always translate into a better test log likelihood due to model misspecification, as we see that occasionally DSVI outperforms global inducing by a very small margin. The very poor results for factorised with ScalePrior indicate that it has difficulty learning useful prior hyperparameters for prediction (see also Blundell et al., 2015), which is due to the looseness of its bound on the marginal likelihood. We provide experimental details, as well as further results with additional architectures, priors, datasets, and RMSEs, in Appendices K and M for BNNs and DGPs, respectively.

3.4 Convolutional benchmark: CIFAR-10

Table 1: CIFAR-10 classification accuracy. The first block shows our main results, without data augmentation or tempering, using SpatialIWPrior (with ScalePrior in brackets). The next block shows our results with data augmentation and tempering on a larger ResNet-18 with SpatialIWPrior. The next block shows comparable past results from GPs and BNNs. The final block shows non-comparable (sampling-based) methods. Dashes indicate that the figures were either not reported or are not applicable. The time is reported per epoch, with ScalePrior and for MNIST rather than CIFAR-10, because of a known performance bug in the convolutions required in Sec. 2.2 with $32\times 32$ (and above) images: https://github.com/pytorch/pytorch/issues/35603.
| | | test log like. | accuracy (%) | ELBO | time |
|---|---|---|---|---|---|
| no tempering or data augmentation | factorised | -0.58 (-0.66) | 80.27 (77.65) | -1.06 (-1.12) | 19 s |
| | local inducing | -0.62 (-0.60) | 78.96 (79.46) | -0.84 (-0.88) | 33 s |
| | fac \rightarrow gi | -0.49 (-0.56) | 83.33 (81.72) | -0.91 (-0.96) | 25 s |
| | global inducing | -0.40 (-0.43) | 86.70 (85.73) | -0.68 (-0.75) | 65 s |
| with tempering and data augmentation | factorised | -0.39 | 87.52 | — | — |
| | fac \rightarrow gi | -0.24 | 92.41 | — | — |
| VI prior work | Shi et al. (2019) | — | 80.30 | — | — |
| | Li et al. (2019) | — | 81.40 | — | — |
| | Shridhar et al. (2019) | — | 73 | — | — |
| sampling prior work | Wenzel et al. (2020) | -0.35 | 88.50 | — | — |

For CIFAR-10, we considered a ResNet-inspired model consisting of conv2d-relu-block-avgpool2-block-avgpool2-block-avgpool-linear, where the ResNet blocks consist of a shortcut connection in parallel with conv2d-relu-conv2d-relu, using 32 channels in all layers. In these experiments, we used no data augmentation and 500 inducing points. Our training scheme (see App. O) ensured that our results do not reflect a ‘cold posterior’ (Wenzel et al., 2020). Our results are shown in Table 1. We achieved a remarkable predictive accuracy of $86.7\%$, with global inducing points used for all layers, and with a spatial inverse-Wishart prior on the weights. These results compare very favourably with comparable Bayesian approaches, i.e. those without data augmentation or posterior sharpening: past work with deep GPs obtained $80.3\%$ (Shi et al., 2019), and work using infinite-width neural networks to define a GP obtained $81.4\%$ accuracy (Li et al., 2019). Remarkably, with only 500 inducing points we approach the accuracy of sampling-based methods (Wenzel et al., 2020), which are in principle able to more closely approximate the true posterior. Furthermore, we see that global inducing performs best in terms of ELBO (per datapoint) by a wide margin, demonstrating that it gets far closer to the true posterior than the other methods. We provide additional results on uncertainty calibration and out-of-distribution detection in Appendix L.

Finally, while we have focused this work on achieving good results with a fully principled Bayesian approach, we briefly consider training a full ResNet-18 (He et al., 2016) using more popular techniques such as data augmentation and tempering. While data augmentation and tempering are typically viewed as clouding the Bayesian perspective, there is work attempting to formalise both within the context of modified probabilistic generative models (Aitchison, 2020; Nabarro et al., 2021). Using a cold posterior with a factor of 20 reduction on the KL term, together with horizontal flipping and random cropping, we obtained a test accuracy of $87.52\%$ and a test log likelihood of $-0.39$ nats with a standard fully factorised Gaussian posterior using SpatialIWPrior. Using fac \rightarrow gi, we obtained a significant improvement to $92.41\%$ test accuracy and a test log likelihood of $-0.24$ nats. We attempted to train a full global inducing model; however, we encountered difficulties in scaling the method to the large widths of ResNet-18 layers. We believe this presents an avenue for fruitful future work in scaling global inducing point posteriors.

4 Related work

Louizos & Welling (2016) attempted to use pseudo-data along with matrix variate Gaussians to form an approximate posterior for BNNs; however, they restricted their analysis to BNNs, and it is not clear how their method can be applied to DGPs. Their approach factorises across layers, thus missing the important layerwise correlations that we obtain. Moreover, they encountered an important limitation: the BNN prior implies that $\mathbf{U}_\ell$ is low-rank and it is difficult to design an approximate posterior capturing this constraint. As such, they were forced to use $M < N_\ell$ inducing points, which is particularly problematic in the convolutional, global-inducing case where there are many patches (points) in each inducing image input.

Note that some work on BNNs reports better performance on datasets such as CIFAR-10. However, to the best of our knowledge, no variational Bayesian method outperforms ours without modifying the BNN model or using some form of posterior tempering (Wenzel et al., 2020), in which the KL term in the ELBO is down-weighted relative to the likelihood (Zhang et al., 2017; Bae et al., 2018; Osawa et al., 2019; Ashukha et al., 2020), which often increases the test accuracy. However, tempering clouds the Bayesian perspective, as the KL to the posterior is no longer minimised and the resulting objective is no longer a lower bound on the marginal likelihood. By contrast, we use the untempered ELBO, thereby retaining the full Bayesian perspective. Dusenberry et al. (2020) report better performance on CIFAR-10 without tempering, but they only perform variational inference over a rank-1 perturbation to the weights, and maximise over all the other parameters, which may risk overfitting (Ober et al., 2021). Our approach retains a full-rank parameterisation of the weight matrices.

Ustyuzhaninov et al. (2020) attempted to introduce dependence across layers in a deep GP by coupling inducing inputs to pseudo-outputs, which they term “inducing points as inducing locations”. However, as described above, the choice of approximate posterior over $\mathbf{U}_\ell$ is also critical. They used the standard approximate posterior that is independent across layers, meaning that while functions in adjacent layers were marginally dependent, the functions for non-adjacent layers were independent (Appendix G). By contrast, our approximate posteriors have marginal dependencies across $\mathbf{U}_\ell$ and functions at all layers, and are capable of capturing the optimal top-layer posterior.

Contemporaneous work has a similar motivation but is ultimately very different (Lindinger et al., 2020). They use a joint multivariate Gaussian (with structured covariance) as the approximate posterior for all inducing outputs at all layers (with local inducing inputs). In contrast, our approximate posterior is only Gaussian at a single layer (conditioned on the input to that layer). The approximate posterior over all layers is not Gaussian (see Eq. 22, where the covariance at layer $\ell$ depends on $\mathbf{U}_{\ell-1}$), which allows us to capture the optimal top-layer posterior. Indeed, we have linear computational complexity in depth, whereas their approach is cubic (their Sec. 3.2).

Karaletsos & Bui (2020) propose a very different hybrid GP-BNN approach which uses an RBF-GP prior over weights, whereas we use standard DGP models (which do not have any weights) and BNN models with i.i.d. weight priors. Our BNNs and DGPs map from global inducing inputs defined only at the input layer onto activations, while their GPs map from inducing inputs in a “neuron-embedding” space onto weights. Unlike Karaletsos & Bui (2020), our approach scales directly to large ResNets, and allows us to capture the optimal top-layer posterior. Note that they use “global” vs. “local” for different priors, while we use these terms for different approximate posteriors. While their models allow for the possibility of correlations between layers, these arise from modifying the prior structure, and are then reflected in the approximate posterior. In contrast, we use standard Gaussian priors over weights that are uncorrelated across layers, and our across-layer dependencies arise only in the approximate posterior.

Alternative approaches to introducing more flexibility and dependence in approximate posteriors are to use hierarchical variational inference (HVI), i.e. to introduce auxiliary random variables which can be integrated out to give a richer variational posterior (Ranganath et al., 2016; Louizos & Welling, 2017; Sobolev & Vetrov, 2019). This has been used in a number of prior works to introduce correlations between parameters (Dusenberry et al., 2020; Louizos & Welling, 2017; Ghosh & Doshi-Velez, 2017). As we do not introduce auxiliary variables, our approach is not HVI. However, our approximate posterior does include dependencies across latent variables and is thus an instance of structured stochastic variational inference (Hoffman & Blei, 2015).

The general idea of marginalising over the Gaussian process latent function in order to compute the marginal likelihood is common in the Gaussian process literature (e.g. Heinonen et al., 2016), and is related to, but very different from, our approach of using the top-layer optimal posterior to inspire an approximate posterior over the weights of a Bayesian neural network, or over the Gaussian process function.

Finally, very recent work has suggested an alternative approach to unifying inference in deep NN and DGP models, by noting that these models can be equivalently written in terms of distributions over positive-definite Gram matrices, rather than working with features or weights as here (Aitchison et al., 2020). Their motivation was explicitly to account for rotation/permutation symmetries in the posterior over neural network weights and over DGP features. Interestingly, our global inducing approximations partially account for these symmetries: assuming isotropic kernels that depend only on distance, the posteriors over features at the next layer, $\mathbf{U}_\ell$, are invariant to rotations of the input, $\mathbf{U}_{\ell-1}$, and this carries over to rotations of the inputs for BNNs. However, their approach is as yet unable to scale to large convolutional architectures.

5 Conclusions

We derived optimal top-layer variational approximate posteriors for BNNs and deep GPs, and used them to develop generic, scalable approximate posteriors. These posteriors make use of global inducing points, which are learned only at the bottom layer and are propagated through the network. This leads to extremely flexible posteriors, which even allow the lower-layer weights to be drawn from the prior. We showed that these global inducing variational posteriors lead to improved performance with better ELBOs, and state-of-the-art performance for variational BNNs on CIFAR-10.

Acknowledgments

We would like to thank the anonymous reviewers for their valuable feedback which greatly improved the manuscript. We thank David R. Burt, Andrew Y. K. Foong, Siddharth Swaroop, Adrià Garriga-Alonso, Vidhi Lalchand, and Carl E. Rasmussen for valuable discussions and feedback. SWO acknowledges the support of the Gates Cambridge Trust in funding his doctoral studies. LA would like to thank the University of Bristol’s Advanced Computing Research Centre (ACRC) for computational resources.

References

  • Aitchison (2019) Aitchison, L. Why bigger is not always better: on finite and infinite neural networks. arXiv preprint arXiv:1910.08013, 2019.
  • Aitchison (2020) Aitchison, L. A statistical theory of cold posteriors in deep neural networks. arXiv preprint arXiv:2008.05912, 2020.
  • Aitchison et al. (2020) Aitchison, L., Yang, A. X., and Ober, S. W. Deep kernel processes. arXiv preprint arXiv:2010.01590, 2020.
  • Ashukha et al. (2020) Ashukha, A., Lyzhov, A., Molchanov, D., and Vetrov, D. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. arXiv preprint arXiv:2002.06470, 2020.
  • Bae et al. (2018) Bae, J., Zhang, G., and Grosse, R. Eigenvalue corrected noisy natural gradient. arXiv preprint arXiv:1811.12565, 2018.
  • Bartlett (1933) Bartlett, M. S. On the theory of statistical regression. Proceedings of the Royal Society of Edinburgh, 53:260–283, 1933.
  • Blundell et al. (2015) Blundell, C., Cornebise, J., Kavukcuoglu, K., and Wierstra, D. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
  • Burt et al. (2020) Burt, D. R., Rasmussen, C. E., and van der Wilk, M. Convergence of sparse variational inference in Gaussian processes regression. Journal of Machine Learning Research, 21:1–63, 2020.
  • Chafaï (2015) Chafaï, D. Bartlett decomposition and other factorizations, 2015. URL http://djalil.chafai.net/blog/2015/10/20/bartlett-decomposition-and-other-factorizations/.
  • Damianou & Lawrence (2013) Damianou, A. and Lawrence, N. Deep Gaussian processes. In Artificial Intelligence and Statistics, pp.  207–215, 2013.
  • Dusenberry et al. (2020) Dusenberry, M. W., Jerfel, G., Wen, Y., Ma, Y.-a., Snoek, J., Heller, K., Lakshminarayanan, B., and Tran, D. Efficient and scalable Bayesian neural nets with rank-1 factors. arXiv preprint arXiv:2005.07186, 2020.
  • Dutordoir et al. (2019) Dutordoir, V., van der Wilk, M., Artemev, A., Tomczak, M., and Hensman, J. Translation insensitivity for deep convolutional Gaussian processes. arXiv preprint arXiv:1902.05888, 2019.
  • Farquhar et al. (2020) Farquhar, S., Smith, L., and Gal, Y. Try depth instead of weight correlations: Mean-field is a less restrictive assumption for deeper networks. arXiv preprint arXiv:2002.03704, 2020.
  • Foong et al. (2019a) Foong, A. Y., Burt, D. R., Li, Y., and Turner, R. E. Pathologies of factorised Gaussian and MC dropout posteriors in Bayesian neural networks. arXiv preprint arXiv:1909.00719, 2019a.
  • Foong et al. (2019b) Foong, A. Y., Li, Y., Hernández-Lobato, J. M., and Turner, R. E. ’in-between’ uncertainty in Bayesian neural networks. arXiv preprint arXiv:1906.11537, 2019b.
  • Fortuin et al. (2021) Fortuin, V., Garriga-Alonso, A., Wenzel, F., Rätsch, G., Turner, R., van der Wilk, M., and Aitchison, L. Bayesian neural network priors revisited. arXiv preprint arXiv:2102.06571, 2021.
  • Gal & Ghahramani (2015) Gal, Y. and Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. arXiv preprint arXiv:1506.02142, 2015.
  • Ghosh & Doshi-Velez (2017) Ghosh, S. and Doshi-Velez, F. Model selection in Bayesian neural networks via horseshoe priors. arXiv preprint arXiv:1705.10388, 2017.
  • Graves (2011) Graves, A. Practical variational inference for neural networks. In Advances in neural information processing systems, pp. 2348–2356, 2011.
  • Guo et al. (2017) Guo, C., Pleiss, G., Sun, Y., and Weinberger, K. Q. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp.  1321–1330. JMLR. org, 2017.
  • He et al. (2015) He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE international conference on computer vision, pp.  1026–1034, 2015.
  • He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp.  770–778, 2016.
  • Heinonen et al. (2016) Heinonen, M., Mannerström, H., Rousu, J., Kaski, S., and Lähdesmäki, H. Non-stationary Gaussian process regression with Hamiltonian Monte Carlo. In Artificial Intelligence and Statistics, pp.  732–740. PMLR, 2016.
  • Hernández-Lobato & Adams (2015) Hernández-Lobato, J. M. and Adams, R. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pp. 1861–1869, 2015.
  • Hinton & Van Camp (1993) Hinton, G. E. and Van Camp, D. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the sixth annual conference on Computational learning theory, pp.  5–13, 1993.
  • Hoffman & Blei (2015) Hoffman, M. D. and Blei, D. M. Structured stochastic variational inference. In Artificial Intelligence and Statistics, pp.  361–369, 2015.
  • Ioffe & Szegedy (2015) Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456. PMLR, 2015.
  • Jordan et al. (1999) Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. An introduction to variational methods for graphical models. Machine learning, 37(2):183–233, 1999.
  • Karaletsos & Bui (2020) Karaletsos, T. and Bui, T. D. Hierarchical Gaussian process priors for Bayesian neural network weights. arXiv preprint arXiv:2002.04033, 2020.
  • Kingma & Ba (2014) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Kingma & Welling (2013) Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • Kingma et al. (2015) Kingma, D. P., Salimans, T., and Welling, M. Variational dropout and the local reparameterization trick. In Advances in neural information processing systems, pp. 2575–2583, 2015.
  • Krizhevsky et al. (2009) Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
  • Krueger et al. (2017) Krueger, D., Huang, C.-W., Islam, R., Turner, R., Lacoste, A., and Courville, A. Bayesian hypernetworks. arXiv preprint arXiv:1710.04759, 2017.
  • Li et al. (2019) Li, Z., Wang, R., Yu, D., Du, S. S., Hu, W., Salakhutdinov, R., and Arora, S. Enhanced convolutional neural tangent kernels. arXiv preprint arXiv:1911.00809, 2019.
  • Lindinger et al. (2020) Lindinger, J., Reeb, D., Lippert, C., and Rakitsch, B. Beyond the mean-field: Structured deep Gaussian processes improve the predictive uncertainties. arXiv preprint arXiv:2005.11110, 2020.
  • Louizos & Welling (2016) Louizos, C. and Welling, M. Structured and efficient variational deep learning with matrix Gaussian posteriors. In International Conference on Machine Learning, pp. 1708–1716, 2016.
  • Louizos & Welling (2017) Louizos, C. and Welling, M. Multiplicative normalizing flows for variational Bayesian neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp.  2218–2227. JMLR. org, 2017.
  • Nabarro et al. (2021) Nabarro, S., Ganev, S., Garriga-Alonso, A., Fortuin, V., van der Wilk, M., and Aitchison, L. Data augmentation in Bayesian neural networks and the cold posterior effect. arXiv preprint, 2021.
  • Naeini et al. (2015) Naeini, M. P., Cooper, G., and Hauskrecht, M. Obtaining well calibrated probabilities using Bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
  • Neal (1996) Neal, R. M. Priors for infinite networks. In Bayesian Learning for Neural Networks, pp.  29–53. Springer, 1996.
  • Neal et al. (2011) Neal, R. M. et al. MCMC using Hamiltonian dynamics. Handbook of Markov chain Monte Carlo, 2(11):2, 2011.
  • Netzer et al. (2011) Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
  • Ober et al. (2021) Ober, S. W., Rasmussen, C. E., and van der Wilk, M. The promises and pitfalls of deep kernel learning. arXiv preprint arXiv:2102.12108, 2021.
  • Osawa et al. (2019) Osawa, K., Swaroop, S., Khan, M. E. E., Jain, A., Eschenhagen, R., Turner, R. E., and Yokota, R. Practical deep learning with Bayesian principles. In Advances in Neural Information Processing Systems, pp. 4289–4301, 2019.
  • Pawlowski et al. (2017) Pawlowski, N., Brock, A., Lee, M. C., Rajchl, M., and Glocker, B. Implicit weight uncertainty in neural networks. arXiv preprint arXiv:1711.01297, 2017.
  • Pearl (1988) Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
  • Ranganath et al. (2016) Ranganath, R., Tran, D., and Blei, D. Hierarchical variational models. In International Conference on Machine Learning, pp. 324–333. PMLR, 2016.
  • Rasmussen & Williams (2006) Rasmussen, C. E. and Williams, C. K. Gaussian Processes for Machine Learning. MIT Press, 2006.
  • Rezende et al. (2014) Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
  • Ritter et al. (2018) Ritter, H., Botev, A., and Barber, D. A scalable Laplace approximation for neural networks. In 6th International Conference on Learning Representations, ICLR 2018-Conference Track Proceedings, volume 6. International Conference on Representation Learning, 2018.
  • Salimbeni & Deisenroth (2017) Salimbeni, H. and Deisenroth, M. Doubly stochastic variational inference for deep Gaussian processes. In Advances in Neural Information Processing Systems, pp. 4588–4599, 2017.
  • Shi et al. (2019) Shi, J., Titsias, M. K., and Mnih, A. Sparse orthogonal variational inference for Gaussian processes. arXiv preprint arXiv:1910.10596, 2019.
  • Shridhar et al. (2019) Shridhar, K., Laumann, F., and Liwicki, M. A comprehensive guide to Bayesian convolutional neural network with variational inference. arXiv preprint arXiv:1901.02731, 2019.
  • Sobolev & Vetrov (2019) Sobolev, A. and Vetrov, D. Importance weighted hierarchical variational inference. arXiv preprint arXiv:1905.03290, 2019.
  • Tieleman & Hinton (2012) Tieleman, T. and Hinton, G. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26–31, 2012.
  • Trippe & Turner (2018) Trippe, B. and Turner, R. Overpruning in variational Bayesian neural networks. arXiv preprint arXiv:1801.06230, 2018.
  • Ustyuzhaninov et al. (2020) Ustyuzhaninov, I., Kazlauskaite, I., Kaiser, M., Bodin, E., Campbell, N. D., and Ek, C. H. Compositional uncertainty in deep Gaussian processes. UAI, 2020.
  • Wenzel et al. (2020) Wenzel, F., Roth, K., Veeling, B. S., Świątkowski, J., Tran, L., Mandt, S., Snoek, J., Salimans, T., Jenatton, R., and Nowozin, S. How good is the Bayes posterior in deep neural networks really? arXiv preprint arXiv:2002.02405, 2020.
  • Yao et al. (2019) Yao, J., Pan, W., Ghosh, S., and Doshi-Velez, F. Quality of uncertainty quantification for Bayesian neural network inference. arXiv preprint arXiv:1906.09686, 2019.
  • Zhang et al. (2017) Zhang, G., Sun, S., Duvenaud, D., and Grosse, R. Noisy natural gradient as variational inference. arXiv preprint arXiv:1712.02390, 2017.

Appendix A Optimal approximate posterior derivation

Starting from Eq. (3), we first combine the prior and likelihood into a single joint term,

=𝔼Q({𝐖}=1L+1)[logP(𝐘,{𝐖}=1L+1|𝐗)logQ({𝐖}=1L+1)],\displaystyle\mathcal{L}=\mathbb{E}_{{\operatorname{Q}\left(\{\mathbf{W}_{\ell}\}_{\ell=1}^{L+1}\right)}}\big{[}\log\operatorname{P}\left(\mathbf{Y},\{\mathbf{W}\}_{\ell=1}^{L+1}|\mathbf{X}\right)-\log\operatorname{Q}\left(\{\mathbf{W}\}_{\ell=1}^{L+1}\right)\big{]}, (25)

then split out 𝐖L+1\mathbf{W}_{L+1} from P\operatorname{P} and Q\operatorname{Q},

=𝔼Q({𝐖}=1L+1)[logP(𝐘,{𝐖}=1L|𝐗)+logP(𝐖L+1|𝐘,𝐗,{𝐖}=1L)logQ(𝐖L+1|{𝐖}=1L)logQ({𝐖}=1L)].\mathcal{L}=\mathbb{E}_{{\operatorname{Q}\left(\{\mathbf{W}\}_{\ell=1}^{L+1}\right)}}\big{[}\log\operatorname{P}\left(\mathbf{Y},\{\mathbf{W}\}_{\ell=1}^{L}|\mathbf{X}\right)+\log\operatorname{P}\left(\mathbf{W}_{L+1}|\mathbf{Y},\mathbf{X},\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right)\\ -\log\operatorname{Q}\left(\mathbf{W}_{L+1}|\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right)-\log\operatorname{Q}\left(\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right)\big{]}. (26)

We are interested in the optimal Q(𝐖L+1|{𝐖}=1L)\operatorname{Q}\left(\mathbf{W}_{L+1}|\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right) for any setting of {𝐖}=1L\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}. Therefore, P(𝐘,{𝐖}=1L|𝐗)\operatorname{P}\left(\mathbf{Y},\{\mathbf{W}\}_{\ell=1}^{L}|\mathbf{X}\right) and Q({𝐖}=1L)\operatorname{Q}\left(\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right) do not depend on 𝐖L+1\mathbf{W}_{L+1} and can be collected into c({𝐖}=1L)c(\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}). We therefore obtain,

\displaystyle\mathcal{L} =𝔼Q({𝐖}=1L+1)[logP(𝐖L+1|𝐘,𝐗,{𝐖}=1L)logQ(𝐖L+1|{𝐖}=1L)+c({𝐖}=1L)].\displaystyle=\mathbb{E}_{{\operatorname{Q}\left(\{\mathbf{W}\}_{\ell=1}^{L+1}\right)}}\left[\log\operatorname{P}\left(\mathbf{W}_{L+1}|\mathbf{Y},\mathbf{X},\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right)-\log\operatorname{Q}\left(\mathbf{W}_{L+1}|\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right)+c(\{\mathbf{W}_{\ell}\}_{\ell=1}^{L})\right]. (27)

We can nest the expectations,

\displaystyle\mathcal{L} =𝔼Q({𝐖}=1L)[𝔼Q(𝐖L+1|{𝐖}=1L)[logP(𝐖L+1|𝐘,𝐗,{𝐖}=1L)logQ(𝐖L+1|{𝐖}=1L)]+c({𝐖}=1L)].\displaystyle=\mathbb{E}_{{\operatorname{Q}\left(\{\mathbf{W}\}_{\ell=1}^{L}\right)}}\left[\mathbb{E}_{{\operatorname{Q}\left(\mathbf{W}_{L+1}|\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right)}}\left[\log\operatorname{P}\left(\mathbf{W}_{L+1}|\mathbf{Y},\mathbf{X},\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right)-\log\operatorname{Q}\left(\mathbf{W}_{L+1}|\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right)\right]+c(\{\mathbf{W}\}_{\ell=1}^{L})\right]. (28)

Then notice that the inner expectation is a KL-divergence,

\displaystyle\mathcal{L} =𝔼Q({𝐖}=1L)[DKL(Q(𝐖L+1|{𝐖}=1L)||P(𝐖L+1|𝐘,𝐗,{𝐖}=1L))+c({𝐖}=1L)].\displaystyle=\mathbb{E}_{{\operatorname{Q}\left(\{\mathbf{W}\}_{\ell=1}^{L}\right)}}\left[-\operatorname{D}_{\text{KL}}\left(\operatorname{Q}\left(\mathbf{W}_{L+1}|\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right)||\operatorname{P}\left(\mathbf{W}_{L+1}|\mathbf{Y},\mathbf{X},\{\mathbf{W}_{\ell}\}_{\ell=1}^{L}\right)\right)+c(\{\mathbf{W}\}_{\ell=1}^{L})\right]. (29)
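The KL divergence in Eq. (29) is minimised by setting $\operatorname{Q}(\mathbf{W}_{L+1}|\{\mathbf{W}_{\ell}\}_{\ell=1}^{L})$ equal to the true conditional posterior $\operatorname{P}(\mathbf{W}_{L+1}|\mathbf{Y},\mathbf{X},\{\mathbf{W}_{\ell}\}_{\ell=1}^{L})$, which for regression with Gaussian noise is available in closed form as Bayesian linear regression on the top-layer features. A minimal sketch of that closed form, assuming a unit Gaussian prior over the weights and isotropic noise precision (the function and argument names are illustrative, not those of our implementation):

import torch

def optimal_last_layer_posterior(phi, Y, noise_prec=1.0, prior_prec=1.0):
    # phi: (P, N_L) top-layer features, obtained by propagating the training
    #      inputs X through a sample of the lower-layer weights {W_1, ..., W_L}
    # Y:   (P, N_out) regression targets
    # Returns the mean and (shared) covariance of the Gaussian posterior over
    # the last-layer weights, i.e. P(W_{L+1} | Y, X, {W_l}).
    N_L = phi.shape[1]
    prec = prior_prec * torch.eye(N_L) + noise_prec * phi.T @ phi
    cov = torch.linalg.inv(prec)
    mean = noise_prec * cov @ phi.T @ Y
    return mean, cov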

Appendix B Reparameterised variational inference

In variational inference, the ELBO objective takes the form,

(ϕ)\displaystyle\mathcal{L}(\phi) =𝔼Qϕ(𝐰)[logP(𝐘|𝐗,𝐰)+logP(𝐰)Qϕ(𝐰)]\displaystyle=\mathbb{E}_{{\operatorname{Q}_{\phi}\left(\mathbf{w}\right)}}\left[\log\operatorname{P}\left(\mathbf{Y}|\mathbf{X},\mathbf{w}\right)+\log\frac{\operatorname{P}\left(\mathbf{w}\right)}{\operatorname{Q}_{\phi}\left(\mathbf{w}\right)}\right] (30)

where $\mathbf{w}$ is a vector containing all of the elements of the weight matrices in the full network, $\{\mathbf{W}_{\ell}\}_{\ell=1}^{L+1}$, and $\phi=(\mathbf{U}_{0},\{\mathbf{V}_{\ell},\mathbf{\Lambda}_{\ell}\}_{\ell=1}^{L+1})$ are the parameters of the approximate posterior. This objective is difficult to differentiate w.r.t. $\phi$, because $\phi$ parameterises the distribution over which the expectation is taken. Following Kingma & Welling (2013) and Rezende et al. (2014), we sample $\epsilon$ from a simple, fixed distribution (e.g. IID Gaussian) and transform it to give samples from $\operatorname{Q}_{\phi}\left(\mathbf{w}\right)$,

𝐰(ϵ;ϕ)Qϕ(𝐰).\displaystyle\mathbf{w}(\epsilon;\phi)\sim\operatorname{Q}_{\phi}\left(\mathbf{w}\right). (31)

Thus, the ELBO can be written,

(ϕ)\displaystyle\mathcal{L}(\phi) =𝔼ϵ[logP(𝐘|𝐗,𝐰(ϵ;ϕ))+logP(𝐰(ϵ;ϕ))Qϕ(𝐰(ϵ;ϕ))]\displaystyle=\mathbb{E}_{{\epsilon}}\left[\log\operatorname{P}\left(\mathbf{Y}|\mathbf{X},\mathbf{w}(\epsilon;\phi)\right)+\log\frac{\operatorname{P}\left(\mathbf{w}(\epsilon;\phi)\right)}{\operatorname{Q}_{\phi}\left(\mathbf{w}(\epsilon;\phi)\right)}\right] (32)

As the distribution over which the expectation is taken is now independent of ϕ\phi, we can form unbiased estimates of the gradient of (ϕ)\mathcal{L}(\phi) by drawing one or a few samples of ϵ\epsilon.
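As a concrete illustration, for a factorised Gaussian $\operatorname{Q}_{\phi}(\mathbf{w})=\mathcal{N}(\mathbf{w};\boldsymbol{\mu},\operatorname{diag}(\boldsymbol{\sigma}^{2}))$ the transformation is $\mathbf{w}(\epsilon;\phi)=\boldsymbol{\mu}+\boldsymbol{\sigma}\odot\epsilon$ with $\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})$. A minimal single-sample sketch (the log-likelihood term is a placeholder, and the parameterisation is purely illustrative):

import torch

# Variational parameters phi = (mu, log_sigma) for a factorised Gaussian Q_phi(w)
mu = torch.zeros(10, requires_grad=True)
log_sigma = torch.full((10,), -3.0, requires_grad=True)

eps = torch.randn(10)                      # fixed base distribution, eps ~ N(0, I)
w = mu + log_sigma.exp() * eps             # w(eps; phi)

log_lik = -0.5 * (w ** 2).sum()            # placeholder for log P(Y | X, w)
log_prior = torch.distributions.Normal(0., 1.).log_prob(w).sum()
log_q = torch.distributions.Normal(mu, log_sigma.exp()).log_prob(w).sum()

elbo = log_lik + log_prior - log_q
(-elbo).backward()                         # unbiased single-sample gradient w.r.t. phi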

Appendix C Efficient convolutional linear regression

Working in one dimension for simplicity, the standard form for a convolution in deep learning is

Yiu,c\displaystyle Y_{iu,c} =cδXi(u+δ)cWδc,c.\displaystyle=\sum_{c^{\prime}\delta}X_{i\left(u+\delta\right)c^{\prime}}W_{\delta c^{\prime},c}. (33)

where $X$ is the input image/feature-map, $Y$ is the output feature-map, $W$ contains the convolutional weights, $i$ indexes images, $c$ and $c^{\prime}$ index channels, $u$ indexes the location within the image, and $\delta$ indexes the location within the convolutional patch. Later, we will swap the roles of the “patch location” and the “image location”; to facilitate this, we define both to be centred on zero,

u\displaystyle u {(W1)/2,,(W1)/2}\displaystyle\in\left\{-(W-1)/2,\dotsc,(W-1)/2\right\} δ{(S1)/2,,(S1)/2}\displaystyle\delta\in\left\{-(S-1)/2,\dotsc,(S-1)/2\right\} (34)

where WW is the size of an image and SS is the size of a patch, such that, for example for a size 3 kernel, δ{1,0,1}\delta\in\left\{-1,0,1\right\}.

To use the expressions for the fully-connected case, we can form a new input, XX^{\prime} by cutting out each image patch,

Xiu,δc\displaystyle X^{\prime}_{iu,\delta c} =Xi(u+δ)c,\displaystyle=X_{i\left(u+\delta\right)c}, (35)

leading to

Yiu,c\displaystyle Y_{iu,c} =δcXiu,δcWδc,c.\displaystyle=\sum_{\delta c^{\prime}}X^{\prime}_{iu,\delta c^{\prime}}W_{\delta c^{\prime},c}. (36)

Note that we have used commas to group pairs of indices (iuiu and δc\delta c^{\prime}) that may be combined into a single index (e.g. using a reshape operation). Indeed, combining ii and uu into a single index and combining δ\delta and cc^{\prime} into a single index, this expression can be viewed as standard matrix multiplication,

𝐘\displaystyle\mathbf{Y} =𝐗𝐖.\displaystyle=\mathbf{X}^{\prime}\mathbf{W}. (37)

This means that we can directly apply the approximate posterior we derived for the fully-connected case in Eq. (11) to the convolutional case. To allow for this, we take

𝐗\displaystyle\mathbf{X}^{\prime} =𝚲1/2ϕ(𝐔1),\displaystyle=\mathbf{\Lambda}_{\ell}^{1/2}\phi\left(\mathbf{U}_{\ell-1}\right), 𝐘\displaystyle\mathbf{Y} =𝚲1/2𝐕.\displaystyle=\mathbf{\Lambda}_{\ell}^{1/2}\mathbf{V}_{\ell}. (38)

While we can explicitly compute $\mathbf{X}^{\prime}$ by extracting image patches, this imposes a very large memory cost (a factor of 9 for a $3\times 3$ kernel with stride 1, because there are roughly as many patches as pixels in the image, and a $3\times 3$ patch requires 9 times the storage of a pixel). To implement convolutional linear regression with a more manageable memory cost, we instead compute the matrices required for linear regression directly as convolutions of the input feature-maps, $\mathbf{X}$, with themselves, and as convolutions of $\mathbf{X}$ with the output feature-maps, $\mathbf{Y}$, as we describe here.

For linear regression (Eq. 11), we first need to compute,

(ϕ(𝐔1)T𝚲𝐕)δc,c\displaystyle\left(\phi\left(\mathbf{U}_{\ell-1}\right)^{T}\mathbf{\Lambda}_{\ell}\mathbf{V}_{\ell}\right)_{\delta c,c^{\prime}} =(𝐗T𝐘)δc,c=iuXiu,δcYiu,c.\displaystyle=\left(\mathbf{X}^{{}^{\prime}T}\mathbf{Y}\right)_{\delta c,c^{\prime}}=\sum_{iu}X^{\prime}_{iu,\delta c}Y_{iu,c^{\prime}}. (39)
Rewriting this in terms of XX (i.e. without explicitly cutting out image patches), we obtain,
(𝐗T𝐘)δc,c\displaystyle\left(\mathbf{X}^{{}^{\prime}T}\mathbf{Y}\right)_{\delta c,c^{\prime}} =iuXi(u+δ)cYiu,c.\displaystyle=\sum_{iu}X_{i\left(u+\delta\right)c}Y_{iu,c^{\prime}}. (40)

This can be directly viewed as the convolution of XX and YY, where we treat YY as the “convolutional weights”, uu as the location within the now very large (size WW) “convolutional patch”, and δ\delta as the location in the resulting output. Once we realise that the computation is a spatial convolution, it is possible to fit it into standard convolution functions provided by deep-learning frameworks (albeit with some rearrangement of the tensors).

Next, we need to compute,

(ϕ(𝐔1)T𝚲ϕ(𝐔1))\displaystyle\left(\phi\left(\mathbf{U}_{\ell-1}\right)^{T}\mathbf{\Lambda}_{\ell}\phi\left(\mathbf{U}_{\ell-1}\right)\right) =(𝐗T𝐗)δc,δc=iuXiu,δcXiu,δc.\displaystyle=\left(\mathbf{X}^{{}^{\prime}T}\mathbf{X}^{\prime}\right)_{\delta c,\delta^{\prime}c^{\prime}}=\sum_{iu}X^{\prime}_{iu,\delta c}X^{\prime}_{iu,\delta^{\prime}c^{\prime}}. (41)
Again, rewriting this in terms of XX (i.e. without explicitly cutting out image patches), we obtain,
(𝐗T𝐗)δc,δc\displaystyle\left(\mathbf{X}^{{}^{\prime}T}\mathbf{X}^{\prime}\right)_{\delta c,\delta^{\prime}c^{\prime}} =iuXi(u+δ)cXi(u+δ)c.\displaystyle=\sum_{iu}X_{i\left(u+\delta\right)c}X_{i\left(u+\delta^{\prime}\right)c^{\prime}}. (42)

To treat this as a convolution, we first need exact translational invariance, which can be achieved by using circular boundary conditions. Note that circular boundary conditions are not typically used in neural networks for images, and we therefore only use circular boundary conditions to define the approximate posterior over weights. The variational framework does not restrict us to also using circular boundary conditions within our feedforward network, and as such, we use standard zero-padding. With exact translational invariance, we can write this expression directly as a convolution,

(𝐗T𝐗)δc,δc\displaystyle\left(\mathbf{X}^{{}^{\prime}T}\mathbf{X}^{\prime}\right)_{\delta c,\delta^{\prime}c^{\prime}} =iuXiucXi(u+δδ)c\displaystyle=\sum_{iu}X_{iuc}X_{i\left(u+\delta^{\prime}-\delta\right)c^{\prime}} (43)
where
(δδ)\displaystyle\left(\delta^{\prime}-\delta\right) {(S1),,(S1)}\displaystyle\in\left\{-(S-1),\dotsc,(S-1)\right\} (44)

i.e. for a size 33 kernel, (δδ){2,1,0,1,2}\left(\delta^{\prime}-\delta\right)\in\left\{-2,-1,0,1,2\right\}, where we treat XiucX_{iuc} as the “convolutional weights”, uu as the location within the “convolutional patch”, and δδ\delta^{\prime}-\delta as the location in the resulting output “feature-map”.

Finally, note that this form offers considerable benefits in terms of memory consumption. In particular, the output matrices are usually quite small — the number of channels is typically 3232 or 6464, and the number of locations within a patch is typically 99, giving a very manageable total size that is typically smaller than 1000×10001000\times 1000.
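As a correctness check, the two statistics in Eqs. (39) and (41) can also be computed naively by explicitly extracting patches, i.e. the memory-hungry route that the convolutional formulation avoids. A minimal 2D sketch using torch.nn.functional.unfold, with illustrative shapes, and with zero padding rather than the circular boundary conditions assumed in Eq. (43):

import torch
import torch.nn.functional as F

I, C, C_out, H, W, S = 8, 4, 6, 16, 16, 3   # images, channels, spatial size, kernel size
X = torch.randn(I, C, H, W)
Y = torch.randn(I, C_out, H, W)

# Explicitly cut out patches: X' has shape (I, C*S*S, H*W)
Xp = F.unfold(X, kernel_size=S, padding=S // 2)
Yf = Y.reshape(I, C_out, H * W)

# Eq. (39): (X'^T Y), with the patch index (delta, c) combined (up to index ordering)
XtY = torch.einsum('idp,icp->dc', Xp, Yf)   # shape (C*S*S, C_out)

# Eq. (41): (X'^T X')
XtX = torch.einsum('idp,iep->de', Xp, Xp)   # shape (C*S*S, C*S*S)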

Appendix D Wishart distributions with real-valued degrees of freedom

The classical description of the Wishart distribution,

𝚺Wishart(𝐈,ν),\displaystyle\mathbf{\Sigma}\sim\text{Wishart}\left(\mathbf{I},\nu\right), (45)

where $\mathbf{\Sigma}$ is a $P\times P$ matrix, states that $\nu\geq P$ is an integer and that we can generate $\mathbf{\Sigma}$ as the product $\mathbf{X}^{T}\mathbf{X}$ of a matrix $\mathbf{X}\in\mathbb{R}^{\nu\times P}$ whose elements are generated IID from a standard Gaussian,

𝚺\displaystyle\mathbf{\Sigma} =𝐗T𝐗,\displaystyle=\mathbf{X}^{T}\mathbf{X}, Xij𝒩(0,1).\displaystyle X_{ij}\sim\mathcal{N}\left(0,1\right). (46)

However, for the purposes of defining learnable approximate posteriors, we need to be able to sample and evaluate the probability density when ν\nu is positive real.

To do this, consider the alternative, much more efficient means of sampling from a Wishart distribution, using the Bartlett decomposition (Bartlett, 1933). The Bartlett decomposition gives the probability density for the Cholesky of a Wishart sample. In particular,

𝐓\displaystyle\mathbf{T} =(T11T1m0Tmm),\displaystyle=\begin{pmatrix}T_{11}&\ldots&T_{1m}\\ \vdots&\ddots&\vdots\\ 0&\ldots&T_{mm}\\ \end{pmatrix}, (47)
P(Tjj2)\displaystyle\operatorname{P}\left(T_{jj}^{2}\right) =Gamma(Tjj2;νj+12,12),\displaystyle=\text{Gamma}\left(T_{jj}^{2};\tfrac{\nu-j+1}{2},\tfrac{1}{2}\right), (48)
P(Tj<k)\displaystyle\operatorname{P}\left(T_{j<k}\right) =𝒩(Tjk;0,1).\displaystyle=\mathcal{N}\left(T_{jk};0,1\right). (49)

Here, $T_{jj}^{2}$ is usually considered to be sampled from a $\chi^{2}$ distribution, but we have generalised this slightly using the equivalent Gamma distribution to allow for real-valued $\nu$. Following Chafaï (2015), we need to change variables from $T_{jj}^{2}$ to $T_{jj}$,

P(Tjj)\displaystyle\operatorname{P}\left(T_{jj}\right) =P(Tjj2)|Tjj2Tjj|,\displaystyle=\operatorname{P}\left(T_{jj}^{2}\right)\left\lvert\frac{\partial T_{jj}^{2}}{\partial T_{jj}}\right\rvert, (50)
=Gamma(Tjj2;νj+12,12)2Tjj,\displaystyle=\text{Gamma}\left(T_{jj}^{2};\tfrac{\nu-j+1}{2},\tfrac{1}{2}\right)2T_{jj}, (51)
=(Tjj2)(νj+1)/21eTjj2/22(νj+1)/2Γ(νj+12)2Tjj,\displaystyle=\frac{\left(T_{jj}^{2}\right)^{\left(\nu-j+1\right)/2-1}e^{-T_{jj}^{2}/2}}{2^{\left(\nu-j+1\right)/2}\Gamma\left({\tfrac{\nu-j+1}{2}}\right)}2T_{jj}, (52)
=TjjνjeTjj2/22(νj1)/2Γ(νj+12).\displaystyle=\frac{T_{jj}^{\nu-j}e^{-T_{jj}^{2}/2}}{2^{\left(\nu-j-1\right)/2}\Gamma\left({\tfrac{\nu-j+1}{2}}\right)}. (53)

Thus, the probability density for 𝐓\mathbf{T} under the Bartlett sampling operation is

P(𝐓)\displaystyle\operatorname{P}\left(\mathbf{T}\right) =jTjjνjeT2jj/22νj12Γ(νj+12)on-diagonalsk{j+1,,m}12πeTjk2/2off-diagonals.\displaystyle=\underbrace{\prod_{j}\frac{T_{jj}^{\nu-j}e^{-T^{2}_{jj}/2}}{2^{\tfrac{\nu-j-1}{2}}\Gamma\left(\tfrac{\nu-j+1}{2}\right)}}_{\text{on-diagonals}}\underbrace{\prod_{k\in\left\{j+1,\dotsc,m\right\}}\frac{1}{\sqrt{2\pi}}e^{-T_{jk}^{2}/2}}_{\text{off-diagonals}}. (54)

To convert this to a distribution on 𝚺\mathbf{\Sigma}, we need the volume element for the transformation from 𝐓\mathbf{T} to 𝚺\mathbf{\Sigma},

d𝚺\displaystyle\mathrm{d}\mathbf{\Sigma} =2mj=1mTjjmj+1d𝐓,\displaystyle=2^{m}\prod_{j=1}^{m}T_{jj}^{m-j+1}\mathrm{d}\mathbf{T}, (55)

which can be obtained directly by computing the log-determinant of the Jacobian for the transformation from 𝐓\mathbf{T} to 𝚺\mathbf{\Sigma}, or by taking the ratio of Eq. (54) and the usual Wishart probability density (with integral ν\nu). Thus,

P(𝚺)\displaystyle\operatorname{P}\left(\mathbf{\Sigma}\right) =P(𝐓)(2mj=1mTjjmj+1)1\displaystyle=\operatorname{P}\left(\mathbf{T}\right)\left(2^{m}\prod_{j=1}^{m}T_{jj}^{m-j+1}\right)^{-1} (56)
=jTjjνm1eT2jj/22νj+12Γ(νj+12).k{j+1,,m}12πeTjk2/2.\displaystyle=\prod_{j}\frac{T_{jj}^{\nu-m-1}e^{-T^{2}_{jj}/2}}{2^{\tfrac{\nu-j+1}{2}}\Gamma\left(\tfrac{\nu-j+1}{2}\right)}.\prod_{k\in\left\{j+1,\dotsc,m\right\}}\frac{1}{\sqrt{2\pi}}e^{-T_{jk}^{2}/2}. (57)
Breaking this down into separate components and performing straightforward algebraic manipulations,

\displaystyle\prod_{j}T_{jj}^{\nu-m-1}=\left\lvert\mathbf{T}\right\rvert^{\nu-m-1}=\left\lvert\mathbf{\Sigma}\right\rvert^{\left(\nu-m-1\right)/2}, (58)
\displaystyle\prod_{j}e^{-T^{2}_{jj}/2}\prod_{k\in\left\{j+1,\dotsc,m\right\}}e^{-T_{jk}^{2}/2} (59)
\displaystyle\quad=e^{-\sum_{jk}T^{2}_{jk}/2}=e^{-\operatorname{Tr}\left(\mathbf{\Sigma}\right)/2}, (60)
\displaystyle\prod_{j}\frac{1}{2^{\left(\nu-j+1\right)/2}}\prod_{k\in\left\{j+1,\dotsc,m\right\}}\frac{1}{\sqrt{2}}=\left(\prod_{j}\frac{1}{2^{\left(\nu-j+1\right)/2}}\right)\left(\prod_{j}\frac{1}{2^{\left(m-j\right)/2}}\right) (61)
\displaystyle\quad=\left(\prod_{j}\frac{1}{2^{\left(\nu-j+1\right)/2}}\right)\left(\prod_{j}\frac{1}{2^{\left(j-1\right)/2}}\right) (62)
\displaystyle\quad=\prod_{j}\frac{1}{2^{\nu/2}}=2^{-m\nu/2}, (63)

where the second-to-last line was obtained by noting that $m-j$ covers the same range of integers as $j-1$ under the product. Finally, using the definition of the multivariate Gamma function,

jΓ(νj+12)k{j+1,,m}π=πm(m1)/4jΓ(νj+12)=Γm(ν2).\displaystyle\prod_{j}\Gamma\left(\tfrac{\nu-j+1}{2}\right)\prod_{k\in\left\{j+1,\dotsc,m\right\}}\sqrt{\pi}=\pi^{m(m-1)/4}\prod_{j}\Gamma\left(\tfrac{\nu-j+1}{2}\right)=\Gamma_{m}\left(\tfrac{\nu}{2}\right). (64)

We thereby re-obtain the probability density for the standard Wishart distribution,

P(𝚺)\displaystyle\operatorname{P}\left(\mathbf{\Sigma}\right) =|𝚺|(νm1)/2eTr(𝚺)/22mν/2Γm(ν2).\displaystyle=\frac{\left\lvert\mathbf{\Sigma}\right\rvert^{\left(\nu-m-1\right)/2}e^{-\operatorname*{Tr}\left(\mathbf{\Sigma}\right)/2}}{2^{m\nu/2}\Gamma_{m}\left(\tfrac{\nu}{2}\right)}. (65)
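A minimal sketch of sampling and evaluating the generalised Bartlett construction above, assuming $\nu > m-1$ so that all of the Gamma shape parameters are positive (the function names are illustrative, not our implementation):

import math
import torch

def sample_bartlett(nu, m):
    # Diagonal: T_jj^2 ~ Gamma((nu - j + 1)/2, rate = 1/2), for j = 1, ..., m
    j = torch.arange(1.0, m + 1)
    diag = torch.distributions.Gamma((nu - j + 1) / 2, 0.5).rsample().sqrt()
    # Strictly upper-triangular entries: T_jk ~ N(0, 1)
    T = torch.triu(torch.randn(m, m), diagonal=1)
    return T + torch.diag(diag)

def bartlett_log_prob(T, nu):
    m = T.shape[0]
    j = torch.arange(1.0, m + 1)
    d = torch.diagonal(T)
    # On-diagonal terms, Eq. (53)
    log_diag = ((nu - j) * d.log() - d ** 2 / 2
                - (nu - j - 1) / 2 * math.log(2.0)
                - torch.lgamma((nu - j + 1) / 2))
    # Off-diagonal Gaussian terms, Eq. (49)
    mask = torch.triu(torch.ones(m, m), diagonal=1)
    log_off = (-0.5 * math.log(2 * math.pi) - T ** 2 / 2) * mask
    return log_diag.sum() + log_off.sum()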

Appendix E Full description of our method for deep Gaussian process

Here we give the full derivation for our doubly-stochastic variational deep GPs, following Salimbeni & Deisenroth (2017). A deep Gaussian process (DGP; (Damianou & Lawrence, 2013; Salimbeni & Deisenroth, 2017)) defines a prior over function values, 𝐅P×N\mathbf{F}_{\ell}\in\mathbb{R}^{P\times N_{\ell}}, where \ell is the layer, PP is the number of input points, and NN_{\ell} is the “width” of this layer, by stacking L+1L+1 layers of standard Gaussian processes (we use L+1L+1 layers instead of the typical LL layers to retain consistency with how we define BNNs):

P(𝐘,{𝐅}=1L+1|𝐗)=P(𝐘|𝐅L)=1L+1P(𝐅|𝐅1).\operatorname{P}\left(\mathbf{Y},\{\mathbf{F}_{\ell}\}_{\ell=1}^{L+1}\middle|\mathbf{X}\right)=\operatorname{P}\left(\mathbf{Y}|\mathbf{F}_{L}\right){\textstyle\prod_{\ell=1}^{L+1}}\operatorname{P}\left(\mathbf{F}_{\ell}\middle|\mathbf{F}_{\ell-1}\right). (66)

Here, the input is 𝐅0=𝐗\mathbf{F}_{0}=\mathbf{X}, and the output is 𝐘\mathbf{Y} (which could be continuous values for regression, or class labels for classification), and the distribution over 𝐅P×N\mathbf{F}_{\ell}\in\mathbb{R}^{P\times N_{\ell}} factorises into independent multivariate Gaussian distributions over each function,

P(𝐅|𝐅1)\displaystyle\operatorname{P}\left(\mathbf{F}_{\ell}|\mathbf{F}_{\ell-1}\right) =λ=1NP(𝐟λ|𝐅1)=λ=1N𝒩(𝐟λ|𝟎,𝐊(𝐅1)),\displaystyle={\textstyle\prod_{\lambda=1}^{N_{\ell}}}\operatorname{P}\left(\mathbf{f}_{\lambda}^{\ell}\middle|\mathbf{F}_{\ell-1}\right)={\textstyle\prod_{\lambda=1}^{N_{\ell}}}\mathcal{N}\left(\mathbf{f}_{\lambda}^{\ell}\middle|{\bf{0}},\mathbf{K}\left(\mathbf{F}_{\ell-1}\right)\right), (67)

where 𝐟λ\mathbf{f}_{\lambda}^{\ell} is the λ\lambdath column of 𝐅\mathbf{F}_{\ell}, giving the activation of all datapoints for the λ\lambdath feature, and 𝐊()\mathbf{K}\left(\cdot\right) is a function that computes the kernel-matrix from the features in the previous layer.

To define a variational approximate posterior, we augment {𝐅}=1L+1\{\mathbf{F}_{\ell}\}_{\ell=1}^{L+1} with inducing points consisting of the function values {𝐔}=1L+1\{\mathbf{U}_{\ell}\}_{\ell=1}^{L+1},

P(𝐘,{𝐅,𝐔}=1L+1|𝐗,𝐔0)\displaystyle\operatorname{P}\left(\mathbf{Y},\{\mathbf{F}_{\ell},\mathbf{U}_{\ell}\}_{\ell=1}^{L+1}|\mathbf{X},\mathbf{U}_{0}\right) =P(𝐘|𝐅L+1)=1L+1P(𝐅,𝐔|𝐅1,𝐔1)\displaystyle=\operatorname{P}\left(\mathbf{Y}\middle|\mathbf{F}_{L+1}\right){\textstyle\prod_{\ell=1}^{L+1}}\operatorname{P}\left(\mathbf{F}_{\ell},\mathbf{U}_{\ell}\middle|\mathbf{F}_{\ell-1},\mathbf{U}_{\ell-1}\right) (68)

and because 𝐅\mathbf{F}_{\ell} and 𝐔\mathbf{U}_{\ell} are the function outputs corresponding to different inputs (𝐅1\mathbf{F}_{\ell-1} and 𝐔1\mathbf{U}_{\ell-1}), they form a joint multivariate Gaussian distribution, analogous to Eq. (67),

P(𝐅,𝐔|𝐅1,𝐔1)\displaystyle\operatorname{P}\left(\mathbf{F}_{\ell},\mathbf{U}_{\ell}\middle|\mathbf{F}_{\ell-1},\mathbf{U}_{\ell-1}\right) =λ=1Nl𝒩((𝐟λ𝐮λ)|𝟎,𝐊((𝐅1𝐔1))).\displaystyle=\prod_{\lambda=1}^{N_{l}}\mathcal{N}\left(\begin{pmatrix}\mathbf{f}^{\ell}_{\lambda}\\ \mathbf{u}^{\ell}_{\lambda}\end{pmatrix}\middle|{\bf{0}},\mathbf{K}\left(\begin{pmatrix}\mathbf{F}_{\ell-1}\\ \mathbf{U}_{\ell-1}\end{pmatrix}\right)\right). (69)

Following Salimbeni & Deisenroth (2017) we form an approximate posterior by conditioning the function values, 𝐅\mathbf{F}_{\ell} on the inducing outputs, 𝐔\mathbf{U}_{\ell},

Q({𝐅,𝐔}=1L+1|𝐗,𝐔0)\displaystyle\operatorname{Q}\left(\{\mathbf{F}_{\ell},\mathbf{U}_{\ell}\}_{\ell=1}^{L+1}\middle|\mathbf{X},\mathbf{U}_{0}\right) ==1L+1Q(𝐅,𝐔|𝐅1,𝐔1)\displaystyle={\textstyle\prod_{\ell=1}^{L+1}}\operatorname{Q}\left(\mathbf{F}_{\ell},\mathbf{U}_{\ell}\middle|\mathbf{F}_{\ell-1},\mathbf{U}_{\ell-1}\right)
==1L+1P(𝐅|𝐔,𝐔1,𝐅1)Q(𝐔|𝐔1),\displaystyle={\textstyle\prod_{\ell=1}^{L+1}}\operatorname{P}\left(\mathbf{F}_{\ell}\middle|\mathbf{U}_{\ell},\mathbf{U}_{\ell-1},\mathbf{F}_{\ell-1}\right)\operatorname{Q}\left(\mathbf{U}_{\ell}\middle|\mathbf{U}_{\ell-1}\right), (70)

where Q(𝐔|𝐔1)\operatorname{Q}\left(\mathbf{U}_{\ell}\middle|\mathbf{U}_{\ell-1}\right) is given by Eq. (22), i.e.

Q(𝐔|𝐔1)\displaystyle\operatorname{Q}\left(\mathbf{U}_{\ell}\middle|\mathbf{U}_{\ell-1}\right) =λ=1N𝒩(𝐮λ|𝚺𝐮𝚲𝐯λ,𝚺𝐮),\displaystyle={\textstyle\prod_{\lambda=1}^{N_{\ell}}}\mathcal{N}\left(\mathbf{u}_{\lambda}^{\ell}\middle|\mathbf{\Sigma}^{\mathbf{u}}_{\ell}\mathbf{\Lambda}_{\ell}\mathbf{v}_{\lambda}^{\ell},\mathbf{\Sigma}^{\mathbf{u}}_{\ell}\right), 𝚺𝐮\displaystyle\mathbf{\Sigma}^{\mathbf{u}}_{\ell} =(𝐊1(𝐔1)+𝚲)1,\displaystyle=\left(\mathbf{K}^{-1}\left(\mathbf{U}_{\ell-1}\right)+\mathbf{\Lambda}_{\ell}\right)^{-1}, (71)

and P(𝐅|𝐔,𝐔1,𝐅1)\operatorname{P}\left(\mathbf{F}_{\ell}\middle|\mathbf{U}_{\ell},\mathbf{U}_{\ell-1},\mathbf{F}_{\ell-1}\right) is given by standard manipulations of Eq. (69). Importantly, the prior can be factorised in an analogous fashion,

P(𝐅,𝐔|𝐅1,𝐔1)\displaystyle\operatorname{P}\left(\mathbf{F}_{\ell},\mathbf{U}_{\ell}\middle|\mathbf{F}_{\ell-1},\mathbf{U}_{\ell-1}\right) =P(𝐅|𝐔,𝐔1,𝐅1)P(𝐔|𝐔1).\displaystyle=\operatorname{P}\left(\mathbf{F}_{\ell}\middle|\mathbf{U}_{\ell},\mathbf{U}_{\ell-1},\mathbf{F}_{\ell-1}\right)\operatorname{P}\left(\mathbf{U}_{\ell}\middle|\mathbf{U}_{\ell-1}\right). (72)

The full ELBO can be written as

\displaystyle\mathcal{L} =𝔼Q({𝐅,𝐔}=1L+1|𝐗,𝐔0)[logP(𝐘,{𝐅,𝐔}=1L+1|𝐗,𝐔0)Q({𝐅,𝐔}=1L+1|𝐗,𝐔0)].\displaystyle=\mathbb{E}_{{\operatorname{Q}\left(\{\mathbf{F}_{\ell},\mathbf{U}_{\ell}\}_{\ell=1}^{L+1}\middle|\mathbf{X},\mathbf{U}_{0}\right)}}\left[\log\frac{\operatorname{P}\left(\mathbf{Y},\{\mathbf{F}_{\ell},\mathbf{U}_{\ell}\}_{\ell=1}^{L+1}\middle|\mathbf{X},\mathbf{U}_{0}\right)}{\operatorname{Q}\left(\{\mathbf{F}_{\ell},\mathbf{U}_{\ell}\}_{\ell=1}^{L+1}\middle|\mathbf{X},\mathbf{U}_{0}\right)}\right]. (73)

Substituting Eqs. (68), (70) and (72) into Eq. (73), the P(𝐅|𝐔,𝐔1,𝐅1)\operatorname{P}\left(\mathbf{F}_{\ell}\middle|\mathbf{U}_{\ell},\mathbf{U}_{\ell-1},\mathbf{F}_{\ell-1}\right) terms cancel and we obtain,

\displaystyle\mathcal{L} =𝔼Q({𝐅,𝐔}=1L+1|𝐗,𝐔0)[logP(𝐘|𝐅L+1)+=1L+1logP(𝐔|𝐔1)Q(𝐔|𝐔1)].\displaystyle=\mathbb{E}_{{\operatorname{Q}\left(\{\mathbf{F}_{\ell},\mathbf{U}_{\ell}\}_{\ell=1}^{L+1}\middle|\mathbf{X},\mathbf{U}_{0}\right)}}\left[\log\operatorname{P}\left(\mathbf{Y}\middle|\mathbf{F}_{L+1}\right)+\sum_{\ell=1}^{L+1}\log\frac{\operatorname{P}\left(\mathbf{U}_{\ell}\middle|\mathbf{U}_{\ell-1}\right)}{\operatorname{Q}\left(\mathbf{U}_{\ell}\middle|\mathbf{U}_{\ell-1}\right)}\right]. (74)

Analogous to the BNN case, we evaluate the ELBO by alternatively sampling the inducing function values 𝐔\mathbf{U}_{\ell} given 𝐔1\mathbf{U}_{\ell-1} and propagating the data by sampling from P(𝐅|𝐔,𝐔1,𝐅1)\operatorname{P}\left(\mathbf{F}_{\ell}\middle|\mathbf{U}_{\ell},\mathbf{U}_{\ell-1},\mathbf{F}_{\ell-1}\right) (see Alg. 2). As in Salimbeni & Deisenroth (2017), since all likelihoods we use factorise across datapoints, we only need to sample from the marginals of the latter distribution to sample the propagated function values. As in the BNN case, the parameters of the approximate posterior are the global inducing inputs, 𝐔0\mathbf{U}_{0}, and the pseudo-outputs and precisions at all layers, {𝐕,𝚲}=1L\{\mathbf{V}_{\ell},\mathbf{\Lambda}_{\ell}\}_{\ell=1}^{L}. We can similarly use standard reparameterised variational inference (Kingma & Welling, 2013; Rezende et al., 2014) to optimise these variational parameters, as Q(𝐔|𝐔1)\operatorname{Q}\left(\mathbf{U}_{\ell}\middle|\mathbf{U}_{\ell-1}\right) is Gaussian. We use Adam (Kingma & Ba, 2014) to optimise the parameters.

Algorithm 2 Global inducing points for deep Gaussian processes
  Parameters: 𝐔0\mathbf{U}_{0}, {𝐕,𝚲}=1L\{\mathbf{V}_{\ell},\mathbf{\Lambda}_{\ell}\}_{\ell=1}^{L}.
  Neural network inputs: 𝐅0\mathbf{F}_{0}
  Neural network outputs: 𝐅L+1\mathbf{F}_{L+1}
  0\mathcal{L}\leftarrow 0
  for \ell in {1,,L+1}\{1,\dotsc,L+1\} do
     Compute the mean and covariance over the inducing outputs at this layer
     𝚺𝐮=(𝐊1(𝐔1)+𝚲)1\mathbf{\Sigma}^{\mathbf{u}}_{\ell}=\left(\mathbf{K}^{-1}\left(\mathbf{U}_{\ell-1}\right)+\mathbf{\Lambda}_{\ell}\right)^{-1}
     𝐌=𝚺𝐮𝚲𝐕\mathbf{M}_{\ell}=\mathbf{\Sigma}^{\mathbf{u}}_{\ell}\mathbf{\Lambda}_{\ell}\mathbf{V}_{\ell}
     Sample the inducing outputs and compute the ELBO
     𝐔𝒩(𝐌,𝚺𝐮)=Q(𝐔|𝐔1)\mathbf{U}_{\ell}\sim\mathcal{N}\left(\mathbf{M}_{\ell},\mathbf{\Sigma}^{\mathbf{u}}_{\ell}\right)=\operatorname{Q}\left(\mathbf{U}_{\ell}\middle|\mathbf{U}_{\ell-1}\right)
     +logP(𝐔|𝐔1)log𝒩(𝐔|𝐌,𝚺𝐮)\mathcal{L}\leftarrow\mathcal{L}+\log\operatorname{P}\left(\mathbf{U}_{\ell}\middle|\mathbf{U}_{\ell-1}\right)-\log\mathcal{N}\left(\mathbf{U}_{\ell}|\mathbf{M}_{\ell},\mathbf{\Sigma}^{\mathbf{u}}_{\ell}\right)
     Propagate the inputs using the sampled inducing outputs,
     𝐅P(𝐅|𝐔,𝐔1,𝐅1)\mathbf{F}_{\ell}\sim\operatorname{P}\left(\mathbf{F}_{\ell}\middle|\mathbf{U}_{\ell},\mathbf{U}_{\ell-1},\mathbf{F}_{\ell-1}\right)
  end for
  +logP(𝐘|𝐅L+1)\mathcal{L}\leftarrow\mathcal{L}+\log\operatorname{P}\left(\mathbf{Y}|\mathbf{F}_{L+1}\right)
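A minimal sketch of a single layer of Algorithm 2, assuming a kernel function kernel(A, B) that returns the cross-covariance matrix and a small jitter for numerical stability (the names are illustrative, not the interface of our implementation):

import torch

def dgp_layer_step(Uprev, Fprev, V, Lambda, kernel, jitter=1e-6):
    # Uprev: (M, N_prev) inducing outputs from the previous layer
    # Fprev: (P, N_prev) function values for the minibatch
    # V: (M, N) pseudo-outputs; Lambda: (M, M) precision for this layer
    M = Uprev.shape[0]
    Kuu = kernel(Uprev, Uprev) + jitter * torch.eye(M)

    # Eq. (71): Gaussian over the inducing outputs at this layer
    Su = torch.linalg.inv(torch.linalg.inv(Kuu) + Lambda)
    Mu = Su @ Lambda @ V

    # Sample U; each column (feature) shares the covariance Su
    U = Mu + torch.linalg.cholesky(Su) @ torch.randn_like(V)

    # ELBO contribution: log P(U | Uprev) - log Q(U | Uprev), summed over features
    prior = torch.distributions.MultivariateNormal(torch.zeros(M), Kuu)
    post = torch.distributions.MultivariateNormal(Mu.T, covariance_matrix=Su)
    elbo_term = prior.log_prob(U.T).sum() - post.log_prob(U.T).sum()

    # Propagate the minibatch; only the conditional mean of P(F | U, Uprev, Fprev)
    # is shown here, whereas in practice one samples from its marginals
    F = kernel(Fprev, Uprev) @ torch.linalg.solve(Kuu, U)
    return U, F, elbo_term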

Appendix F Motivating the approximate posterior for deep GPs

Our original motivation for the approximate posterior was for the BNN case, which we then extended to deep GPs. Here, we show how the same approximate posterior can be motivated from a deep GP perspective. As with the BNN case, we first derive the form of the optimal approximate posterior for the last layer, in the regression case. Without inducing points, the ELBO becomes

\displaystyle\mathcal{L} =𝔼Q({𝐅}=1L+1)[logP(𝐘,{𝐅}=1L+1)Q({𝐅}=1L+1)],\displaystyle=\mathbb{E}_{{\operatorname{Q}\left(\{\mathbf{F}_{\ell}\}_{\ell=1}^{L+1}\right)}}\left[\log\frac{\operatorname{P}\left(\mathbf{Y},\{\mathbf{F}_{\ell}\}_{\ell=1}^{L+1}\right)}{\operatorname{Q}\left(\{\mathbf{F}_{\ell}\}_{\ell=1}^{L+1}\right)}\right], (75)

where we have defined a generic variational posterior Q({𝐅}=1L+1)\operatorname{Q}\left(\{\mathbf{F}_{\ell}\}_{\ell=1}^{L+1}\right). Since we are interested in the form of Q(𝐅L+1|{𝐅}=1L)\operatorname{Q}\left(\mathbf{F}_{L+1}\middle|\{\mathbf{F}_{\ell}\}_{\ell=1}^{L}\right), we rearrange the ELBO so that all terms that do not depend on 𝐅L+1\mathbf{F}_{L+1} are absorbed into a constant, cc:

\displaystyle\mathcal{L} =𝔼Q({𝐅}=1L+1)[logP(𝐘,𝐅L+1|{F}=1L)logQ(𝐅L+1)+c].\displaystyle=\mathbb{E}_{{\operatorname{Q}\left(\{\mathbf{F}_{\ell}\}_{\ell=1}^{L+1}\right)}}\left[\log\operatorname{P}\left(\mathbf{Y},\mathbf{F}_{L+1}\middle|\{F_{\ell}\}_{\ell=1}^{L}\right)-\log\operatorname{Q}\left(\mathbf{F}_{L+1}\right)+c\right]. (76)

Some straightforward rearrangements lead to a similar form to before,

=𝔼Q({𝐅}=1L)[logP(𝐘|{𝐅}=1L)DKL(Q(𝐅L+1|{𝐅}=1L)||P(𝐅L+1|𝐘,{𝐅}=1L))+c],\mathcal{L}=\mathbb{E}_{{\operatorname{Q}\left(\{\mathbf{F}_{\ell}\}_{\ell=1}^{L}\right)}}\big{[}\log\operatorname{P}\left(\mathbf{Y}|\,\{\mathbf{F}_{\ell}\}_{\ell=1}^{L}\right)-D_{KL}\left(\operatorname{Q}\left(\mathbf{F}_{L+1}|\{\mathbf{F}_{\ell}\}_{\ell=1}^{L}\right)||\operatorname{P}\left(\mathbf{F}_{L+1}|\,\mathbf{Y},\{\mathbf{F}_{\ell}\}_{\ell=1}^{L}\right)\right)+c\big{]}, (77)

from which we see that the optimal conditional posterior is given by Q(𝐅L+1|{𝐅}=1L)=Q(𝐅L+1|𝐅L)=P(𝐅L+1|𝐘,𝐅L)\operatorname{Q}\left(\mathbf{F}_{L+1}|\{\mathbf{F}_{\ell}\}_{\ell=1}^{L}\right)=\operatorname{Q}\left(\mathbf{F}_{L+1}|\mathbf{F}_{L}\right)=\operatorname{P}\left(\mathbf{F}_{L+1}|\,\mathbf{Y},\mathbf{F}_{L}\right), which has a closed form for regression: it is simply the standard GP posterior given by training data 𝐘\mathbf{Y} at inputs 𝐅L\mathbf{F}_{L}. In particular, for likelihood

P(𝐘|𝐅L+1,𝚲L+1)\displaystyle\operatorname{P}\left(\mathbf{Y}\middle|\mathbf{F}_{L+1},\mathbf{\Lambda}_{L+1}\right) =λ=1NL+1𝒩(𝐲λL+1|𝐟λL+1,𝚲L+11),\displaystyle={\textstyle\prod}_{\lambda=1}^{N_{L+1}}\mathcal{N}\left(\mathbf{y}_{\lambda}^{L+1}\middle|\mathbf{f}_{\lambda}^{L+1},\mathbf{\Lambda}_{L+1}^{-1}\right), (78)

where 𝚲L+1\mathbf{\Lambda}_{L+1} is the precision,

Q(𝐅L+1|𝐅L)\displaystyle\operatorname{Q}\left(\mathbf{F}_{L+1}|\mathbf{F}_{L}\right) =λ=1NL+1𝒩(𝐟λL+1|𝚺L+1𝐟𝚲L+1𝐲λL+1,𝚺L+1𝐟),\displaystyle={\textstyle\prod}_{\lambda=1}^{N_{L+1}}\mathcal{N}\left(\mathbf{f}_{\lambda}^{L+1}\middle|\mathbf{\Sigma}_{L+1}^{\mathbf{f}}\mathbf{\Lambda}_{L+1}\mathbf{y}_{\lambda}^{L+1},\mathbf{\Sigma}_{L+1}^{\mathbf{f}}\right), (79)
𝚺L+1𝐟\displaystyle\mathbf{\Sigma}_{L+1}^{\mathbf{f}} =(𝐊1(𝐅L)+𝚲L+1)1.\displaystyle=(\mathbf{K}^{-1}\left(\mathbf{F}_{L}\right)+\mathbf{\Lambda}_{L+1})^{-1}. (80)

This can be understood as kernelized Bayesian linear regression conditioned on the features from the previous layers. Finally, as is usual in GPs (Rasmussen & Williams, 2006), the predictive distribution for test points can be obtained by conditioning using this approximate posterior.

Appendix G Comparing our deep GP approximate posterior to previous work

Refer to caption
Figure A1: Comparison of the graphical models for three approaches to inference in deep GPs: Salimbeni & Deisenroth (2017), Ustyuzhaninov et al. (2020), and ours. A Graphical model illustrating the approach of Salimbeni & Deisenroth (2017). The inducing inputs, {𝐔1}=1L+1\{\mathbf{U}_{\ell-1}\}_{\ell=1}^{L+1} are treated as learned parameters and are therefore omitted from the model. B Graphical model illustrating the approach of Ustyuzhaninov et al. (2020). The inducing inputs are given by the inducing outputs at the previous layer. C Graphical model illustrating our approach.

The standard approach to inference in deep GPs (e.g.  Salimbeni & Deisenroth, 2017) involves local inducing points and an approximate posterior over {𝐔}=1L+1\{\mathbf{U}_{\ell}\}_{\ell=1}^{L+1} that is factorised across layers,

Q({𝐔}=1L+1)=l=1L+1λ=1N𝒩(𝐮λ;𝐦λ,𝚺λ).\displaystyle\operatorname{Q}\left(\{\mathbf{U}_{\ell}\}_{\ell=1}^{L+1}\right)=\textstyle\prod_{l=1}^{L+1}{\textstyle\prod_{\lambda=1}^{N_{\ell}}}\mathcal{N}\left(\mathbf{u}_{\lambda}^{\ell};\mathbf{m}^{\ell}_{\lambda},\mathbf{\Sigma}^{\ell}_{\lambda}\right). (81)

This approximate posterior induces an approximate posterior over the underlying infinite-dimensional functions \mathcal{F}_{\ell} at each layer, which are implicitly used to propagate the data through the network via 𝐅=(𝐅1)\mathbf{F}_{\ell}=\mathcal{F}_{\ell}(\mathbf{F}_{\ell-1}). We show a graphical model summarising the standard approach in Fig. A1A. While, as Salimbeni & Deisenroth (2017) point out, the function values {𝐅}=1L+1\{\mathbf{F}_{\ell}\}_{\ell=1}^{L+1} are correlated, the functions {}=1L+1\{\mathcal{F}_{\ell}\}_{\ell=1}^{L+1} themselves are independent across layers. We note that for BNNs, this is equivalent to having a posterior over weights that factorises across layers.

One approach to introduce dependencies across layers for the functions would be to introduce the notion of global inducing points, propagating the initial 𝐔0\mathbf{U}_{0} through the model. In fact, as we note in the Related Work section, Ustyuzhaninov et al. (2020) independently proposed this approach to introducing dependencies, using a toy problem to motivate the approach. However, they kept the form of the approximate posterior the same as the standard approach (Eq. 21); we show the corresponding graphical model in Fig. A1B. The graphical model shows that as adjacent functions \mathcal{F}_{\ell} and +1\mathcal{F}_{\ell+1} share the parent node 𝐔\mathbf{U}_{\ell}, they are in fact dependent. However, non-adjacent functions do not share any parent nodes, and so are independent; this can be seen by considering the d-separation criterion (Pearl, 1988) for 1\mathcal{F}_{\ell-1} and +1\mathcal{F}_{\ell+1}, which have parents (𝐔2,𝐔1)(\mathbf{U}_{\ell-2},\mathbf{U}_{\ell-1}) and (𝐔,𝐔+1)(\mathbf{U}_{\ell},\mathbf{U}_{\ell+1}) respectively.

Our approach, by contrast, determines the form of the approximate posterior over 𝐔\mathbf{U}_{\ell} by performing Bayesian regression using 𝐔1\mathbf{U}_{\ell-1} as input to that layer’s GP, where the output data is 𝐕\mathbf{V}_{\ell}. This results in a posterior that depends on the previous layer, Q(𝐔|𝐔1)\operatorname{Q}\left(\mathbf{U}_{\ell}\middle|\mathbf{U}_{\ell-1}\right). We show the corresponding graphical model in Fig. A1C. From this graphical model it is straightforward to see that our approach results in a posterior over functions that are correlated across all layers.

Appendix H Parameter scaling for ADAM

Refer to caption
Figure A2: Predictive distributions on the toy dataset as the number of inducing points changes.

The standard optimiser for variational BNNs is ADAM (Kingma & Ba, 2014), which we also use. Considering similar RMSprop updates for simplicity (Tieleman & Hinton, 2012),

Δw\displaystyle\Delta w =ηg𝔼[g2]\displaystyle=\eta\frac{g}{\sqrt{\mathbb{E}\left[g^{2}\right]}} (82)

where the expectation over $g^{2}$ is approximated using a moving average of past gradients. Thus, absolute parameter changes are going to be of order $\eta$. This is fine if all the parameters have roughly the same order of magnitude, but it becomes a serious problem if some of the parameters are very large and others are very small. For instance, if a parameter is around $10^{-4}$ and $\eta=10^{-4}$, then a single ADAM step can easily double the parameter estimate, or change it from positive to negative. In contrast, if a parameter is around $1$, then ADAM with $\eta=10^{-4}$ can make proportionally much smaller changes to this parameter (around $0.01\%$). Thus, we need to ensure that all of our parameters have the same scale, especially as we mix methods, such as combining factorised and global inducing points. We thus design all our new approximate posteriors (i.e. the inducing inputs and outputs) such that the parameters have a scale of around $1$. The key issue is that the mean weights in factorised methods tend to be quite small: they have scale around $1/\sqrt{\text{fan-in}}$. To resolve this issue, we store scaled weights, and we divide these stored, scaled mean parameters by the square root of the fan-in as part of the forward pass,

weights =scaled weightsfan-in.\displaystyle=\frac{\text{scaled weights}}{\sqrt{\text{fan-in}}}. (83)

This scaling forces us to use larger learning rates than are typically used.
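A minimal sketch of this parameterisation for a fully connected layer with a factorised Gaussian over the weights, where the stored mean is rescaled in the forward pass (illustrative, not our exact module):

import math
import torch
import torch.nn as nn

class ScaledMeanLinear(nn.Module):
    # Stores "scaled weights" of order 1 and divides by sqrt(fan-in) when used,
    # so that ADAM's roughly eta-sized updates act on parameters of comparable scale.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.scaled_mean = nn.Parameter(torch.randn(out_features, in_features))
        self.log_std = nn.Parameter(torch.full((out_features, in_features), -3.0))

    def forward(self, x):
        mean = self.scaled_mean / math.sqrt(self.scaled_mean.shape[1])  # Eq. (83)
        weight = mean + self.log_std.exp() * torch.randn_like(mean)
        return x @ weight.T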

Appendix I Exploring the effect of the number of inducing points

In this section, we briefly consider the effect of changing the number of inducing points, MM, used in global inducing. We reconsider the toy problem from Sec. 1, and plot predictive posteriors obtained with global inducing as the number of inducing points increases from 2 to 40 (noting that in Fig. 1 we used 100 inducing points).

We plot the results of our experiment in Fig. A2. While two inducing points are clearly not sufficient, we observe that there is remarkably little difference between the predictive posteriors for 10 or more inducing points. This observation is reflected in the ELBOs per datapoint (listed above each plot), which show that adding more points beyond 10 gains very little in terms of closeness to the true posterior.

However, we note that this is a very simple dataset: it consists of only two clusters of close points with a very clear trend. Therefore, we would expect that for more complex datasets more inducing points would be necessary. We leave a full investigation of how many inducing points are required to obtain a suitable approximate posterior, such as that found in Burt et al. (2020) for sparse GP regression, to future work.

Appendix J Understanding compositional uncertainty

In this section, we take inspiration from the experiments of Ustyuzhaninov et al. (2020) which investigate the compositional uncertainty obtained by different approximate posteriors for DGPs. They noted that methods which factorise over layers have a tendency to cause the posterior distribution for each layer to collapse to a (nearly) deterministic function, resulting in worse uncertainty quantification within layers and worse ELBOs. In contrast, they found that allowing the approximate posterior to have correlations between layers allows those layers to capture more uncertainty, resulting in better ELBOs and therefore a closer approximation to the true posterior. They argue that this then allows the model to better discover compositional structure in the data.

Refer to caption
Figure A3: Posterior distributions for 2-layer DGPs with local inducing and global inducing. The first two columns show the predictive distributions for each layer taken individually, while the last column shows the predictive distribution of the output yy.
Table 2: ELBOs and variances of the intermediate functions for a BNN fit to the toy data of Fig. 1.
ELBO 𝕍[1]\mathbb{V}[\mathcal{F}_{1}] 𝕍[2]\mathbb{V}[\mathcal{F}_{2}] 𝕍[3]\mathbb{V}[\mathcal{F}_{3}]
factorised -4.585 0.0728 0.4765 0.1926
local inducing -5.469 0.0763 0.4473 0.0643
global inducing 2.236 0.4820 0.4877 1.0820

We first consider a toy problem consisting of 100 datapoints generated by sampling from a two-layer DGP of width one, with squared-exponential kernels in each layer. We then fit two two-layer DGPs to this data: one using local inducing, the other using global inducing. The results of this experiment can be seen in Fig. A3, which shows the final fit, along with the learned posteriors over the intermediate functions $\mathcal{F}_{1}$ and $\mathcal{F}_{2}$. These results mirror those observed by Ustyuzhaninov et al. (2020) on a similar experiment: local inducing, which factorises over layers, collapses to a nearly deterministic posterior over the intermediate functions, whereas global inducing provides a much broader distribution over functions for the two layers. Therefore, global inducing leads to a wider range of plausible functions that could explain the data via composition, which can be important in understanding the data. We observe that this behaviour directly leads to better uncertainty quantification in the out-of-distribution region, as well as better ELBOs.

To illustrate a similar phenomenon in BNNs, we reconsider the toy problem of Sec. 1. As it is not meaningful to consider neural networks with only one hidden unit per layer, rather than plotting intermediate functions we look at the mean variance of the functions at random input points, roughly following the experiment of Dutordoir et al. (2019) in their Table 1. For each layer, we consider the quantity

𝔼x[1Nλ=1N𝕍[flλ(x)]],\mathbb{E}_{x}\left[\frac{1}{N_{\ell}}\sum_{\lambda=1}^{N_{\ell}}\mathbb{V}[f^{l}_{\lambda}(x)]\right], (84)

where the expectation is over random input points, which we sample from a standard normal. We expect that for methods which introduce correlations across layers, this quantity will be higher, as there will be a wider range of intermediate functions that could plausibly explain the data. We confirm this in Table 2, which indicates that global inducing leads to many more compositions of functions being considered as plausible explanations of the data. This is additionally reflected in the ELBO, which is far better for global inducing than the other, factorised methods. However, we note that the variances are far closer than we might otherwise expect for the second layer. We hypothesise that this is due to the pruning effects described in Trippe & Turner (2018), where a layer has many weights that are close to the prior that are then pruned out by the following layer by having the outgoing weights collapse to zero. In fact, we note that the variances in the last layer are small for the factorised methods, which supports this hypothesis. By contrast, global inducing leads to high variances across all layers.
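The quantity in Eq. (84) can be estimated by simple Monte Carlo: draw random inputs from a standard normal, draw several posterior samples of the layer's activations, and average the per-unit variance across samples. A minimal sketch, assuming a sample_layer_activations(x) callable that returns one posterior sample of the layer-$\ell$ activations (this name is illustrative):

import torch

def mean_function_variance(sample_layer_activations, in_dim,
                           n_inputs=100, n_samples=50):
    # Random input points x ~ N(0, I)
    x = torch.randn(n_inputs, in_dim)
    # Stack S posterior samples of the layer's activations: (S, n_inputs, N_l)
    samples = torch.stack([sample_layer_activations(x) for _ in range(n_samples)])
    # Variance over posterior samples, averaged over units and input points
    return samples.var(dim=0).mean()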

We believe that understanding the role of compositional uncertainty in variational inference for deep Bayesian models can lead to important conclusions about both the models being used and the compositional structure underlying the data being modelled, and is therefore an important direction for future work to consider.

Appendix K UCI results with Bayesian neural networks

Refer to caption
Figure A4: ELBOs per datapoint and average test log likelihoods for BNNs on UCI datasets.

For this Appendix, we consider all of the UCI datasets from Hernández-Lobato & Adams (2015), along with four approximation families: factorised (i.e. mean-field), local inducing, global inducing, and fac \rightarrow gi, which may offer some computational advantages over global inducing. We also considered three priors: the standard $\mathcal{N}(0,1)$ prior, NealPrior, and ScalePrior. The test LLs and ELBOs for BNNs applied to UCI datasets are given in Fig. A4. Note that the ELBOs for the global inducing methods (both global inducing and fac \rightarrow gi) are almost always better than those for the baseline methods, often by a very large margin. However, as noted earlier, this does not necessarily correspond to better test log likelihoods, due to model misspecification: there is not a straightforward relationship between the ELBO and the predictive performance, and so it is possible to obtain better test log likelihoods with worse inference. We present all the results, including the test RMSEs, in tabulated form in Appendix P.

K.1 Experimental details

The architecture we considered for all BNN UCI experiments was a fully-connected ReLU network with 2 hidden layers of 50 hidden units each. We performed a grid search to select the learning rate and minibatch size. For the fully factorised approximation, we selected the learning rate from $\{3\textrm{e-}4,1\textrm{e-}3,3\textrm{e-}3,1\textrm{e-}2\}$ and the minibatch size from $\{32,100,500\}$, optimising for 25000 gradient steps; for the other methods we selected the learning rate from $\{3\textrm{e-}3,1\textrm{e-}2\}$ and fixed the minibatch size to 10000 (as in Salimbeni & Deisenroth, 2017), optimising for 10000 gradient steps. For all methods we selected the hyperparameters that gave the best ELBO. We trained the models using 10 samples from the approximate posterior, while using 100 for evaluation. For the inducing point methods, we used the selected batch size for the number of inducing points per layer. For all methods, we initialised the log noise variance at $-3$, but used the scaling trick in Appendix H to accelerate convergence, scaling by a factor of 10. Note that for the fully factorised method we used the local reparameterisation trick (Kingma et al., 2015); however, for fac \rightarrow gi we cannot do so because the inducing point methods require that covariances be propagated through the network correctly. For the inducing point methods, we additionally use output channel-specific precisions, $\mathbf{\Lambda}_{l,\lambda}$, which effectively allows the network to prune unnecessary neurons if that benefits the ELBO. However, we only parameterise the diagonal of these precision matrices to save on computational and memory cost.

Appendix L Uncertainty calibration & out-of-distribution detection for CIFAR-10

To assess how well our methods capture uncertainty, we consider calibration, as well as the predictive entropy for out-of-distribution data. Calibration is assessed by comparing the model’s probabilistic assessment of its confidence with its accuracy — the proportion of the time that it is actually correct. For instance, gathering model predictions with some confidence (e.g. softmax probabilities in the range 0.9 to 0.95), and looking at the accuracy of these predictions, we would expect the model to be correct with probability 0.925; a higher or lower value would represent miscalibration.

We begin by plotting calibration curves (for the small ‘ResNet’ model) in Fig. A5, obtained by binning the predictions in 20 equal bins and assessing the mean accuracy of the binned predictions. For well-calibrated models, we expect the line to lie on the diagonal. A line above the diagonal indicates the model is underconfident (the model is performing better than it expects), whereas a line below the diagonal indicates it is overconfident (it is performing worse than it expects). While it is difficult to draw strong conclusions from these plots, it appears generally that factorised is poorly calibrated for both priors, that SpatialIWPrior generally improves calibration over ScalePrior, and that local inducing with SpatialIWPrior performs very well.

Refer to caption
Figure A5: Calibration curves for CIFAR-10

To come to more quantitative conclusions, we use expected calibration error (ECE; (Naeini et al., 2015; Guo et al., 2017)), which measures the expected absolute difference between the model’s confidence and accuracy. Confirming the results from the plots (Fig. A5), we find that using the more sophisticated SpatialIWPrior gave considerable improvements in calibration. While, as expected, we find that our most accurate prior, SpatialIWPrior, in combination with global inducing points did very well (ECE of 0.021), the model with the best ECE is actually local inducing with SpatialIWPrior, albeit by a very small margin. We leave investigation of exactly why this is to future work. Finally, note our final ECE value of 0.021 is a considerable improvement over those for uncalibrated models in Guo et al. (2017) (Table 1), which are in the region of 0.03-0.045 (although considerably better calibration can be achieved by post-hoc scaling of the model’s confidence).

Table 3: Expected calibration error for CIFAR-10
factorised local inducing fac \rightarrow gi global inducing
ScalePrior 0.053 0.040 0.049 0.038
SpatialIWPrior 0.045 0.018 0.036 0.021
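The ECE values above can be computed as in the following minimal sketch (20 equal-width confidence bins, matching the calibration curves, with each prediction's confidence taken to be its maximum softmax probability):

import torch

def expected_calibration_error(probs, labels, n_bins=20):
    # probs: (N, classes) predictive probabilities; labels: (N,) integer class labels
    conf, preds = probs.max(dim=1)
    correct = (preds == labels).float()
    edges = torch.linspace(0, 1, n_bins + 1)
    ece = torch.tensor(0.0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # weight each bin by the fraction of predictions it contains
            ece = ece + in_bin.float().mean() * (correct[in_bin].mean()
                                                 - conf[in_bin].mean()).abs()
    return ece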

In addition to calibration, we consider out-of-distribution performance. Given out-of-distribution data, we would hope that the network would give high output entropies (i.e. low confidence in the predictions), whereas for in-distribution data, we would hope for low entropy predictions (i.e. high confidence in the predictions). To evaluate this, we consider the mean predictive entropies for both the CIFAR-10 test set and the SVHN (Netzer et al., 2011) test set, when the network has been trained on CIFAR-10. We compare the ratio (CIFAR-10 entropy/ SVHN entropy) of these mean predictive entropies for each model in Table 4; a lower ratio indicates that the model is doing a better job of differentiating the datasets. We see that for both priors we considered, global inducing performs the best.

Table 4: Predictive entropy ratios for CIFAR-10 & SVHN
factorised local inducing fac \rightarrow gi global inducing
ScalePrior 0.506 0.534 0.462 0.352
SpatialIWPrior 0.461 0.480 0.412 0.342
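The ratios in Table 4 are computed from mean predictive entropies; a short sketch, where probs_cifar and probs_svhn stand in for the models' averaged predictive probabilities on each test set (placeholders, not variables from our code):

import torch

def mean_entropy(probs, eps=1e-12):
    # probs: (N, classes) averaged predictive probabilities
    return -(probs * (probs + eps).log()).sum(dim=1).mean()

# Lower ratios indicate better separation of in- and out-of-distribution data:
# ratio = mean_entropy(probs_cifar) / mean_entropy(probs_svhn)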

Appendix M UCI results with deep Gaussian processes

In this appendix, we again consider all of the UCI datasets from Hernández-Lobato & Adams (2015) for DGPs with depths ranging from two to five layers. We compare DSVI (Salimbeni & Deisenroth, 2017), local inducing, and global inducing. While local inducing uses the same inducing-point architecture as Salimbeni & Deisenroth (2017), the actual implementation and parameterisation are very different. As such, we do expect to see differences between local inducing and Salimbeni & Deisenroth (2017).

We show our results in Fig. A6. Here, the results are not as clear-cut as in the BNN case. For the smaller datasets (i.e. boston, concrete, energy, wine, but with the notable exception of yacht), global inducing generally outperforms both local inducing and DSVI, as noted in the main text, especially when considering the ELBOs. We do however observe that for power, protein, yacht, and one model for kin8nm, the local approaches sometimes outperform global inducing, even for the ELBOs. We believe this is due to the fact that relatively few inducing points were used (100), in combination with the fact that global inducing has far fewer variational parameters than the local approaches. This may make optimisation harder in the global inducing case, especially for larger datasets where the model uncertainty does not matter as much, as the posterior concentration will be stronger. Importantly, however, our results on CIFAR-10 indicate that these issues do not arise in very large-scale, high-dimensional datasets, which are of most interest for future work. Surprisingly, local inducing generally significantly outperforms DSVI, even though they are different parameterisations of the same approximate posterior. We leave consideration of why this is for future work.

We provide tabulated results, including for RMSEs, in Appendix P.

M.1 Experimental details

Here, we matched the experimental setup in Salimbeni & Deisenroth (2017) as closely as possible. In particular, we used 100 inducing points, and full-covariance observation noise. However, our parameterisation is still somewhat different from theirs, in part because our approximate posterior is defined in terms of noisy function-values, while their approximate posterior was defined in terms of the function-values themselves.

As the original results in Salimbeni & Deisenroth (2017) used different UCI splits, and did not provide the ELBO, we reran their code https://github.com/ICL-SML/Doubly-Stochastic-DGP (changing the number of epochs and noise variance to reflect the values in the paper), which gave very similar log likelihoods to those in the paper.

Refer to caption
Figure A6: ELBOs per datapoint and average test log likelihoods for DGPs on UCI datasets. The numbers indicate the depths of the models.

Appendix N MNIST 500

For MNIST, we considered a LeNet-inspired model consisting of two conv2d-relu-maxpool blocks, followed by conv2d-relu-linear, where the convolutions all have 3×33\times 3 kernels with 64 channels. We trained all models using a learning rate of 10310^{-3}.

When training on very small datasets, such as the first 500 training examples in MNIST, we can see a variety of pathologies emerge with standard methods. To help build intuition for these pathologies, we introduce a sanity check for the ELBO. In particular, we could imagine a model that sets the distribution over all lower-layer parameters equal to the prior, and sets the top-layer parameters so as to ensure that the predictions are uniform. With 10 classes, this results in an average test log likelihood of $-2.30\approx\log(1/10)$, and an ELBO (per datapoint) of approximately $-2.30$. We found that many combinations of the approximate posterior/prior converged to ELBOs near this baseline. Indeed, the only approximate posterior to escape this baseline for ScalePrior and SpatialIWPrior is global inducing points. This is because ScalePrior and SpatialIWPrior both offer the flexibility to shrink the prior variance, and hence shrink the weights towards zero, giving uniform predictions and potentially zero KL divergence. In contrast, NealPrior and StandardPrior do not offer this flexibility: you always have to pay something in KL divergence in order to give uniform predictions. We believe that this is the reason that factorised performs better than expected with NealPrior, despite having an ELBO that is close to the baseline. Furthermore, it is unclear why local inducing gives very poor test log likelihood and performance, despite having an ELBO that is similar to factorised. For StandardPrior, all the ELBOs are far lower than the baseline, and far lower than for any other priors. Despite this, factorised and fac \rightarrow gi in combination with StandardPrior appear to transiently perform better in terms of predictive accuracy than any other method. These results should sound a note of caution whenever we try to use factorised approximate posteriors with fixed prior covariances (e.g. Blundell et al., 2015; Farquhar et al., 2020). We leave a full investigation of these effects for future work.

Figure A7: The ELBO, test log likelihoods and classification accuracy with different priors and approximate posteriors on a reduced MNIST dataset consisting of only the first 500 training examples.

Appendix O Additional experimental details

All the methods were implemented in PyTorch. We ran the toy experiments on CPU, with the UCI experiments being run on a mixture of CPU and GPU. The remaining experiments – linear, CIFAR-10, and MNIST 500 – were run on various GPUs. For CIFAR-10, the most intensive of our experiments, we trained the models on one NVIDIA Tesla P100-PCIE-16GB. We optimised using ADAM (Kingma & Ba, 2014) throughout.

Factorised

We initialise the posterior weight means to follow the Neal scaling (i.e. drawn from NealPrior); however, we use the scaling described in Appendix H to accelerate training. We initialise the weight variances to $1\textrm{e-}3/\sqrt{N_{\ell-1}}$ for each layer.
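A minimal PyTorch sketch of this initialisation for a single fully connected layer, assuming NealPrior corresponds to mean draws with variance $1/N_{\ell-1}$; the helper name is hypothetical and the Appendix H scaling is omitted.

```python
import math
import torch
import torch.nn as nn

def init_factorised_layer(n_in, n_out):
    """Hypothetical helper: mean/log-variance parameters for one factorised layer."""
    # Means drawn with the Neal scaling, i.e. variance 1/n_in (as in NealPrior).
    mean = torch.randn(n_out, n_in) / math.sqrt(n_in)
    # Variances initialised to 1e-3 / sqrt(n_in).
    log_var = torch.full((n_out, n_in), math.log(1e-3 / math.sqrt(n_in)))
    return nn.Parameter(mean), nn.Parameter(log_var)
```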

Inducing point methods

For global inducing, we initialise the inducing inputs, $\mathbf{U}_0$, and the pseudo-outputs for the last layer, $V_{L+1}$, using the first batch of data, except for the toy experiment, where we initialise using samples from $\mathcal{N}(0,1)$ (since we used more inducing points than datapoints). For the remaining layers, we initialise the pseudo-outputs by sampling from $\mathcal{N}(0,1)$. We initialise the log precision to $-4$, except for the last layer, where we initialise it to 0. We additionally use a scaling factor of 3, as described in Appendix H. For local inducing, the initialisation is largely the same, except that we initialise the pseudo-outputs for every layer by sampling from $\mathcal{N}(0,1)$, and we additionally sample the inducing inputs for every layer from $\mathcal{N}(0,1)$.
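The sketch below illustrates the global-inducing initialisation in PyTorch; the function name and shapes are ours (in particular, precisions are reduced to a single scalar per layer for simplicity), and the Appendix H scaling factor is omitted.

```python
import torch

def init_global_inducing(x_batch, y_batch, hidden_widths, n_inducing=100):
    """Hypothetical sketch of the global-inducing initialisation described above."""
    u0 = x_batch[:n_inducing].clone()                    # inducing inputs U_0 from the first batch
    pseudo_outputs, log_prec = [], []
    for width in hidden_widths:                          # intermediate layers
        pseudo_outputs.append(torch.randn(n_inducing, width))
        log_prec.append(torch.tensor(-4.0))              # log precision initialised to -4
    pseudo_outputs.append(y_batch[:n_inducing].clone())  # last-layer pseudo-outputs V_{L+1}
    log_prec.append(torch.tensor(0.0))                   # last-layer log precision initialised to 0
    return u0, pseudo_outputs, log_prec
```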

Toy experiment

For each variational method, we optimise the ELBO over 5000 epochs, using full batches for the gradient descent. We use a learning rate of $1\textrm{e-}2$. We fix the noise variance at its true value, to help assess the differences between each method more clearly. We use 10 samples from the variational posterior for training, using 100 for testing. For HMC, we use 10000 samples to burn in, and 10000 samples for evaluation, which we subsequently thin by a factor of 10. We initialise the samples from a standard normal distribution, and use 20 leapfrog steps for each sample. We hand-tune the leapfrog step sizes to be 0.0007 and 0.003 for the burn-in and sampling phases, respectively.

Deep linear network

We use 10 inducing points for the inducing point methods. We use 1 sample from the approximate posterior for training and 10 for testing, training for 40 periods of 1000 gradient steps, using full batches for each step, with a learning rate of $1\textrm{e-}2$.

UCI experiments

The splits that we used for the UCI datasets can be found at https://github.com/yaringal/DropoutUncertaintyExps.

CIFAR-10

The CIFAR-10 dataset (https://www.cs.toronto.edu/~kriz/cifar.html; Krizhevsky et al., 2009) is a 10-class dataset comprising RGB, $32\times 32$ images. It is divided into two sets: a training set of 50,000 examples and a validation set of 10,000 examples. For the purposes of this paper, we use the validation set as our test set and refer to it as such, as is commonly done in the literature. We use a batch size of 500, with one sample from the approximate posterior for training and 10 for testing. For pre-processing, we normalise the data using the training dataset's mean and standard deviation. We train for 1000 epochs with a learning rate of 1e-2 (see App. H for an explanation of why our learning rate is higher than might be expected), and we use a tempering scheme for the first 100 epochs, slowly increasing the influence of the KL divergence to the prior by multiplying it by a factor that increases from 0 to 1. In our scheme, we increase the factor in a step-wise manner: it is 0 for the first ten epochs, 0.1 for the next ten, 0.2 for the following ten, and so on. Importantly, we still have 900 epochs of training in which the standard, untempered ELBO is used, meaning that our results reflect that ELBO. Finally, we note that we share the precisions $\mathbf{\Lambda}_{\ell}$ within layers, instead of using a separate precision for each output channel as was done in the UCI case. This saves memory and computational cost, although possibly at the expense of predictive performance. For the full ResNet-18 experiments, we use the same training procedure. We do not use batch normalization (Ioffe & Szegedy, 2015) for the fully factorised case, since batch normalization is difficult to interpret in a Bayesian manner, as it treats training and testing points differently. However, for the inducing point methods, we use a modified version of batch normalization that computes batch statistics from the inducing data, which do not change between training and testing. For the random cropping, we use zero padding of four pixels on each edge (resulting in $40\times 40$ images), and randomly crop out a $32\times 32$ image.
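The step-wise tempering schedule can be written compactly as follows (a minimal sketch; the function name and the exact way the factor enters the objective are our illustration, not code from the reference implementation).

```python
def kl_factor(epoch):
    """Step-wise KL tempering factor: 0.0 for epochs 0-9, 0.1 for epochs 10-19, ...,
    0.9 for epochs 90-99, and 1.0 from epoch 100 onwards (the untempered ELBO)."""
    return min((epoch // 10) * 0.1, 1.0)

# The tempered objective at a given epoch is then
#   E_q[log p(y | x, w)] - kl_factor(epoch) * KL(q || p),
# which reduces to the standard ELBO for the final 900 epochs.
```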

MNIST 500

The MNIST dataset (http://yann.lecun.com/exdb/mnist/) is a dataset of grayscale handwritten digits, each $28\times 28$ pixels, with 10 classes. It comprises 60,000 training images and 10,000 test images. For the MNIST 500 experiments, we trained using the first 500 images from the training dataset and discarded the rest. We normalised the images using the full training dataset's statistics.

Deep GPs

As mentioned, we largely follow the approach of Salimbeni & Deisenroth (2017) for hyperparameters. For global inducing, we initialise the inducing inputs to the first batch of training inputs, and we initialise the pseudo-outputs for the last layer to the respective training outputs. For the remaining layers, we initialise the pseudo-outputs by sampling from a standard normal distribution. We initialise the precision matrix to be diagonal with log precision zero for the output layer, and log precision -4 for the remaining layers. For local inducing, we initialise inducing inputs and pseudo-outputs by sampling from a standard normal for every layer, and initialise the precision matrices to be diagonal with log precision zero.

Appendix P Tables of UCI Results

Table 5: Average test log likelihoods in nats for BNNs on UCI datasets (errors are $\pm$ 1 standard error)
\begin{tabular}{lcccc}
 & factorised & local inducing & global inducing & factorised $\rightarrow$ global \\
boston - $\mathcal{N}(0,1)$ & $-2.74 \pm 0.03$ & $-2.80 \pm 0.04$ & $\mathbf{-2.59 \pm 0.03}$ & $-2.60 \pm 0.02$ \\
NealPrior & $-2.76 \pm 0.04$ & $-2.71 \pm 0.04$ & $\mathbf{-2.55 \pm 0.05}$ & $-2.63 \pm 0.05$ \\
ScalePrior & $-3.63 \pm 0.03$ & $-2.73 \pm 0.03$ & $\mathbf{-2.50 \pm 0.04}$ & $-2.61 \pm 0.04$ \\
concrete - $\mathcal{N}(0,1)$ & $-3.17 \pm 0.02$ & $-3.28 \pm 0.01$ & $\mathbf{-3.08 \pm 0.01}$ & $-3.14 \pm 0.01$ \\
NealPrior & $-3.21 \pm 0.01$ & $-3.29 \pm 0.01$ & $\mathbf{-3.14 \pm 0.01}$ & $-3.15 \pm 0.02$ \\
ScalePrior & $-3.89 \pm 0.09$ & $-3.30 \pm 0.01$ & $\mathbf{-3.12 \pm 0.01}$ & $-3.15 \pm 0.01$ \\
energy - $\mathcal{N}(0,1)$ & $-0.76 \pm 0.02$ & $-1.75 \pm 0.01$ & $-0.73 \pm 0.03$ & $\mathbf{-0.68 \pm 0.03}$ \\
NealPrior & $-0.79 \pm 0.02$ & $-2.06 \pm 0.09$ & $-0.72 \pm 0.02$ & $\mathbf{-0.70 \pm 0.02}$ \\
ScalePrior & $-2.55 \pm 0.01$ & $-2.42 \pm 0.02$ & $\mathbf{-0.73 \pm 0.02}$ & $-0.74 \pm 0.02$ \\
kin8nm - $\mathcal{N}(0,1)$ & $1.24 \pm 0.01$ & $1.06 \pm 0.01$ & $1.22 \pm 0.01$ & $\mathbf{1.28 \pm 0.01}$ \\
NealPrior & $1.26 \pm 0.01$ & $1.06 \pm 0.01$ & $1.18 \pm 0.01$ & $\mathbf{1.29 \pm 0.01}$ \\
ScalePrior & $1.23 \pm 0.01$ & $1.10 \pm 0.01$ & $1.19 \pm 0.01$ & $\mathbf{1.29 \pm 0.00}$ \\
naval - $\mathcal{N}(0,1)$ & $7.25 \pm 0.04$ & $6.06 \pm 0.10$ & $\mathbf{7.29 \pm 0.04}$ & $7.23 \pm 0.02$ \\
NealPrior & $7.37 \pm 0.03$ & $4.28 \pm 0.37$ & $6.97 \pm 0.04$ & $\mathbf{7.45 \pm 0.02}$ \\
ScalePrior & $6.99 \pm 0.03$ & $2.80 \pm 0.00$ & $6.63 \pm 0.04$ & $\mathbf{7.50 \pm 0.02}$ \\
power - $\mathcal{N}(0,1)$ & $-2.81 \pm 0.01$ & $-2.82 \pm 0.01$ & $-2.80 \pm 0.01$ & $\mathbf{-2.79 \pm 0.01}$ \\
NealPrior & $\mathbf{-2.81 \pm 0.01}$ & $-2.84 \pm 0.01$ & $-2.82 \pm 0.01$ & $\mathbf{-2.81 \pm 0.01}$ \\
ScalePrior & $-2.82 \pm 0.01$ & $-2.84 \pm 0.01$ & $-2.82 \pm 0.01$ & $\mathbf{-2.81 \pm 0.01}$ \\
protein - $\mathcal{N}(0,1)$ & $-2.83 \pm 0.00$ & $-2.93 \pm 0.00$ & $-2.84 \pm 0.00$ & $\mathbf{-2.81 \pm 0.00}$ \\
NealPrior & $-2.86 \pm 0.00$ & $-2.92 \pm 0.01$ & $-2.87 \pm 0.00$ & $\mathbf{-2.82 \pm 0.00}$ \\
ScalePrior & $-2.86 \pm 0.00$ & $-2.91 \pm 0.00$ & $-2.85 \pm 0.00$ & $\mathbf{-2.80 \pm 0.00}$ \\
wine - $\mathcal{N}(0,1)$ & $-0.98 \pm 0.01$ & $-0.99 \pm 0.01$ & $\mathbf{-0.96 \pm 0.01}$ & $\mathbf{-0.96 \pm 0.01}$ \\
NealPrior & $-0.99 \pm 0.01$ & $-0.98 \pm 0.01$ & $-0.97 \pm 0.01$ & $\mathbf{-0.96 \pm 0.01}$ \\
ScalePrior & $-1.22 \pm 0.01$ & $-0.99 \pm 0.01$ & $\mathbf{-0.96 \pm 0.01}$ & $\mathbf{-0.96 \pm 0.01}$ \\
yacht - $\mathcal{N}(0,1)$ & $-1.41 \pm 0.05$ & $-2.39 \pm 0.05$ & $\mathbf{-0.68 \pm 0.03}$ & $-1.12 \pm 0.02$ \\
NealPrior & $-1.58 \pm 0.04$ & $-1.84 \pm 0.05$ & $\mathbf{-0.81 \pm 0.03}$ & $-1.04 \pm 0.01$ \\
ScalePrior & $-4.12 \pm 0.03$ & $-2.71 \pm 0.22$ & $\mathbf{-0.79 \pm 0.02}$ & $-0.97 \pm 0.05$ \\
\end{tabular}
Table 6: Test RMSEs for BNNs on UCI datasets (errors are $\pm$ 1 standard error)
\begin{tabular}{lcccc}
 & factorised & local inducing & global inducing & factorised $\rightarrow$ global \\
boston - $\mathcal{N}(0,1)$ & $3.60 \pm 0.21$ & $3.85 \pm 0.26$ & $\mathbf{3.13 \pm 0.20}$ & $3.14 \pm 0.20$ \\
NealPrior & $3.64 \pm 0.24$ & $3.55 \pm 0.23$ & $\mathbf{3.14 \pm 0.18}$ & $3.33 \pm 0.21$ \\
ScalePrior & $9.03 \pm 0.26$ & $3.57 \pm 0.20$ & $\mathbf{2.97 \pm 0.19}$ & $3.27 \pm 0.20$ \\
concrete - $\mathcal{N}(0,1)$ & $5.73 \pm 0.11$ & $6.34 \pm 0.11$ & $\mathbf{5.39 \pm 0.09}$ & $5.55 \pm 0.10$ \\
NealPrior & $5.96 \pm 0.11$ & $6.35 \pm 0.12$ & $\mathbf{5.64 \pm 0.10}$ & $5.70 \pm 0.11$ \\
ScalePrior & $12.66 \pm 1.04$ & $6.48 \pm 0.12$ & $\mathbf{5.56 \pm 0.10}$ & $5.68 \pm 0.09$ \\
energy - $\mathcal{N}(0,1)$ & $0.51 \pm 0.01$ & $1.35 \pm 0.02$ & $0.50 \pm 0.01$ & $\mathbf{0.47 \pm 0.02}$ \\
NealPrior & $0.51 \pm 0.01$ & $1.95 \pm 0.14$ & $0.49 \pm 0.01$ & $\mathbf{0.47 \pm 0.01}$ \\
ScalePrior & $3.02 \pm 0.05$ & $2.67 \pm 0.06$ & $0.50 \pm 0.01$ & $\mathbf{0.49 \pm 0.01}$ \\
kin8nm - $\mathcal{N}(0,1)$ & $\mathbf{0.07 \pm 0.00}$ & $0.08 \pm 0.00$ & $\mathbf{0.07 \pm 0.00}$ & $\mathbf{0.07 \pm 0.00}$ \\
NealPrior & $\mathbf{0.07 \pm 0.00}$ & $0.08 \pm 0.00$ & $\mathbf{0.07 \pm 0.00}$ & $\mathbf{0.07 \pm 0.00}$ \\
ScalePrior & $\mathbf{0.07 \pm 0.00}$ & $0.08 \pm 0.00$ & $\mathbf{0.07 \pm 0.00}$ & $\mathbf{0.07 \pm 0.00}$ \\
naval - $\mathcal{N}(0,1)$ & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ \\
NealPrior & $\mathbf{0.00 \pm 0.00}$ & $0.01 \pm 0.00$ & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ \\
ScalePrior & $\mathbf{0.00 \pm 0.00}$ & $0.01 \pm 0.00$ & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ \\
power - $\mathcal{N}(0,1)$ & $4.00 \pm 0.03$ & $4.06 \pm 0.03$ & $3.96 \pm 0.04$ & $\mathbf{3.93 \pm 0.04}$ \\
NealPrior & $4.03 \pm 0.04$ & $4.13 \pm 0.03$ & $4.06 \pm 0.03$ & $\mathbf{4.00 \pm 0.04}$ \\
ScalePrior & $4.06 \pm 0.04$ & $4.12 \pm 0.04$ & $4.05 \pm 0.04$ & $\mathbf{4.01 \pm 0.04}$ \\
protein - $\mathcal{N}(0,1)$ & $4.12 \pm 0.02$ & $4.54 \pm 0.02$ & $4.14 \pm 0.02$ & $\mathbf{4.04 \pm 0.01}$ \\
NealPrior & $4.21 \pm 0.01$ & $4.50 \pm 0.03$ & $4.27 \pm 0.02$ & $\mathbf{4.06 \pm 0.01}$ \\
ScalePrior & $4.22 \pm 0.02$ & $4.46 \pm 0.01$ & $4.19 \pm 0.02$ & $\mathbf{4.00 \pm 0.02}$ \\
wine - $\mathcal{N}(0,1)$ & $0.65 \pm 0.01$ & $0.65 \pm 0.01$ & $\mathbf{0.63 \pm 0.01}$ & $\mathbf{0.63 \pm 0.01}$ \\
NealPrior & $0.66 \pm 0.01$ & $0.65 \pm 0.01$ & $\mathbf{0.64 \pm 0.01}$ & $\mathbf{0.64 \pm 0.01}$ \\
ScalePrior & $0.82 \pm 0.01$ & $0.65 \pm 0.01$ & $\mathbf{0.64 \pm 0.01}$ & $\mathbf{0.64 \pm 0.01}$ \\
yacht - $\mathcal{N}(0,1)$ & $0.98 \pm 0.07$ & $2.35 \pm 0.13$ & $0.56 \pm 0.04$ & $\mathbf{0.50 \pm 0.04}$ \\
NealPrior & $1.15 \pm 0.07$ & $1.37 \pm 0.11$ & $\mathbf{0.57 \pm 0.04}$ & $0.63 \pm 0.05$ \\
ScalePrior & $14.55 \pm 0.59$ & $5.75 \pm 1.34$ & $\mathbf{0.56 \pm 0.04}$ & $0.63 \pm 0.04$ \\
\end{tabular}
Table 7: ELBOs per datapoint in nats for BNNs on UCI datasets (errors are $\pm$ 1 standard error)
\begin{tabular}{lcccc}
 & factorised & local inducing & global inducing & factorised $\rightarrow$ global \\
boston - $\mathcal{N}(0,1)$ & $-1.55 \pm 0.00$ & $-1.54 \pm 0.00$ & $\mathbf{-1.02 \pm 0.01}$ & $-1.03 \pm 0.00$ \\
NealPrior & $-1.03 \pm 0.00$ & $-0.99 \pm 0.00$ & $\mathbf{-0.63 \pm 0.00}$ & $-0.70 \pm 0.00$ \\
ScalePrior & $-1.54 \pm 0.00$ & $-0.96 \pm 0.00$ & $\mathbf{-0.59 \pm 0.00}$ & $-0.70 \pm 0.00$ \\
concrete - $\mathcal{N}(0,1)$ & $-1.10 \pm 0.00$ & $-1.08 \pm 0.00$ & $\mathbf{-0.71 \pm 0.00}$ & $-0.78 \pm 0.00$ \\
NealPrior & $-0.88 \pm 0.00$ & $-0.87 \pm 0.00$ & $\mathbf{-0.59 \pm 0.00}$ & $-0.65 \pm 0.00$ \\
ScalePrior & $-1.45 \pm 0.01$ & $-0.88 \pm 0.00$ & $\mathbf{-0.57 \pm 0.00}$ & $-0.63 \pm 0.00$ \\
energy - $\mathcal{N}(0,1)$ & $-0.13 \pm 0.02$ & $-0.53 \pm 0.01$ & $\mathbf{0.72 \pm 0.00}$ & $0.59 \pm 0.00$ \\
NealPrior & $0.21 \pm 0.00$ & $-0.33 \pm 0.04$ & $\mathbf{0.95 \pm 0.00}$ & $0.79 \pm 0.01$ \\
ScalePrior & $-1.12 \pm 0.00$ & $-0.47 \pm 0.01$ & $\mathbf{0.96 \pm 0.01}$ & $0.80 \pm 0.01$ \\
kin8nm - $\mathcal{N}(0,1)$ & $-0.38 \pm 0.00$ & $-0.43 \pm 0.00$ & $\mathbf{-0.26 \pm 0.00}$ & $-0.31 \pm 0.00$ \\
NealPrior & $-0.35 \pm 0.00$ & $-0.43 \pm 0.00$ & $-0.31 \pm 0.00$ & $\mathbf{-0.28 \pm 0.00}$ \\
ScalePrior & $-0.51 \pm 0.00$ & $-0.39 \pm 0.01$ & $-0.29 \pm 0.00$ & $\mathbf{-0.27 \pm 0.00}$ \\
naval - $\mathcal{N}(0,1)$ & $1.68 \pm 0.01$ & $1.65 \pm 0.09$ & $\mathbf{2.89 \pm 0.04}$ & $2.11 \pm 0.02$ \\
NealPrior & $2.02 \pm 0.02$ & $-0.09 \pm 0.34$ & $2.53 \pm 0.04$ & $\mathbf{2.62 \pm 0.02}$ \\
ScalePrior & $1.91 \pm 0.03$ & $-1.42 \pm 0.00$ & $2.30 \pm 0.03$ & $\mathbf{2.71 \pm 0.02}$ \\
power - $\mathcal{N}(0,1)$ & $-0.08 \pm 0.00$ & $-0.06 \pm 0.00$ & $\mathbf{-0.02 \pm 0.00}$ & $-0.03 \pm 0.00$ \\
NealPrior & $-0.05 \pm 0.00$ & $-0.05 \pm 0.00$ & $\mathbf{-0.01 \pm 0.00}$ & $\mathbf{-0.01 \pm 0.00}$ \\
ScalePrior & $-0.13 \pm 0.00$ & $-0.05 \pm 0.00$ & $\mathbf{-0.01 \pm 0.00}$ & $\mathbf{-0.01 \pm 0.00}$ \\
protein - $\mathcal{N}(0,1)$ & $-1.09 \pm 0.00$ & $-1.14 \pm 0.00$ & $\mathbf{-1.06 \pm 0.01}$ & $-1.09 \pm 0.00$ \\
NealPrior & $-1.11 \pm 0.00$ & $-1.13 \pm 0.00$ & $\mathbf{-1.09 \pm 0.00}$ & $\mathbf{-1.09 \pm 0.00}$ \\
ScalePrior & $-1.13 \pm 0.00$ & $-1.12 \pm 0.00$ & $\mathbf{-1.07 \pm 0.00}$ & $\mathbf{-1.07 \pm 0.00}$ \\
wine - $\mathcal{N}(0,1)$ & $-1.48 \pm 0.00$ & $-1.47 \pm 0.00$ & $\mathbf{-1.36 \pm 0.00}$ & $\mathbf{-1.36 \pm 0.00}$ \\
NealPrior & $-1.31 \pm 0.00$ & $-1.30 \pm 0.00$ & $\mathbf{-1.22 \pm 0.00}$ & $-1.23 \pm 0.00$ \\
ScalePrior & $-1.46 \pm 0.00$ & $-1.29 \pm 0.00$ & $\mathbf{-1.22 \pm 0.00}$ & $-1.23 \pm 0.00$ \\
yacht - $\mathcal{N}(0,1)$ & $-1.04 \pm 0.02$ & $-1.30 \pm 0.02$ & $\mathbf{0.08 \pm 0.01}$ & $-0.23 \pm 0.01$ \\
NealPrior & $-0.46 \pm 0.02$ & $-0.39 \pm 0.01$ & $\mathbf{0.74 \pm 0.01}$ & $0.31 \pm 0.01$ \\
ScalePrior & $-1.61 \pm 0.00$ & $-0.77 \pm 0.10$ & $\mathbf{0.79 \pm 0.01}$ & $0.30 \pm 0.01$ \\
\end{tabular}
Table 8: Average test log likelihoods for our rerun of Salimbeni & Deisenroth (2017), and our implementations of local and global inducing points for deep GPs of various depths.
\begin{tabular}{lccc}
{dataset} - {depth} & DSVI & local inducing & global inducing \\
boston - 2 & $-2.50 \pm 0.05$ & $\mathbf{-2.42 \pm 0.05}$ & $\mathbf{-2.42 \pm 0.05}$ \\
3 & $-2.51 \pm 0.05$ & $-2.43 \pm 0.06$ & $\mathbf{-2.40 \pm 0.05}$ \\
4 & $-2.51 \pm 0.05$ & $-2.41 \pm 0.04$ & $\mathbf{-2.40 \pm 0.05}$ \\
5 & $-2.51 \pm 0.05$ & $-2.41 \pm 0.04$ & $\mathbf{-2.36 \pm 0.05}$ \\
concrete - 2 & $-3.11 \pm 0.01$ & $-3.08 \pm 0.02$ & $\mathbf{-3.06 \pm 0.02}$ \\
3 & $-3.11 \pm 0.01$ & $-3.10 \pm 0.02$ & $\mathbf{-3.06 \pm 0.02}$ \\
4 & $-3.11 \pm 0.01$ & $-3.12 \pm 0.02$ & $\mathbf{-3.05 \pm 0.02}$ \\
5 & $-3.11 \pm 0.01$ & $-3.13 \pm 0.02$ & $\mathbf{-3.06 \pm 0.02}$ \\
energy - 2 & $-0.73 \pm 0.02$ & $-0.71 \pm 0.03$ & $\mathbf{-0.70 \pm 0.03}$ \\
3 & $-0.76 \pm 0.02$ & $-0.71 \pm 0.03$ & $\mathbf{-0.70 \pm 0.03}$ \\
4 & $-0.75 \pm 0.02$ & $-0.71 \pm 0.03$ & $\mathbf{-0.70 \pm 0.03}$ \\
5 & $-0.75 \pm 0.02$ & $-0.71 \pm 0.03$ & $\mathbf{-0.70 \pm 0.03}$ \\
kin8nm - 2 & $1.34 \pm 0.00$ & $\mathbf{1.36 \pm 0.01}$ & $1.35 \pm 0.00$ \\
3 & $1.36 \pm 0.00$ & $\mathbf{1.38 \pm 0.00}$ & $1.36 \pm 0.00$ \\
4 & $1.35 \pm 0.00$ & $\mathbf{1.39 \pm 0.01}$ & $1.37 \pm 0.01$ \\
5 & $1.35 \pm 0.00$ & $\mathbf{1.38 \pm 0.01}$ & $1.37 \pm 0.00$ \\
naval - 2 & $6.77 \pm 0.07$ & $7.59 \pm 0.04$ & $\mathbf{8.24 \pm 0.07}$ \\
3 & $6.61 \pm 0.07$ & $7.54 \pm 0.04$ & $\mathbf{7.91 \pm 0.10}$ \\
4 & $6.54 \pm 0.14$ & $7.54 \pm 0.06$ & $\mathbf{8.28 \pm 0.05}$ \\
5 & $5.02 \pm 0.41$ & $7.51 \pm 0.06$ & $\mathbf{8.24 \pm 0.04}$ \\
power - 2 & $-2.78 \pm 0.01$ & $\mathbf{-2.76 \pm 0.01}$ & $\mathbf{-2.76 \pm 0.01}$ \\
3 & $\mathbf{-2.76 \pm 0.01}$ & $-2.77 \pm 0.01$ & $-2.77 \pm 0.01$ \\
4 & $\mathbf{-2.75 \pm 0.01}$ & $-2.77 \pm 0.01$ & $-2.77 \pm 0.01$ \\
5 & $\mathbf{-2.75 \pm 0.01}$ & $-2.77 \pm 0.01$ & $-2.77 \pm 0.01$ \\
protein - 2 & $\mathbf{-2.80 \pm 0.00}$ & $-2.82 \pm 0.00$ & $-2.83 \pm 0.00$ \\
3 & $-2.73 \pm 0.00$ & $\mathbf{-2.71 \pm 0.01}$ & $-2.76 \pm 0.01$ \\
4 & $\mathbf{-2.71 \pm 0.01}$ & $-2.71 \pm 0.01$ & $-2.73 \pm 0.01$ \\
5 & $\mathbf{-2.70 \pm 0.01}$ & $-2.71 \pm 0.01$ & $-2.74 \pm 0.01$ \\
wine - 2 & $\mathbf{-0.95 \pm 0.01}$ & $-0.96 \pm 0.01$ & $-0.96 \pm 0.01$ \\
3 & $\mathbf{-0.95 \pm 0.01}$ & $-0.96 \pm 0.01$ & $-0.96 \pm 0.01$ \\
4 & $\mathbf{-0.95 \pm 0.01}$ & $-0.96 \pm 0.01$ & $-0.96 \pm 0.01$ \\
5 & $\mathbf{-0.95 \pm 0.01}$ & $-0.96 \pm 0.01$ & $-0.96 \pm 0.01$ \\
yacht - 2 & $-0.40 \pm 0.03$ & $\mathbf{0.05 \pm 0.14}$ & $-0.38 \pm 0.10$ \\
3 & $-0.47 \pm 0.02$ & $\mathbf{0.17 \pm 0.11}$ & $-0.41 \pm 0.13$ \\
4 & $-0.50 \pm 0.02$ & $\mathbf{-0.31 \pm 0.29}$ & $-0.47 \pm 0.19$ \\
5 & $-0.50 \pm 0.02$ & $\mathbf{-0.31 \pm 0.20}$ & $-0.48 \pm 0.19$ \\
\end{tabular}
Table 9: Test RMSEs for our rerun of Salimbeni & Deisenroth (2017), and our implementations of local and global inducing points for deep GPs of various depths.
\begin{tabular}{lccc}
{dataset} - {depth} & DSVI & local inducing & global inducing \\
boston - 2 & $2.95 \pm 0.18$ & $\mathbf{2.78 \pm 0.15}$ & $2.82 \pm 0.14$ \\
3 & $2.98 \pm 0.18$ & $\mathbf{2.78 \pm 0.14}$ & $2.79 \pm 0.14$ \\
4 & $2.99 \pm 0.18$ & $\mathbf{2.75 \pm 0.12}$ & $2.80 \pm 0.15$ \\
5 & $3.01 \pm 0.19$ & $2.78 \pm 0.13$ & $\mathbf{2.73 \pm 0.13}$ \\
concrete - 2 & $5.51 \pm 0.10$ & $5.24 \pm 0.11$ & $\mathbf{5.21 \pm 0.12}$ \\
3 & $5.53 \pm 0.10$ & $5.38 \pm 0.11$ & $\mathbf{5.18 \pm 0.12}$ \\
4 & $5.50 \pm 0.09$ & $5.47 \pm 0.11$ & $\mathbf{5.16 \pm 0.13}$ \\
5 & $5.53 \pm 0.11$ & $5.50 \pm 0.10$ & $\mathbf{5.23 \pm 0.13}$ \\
energy - 2 & $0.50 \pm 0.01$ & $0.49 \pm 0.01$ & $\mathbf{0.48 \pm 0.01}$ \\
3 & $0.50 \pm 0.01$ & $0.49 \pm 0.01$ & $\mathbf{0.48 \pm 0.01}$ \\
4 & $0.50 \pm 0.01$ & $0.49 \pm 0.01$ & $\mathbf{0.48 \pm 0.01}$ \\
5 & $0.50 \pm 0.01$ & $0.49 \pm 0.01$ & $\mathbf{0.48 \pm 0.01}$ \\
kin8nm - 2 & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ \\
3 & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ \\
4 & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ \\
5 & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ \\
naval - 2 & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ \\
3 & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ \\
4 & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ \\
5 & $0.01 \pm 0.00$ & $\mathbf{0.00 \pm 0.00}$ & $\mathbf{0.00 \pm 0.00}$ \\
power - 2 & $3.88 \pm 0.03$ & $3.82 \pm 0.04$ & $\mathbf{3.81 \pm 0.04}$ \\
3 & $\mathbf{3.80 \pm 0.04}$ & $3.85 \pm 0.04$ & $3.87 \pm 0.04$ \\
4 & $\mathbf{3.78 \pm 0.04}$ & $3.84 \pm 0.04$ & $3.84 \pm 0.04$ \\
5 & $\mathbf{3.78 \pm 0.04}$ & $3.83 \pm 0.04$ & $3.84 \pm 0.04$ \\
protein - 2 & $\mathbf{4.01 \pm 0.01}$ & $4.07 \pm 0.02$ & $4.11 \pm 0.01$ \\
3 & $3.75 \pm 0.01$ & $\mathbf{3.73 \pm 0.03}$ & $3.88 \pm 0.03$ \\
4 & $\mathbf{3.73 \pm 0.01}$ & $\mathbf{3.73 \pm 0.03}$ & $3.77 \pm 0.03$ \\
5 & $\mathbf{3.70 \pm 0.02}$ & $3.73 \pm 0.03$ & $3.80 \pm 0.03$ \\
wine - 2 & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ \\
3 & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ \\
4 & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ \\
5 & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ & $\mathbf{0.06 \pm 0.00}$ \\
yacht - 2 & $0.40 \pm 0.03$ & $\mathbf{0.36 \pm 0.03}$ & $0.41 \pm 0.03$ \\
3 & $0.42 \pm 0.03$ & $0.37 \pm 0.03$ & $\mathbf{0.36 \pm 0.03}$ \\
4 & $0.44 \pm 0.03$ & $0.37 \pm 0.03$ & $\mathbf{0.36 \pm 0.03}$ \\
5 & $0.44 \pm 0.03$ & $0.40 \pm 0.03$ & $\mathbf{0.35 \pm 0.03}$ \\
\end{tabular}
Table 10: ELBOs per datapoint for our rerun of Salimbeni & Deisenroth (2017), and our implementations of local and global inducing points for deep GPs of various depths.
\begin{tabular}{lccc}
{dataset} - {depth} & DSVI & local inducing & global inducing \\
boston - 2 & $-0.52 \pm 0.04$ & $-0.35 \pm 0.00$ & $\mathbf{-0.28 \pm 0.01}$ \\
3 & $-0.54 \pm 0.04$ & $-0.35 \pm 0.01$ & $\mathbf{-0.25 \pm 0.01}$ \\
4 & $-0.55 \pm 0.04$ & $-0.36 \pm 0.00$ & $\mathbf{-0.25 \pm 0.01}$ \\
5 & $-0.58 \pm 0.04$ & $-0.35 \pm 0.01$ & $\mathbf{-0.24 \pm 0.01}$ \\
concrete - 2 & $-0.61 \pm 0.02$ & $-0.41 \pm 0.00$ & $\mathbf{-0.36 \pm 0.00}$ \\
3 & $-0.63 \pm 0.01$ & $-0.42 \pm 0.00$ & $\mathbf{-0.34 \pm 0.00}$ \\
4 & $-0.64 \pm 0.01$ & $-0.42 \pm 0.00$ & $\mathbf{-0.33 \pm 0.00}$ \\
5 & $-0.64 \pm 0.01$ & $-0.42 \pm 0.00$ & $\mathbf{-0.33 \pm 0.00}$ \\
energy - 2 & $0.93 \pm 0.02$ & $\mathbf{1.48 \pm 0.00}$ & $\mathbf{1.48 \pm 0.00}$ \\
3 & $0.84 \pm 0.02$ & $1.47 \pm 0.00$ & $\mathbf{1.48 \pm 0.00}$ \\
4 & $0.87 \pm 0.01$ & $1.47 \pm 0.00$ & $\mathbf{1.48 \pm 0.00}$ \\
5 & $0.86 \pm 0.01$ & $1.47 \pm 0.00$ & $\mathbf{1.48 \pm 0.00}$ \\
kin8nm - 2 & $-0.18 \pm 0.01$ & $-0.11 \pm 0.00$ & $\mathbf{-0.10 \pm 0.00}$ \\
3 & $-0.19 \pm 0.01$ & $\mathbf{-0.09 \pm 0.00}$ & $\mathbf{-0.09 \pm 0.00}$ \\
4 & $-0.19 \pm 0.01$ & $\mathbf{-0.08 \pm 0.00}$ & $\mathbf{-0.08 \pm 0.00}$ \\
5 & $-0.19 \pm 0.01$ & $-0.09 \pm 0.00$ & $\mathbf{-0.08 \pm 0.00}$ \\
naval - 2 & $2.29 \pm 0.08$ & $3.01 \pm 0.05$ & $\mathbf{3.89 \pm 0.07}$ \\
3 & $2.07 \pm 0.10$ & $2.93 \pm 0.05$ & $\mathbf{3.55 \pm 0.12}$ \\
4 & $1.90 \pm 0.25$ & $2.92 \pm 0.07$ & $\mathbf{3.93 \pm 0.05}$ \\
5 & $0.61 \pm 0.37$ & $2.99 \pm 0.06$ & $\mathbf{3.93 \pm 0.04}$ \\
power - 2 & $0.02 \pm 0.00$ & $0.04 \pm 0.00$ & $\mathbf{0.05 \pm 0.00}$ \\
3 & $0.02 \pm 0.00$ & $\mathbf{0.04 \pm 0.00}$ & $\mathbf{0.04 \pm 0.00}$ \\
4 & $0.03 \pm 0.00$ & $0.04 \pm 0.00$ & $\mathbf{0.05 \pm 0.00}$ \\
5 & $0.02 \pm 0.00$ & $\mathbf{0.04 \pm 0.00}$ & $\mathbf{0.04 \pm 0.00}$ \\
protein - 2 & $-1.06 \pm 0.00$ & $\mathbf{-1.05 \pm 0.00}$ & $-1.06 \pm 0.00$ \\
3 & $-1.02 \pm 0.00$ & $\mathbf{-0.98 \pm 0.00}$ & $-1.01 \pm 0.00$ \\
4 & $-1.01 \pm 0.00$ & $\mathbf{-0.97 \pm 0.00}$ & $-0.98 \pm 0.00$ \\
5 & $-1.01 \pm 0.00$ & $\mathbf{-0.97 \pm 0.00}$ & $\mathbf{-0.97 \pm 0.00}$ \\
wine - 2 & $-1.18 \pm 0.02$ & $\mathbf{-1.17 \pm 0.00}$ & $\mathbf{-1.17 \pm 0.00}$ \\
3 & $-1.18 \pm 0.02$ & $\mathbf{-1.17 \pm 0.00}$ & $\mathbf{-1.17 \pm 0.00}$ \\
4 & $-1.18 \pm 0.02$ & $\mathbf{-1.17 \pm 0.00}$ & $\mathbf{-1.17 \pm 0.00}$ \\
5 & $-1.18 \pm 0.02$ & $-1.18 \pm 0.00$ & $\mathbf{-1.17 \pm 0.00}$ \\
yacht - 2 & $1.05 \pm 0.06$ & $\mathbf{2.53 \pm 0.01}$ & $2.12 \pm 0.05$ \\
3 & $1.00 \pm 0.06$ & $\mathbf{2.54 \pm 0.01}$ & $2.26 \pm 0.01$ \\
4 & $0.97 \pm 0.06$ & $\mathbf{2.46 \pm 0.02}$ & $2.26 \pm 0.01$ \\
5 & $0.95 \pm 0.06$ & $\mathbf{2.43 \pm 0.01}$ & $2.25 \pm 0.01$ \\
\end{tabular}