Augmented Message Passing Stein Variational Gradient Descent
Abstract
Stein Variational Gradient Descent (SVGD) is a popular particle-based method for Bayesian inference. However, its convergence suffers from variance collapse, which reduces the accuracy and diversity of the estimate. In this paper, we study the isotropy property of finite particles during the convergence process and show that SVGD with finite particles cannot spread across the entire sample space. Instead, all particles tend to cluster around the particle center within a certain range, and we provide an analytical bound for this cluster. To further improve the effectiveness of SVGD for high-dimensional problems, we propose the Augmented Message Passing SVGD (AUMP-SVGD) method, a two-stage optimization procedure that, unlike MP-SVGD, does not require sparsity of the target distribution. Our algorithm achieves satisfactory accuracy and overcomes the variance collapse problem on various benchmark problems.
1 Introduction
Stein variational gradient descent (SVGD), proposed by Liu and Wang (2016), is a non-parametric inference method. To approximate an intractable but differentiable target distribution, it constructs a set of particles, which can be initialized from any initial distribution. These particles move in the reproducing kernel Hilbert space (RKHS) determined by the kernel function. SVGD drives the particles in the direction along which the KL divergence between the two distributions decreases most rapidly. SVGD is more efficient than traditional Markov chain Monte Carlo (MCMC) methods because its particles approach the target distribution directly through a deterministic dynamic process. These advantages make SVGD appealing and it has attracted considerable research interest (Liu et al. (2022); Ba et al. (2021); Zhuo et al. (2018); Salim et al. (2022); Yan and Zhou (2021a)).
Although SVGD succeeds in many applications (Liu (2017); Yoon et al. (2018); Yan and Zhou (2021b)), it lacks the necessary theoretical support regarding convergence with a limited number of particles. The convergence of SVGD is guaranteed only under the mean-field assumption, i.e., the particles converge to the true distribution only when the number of particles is infinite (Liu (2017); Salim et al. (2022)). The convergence of SVGD with finite particles remains an open problem. Furthermore, it has been observed that as the dimension of the problem increases, the variance estimated by SVGD may be much smaller than that of the target distribution. This phenomenon is known as variance collapse, and it limits the applicability of SVGD for two reasons. First, an underestimated variance fails to explain the uncertainty of the model. Second, Bayesian inference is usually high-dimensional in practice, and SVGD is not applicable in some scenarios due to this curse of dimensionality. For example, training Bayesian neural networks (BNNs) (MacKay (1992)) requires inferring posterior distributions over a huge number of network weights, whose dimension is in the millions (Krizhevsky et al. (2017)). More recently, structural prediction of proteins with long structures requires inference on the position of each atom, which results in a high-dimensional problem (Wu et al. (2022)).
The first contribution of this paper is to show that, at convergence, the particles of SVGD do not spread across the whole probability space but remain within a finite range. We give an analytic bound for this clustered region. This bounded distribution of particles is an indication of the curse of high dimension. In addition, we provide an estimate of the error between the covariance of finite particles and the true covariance.
There have been many efforts to make SVGD applicable to high-dimensional problems. According to Zhuo et al. (2018), the magnitude of the repulsive force between particles is inversely proportional to the dimension of the problem. Reducing the dimension of the problem is therefore the key to addressing variance collapse; such methods include combining the Grassmann manifold and matrix decompositions to reduce the dimension of the target distribution (Chen and Ghattas (2020); Liu et al. (2022)). Another approach is to find the Markov blanket for each dimension of the target distribution, so that the global kernel function can be replaced by a local kernel function. Under such scenarios the efficiency of SVGD is improved, and this method is called message passing SVGD (MP-SVGD) (Zhuo et al. (2018); Wang et al. (2018)). However, MP-SVGD needs to know the probabilistic graphical model structure in advance and is efficient only when the graph is sparse. Moreover, identifying the Markov blanket for high-dimensional problems is challenging.
The second contribution of this paper is that we overcome these shortcomings of MP-SVGD and extend it to high-dimensional problems. Combining the results of our variance analysis, we propose the Augmented MP-SVGD (AUMP-SVGD). AUMP-SVGD decomposes the problem dimensions into three parts via a KL divergence factorization. Different from MP-SVGD, AUMP-SVGD adopts a two-stage update procedure to remove the dependence on sparse probabilistic graphical models. It therefore overcomes variance collapse and does not require prior knowledge of the graph structure. We show the superiority of AUMP-SVGD over state-of-the-art algorithms both theoretically and experimentally.
2 Preliminaries
2.1 SVGD
SVGD approximates an intractable target distribution $p(x)$, where $x \in \mathbb{R}^d$, with the best candidate $q^*$ by minimizing the Kullback-Leibler (KL) divergence (Liu and Wang (2016)). Here, $d$ is the dimension of the target distribution.
SVGD takes a group of particles $\{x_i\}_{i=1}^{n}$ from an initial distribution $q_0$, and after a series of smooth transforms, these particles finally converge to the target distribution $p$. Each smooth transform can be expressed as $T(x) = x + \epsilon\,\phi(x)$, where $\epsilon$ is the step size and $\phi$ is the transform direction. Here, $\{x_i\}_{i=1}^{n}$ is the collection of the particles. Let $\mathcal{H}$ denote the RKHS of a positive definite kernel $k(\cdot,\cdot)$ and let $\mathcal{H}^d = \mathcal{H} \times \cdots \times \mathcal{H}$ denote its $d$-fold Cartesian product. The steepest descent direction is obtained by minimizing the KL divergence,

$$\phi^{*} = \operatorname*{arg\,max}_{\phi \in \mathcal{H}^d,\ \|\phi\|_{\mathcal{H}^d} \le 1} \Big\{ -\tfrac{\mathrm{d}}{\mathrm{d}\epsilon}\, \mathrm{KL}\big(q_{[T]} \,\|\, p\big)\big|_{\epsilon = 0} \Big\} = \operatorname*{arg\,max}_{\phi \in \mathcal{H}^d,\ \|\phi\|_{\mathcal{H}^d} \le 1} \mathbb{E}_{x \sim q}\big[\operatorname{trace}\big(\mathcal{A}_p \phi(x)\big)\big],$$

where $q_{[T]}$ denotes the distribution of the particles after the mapping $T$, and $\mathcal{A}_p$ is the Stein operator given by $\mathcal{A}_p \phi(x) = \phi(x) \nabla_x \log p(x)^{\top} + \nabla_x \phi(x)$. SVGD updates the particles drawn from the initial distribution by $x_i \leftarrow x_i + \epsilon\, \phi^{*}(x_i)$.
The steepest descent direction is given by
$$\phi^{*}(x) = \mathbb{E}_{x' \sim q}\big[ k(x', x)\, \nabla_{x'} \log p(x') + \nabla_{x'} k(x', x) \big]. \qquad (1)$$
The kernel function can be chosen as the RBF kernel $k(x, x') = \exp\big(-\|x - x'\|^2 / (2h^2)\big)$ (Liu and Wang (2016)) or the IMQ kernel (Gorham and Mackey (2017)). Equation (1) can be divided into two parts: the driving force term $\mathbb{E}_{x' \sim q}\big[k(x', x)\nabla_{x'}\log p(x')\big]$ and the repulsive force term $\mathbb{E}_{x' \sim q}\big[\nabla_{x'} k(x', x)\big]$. It has been demonstrated that SVGD suffers from the curse of high dimensions (Zhuo et al. (2018); Liu (2017)). Related research shows that there exists a negative correlation between the problem dimension and the repulsive force of SVGD (Ba et al. (2021)). The influence of the repulsive force is mainly related to the dimension of the target distribution.
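To make this update concrete, the following is a minimal NumPy sketch of one SVGD iteration with the RBF kernel. The function name, the median-heuristic bandwidth, and the step size are illustrative choices for this sketch, not the exact settings used in our experiments.

```python
import numpy as np

def svgd_step(X, grad_log_p, eps=1e-2):
    """One SVGD update with an RBF kernel. X: (n, d) particles; grad_log_p: (n, d) -> (n, d)."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]                      # diff[i, j] = x_i - x_j
    sq_dists = np.sum(diff ** 2, axis=-1)                     # pairwise squared distances
    h2 = 0.5 * np.median(sq_dists) / np.log(n + 1) + 1e-12    # median-heuristic bandwidth (squared)
    K = np.exp(-sq_dists / (2.0 * h2))                        # RBF kernel matrix
    score = grad_log_p(X)                                     # target score at every particle
    driving = K @ score / n                                   # first part of Equation (1)
    repulsive = (K[:, :, None] * diff).sum(axis=1) / (h2 * n) # second part of Equation (1)
    return X + eps * (driving + repulsive)

# Illustrative use: 100 particles approximating a 10-dimensional standard Gaussian.
X = np.random.randn(100, 10) * 3.0
for _ in range(500):
    X = svgd_step(X, lambda Z: -Z)                            # score of N(0, I) is -x
```

Iterating `svgd_step` moves particles initialized from any distribution toward the target; the `driving` and `repulsive` terms correspond to the two parts of Equation (1).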
MP-SVGD. Message Passing SVGD (MP-SVGD) (Zhuo et al. (2018); Wang et al. (2018)) reduces the dimension of the target distribution by identifying the Markov blanket for problems with a known graph structure. For a dimension index $i$, its Markov blanket $\Gamma_i$ contains the neighborhood nodes of $i$ such that $p(x_i \mid x_{\neg i}) = p(x_i \mid x_{\Gamma_i})$, where $x_{\neg i}$ denotes all coordinates of $x$ except $x_i$ and $\Gamma_i \subseteq \{1, \dots, d\} \setminus \{i\}$. However, MP-SVGD relies on sparse correlations among the variables of the target distribution in order to obtain good results.
2.2 Mixing for random variables
Since SVGD forms a dynamic system in which particles interact with each other, one can no longer treat the converged particles as independent and identically distributed. Therefore, we resort to the mathematical tool of “mixing”; cf. Bradley (2005) for more information.
Mixing. Let $\{X_i\}_{i \ge 1}$ be a sequence of random variables over some probability space $(\Omega, \mathcal{F}, P)$, where $\mathcal{F}$ denotes the $\sigma$-algebra. For any two $\sigma$-fields $\mathcal{A}, \mathcal{B} \subseteq \mathcal{F}$, let

$$\beta(\mathcal{A}, \mathcal{B}) = \frac{1}{2} \sup \sum_{i=1}^{I} \sum_{j=1}^{J} \big| P(A_i \cap B_j) - P(A_i) P(B_j) \big|, \qquad (2)$$

where the supremum is taken over all finite partitions $\{A_i\}_{i=1}^{I}$ and $\{B_j\}_{j=1}^{J}$ of $\Omega$ with $A_i \in \mathcal{A}$ and $B_j \in \mathcal{B}$. A family of random variables $\{X_i\}_{i \ge 1}$ is said to be absolutely regular ($\beta$-mixing) if $\beta_n := \sup_{k \ge 1} \beta\big(\sigma(X_1, \dots, X_k),\, \sigma(X_{k+n}, X_{k+n+1}, \dots)\big) \to 0$ as $n \to \infty$, where the coefficients of absolute regularity are defined via Equation (2) (Banna et al. (2016)). These coefficients quantify the strength of dependence between the $\sigma$-algebra generated by $(X_1, \dots, X_k)$ and the one generated by $(X_{k+n}, X_{k+n+1}, \dots)$ for all $k$. That $\beta_n$ tends to zero as $n$ goes to infinity implies that $(X_{k+n}, X_{k+n+1}, \dots)$ becomes less and less dependent on $(X_1, \dots, X_k)$ (Bradley (2005)).
3 Covariance Analysis under $\beta$-mixing
Ba et al. (2021) analyzed the convergence of SVGD when the covariance matrix of the Gaussian target is the identity, under a near-orthogonality assumption. Building on this work, we analyze the more general form of variance collapse. Moreover, to the best of our knowledge, the quantification of variance collapse for finite particles is still an open problem. Chewi et al. (2020) show that SVGD can be viewed as a kernelized Wasserstein gradient flow of the chi-squared divergence, which suggests that extending the analysis in this section to other sampling methods, such as Wasserstein gradient flows or normalizing flows, is possible.
3.1 Assumptions
A1 (Fixed points). Although the convergence of SVGD with finite particles is still an open problem, many experimental studies have found that all particles converge to fixed points (Ba et al. (2021)). In this paper, we also assume that for SVGD with a finite number of particles, the particles eventually converge to the target distribution and approach fixed points.
A2 ($m$-dependence of particles)
Definition 1
(Hoeffding and Robbins (1948)) If for some function $m(n)$, the inequality $s - r > m(n)$ implies that the two sets $(X_1, \dots, X_r)$ and $(X_s, \dots, X_n)$ are independent, then the sequence $\{X_i\}$ is said to be $m(n)$-dependent.
We assume that the fixed points of SVGD satisfy the $m$-dependence assumption. Ba et al. (2021) report only weak correlation between SVGD particles. We perform a numerical verification in Appendix B and leave a rigorous proof for future work. Under Assumptions A1-A2 and a zero-mean target distribution, we analyze the variance collapse of SVGD quantitatively in the following.
3.2 Concentration of particles
We first give an upper bound on the region in which the particles concentrate. The main tool used here is the Jensen gap (Gao et al. (2019)).
Proposition 1
Let Assumption A1 hold. For a mean-zero Gaussian target and the Gaussian RBF kernel, we have
where , , is a positive constant, and is the empirical covariance matrix of the particles.
We leave the proof to Appendix A. For most sampling-based inference methods, such as MCMC and VI, samples spread across the whole sample space, albeit with extremely small probability for some samples. However, Proposition 1 shows that the particles of SVGD with finitely many particles are confined to a certain range, although this range may expand as the number of particles increases. With the RBF kernel, this upper bound is related to the trace of the covariance of the target distribution. With the IMQ kernel, this range is further reduced (Gorham and Mackey (2017)).
3.3 Covariance estimation
For independent and identically distributed (i.i.d.) samples, the variance can be estimated using the Bernstein inequality or the Lieb inequality. However, these random matrix results typically require the particles to be i.i.d., which is no longer satisfied by SVGD due to its interacting updates. To analyze the variance of the particles from SVGD, we assume these particles are $m$-dependent. For a sequence of random variables, $m$-dependence implies that they satisfy the $\beta$-mixing condition (Bradley (2005)). We obtain the following results based on the Bernstein inequality for dependent random matrices (Banna et al. (2016)).
Proposition 2
Let Assumptions A1-A2 hold. For SVGD with the Gaussian RBF kernel , denote where is the covariance matrix of the target distribution. There exists such that for any integer , the following inequality holds,
Denote the covariance matrix of these particles by , we have
where and , are identical to those in Proposition 1. Here, measures the correlation between particles, and is the dimension of the target distribution. Moreover,
Here, represents the eigenvalue of with the maximum magnitude, is the cardinality of the set , and
Proposition 2 shows that the main factors that affect the upper bound of the variance error include the inter-particle correlation, the number of particles, the dimension of the target distribution, and the trace of its covariance matrix. According to Ba et al. (2021), it can be considered that should be true for SVGD, therefore the constant in Proposition 1 can be replaced by .
4 Augmented MP-SVGD
Here, we propose the Augmented Message Passing SVGD (AUMP-SVGD) to overcome the covariance underestimation of SVGD. Compared with MP-SVGD, AUMP-SVGD requires neither a known graph structure nor sparsity of the target distribution.
4.1 MP-SVGD
The update direction of SVGD is given by Equation (1), $\phi^{*}(x) = \mathbb{E}_{x' \sim q}\big[ k(x', x)\, \nabla_{x'} \log p(x') + \nabla_{x'} k(x', x) \big]$.
The log-derivative term $k(x', x)\nabla_{x'} \log p(x')$ in the update rule corresponds to the driving force that guides particles toward the high-likelihood region. The (Stein) score function $\nabla_x \log p(x)$ is a vector field describing the target distribution. The term $\nabla_{x'} k(x', x)$ provides a repulsive force that prevents particles from aggregating. However, as the dimension of the target distribution increases, this repulsive force gradually decreases (Zhuo et al. (2018)), which causes SVGD to fall under the curse of high dimensions. Effective dimension reduction has therefore become the guiding principle for making SVGD overcome the curse of dimensionality.
Our concern is the problem of continuous graphical models, i.e., target distributions of the form $p(x) \propto \prod_{F \in \mathcal{F}} \psi_F(x_F)$, where $\mathcal{F}$ is the family of index sets that specifies the Markov structure. For any index $i$, its Markov blanket is represented by $\Gamma_i$. According to Wang et al. (2018), one can transform the global kernel function into a local kernel function $k_i$ that depends only on $x_{\bar{\Gamma}_i}$ for any $i$. Under this transformation, the dimension is reduced from $d$ to $|\bar{\Gamma}_i|$, where $|\bar{\Gamma}_i|$ is the size of the set $\bar{\Gamma}_i$. Then, SVGD becomes
$$x_i \leftarrow x_i + \epsilon\, \phi_i^{*}(x_{\bar{\Gamma}_i}), \qquad (3a)$$
$$\phi_i^{*}(\cdot) = \mathbb{E}_{x \sim q}\big[ k_i(x_{\bar{\Gamma}_i}, \cdot)\, \nabla_{x_i} \log p(x) + \nabla_{x_i} k_i(x_{\bar{\Gamma}_i}, \cdot) \big], \qquad (3b)$$
where $\bar{\Gamma}_i = \Gamma_i \cup \{i\}$. Such a method is known as message passing SVGD (MP-SVGD) (Zhuo et al. (2018); Liu (2017)). However, MP-SVGD needs to know the graph structure of the target distribution in advance in order to determine the Markov blankets, and sparsity of the graph is needed to achieve dimension reduction.
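As a rough illustration of the local-kernel update in Equations (3a)-(3b), the sketch below performs one coordinate-wise MP-SVGD sweep. The blanket dictionary, the fixed bandwidth, and the use of the joint score per coordinate are simplifying assumptions of this sketch rather than the exact MP-SVGD implementation.

```python
import numpy as np

def mp_svgd_step(X, grad_log_p, blankets, eps=1e-2, h=1.0):
    """One coordinate-wise sweep using local kernels on Markov blankets.

    X: (n, d) particles; grad_log_p: (n, d) -> (n, d) joint score;
    blankets: dict mapping coordinate i to the list of indices in Gamma_i.
    """
    n, d = X.shape
    score = grad_log_p(X)
    X_new = X.copy()
    for i in range(d):
        idx = sorted(set(blankets[i]) | {i})           # bar{Gamma}_i = Gamma_i ∪ {i}
        Z = X[:, idx]                                  # local coordinates of every particle
        sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
        K = np.exp(-sq / (2.0 * h ** 2))               # local kernel k_i on bar{Gamma}_i
        diff_i = X[:, None, i] - X[None, :, i]         # i-th coordinate differences
        repulsive = (K * diff_i).sum(axis=1) / h ** 2  # sum_j d k_i / d (x_j)_i
        driving = K @ score[:, i]                      # sum_j k_i(x_j, .) * score_i(x_j)
        X_new[:, i] = X[:, i] + eps * (driving + repulsive) / n
    return X_new

# e.g. for a chain graph: blankets = {i: [j for j in (i - 1, i + 1) if 0 <= j < d] for i in range(d)}
```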
4.2 Augmented MP-SVGD
Inspired by MP-SVGD, we propose the augmented MP-SVGD which is suitable for more complex graph structures. We keep the previous symbols but redefine them for clarity. We assume can be factorized as where is the index set. Partition and such that and . Let . Similarly, .
Consider the probabilistic graphical model illustrated by Figure 1; can be represented as . Our method relies on the key observation of Zhuo et al. (2018) that the KL divergence decomposes into a marginal term and an expected conditional term, i.e., $\mathrm{KL}(q \,\|\, p) = \mathrm{KL}\big(q(x_{\neg i}) \,\|\, p(x_{\neg i})\big) + \mathbb{E}_{q(x_{\neg i})}\big[\mathrm{KL}\big(q(x_i \mid x_{\neg i}) \,\|\, p(x_i \mid x_{\neg i})\big)\big]$.
To minimize , we adopt a two-stage procedure in which and are optimized alternately. At the first stage, is further decomposed into
(4) |
We can fix to apply the local kernel function to the second part of Equation (4) to minimize ,
This optimization procedure is given by Proposition 3; we leave the proof to the appendix.
Proposition 3
Let where and . Here is the space that defines the local kernel function . The optimal solution of the optimization problem
is given by where
At the second stage, and are fixed while only is updated. We can further decompose into three parts via the convexity of the KL divergence,
where is a positive constant due to the fact that is fixed. Therefore,
The optimization procedure is described by Proposition 4; we also leave the proof to the appendix.
Proposition 4
Let where and . Here is the space that defines the local kernel and or . The optimal solution of the following optimization problem
is given by where
For , is updated through the above two-stage procedure, which reduces the dimension of the original problem from to the size of . In this way, we are able to infer more complex target distributions on which traditional MP-SVGD fails; this comparison is illustrated in Experiment 2. Moreover, we do not need to know the true probabilistic graph structure of the target distribution in advance.
The key step of our AUMP-SVGD is to choose or . This problem can be formulated as follows: for particles with each , denote the ensemble matrix of these particles by , i.e., . Let and be the set of sub-matrices of . Determining the set corresponds to selecting the sub-matrix from the set that minimizes , where is the empirical covariance matrix of the particles and is the empirical covariance matrix of the sub-ensemble. The upper bound of is already given by Proposition 2 and is related to . Therefore, we choose the sub-matrix with the smallest to ensure that is minimized. This corresponds to selecting from to obtain the minimal . Since , we just need to order the columns of by their 2-norms and choose the columns with the smallest 2-norms. The computational complexity of this step is . Finally, we give the complete form of AUMP-SVGD in Algorithm 1.
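The column-norm selection rule above can be sketched as follows; `select_subset` is an illustrative name of ours, and since the text does not specify whether the particle columns are centered before measuring their norms, this sketch simply uses the raw column 2-norms as stated.

```python
import numpy as np

def select_subset(X, subset_size):
    """Return the coordinate indices whose columns of the ensemble matrix X (n x d)
    have the smallest 2-norms, as a proxy for keeping the covariance-error bound small."""
    col_norms = np.linalg.norm(X, axis=0)     # 2-norm of each particle column
    order = np.argsort(col_norms)             # ascending by column norm
    return np.sort(order[:subset_size])       # indices of the selected sub-ensemble

# Illustrative use: pick 5 coordinates out of 50 from an ensemble of 100 particles.
subset = select_subset(np.random.randn(100, 50), subset_size=5)
```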
5 Experiments
We study the uncertainty quantification properties of AUMP-SVGD compared with existing methods such as SVGD, MP-SVGD, projected SVGD (PSVGD) (Chen and Ghattas (2020)), and sliced Stein variational gradient descent (S-SVGD) (Gong et al. (2020)) through extensive experiments. We conclude that SVGD may underestimate uncertainty, S-SVGD may overestimate it, and AUMP-SVGD with a proper partition produces the best estimate. In almost all scenarios, AUMP-SVGD outperforms PSVGD.
5.1 Gaussian Mixture Models
Multivariate Gaussian. The first example is a -dimensional multivariate Gaussian . For each method, 100 particles are initialized from .
Spaceship Mixture. The target in the second experiment is a -dimensional mixture of two correlated Gaussian distributions . The mean of each Gaussian has components equal to 1 in the first two coordinates and 0 otherwise. The covariance matrix admits a correlated block-diagonal structure. The mixture hence exhibits a “spaceship”-shaped density margin in the first two dimensions (see Figure 2).

It can be seen from Figure 2 that for high-dimensional inference, particles from SVGD aggregate, reflecting the curse of high dimensions (Zhuo et al. (2018); Liu and Wang (2016)). However, AUMP-SVGD estimates the true probability distribution well in these high-dimensional situations. We calculate the energy distance and the mean-squared error (MSE) between the samples from the inference algorithm and the real samples. The energy distance is given by $D^2(F, G) = 2\,\mathbb{E}\|X - Y\| - \mathbb{E}\|X - X'\| - \mathbb{E}\|Y - Y'\|$, where $F$ and $G$ are the cumulative distribution functions (CDFs) of $X$ and $Y$, respectively, and $X'$ and $Y'$ denote independent and identically distributed (i.i.d.) copies of $X$ and $Y$ (Rizzo and Székely (2016)). Ten experiments are performed and the averaged results are given in Figure 3.
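For reference, a small sketch of the sample-based energy distance used as an evaluation metric; the plain pairwise-distance estimator below is our own choice and excludes the zero diagonal in the within-sample terms.

```python
import numpy as np

def energy_distance(X, Y):
    """Sample estimate of D^2(F, G) = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||."""
    def mean_pairwise(A, B, exclude_diag=False):
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # all pairwise distances
        if exclude_diag:
            n = d.shape[0]
            return (d.sum() - np.trace(d)) / (n * (n - 1))
        return d.mean()
    return (2.0 * mean_pairwise(X, Y)
            - mean_pairwise(X, X, exclude_diag=True)
            - mean_pairwise(Y, Y, exclude_diag=True))

# Illustrative use: compare inferred particles with reference samples from the target.
samples_ref = np.random.randn(500, 20)
samples_inf = np.random.randn(500, 20) * 0.5   # e.g. a collapsed approximation
print(energy_distance(samples_inf, samples_ref))
```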

Figure 3 shows that the error of SVGD gradually grows as the dimension increases. For the sparse problem, when the graph structure is already known, MP-SVGD achieves outcomes comparable with S-SVGD and PSVGD-2. In the above example, PSVGD achieves its best results when the problem dimension is reduced to 2. AUMP-SVGD with a set size of 1 or 3 outperforms the other methods. In Experiment 1, we demonstrate that AUMP-SVGD yields a variance estimate equivalent to that of MP-SVGD under the simplified graph structure. In Experiment 2, as the correlation between different dimensions of the target becomes stronger or the density of the graph increases, our algorithm performs better than other SVGD variants. Furthermore, our approach exhibits superior variance estimation compared with SVGD, MP-SVGD, S-SVGD, and PSVGD. In practice, the covariance matrix of the target distribution may not be sparse, making it challenging to capture this structure. As a result, the effectiveness of MP-SVGD is significantly limited. However, AUMP-SVGD can still attain stable and superior results.
Non-sparse experiment. We set the dimension of the target distribution to 50 and systematically transform the sparse covariance matrix into a non-sparse one by augmenting the correlations between different dimensions along the main diagonal. The results are presented in Figure 4.

As the density of the probability graph increases, the discrepancy between MP-SVGD and the actual target distribution gradually grows, and MP-SVGD eventually succumbs to the curse of dimensionality. This phenomenon arises because the repulsive force between particles in MP-SVGD primarily depends on the size of the Markov blanket: with increasing density, MP-SVGD encounters the same high-dimensional challenges as SVGD. However, as illustrated in Figure 4, regardless of the density of the target distribution’s graph structure, AUMP-SVGD remains unaffected by variance collapse. This is because the repulsive force between particles in AUMP-SVGD is governed by the artificially chosen Markov blanket size, underscoring the superiority of our algorithm over MP-SVGD.
5.2 Conditioned Diffusion Process
The next example is a benchmark that is often used to test inference methods in high dimensions (Detommaso et al. (2018); Chen and Ghattas (2020); Liu et al. (2022)). We consider a stochastic process governed by
where , and the forcing term follows a Brownian motion so that with . The noisy data are observed at 50 equispaced time points with , where for with . The objective is to use the observations to infer the forcing term and thus the state of the solution . The results are given in Figure 5, where the shaded interval depicts the mean plus/minus the standard deviation.
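To fix ideas, the sketch below forward-simulates a discretized process of this type with Euler-Maruyama and produces synthetic noisy observations. The drift $f(u) = \beta u (1 - u^2)/(1 + u^2)$ with $\beta = 10$ and the observation noise level $0.1$ are the values commonly used for this benchmark (Detommaso et al. (2018)); they are assumptions of this sketch, not necessarily the exact settings of our experiment.

```python
import numpy as np

def simulate_conditioned_diffusion(n_steps=100, T=1.0, beta=10.0, obs_every=2,
                                   obs_noise=0.1, seed=0):
    """Euler-Maruyama forward model: du = f(u) dt + dW, observed with Gaussian noise."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)          # Brownian increments (latent forcing)
    u = np.zeros(n_steps + 1)
    for t in range(n_steps):
        drift = beta * u[t] * (1.0 - u[t] ** 2) / (1.0 + u[t] ** 2)
        u[t + 1] = u[t] + drift * dt + dW[t]
    obs_idx = np.arange(obs_every, n_steps + 1, obs_every)   # 50 equispaced points if n_steps=100
    y = u[obs_idx] + rng.normal(0.0, obs_noise, size=obs_idx.size)
    return dW, u, obs_idx, y
```

Inference then targets the posterior over the Brownian increments `dW` given the observations `y`.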

As depicted in Figure 5, it is evident that in the 50-dimensional case, both SVGD and S-SVGD exhibit certain deviations from the ground truth, while S-SVGD exhibits excessively large variances in numerous tests. Consequently, SVGD and S-SVGD prove inadequate for the conditioned diffusion model. Conversely, our AUMP-SVGD demonstrates satisfactory performance with a set size of 5 or 10.
5.3 Bayesian Logistic Regression
We investigate a Bayesian logistic regression model from Liu and Wang (2016) applied to the Covertype dataset from Asuncion and Newman (2007). We use 70% of the data for training and 30% for testing. We compare AUMP-SVGD with SVGD, S-SVGD, PSVGD, MP-SVGD, and Hamiltonian Monte Carlo (HMC); the number of generated samples ranges from 100 to 500. Each experiment uses ten different random seeds, and the error of each value does not exceed 0.01. We verify the impact of different sampling methods on the prediction accuracy; the results are given in Table 1.
Table 1: Prediction accuracy for different numbers of particles.

| # particles | HMC | SVGD | S-SVGD | MP-SVGD | PSVGD_2 | AUMP-SVGD-5 | AUMP-SVGD-10 |
|---|---|---|---|---|---|---|---|
| 100 | 0.70 | 0.74 | 0.76 | 0.74 | 0.77 | 0.73 | 0.75 |
| 200 | 0.71 | 0.74 | 0.762 | 0.74 | 0.79 | 0.78 | 0.78 |
| 300 | 0.73 | 0.74 | 0.762 | 0.741 | 0.81 | 0.80 | 0.81 |
| 400 | 0.80 | 0.75 | 0.76 | 0.74 | 0.83 | 0.81 | 0.81 |
| 500 | 0.81 | 0.75 | 0.765 | 0.75 | 0.85 | 0.82 | 0.86 |
Table 1 shows that our AUMP-SVGD achieves a prediction accuracy similar to that of PSVGD, and as the number of particles increases, our algorithm becomes more accurate than the other algorithms.
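For completeness, below is a sketch of the log-posterior gradient that each sampler consumes in this experiment, assuming a fixed Gaussian prior on the weights; Liu and Wang (2016) actually place a Gamma hyper-prior on the precision, so this is a simplified stand-in.

```python
import numpy as np

def grad_log_posterior(W, X, y, prior_prec=0.01):
    """Score of Bayesian logistic regression, vectorized over particles.

    W: (n_particles, d) weight particles; X: (N, d) features; y: (N,) labels in {0, 1}.
    """
    logits = W @ X.T                                  # (n_particles, N)
    probs = 1.0 / (1.0 + np.exp(-logits))             # sigmoid predictions
    grad_lik = (y[None, :] - probs) @ X               # (n_particles, d) likelihood gradient
    grad_prior = -prior_prec * W                      # Gaussian prior N(0, prior_prec^{-1} I)
    return grad_lik + grad_prior
```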
6 Conclusion and Future work
In this paper, we analyze the upper bound of the variance collapse of SVGD when the number of particles is finite. We show that the distribution of the particles is restricted to a specific region rather than the entire probability space. We also propose the AUMP-SVGD algorithm to overcome the dependency of MP-SVGD on a known and sparse graph structure. We show the effectiveness of AUMP-SVGD through various experiments. For future work, we aim to further investigate the convergence of SVGD with finite particles and to tighten the estimated bound. We also plan to apply AUMP-SVGD to more complex real-world applications, such as posture estimation (Pacheco et al. (2014)).
References
- Asuncion and Newman [2007] Arthur Asuncion and David Newman. UCI machine learning repository, 2007.
- Ba et al. [2021] Jimmy Ba, Murat A Erdogdu, Marzyeh Ghassemi, Shengyang Sun, Taiji Suzuki, Denny Wu, and Tianzong Zhang. Understanding the variance collapse of SVGD in high dimensions. In International Conference on Learning Representations, 2021.
- Banna et al. [2016] Marwa Banna, Florence Merlevède, and Pierre Youssef. Bernstein-type inequality for a class of dependent random matrices. Random Matrices: Theory and Applications, 5(2):1650006, 2016.
- Bradley [2005] Richard C Bradley. Basic properties of strong mixing conditions. A survey and some open questions. Probability Surveys, 2:107–144, 2005.
- Chen and Ghattas [2020] Peng Chen and Omar Ghattas. Projected Stein variational gradient descent. In Advances in Neural Information Processing Systems, volume 33, pages 1947–1958, 2020.
- Chewi et al. [2020] Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, and Philippe Rigollet. SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence. In Advances in Neural Information Processing Systems, volume 33, pages 2098–2109, 2020.
- Detommaso et al. [2018] Gianluca Detommaso, Tiangang Cui, Youssef Marzouk, Alessio Spantini, and Robert Scheichl. A Stein variational Newton method. In Advances in Neural Information Processing Systems, volume 31, 2018.
- Gao et al. [2019] Xiang Gao, Meera Sitharam, and Adrian E Roitberg. Bounds on the Jensen gap, and implications for mean-concentrated distributions. The Australian Journal of Mathematical Analysis and Applications, 16:1–16, 2019.
- Gong et al. [2020] Wenbo Gong, Yingzhen Li, and José Miguel Hernández-Lobato. Sliced kernelized Stein discrepancy. In International Conference on Learning Representations, 2020.
- Gorham and Mackey [2017] Jackson Gorham and Lester Mackey. Measuring sample quality with kernels. In International Conference on Machine Learning, pages 1292–1301, 2017.
- Hoeffding and Robbins [1948] Wassily Hoeffding and Herbert Robbins. The central limit theorem for dependent random variables. Duke Mathematical Journal, 15(3):773–780, 1948.
- Krizhevsky et al. [2017] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
- Liu [2017] Qiang Liu. Stein variational gradient descent as gradient flow. In Advances in Neural Information Processing Systems, volume 30, 2017.
- Liu and Wang [2016] Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In Advances in Neural Information Processing Systems, volume 29, 2016.
- Liu et al. [2022] Xing Liu, Harrison Zhu, Jean-Francois Ton, George Wynne, and Andrew Duncan. Grassmann Stein variational gradient descent. In International Conference on Artificial Intelligence and Statistics, pages 2002–2021, 2022.
- MacKay [1992] David JC MacKay. A practical Bayesian framework for backpropagation networks. Neural computation, 4(3):448–472, 1992.
- Pacheco et al. [2014] Jason Pacheco, Silvia Zuffi, Michael Black, and Erik Sudderth. Preserving modes and messages via diverse particle selection. In International Conference on Machine Learning, pages 1152–1160, 2014.
- Rizzo and Székely [2016] Maria L Rizzo and Gábor J Székely. Energy distance. Wiley Interdisciplinary Reviews: Computational Statistics, 8(1):27–38, 2016.
- Salim et al. [2022] Adil Salim, Lukang Sun, and Peter Richtarik. A convergence theory for SVGD in the population limit under Talagrand’s inequality T1. In International Conference on Machine Learning, pages 19139–19152, 2022.
- Wang et al. [2018] Dilin Wang, Zhe Zeng, and Qiang Liu. Stein variational message passing for continuous graphical models. In International Conference on Machine Learning, pages 5219–5227, 2018.
- Wu et al. [2022] Kevin E Wu, Kevin K Yang, Rianne van den Berg, James Y Zou, Alex X Lu, and Ava P Amini. Protein structure generation via folding diffusion. arXiv preprint arXiv:2209.15611, 2022.
- Yan and Zhou [2021a] Liang Yan and Tao Zhou. Stein variational gradient descent with local approximations. Computer Methods in Applied Mechanics and Engineering, 386, 2021a.
- Yan and Zhou [2021b] Liang Yan and Tao Zhou. Gradient-free Stein variational gradient descent with kernel approximation. Applied Mathematics Letters, 121, 2021b.
- Yoon et al. [2018] Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian model-agnostic meta-learning. In Advances in Neural Information Processing Systems, volume 31, 2018.
- Zhuo et al. [2018] Jingwei Zhuo, Chang Liu, Jiaxin Shi, Jun Zhu, Ning Chen, and Bo Zhang. Message passing Stein variational gradient descent. In International Conference on Machine Learning, pages 6018–6027, 2018.
Appendix A
Proof of Proposition 1
Proof of Proposition 2
According to Proposition 1:
Applying the expectation version of the Bernstein inequality (Banna et al. [2016]) to the sum of mean-zero random matrices, we obtain,
and is any number chosen such that,
To bound is simple:
which completes the proof.
Proof of Proposition 3
Proof of Proposition 4
Similar to the last proof, first we have
Then we derive the optimality condition for Equation (4),
Following the proof of Theorem 3.1 in Liu and Wang [2016], we have
According to Liu and Wang [2016], we can show that the optimal solution is given by , where
which completes the proof.
Appendix B
Numerical verification of $m$-dependence
In our paper, we use the concept of $m$-dependence from the mixing literature to estimate the variance of non-i.i.d. particles. Here we give a verification experiment for the $m$-dependence assumption.
![Uncaptioned image](https://cdn.awesomepapers.org/papers/0e71bcd9-6975-4148-9bac-e05eb0f37d74/x6.png)
The figure above shows the final dynamic magnitude between two particles in SVGD. Here, the legend indicates 3000 particles for a 10-dimensional Gaussian model. We sort the particles according to and measure the update effect between two particles at index intervals of 500, i.e.,
(6) |
As seen in the figure above, as the index interval between particles increases, the force between the particles decreases.
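A minimal sketch of this verification follows, assuming the update effect between two particles is measured as the norm of one particle's contribution to the other's SVGD update direction (the precise quantity is the one in Equation (6)); sorting by the first coordinate is likewise an illustrative choice.

```python
import numpy as np

def pairwise_update_effect(X, grad_log_p, i, j, h=1.0):
    """Norm of particle j's contribution to the SVGD update of particle i (RBF kernel).

    X: (n, d) particles; grad_log_p maps a single (d,) vector to its score (d,).
    """
    diff = X[i] - X[j]
    k_ij = np.exp(-np.dot(diff, diff) / (2.0 * h ** 2))
    contrib = k_ij * grad_log_p(X[j]) + k_ij * diff / h ** 2   # driving + repulsive parts
    return np.linalg.norm(contrib)

def interaction_profile(X, grad_log_p, gaps=(500, 1000, 1500, 2000, 2500)):
    """Sort particles along one coordinate and probe interactions at growing index gaps."""
    order = np.argsort(X[:, 0])
    Xs = X[order]
    return {g: pairwise_update_effect(Xs, grad_log_p, 0, g) for g in gaps}
```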
Numerical verification of Propositions 1-2
In the following table, we empirically investigate the target distribution and demonstrate the soundness of our theoretical bounds through these experiments.
| | dim | 2 | 5 | 10 | 15 | 20 | 25 |
|---|---|---|---|---|---|---|---|
| 10 particles | max | 1.43 | 1.18 | 1.17 | 1.17 | 1.17 | 1.17 |
| | theoretical bound | 4.82 | 5.56 | 5.39 | 5.36 | 5.35 | 5.35 |
| 50 particles | max | 2.11 | 1.63 | 1.55 | 1.52 | 1.52 | 1.51 |
| | theoretical bound | 9.41 | 16.1 | 17.1 | 14.8 | 14.31 | 14.0 |
In the following table, the upper bound of the variance error of SVGD is assessed for the case where the target distribution is .
| Number of particles | 1000 | 5000 | 10000 | 15000 | 20000 |
|---|---|---|---|---|---|
| Dim-2 | 0.008 | 0.002 | 0.004 | 0.0063 | 0.003 |
| theoretical bound | 0.082 | 0.019 | 0.01 | 0.0078 | 0.005 |
| Dim-5 | 0.3 | 0.11 | 0.06 | 0.06 | 0.04 |
| theoretical bound | 4.63 | 1.71 | 1.02 | 0.71 | 0.57 |