Second order ensemble Langevin method for sampling and inverse problems
Abstract
We propose a sampling method based on an ensemble approximation of second order Langevin dynamics. The log target density is appended with a quadratic term in an auxiliary momentum variable, and damped-driven Hamiltonian dynamics are introduced; the resulting stochastic differential equation is invariant with respect to the Gibbs measure, with marginal on the position coordinates given by the target. A preconditioner based on the covariance under the law of the dynamics does not change this invariance property, and is introduced to accelerate convergence to the Gibbs measure. The resulting mean-field dynamics may be approximated by an ensemble method; this results in a gradient-free and affine-invariant stochastic dynamical system. Numerical results demonstrate its potential as the basis for a numerical sampler in Bayesian inverse problems.
keywords:
Bayesian inverse problems; Sampling; Ensemble method; Second order Langevin equation; Hybrid Monte Carlo; Mean-field models; Nonlinear Fokker-Planck equation.
1 Introduction
1.1 Set-Up
Consider sampling the distribution
where is termed the potential function and the normalization constant. A broad family of problems can be cast into this formulation; the Bayesian approach to inverse problems provides a particular focus for our work [34]. The point of departure for the algorithms considered in this paper is the following mean-field Langevin equation:
(1.1) |
where stands for the gradient operator, is an -dimensional standard Brownian motion, is the density associated to the law of , and is the covariance under this density. This constitutes a mean-field generalization [23] of the standard Langevin equation [44]. Applying a particle approximation to the mean-field model results in an interacting particle system and coupled Langevin dynamics [37, 23, 42, 24]. The benefit of preconditioning with the covariance is that it leads to mixing rates that are independent of the problem (provably for quadratic potentials, and empirically beyond this setting), because of the affine invariance [26] of the resulting algorithms [24].
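As a concrete illustration, the following sketch (hypothetical code, not the paper's full algorithm) implements one Euler-Maruyama step of an ensemble approximation of this covariance-preconditioned overdamped dynamics; the noise is realized through the ensemble deviations so that each particle's increment has covariance proportional to the ensemble covariance:

```python
import numpy as np

def preconditioned_langevin_step(theta, grad_phi, h, rng):
    """One Euler-Maruyama step of covariance-preconditioned overdamped
    Langevin dynamics for an ensemble theta of shape (J, d).
    grad_phi maps the (J, d) ensemble to its (J, d) gradients
    (hypothetical sketch, not the paper's full algorithm)."""
    J, _ = theta.shape
    dev = (theta - theta.mean(axis=0)) / np.sqrt(J)  # scaled deviations
    C = dev.T @ dev                                  # ensemble covariance
    drift = -grad_phi(theta) @ C                     # rows: -(C grad)_i, C symmetric
    # independent per-particle noise with covariance C, built from deviations
    noise = rng.standard_normal((J, J)) @ dev
    return theta + h * drift + np.sqrt(2.0 * h) * noise
```

Because the noise is a linear combination of ensemble deviations, no matrix square root of the covariance is ever formed explicitly.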
In order to further accelerate mixing and achieve sampling efficiency, we introduce an auxiliary variable and consider the Hamiltonian
(1.2) |
where the new state variable . Define measure on by
where is the normalization constant. The marginal distribution of in the variable gives the desired distribution , i.e. . We now aim at sampling the joint distribution. To this end, consider the following underdamped Langevin dynamics in :
(1.3) |
with the choices
(1.4) |
Here is a standard Brownian motion in with components . Then we have the following, proved in Subsection 8.1:
Proposition 1.1.
Assume that , are symmetric and non-negative, and that is symmetric positive-definite. Assume further that , and depend on the law of under the dynamics defined by (1.3) and (1.4), but are independent of : all derivatives with respect to are zero. Then the Gibbs measure is invariant under the dynamics defined by (1.3), (1.4).
In practice, to simulate from such a mean-field model, it will be necessary to consider a particle approximation of the form
(1.5) |
for the set of particles and where are appropriate empirical approximations of based on the approximation
and the Hamiltonian is given by
(1.6) |
We use the notation to emphasize the dependence on the ensemble , but when the context is clear we also use the abbreviated notation . Note that is the appropriate finite particle approximation of , given the particle approximation of .
In this paper we will concentrate on a specific choice of mean-field operators within the above general construction, which we now describe. Let denote the -marginal in the covariance under the law of (1.3). We make the choices , , and , for a scalar damping parameter . Then the underdamped Langevin dynamics yields
(1.7) | ||||
To implement a particle approximation of the mean-field dynamics (1.7) we introduce particles in the form and we use the ensemble covariance and mean approximations
(1.8a) | ||||
(1.8b) |
In order to obtain affine invariance, we take the generalized square root of the ensemble covariance , similarly to [24]. We introduce the matrix
which allows us to define the empirical covariance and generalized (nonsymmetric) square root via
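In code, this construction can be sketched as follows (a minimal sketch, assuming the 1/J normalization of the ensemble covariance):

```python
import numpy as np

def generalized_sqrt(theta):
    """Nonsymmetric generalized square root of the ensemble covariance:
    the (d, J) matrix of scaled deviations S satisfies C = S @ S.T exactly
    (sketch assuming the 1/J normalization)."""
    J = theta.shape[0]
    return (theta - theta.mean(axis=0)).T / np.sqrt(J)

theta = np.random.default_rng(1).standard_normal((20, 3))
S = generalized_sqrt(theta)
```

By construction `S @ S.T` reproduces the (biased) ensemble covariance, so multiplying Brownian increments by `S` yields noise with the correct covariance without forming a symmetric matrix square root.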
Now with independent standard Brownian motions , a natural particle approximation of (1.7) is, for ,
(1.9) | ||||
In later sections, we will add corrections ensuring preservation of the desired marginals of the invariant measure. These corrections will account for both the finite particle approximation of the mean field limit, and the fact that (1.9) does not account for the dependence of on ; note that this latter dependence is accounted for in (1.5). To establish the form of the correction we follow the exposition, in [40, Theorem 1], of the well-trodden subject of determining a diffusion with a given invariant measure.
In subsequent sections we will employ an ensemble approximation of , as in [23], thereby avoiding the need to compute adjoints of the forward model; we note also that in the linear case this approximation is exact. We will show that the resulting interacting particle system has the potential to provide accurate derivative-free inference for certain classes of inverse problems. In the remainder of this section we provide a literature review, we highlight our contributions, and we outline the structure of the paper.
1.2 Literature Review
The overdamped Langevin equation is the canonical SDE that is invariant with respect to a given target density. Many sampling algorithms are built upon this idea; in particular, it can be shown to govern a large class of Markov chain Monte Carlo (MCMC) methods; see [47, 43, 40]. To enhance mixing and accelerate convergence, a second-order method, Hybrid Monte Carlo (HMC, also referred to as Hamiltonian Monte Carlo) [19, 41], has been proposed, leading to underdamped Langevin dynamics. There have been many attempts to justify the empirically observed fast convergence of second-order methods in comparison with first-order methods [4]. Recently a quantitative convergence rate was established in [10], showing that underdamped Langevin dynamics converges faster than overdamped Langevin dynamics when the log of the target has a small Poincaré constant; see also [21].
The idea of introducing preconditioners within the context of interacting particle systems used for sampling is developed in [37]. Preconditioning via the ensemble covariance has been shown to boost convergence and numerical performance [23]. Other choices of preconditioner lead to different forms of SDEs and associated Fokker-Planck equations with different structures, which can result in sampling methods effective in different scenarios; see for example [38]. Affine invariance is introduced in [26], where it is argued that this property leads to desirable convergence properties for interacting particle systems used for sampling: methods that satisfy affine invariance are invariant under an affine change of coordinates, and are thus uniformly effective for problems that can be rendered well-conditioned under an affine transformation. An affine invariant version of the mean-field underdamped Langevin dynamics [23] is proposed in [42, 24].
Kalman methods have shown wide success in state estimation problems since their introduction by Evensen; see [22] for an overview of the field, and the papers [45, 31] for discussion of their use in inverse problems. Using the ensemble covariance as a preconditioner leads to affine invariant [24] and gradient-free [23] approximations of Langevin dynamics; this is desirable in practical computations in view of the intractability of derivatives in many large-scale models arising in science and engineering [15, 28, 35, 30]. See [2, 49] for analysis of these methods and, in the context of continuous data-assimilation, see [17, 3, 51]. There are other derivative-free methods that can be derived from the mean-field perspective, and in particular consensus-based methods show promise for optimization [12] and have recently been extended to sampling in [11].
Recent work has established the convergence of ensemble preconditioning methods to mean field limits; see for example [18]. For other works on rigorous derivation of mean field limits of interacting particle systems, see [50, 13, 32]. For underpinning theory of Hamiltonian-based sampling, see [7, 8, 39, 5].
1.3 Our Contributions
The following contributions are made in this paper:
-
1.
We introduce an underdamped second order mean field Langevin dynamics, with a covariance-based preconditioner.
-
2.
In the case of Bayesian inverse problems defined by a linear forward map, we show that this mean field model preserves Gaussian distributions under time evolution and, if initialized at a Gaussian, converges to the desired target at a rate independent of the linear map.
-
3.
We introduce finite particle approximations of the mean field model, and introduce finite size corrections to preserve invariance of the desired target measure, resulting in an affine invariant method.
-
4.
For Bayesian inverse problems, we introduce a gradient-free approximation of the algorithm, based on ensemble Kalman methodology.
-
5.
In the context of Bayesian inverse problems we provide numerical examples to demonstrate that the algorithm resulting from the previous considerations has desirable sampling properties.
In Section 2, we introduce the inverse problems context that motivates us. In Section 3 we discuss the equilibrium distribution of the mean field model; and then we introduce a finite particle approximation of the mean field model, identifying corrections which ensure that the equilibrium distribution is an independent product (of dimension equal to the particle cardinality) of copies of the desired equilibrium distribution. Section 4 introduces the ensemble Kalman approximation of the finite particle system; and in that section we also demonstrate affine invariance of the resulting method. Section 5 presents analysis of the finite particle system in the case of linear inverse problems, where the ensemble Kalman approximation is exact; we demonstrate that the relaxation time to equilibrium is independent of the specific linear inverse problem considered, a consequence of affine invariance. In Section 6 we provide numerical results which demonstrate the efficiency and potential value of our method, and in Section 7 we draw conclusions. Proofs of the propositions are given in the appendix, Section 8.
2 Inverse Problem
Consider the Bayesian inverse problem of finding from an observation determined by the forward model
Here is a (in general) nonlinear forward map. We assume a zero-mean Gaussian prior on the unknown and assume that the random variable , representing measurement error, is independent of the prior on . We also assume that are positive-definite. Then by Bayes' rule, the posterior density that we aim to sample is given below. (In what follows , with analogous notation for the induced inner-product, for any positive-definite covariance and for the Euclidean norm.)
with potential function of the following form:
(2.10) |
In the linear case when , is quadratic and the gradient can be written as a linear function:
(2.11a) | ||||
(2.11b) | ||||
(2.11c) |
In this linear setting, the posterior distribution is the Gaussian .
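For reference, this Gaussian posterior can be computed in closed form; the following sketch (a hypothetical helper, assuming the standard least-squares formulas implied by (2.10)-(2.11)) returns its mean and covariance:

```python
import numpy as np

def linear_gaussian_posterior(A, Gamma, C0, y):
    """Posterior N(m, Cpost) for y = A u + eta, eta ~ N(0, Gamma),
    with prior u ~ N(0, C0); hypothetical helper using the standard
    precision-form formulas."""
    Gamma_inv = np.linalg.inv(Gamma)
    Cpost = np.linalg.inv(A.T @ Gamma_inv @ A + np.linalg.inv(C0))
    m = Cpost @ A.T @ Gamma_inv @ y
    return m, Cpost
```

This closed form is what the linear-case analysis of Section 5 converges to, and it is useful as a ground truth when testing samplers on linear problems.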
3 Equilibrium Distributions
3.1 Fokker-Planck Equation: Mean Field
The mean field underdamped Langevin equation (1.3) has an associated nonlinear and nonlocal Fokker-Planck equation giving the evolution of the law of particles , denoted . The equation for this law is (see Proof 8.1):
(3.12) |
By Proposition 1.1 this Fokker-Planck equation has as its equilibrium; this follows as for standard linear Fokker-Planck equations [44] since the dependence of the sample paths on involves only the mean and covariance; see Proof 8.1 and see also [20, 27, 33, 46, 44] for discussions on how to derive Fokker-Planck equations with a prescribed stationary distribution. In the specific case (1.7), recall that and are given by
(3.13) |
These choices satisfy the assumption of Proposition 1.1.
3.2 Fokker-Planck Equation: Finite Particle System
The approximation of (1.7) by the interacting particle system (1.9) does not result in an equilibrium distribution with marginals on the position coordinates given by independent products. To address this issue we return to the formulation in (1.5), (1.6), with the following empirical approximations to and :
(3.14) |
where is given in (1.8); note that in defining to be consistent with choices leading to (1.7) we make the approximation leading to
(3.15) |
We would like the resulting finite particle system (1.5), (3.14), (3.15) to have as the marginal on the position coordinates of its equilibrium solution; in general, however, it does not. We now determine a drift correction term which, when added to the system results in invariance of ; here, for normalization constant ,
(3.16a) | ||||
(3.16b) |
Note that does not have product form, but its marginal on the coordinates does, and is
To achieve the desired invariance of we add the correction term to the interacting particle system (1.5), obtaining the following coupled system of diffusions in , for
(3.17) |
By Theorem 1 in both [40] and [20] this will result in a linear Fokker-Planck equation (3.18) with as its equilibrium. This Fokker-Planck equation has the form
(3.18) |
where stands for the joint probability density of the particles, is the vector containing copies of , and are block-diagonal matrices with copies of on the diagonal, respectively. Note that in the case of the finite particle approximation the preconditioner no longer depends on the density , and thus the Fokker-Planck equation is indeed linear. For the specific choices of and made in (3.14) and (3.15), equation (3.17) simplifies to the following system of diffusions in , holding for
(3.19) | ||||
where we have defined . Concerning these equations, we establish the following proposition. We postpone the proof to Subsection 8.2; it is based on direct calculations of the terms and .
Proposition 3.1.
The interacting particle system (3.19) has invariant distribution .
4 Ensemble Kalman Approximation
4.1 Derivatives Via Differences
We now make the ensemble Kalman approximation to approximate the gradient term by differences, as in [23]:
where . Invoking this approximation within (3.19), using the specific form of , yields the following system of interacting particles in , for
(4.20) | ||||
We will use this system as the basis of all our numerical experiments.
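The difference-based approximation of the preconditioned gradient can be sketched as follows (hypothetical code for the least-squares potential (2.10), after the construction in [23]); as noted above, it is exact when the forward map is linear:

```python
import numpy as np

def ek_preconditioned_grad(theta, G, y, Gamma_inv, C0_inv):
    """Difference-based (derivative-free) approximation of the
    preconditioned gradient C(theta) @ grad Phi(theta_j) for a
    least-squares potential; hypothetical sketch after [23].
    theta: (J, d) ensemble; G: forward map from R^d to R^K."""
    J = theta.shape[0]
    Gt = np.array([G(t) for t in theta])          # (J, K) forward evaluations
    Gbar = Gt.mean(axis=0)
    dev = theta - theta.mean(axis=0)              # (J, d) deviations
    # W[k, j] = <G(theta_k) - Gbar, G(theta_j) - y>_{Gamma}
    W = (Gt - Gbar) @ Gamma_inv @ (Gt - y).T      # (J, J)
    misfit_term = W.T @ dev / J                   # data-misfit contribution
    C = dev.T @ dev / J                           # ensemble covariance
    prior_term = theta @ C0_inv @ C               # rows: (C C0^{-1} theta_j)^T
    return misfit_term + prior_term
```

Only forward evaluations of `G` are needed; no adjoint or derivative code is required, which is the practical appeal of the ensemble Kalman approximation.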
4.2 Affine Invariance
In this subsection, we show the affine invariance property [26, 24, 37] for the Fokker-Planck equations in the mean-field regime (3.12) and for the ensemble approximation (3.18); for the particle equations in the mean-field regime (1.7), the ensemble approximation (1.9), and its correction (3.19); and for the gradient-free approximation (4.20). For simplicity of presentation, we only state the results in the case of the ensemble approximation; the mean-field case follows by straightforward analogy upon dropping all of the particle superscripts.
Definition 4.1 (Affine invariance for particle formulation).
We say a particle formulation is affine invariant, if under all affine transformations of the form
the equations on the transformed particle systems are given by the same equations with , replaced by , respectively, and with potential replaced by via
Here is any invertible matrix and is a vector.
Definition 4.2 (Affine invariance for Fokker-Planck equation).
We say a Fokker-Planck equation is affine invariant, if under all affine transformations of the form
the equations on the pushforward PDF are given by the same equation on with , replaced by , respectively, and with Hamiltonian replaced by via
Here is any invertible matrix and is a vector.
Then we have the observation that all of our systems are affine invariant.
Proposition 4.3.
The particle systems (1.7), (1.9), (3.19), and (4.20), and the Fokker-Planck equations (3.12) and (3.18), are affine invariant.
We defer the proof to Subsection 8.3. The significance of affine invariance is that it implies that the rate of convergence is preserved under affine transformations. The proposed methodology is thus uniformly effective for problems that become well-conditioned under an affine transformation. Proposition 5.1, which follows in the next section, illustrates this property in the setting of a linear forward map.
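To illustrate the mechanism behind Definition 4.1, a direct check for the overdamped dynamics (1.1) runs as follows (a sketch; the underdamped case is analogous). Under $\theta = A\tilde{\theta} + b$ the covariance transforms as $\mathcal{C}(\theta) = A\,\mathcal{C}(\tilde{\theta})A^{\top}$, and $\widetilde{\Phi}(\tilde{\theta}) := \Phi(A\tilde{\theta}+b)$ satisfies $\nabla\widetilde{\Phi} = A^{\top}\nabla\Phi$, so that

```latex
\mathrm{d}\tilde{\theta}
  = A^{-1}\,\mathrm{d}\theta
  = -A^{-1}\mathcal{C}(\theta)\nabla\Phi(\theta)\,\mathrm{d}t
    + A^{-1}\sqrt{2\mathcal{C}(\theta)}\,\mathrm{d}W
  = -\mathcal{C}(\tilde{\theta})\nabla\widetilde{\Phi}(\tilde{\theta})\,\mathrm{d}t
    + \sqrt{2\mathcal{C}(\tilde{\theta})}\,\mathrm{d}\widetilde{W},
```

since $A^{-1}\mathcal{C}(\theta)\nabla\Phi = \mathcal{C}(\tilde{\theta})A^{\top}\nabla\Phi = \mathcal{C}(\tilde{\theta})\nabla\widetilde{\Phi}$, and the transformed noise has covariance $2A^{-1}\mathcal{C}(\theta)A^{-\top} = 2\mathcal{C}(\tilde{\theta})$; the transformed system thus has the same form with $\Phi$ replaced by $\widetilde{\Phi}$.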
5 Mean Field Model For Linear Inverse Problems
We consider the mean field SDE (1.7) in the linear inverse problem setting of Section 2, so that (2.11) holds. We note that in (2.11) is both well-defined and symmetric positive definite, since are assumed to be symmetric positive definite. The results demonstrate problem-independent rates of convergence in this linear setting, a consequence of affine invariance, which in turn is a consequence of our choice of preconditioned mean field system.
In the setting of the linear inverse problem, the mean field model (1.7) reduces to
(5.21) | ||||
We prove the following result about this system in Subsection 8.4:
Proposition 5.1.
Write the mean and the covariance of the law of particles in equation (5.21) in the block form
The evolution of the mean and covariance is prescribed by the following system of ODEs:
(5.22) | ||||
The unique steady solution with positive-definite covariance is the Gibbs measure ; the marginal on gives the solution of the linear Gaussian Bayesian inverse problem. All other steady state solutions have degenerate covariance, are unstable, and take the form for a projection matrix and in the nullspace of . The equilibrium point with positive-definite covariance is hyperbolic and linearly stable, and hence its basin of attraction is an open set, containing the equilibrium point itself, in the set of all mean vectors and positive-definite covariances. Furthermore, the mean and covariance converge to this equilibrium, from all points in its basin of attraction, with a speed independent of and .
Remark 5.2.
Proposition 5.1 demonstrates that the convergence speed to the non-degenerate equilibrium point is independent of the specific linear inverse problem to be solved; the rate does, however, depend on . Analysis of the linear stability problem shows that gives the best local convergence rate; see Remark 8.5. In the case where mean field preconditioning is not used, the optimal choice of for underdamped Langevin dynamics depends on the linear inverse problem being solved, and can be identified explicitly in the scalar setting [44]; for analysis in the non-Gaussian setting see [14]. Motivated by analogies with the work in [44, 14], we expect the optimal choice of to be problem dependent in the nonlinear case; motivated by our analysis in the linear setting, we expect a good choice, neither too small nor too large, to be identifiable by straightforward optimization.
Now we show that for Gaussian initial data, the solution remains Gaussian and is thus determined by the evolution of the mean and covariance via the ODE (5.22). We prove this by investigating the mean field Fokker-Planck equation in the linear case. The evolution in time of the law of equation (5.21) is governed by the equation
(5.23) |
We prove the following in Subsection 8.5:
Proposition 5.3.
If the initial condition of (5.23) is Gaussian, then the solution remains Gaussian for all time, with mean and covariance evolving according to the ODEs (5.22).
6 Numerical Results
We introduce, and study, a numerical method for sampling the Bayesian inverse problem of Section 2; the method is based on numerical time stepping of the interacting particle system (4.20). In this section, we demonstrate that the proposed sampler, which we refer to as ensemble Kalman hybrid Monte Carlo (EKHMC), can effectively approximate posterior distributions for two widely studied inverse problem test cases. We compare EKHMC with its first-order counterpart EKS [23] and with a gold standard MCMC method [9]. EKHMC inherits two major advantages of EKS: (1) exact gradients are not required (i.e., it is derivative-free); (2) the ensemble can faithfully approximate the spread of the posterior distribution, rather than collapsing to a single point as happens with the basic EKI algorithm [49]. Furthermore, we show empirically that EKHMC obtains samples of quality similar to EKS, while converging faster than EKS. We detail our numerical time-stepping scheme in the first subsection, before studying two examples (one low dimensional, one a PDE inverse problem for a field) in the subsequent subsections.
6.1 Time Integration Schemes
We employ a splitting method to integrate the stochastic dynamical system given by equation (4.20), splitting it into two parts: the first capturing the Hamiltonian evolution, including the finite-size correction, and the second capturing an Ornstein-Uhlenbeck (OU) process in momentum space. The Hamiltonian evolution follows the equation:
(6.24) | ||||
The OU process follows the equation:
(6.25) | ||||
We implement a symplectic Euler integrator [48] for the Hamiltonian part, i.e., we take a half step of momentum updates, then a full step of position updates, and finally a half step of momentum updates. (With the ensemble approximation the system is only approximately Hamiltonian; the splitting used in symplectic Euler is still well-defined, however.) Let be the collection of all position and momentum particles at time : starting from time , this symplectic Euler integration gives the map . We set to be the position coordinates of and then update the momentum coordinates using the OU process. The vector is set to the value given by the output of the preceding symplectic Euler integrator, denoted by . The update of the momentum coordinate given by the OU process is then:
(6.26) | ||||
where are the momentum coordinates from . Within the OU process the damping coefficient is treated as a hyperparameter of EKHMC.
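The splitting can be sketched as follows (hypothetical simplified code with identity mass and without the ensemble preconditioning or finite-size correction): a half momentum kick, a full position drift, another half momentum kick, then an exact OU update of the momentum.

```python
import numpy as np

def ou_momentum_step(p, gamma, h, M_sqrt, rng):
    """Exact solution over step h of dp = -gamma p dt + sqrt(2 gamma) M^{1/2} dB;
    the stationary distribution of this OU process is N(0, M)."""
    a = np.exp(-gamma * h)
    noise = rng.standard_normal(p.shape) @ M_sqrt.T
    return a * p + np.sqrt(1.0 - a**2) * noise

def leapfrog_ou_step(q, p, grad, h, gamma, M_sqrt, rng):
    """Half momentum kick, full position drift, half momentum kick,
    then the OU damping/noise step (unit mass assumed for simplicity)."""
    p = p - 0.5 * h * grad(q)
    q = q + h * p
    p = p - 0.5 * h * grad(q)
    p = ou_momentum_step(p, gamma, h, M_sqrt, rng)
    return q, p
```

Treating the OU part exactly, rather than by Euler discretization, avoids a stability restriction from the damping term; only the Hamiltonian part carries time-discretization error.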
Similarly to [23], and as there with the goal of improving stability and convergence speed towards the posterior distribution, we implement an adaptive step size, i.e., the effective step size is obtained by rescaling by the magnitude of the “force field” :
(6.27) |
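A minimal sketch of such an adaptive rule (hypothetical; the precise normalization used in [23] may differ):

```python
import numpy as np

def adaptive_step(h0, force, eps=1e-8):
    """Rescale the nominal step h0 by the magnitude of the force field
    (hypothetical sketch; the exact normalization in [23] may differ)."""
    return h0 / (np.linalg.norm(force) + eps)
```

The effect is that large forces, typical far from the posterior mode, shrink the step, while near equilibrium the step relaxes back towards its nominal value.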
6.2 Low Dimensional Parameter Space
We follow the example presented in Section 4.3 of the paper [23]. We start by defining the forward map which is given by the one-dimensional elliptic boundary value problem
(6.28a) | |||
(6.28b) |
The solution is given explicitly by
(6.29) |
The forward model operator is then defined by
(6.30) |
Here is a constant vector that we want to find, and we assume that we are given noisy measurements of at locations and . Parameters are chosen according to [23], but we summarize them here for completeness:
-
•
noise , ;
-
•
prior , ,
-
•
measurement
-
•
number of particles
-
•
initialization:
Here is the uniform distribution.
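The explicit solution above makes the forward map cheap to evaluate; the following is a hypothetical reconstruction of it (assuming the boundary conditions p(0) = 0, p(1) = u2 used in [23]):

```python
import numpy as np

def forward_map(u, xs=(0.25, 0.75)):
    """Hypothetical reconstruction of the forward map: with
    p(x) = u2*x + exp(-u1)*(x - x**2)/2 solving -(exp(u1) p')' = 1,
    p(0) = 0, p(1) = u2, return pointwise evaluations of p at xs."""
    u1, u2 = u
    return np.array([u2 * x + np.exp(-u1) * (x - x**2) / 2.0 for x in xs])
```

One can check directly that this candidate solution satisfies both boundary conditions and the constant-source elliptic equation.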
We choose , in both EKS and EKHMC. We find choosing causes overshooting, a phenomenon which can be ameliorated by increasing to ; indeed this latter value appears, empirically, to be close to optimal in terms of convergence speed. We conjecture this difference from the linear case, where the optimal value is (see Remark 5.2), is due to the non-Gaussianity of the desired target: the particles have accumulated considerable momentum when entering the linear convergence regime, and so extra damping is required to counteract this and avoid overshooting. Despite the desirable problem independence of the optimal in the linear case, we expect case-by-case optimization to be needed for nonlinear problems.
We evolve the ensemble of particles for 200 iterations, and record their positions in the last iteration as the approximation of the posterior distribution, shown in Figure 1. The EKHMC samples are quite similar to the EKS samples, and both are reasonable approximations of the samples obtained from the gold standard HMC [19] simulations; however, both miss the full spread of the true distribution, because of the ensemble Kalman approximation they invoke. We also compute four ensemble quantities to characterize the evolution of EKHMC and EKS: mean, mean, and the two eigenvalues and of the covariance matrix . EKHMC converges towards equilibrium faster than EKS, benefiting from the second-order dynamics.
6.3 Darcy Flow
This example follows the problem setting detailed in [23]. We summarize the essential problem specification here for completeness. The forward problem of porous medium flow, defined by permeability and source term , is to find the pressure field , where is the solution of the following elliptic PDE:
(6.31) | ||||
We assume that the permeability depends on some unknown parameters . The resulting inverse problem is, given (noisy) pointwise measurements of on a grid, to infer or . We model as a log-Gaussian field. The Gaussian underlying this log-Gaussian model has mean zero and precision operator defined as ; here is equipped with Neumann boundary conditions and restricted to spatial-mean-zero functions. We set and in the experiments. This parametrization yields the Karhunen-Loève (KL) expansion
(6.32) |
where , , . Draws from this random field are Hölder continuous with any exponent less than one. In practice, we truncate to have dimension . We generate a ground-truth random field by sampling . We create data with . We choose the prior covariance .
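As a one-dimensional illustration of this KL construction (the example above is two-dimensional; the stated eigenvalues and cosine eigenfunctions are assumed):

```python
import numpy as np

def kl_sample_1d(xi, x, tau=3.0, alpha=2.0):
    """One-dimensional illustration of a KL expansion: coefficients xi
    (iid standard normal), assumed eigenvalues (pi^2 l^2 + tau^2)^(-alpha)
    and eigenfunctions sqrt(2) cos(pi l x) of the Neumann Laplacian
    restricted to mean-zero functions on (0, 1)."""
    ls = np.arange(1, len(xi) + 1)
    lam = (np.pi**2 * ls**2 + tau**2) ** (-alpha)
    phis = np.sqrt(2.0) * np.cos(np.pi * np.outer(ls, x))  # (L, len(x))
    return (xi * np.sqrt(lam)) @ phis
```

Larger values of the decay exponent alpha produce smoother draws, since high-frequency modes are damped more strongly.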
To characterize the evolution of EKHMC and compare it with EKS, we compute two metrics:
(6.33) |
For these metrics, we use norms
(6.34) |
where the first is defined in the negative Sobolev space , whilst the second in the space.
We set and for both EKHMC and EKS. We set in EKHMC; unlike the previous example, this choice does not lead to overshooting. We simulate the particles for iterations, and use their positions in the last iteration as the approximation to the posterior distribution. We compare the samples obtained from MCMC (we use the pCN variant of random walk Metropolis-Hastings [16]), EKS () and EKHMC () in Figure 2. The evolution of and is plotted to show that convergence is achieved, and that EKHMC converges faster than EKS.
7 Conclusions
Gradient-free methods for inverse problems are increasingly important in large-scale multi-physics science and engineering problems; see [29] and the references therein. In this paper we have provided an initial study of gradient-free methods which leverage the potential power of Hamiltonian-based sampling. Analysis of the resulting method is hard, and our initial theoretical results leave many avenues open; in particular, convergence to equilibrium is not established for the underlying nonlinear, nonlocal Fokker-Planck equation arising from the mean field model, and optimization of convergence rates over is not understood except in the setting of linear Gaussian inverse problems. The Fokker-Planck equation associated with standard underdamped Langevin dynamics has been studied in the context of hypocoercivity (see [1, 52]), and generalization of these methods will be of potential interest. Preconditioned HMC is also proven to converge in [6]; generalization to ensemble approximations would be of interest. On the computational side, there are other approaches that avoid gradient computations while leveraging Hamiltonian structure, which need to be evaluated and compared with what is proposed here [36].
8 Appendix
We present the proofs of various results from the paper here.
8.1 Proof of Proposition 1.1
Proof 8.1.
The determination of the most general form of diffusion process which is invariant with respect to a given measure has a long history; see [20, Theorem 1] for a statement and the historical context. Note that this theory is not developed in the mean-field setting and concerns only the linear Fokker-Planck equation; however, the dependence of the matrices and on readily allows use of the approach taken in the linear case. We find it expedient to use the exposition of this topic in [40]. The same derivation as used to obtain Eq. (5) from [40] shows (we use the notational convention concerning divergence of matrix fields that is standard in continuum mechanics [25]; this differs from the convention adopted in [40]) that the density associated with the dynamics (1.3), (1.4) satisfies Eq. (3.12); this follows from the (resp. skew-) symmetry properties of (resp. ) and the fact that , and are assumed independent of , despite their dependence on . It is also manifestly the case that the Gibbs measure is invariant for (3.12), since the mass term appearing in is independent of , so that
when is the Gibbs measure and (3.12) shows that then
8.2 Proof of Proposition 3.1
Proof 8.2.
We can compute by Lemma 2.1 in [42]:
For our specific choices of , and , and hence of , we can explicitly write out the term
in (3.17). Meanwhile we also have
note that the quadratic momentum term is now dependent on the ensemble of the positions, and hence contributes terms which disappear only in the mean-field limit. It suffices to show that
In fact, for and being the -th standard basis, we have
Thus we have
and we conclude the claim.
8.3 Proof of Proposition 4.3
Proof 8.3.
In fact, consider the affine transformation
along with
Then we have that the gradient term scales like and the covariance preconditioner scales like . The generalized square root scales like . Therefore we can check that affine invariance holds for the particle systems (1.9) and (3.19). Affine invariance for the gradient-free approximation (4.20) holds even more directly. Finally, for the Fokker-Planck equation (3.18), we can check similarly via the scaling of the terms; a similar argument appears in the proof of Lemma 4.7 in [24].
8.4 Proof of Proposition 5.1
Proof 8.4.
We can take expectations in (5.21) to obtain the first two ODEs in (5.22). Now define
to obtain the following evolution for :
Note that , , and . We can use Ito’s formula and the above evolution to derive the last three ODEs in (5.22). The form of the steady state solutions is immediate from setting the right hand side of (5.22) to zero.
Now, we will establish the independence of the essential dynamics of the mean and covariance on the specific choice of . To this end, denote , , , , . Applying this change of variables we obtain, from (5.22), the system
(8.35) | ||||
We notice that the steady state solutions take the form for a projection matrix and such that . The steady state with and corresponds to the desired posterior mean. The transformed equation is indeed independent of and . Therefore the speed of convergence, within its basin of attraction, is indeed independent of and . The same is true in the original variables.
To complete the proof of stability it suffices to establish the local exponential convergence to the steady solution . As a first step, we show that the following system converges to :
(8.36) | ||||
Linearization around in variables of entries of gives the matrix
whose eigenvalues satisfy which all have strictly negative real part for , since we can show the unique real eigenvalue is in the interval . Therefore we conclude the local exponential convergence of covariances in a neighbourhood of . The exponential convergence of means then follows by linearizing the first two linear ODEs in the system (8.35) at .
Similarly, for any other steady state with a projection matrix , we will show that there is an unstable direction in (8.36). In fact, since is symmetric with only eigenvalues and , there exists a nonzero vector such that . Linearization around in the direction gives the following matrix for scalars :
whose eigenvalues satisfy . For all it may be shown that there exists a real eigenvalue in the interval ; thus there is an unstable direction determined by , and therefore in the original formulation.
Remark 8.5.
We can further investigate the spectral gap around , which is the absolute value of the real root of the equation . To be precise, we can show that for and , the spectral gap is maximized at , and we then expect the fastest convergence for our method in the linear setting.
To establish this claim we proceed as follows. By the intermediate value theorem, in order to show there always exists a root in and the spectral gap is at most , we only need to show that
The claim is true because and . By the basic inequality , we have
The maximal spectral gap is attained if and only if equality in holds, i.e. when .
We also plot the spectral gap as a function of to help visualize the dependence of the rate of convergence on damping for the linear problem; see Figure 3. The clear message from this figure is that there is a natural optimization problem for in this linear Gaussian setting; this is used to motivate searches for optimal parameters in the non-Gaussian setting.

8.5 Proof of Proposition 5.3
Proof 8.6.
Note that since , are independent of the particle , we have
Using this we can substitute the Gaussian profile into equation (5.23). We compute the first term on the right-hand side to obtain
The second term on the right hand side can be written as:
For the left hand side of (5.23), note that
Using the evolution of the mean (5.22) we obtain
Therefore we can compute the left hand side as:
In order to show that the Gaussian profile is indeed a solution to (5.23), we only need to show that . Note that
Defining , we collect the terms in the to-be-proven identity as:
We only need to show that .
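The substitution argument above can be mimicked numerically in a simpler setting: for a scalar Ornstein–Uhlenbeck process, the Gaussian profile whose mean and variance solve the corresponding moment ODEs satisfies the Fokker–Planck equation, which can be confirmed by finite differences. This is a stand-in analogue for equation (5.23), not the mean-field system itself:

```python
import numpy as np

# OU process dX = -X dt + sqrt(2) dW; Fokker-Planck: d_t rho = d_x(x rho) + d_xx rho.
# Moment ODEs: m' = -m, v' = -2v + 2, so m(t) = m0 e^{-t}, v(t) = 1 + (v0 - 1) e^{-2t}.
m0, v0 = 1.0, 0.5

def rho(x, t):
    # Gaussian profile with the time-dependent moments above.
    m = m0 * np.exp(-t)
    v = 1.0 + (v0 - 1.0) * np.exp(-2.0 * t)
    return np.exp(-(x - m) ** 2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)

def fpe_residual(x, t, h=1e-3):
    # Central finite differences for each term of the Fokker-Planck equation.
    dt_rho = (rho(x, t + h) - rho(x, t - h)) / (2.0 * h)
    drift = ((x + h) * rho(x + h, t) - (x - h) * rho(x - h, t)) / (2.0 * h)
    diff = (rho(x + h, t) - 2.0 * rho(x, t) + rho(x - h, t)) / h ** 2
    return dt_rho - drift - diff

residuals = [abs(fpe_residual(x, 0.3)) for x in (-1.0, 0.2, 1.5)]
print(max(residuals))  # O(h^2) small: the Gaussian profile solves the equation
```

The residual vanishes up to finite-difference error, confirming that evolving the moments by their ODEs is equivalent to evolving the Gaussian density by the Fokker–Planck equation — the same structure exploited in the proof of Proposition 5.3.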
Acknowledgments The work of ZL is supported by IAIFI through NSF grant PHY-2019786. The work of AMS is supported by NSF award AGS1835860, the Office of Naval Research award N00014-17-1-2079 and by a Department of Defense Vannevar Bush Faculty Fellowship.
References
- [1] D. Bakry and M. Émery. Diffusions hypercontractives. In Seminaire de probabilités XIX 1983/84, pages 177–206. Springer, 1985.
- [2] K. Bergemann and S. Reich. A mollified ensemble Kalman filter. Quarterly Journal of the Royal Meteorological Society, 136(651):1636–1643, 2010.
- [3] K. Bergemann and S. Reich. An ensemble Kalman-Bucy filter for continuous data assimilation. Meteorologische Zeitschrift, 21(3):213, 2012.
- [4] M. Betancourt. A conceptual introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434, 2017.
- [5] M. Betancourt, S. Byrne, S. Livingstone, and M. Girolami. The geometric foundations of Hamiltonian Monte Carlo. Bernoulli, 23(4A):2257–2298, 2017.
- [6] N. Bou-Rabee and A. Eberle. Two-scale coupling for preconditioned Hamiltonian Monte Carlo in infinite dimensions. Stochastics and Partial Differential Equations: Analysis and Computations, 9(1):207–242, 2021.
- [7] N. Bou-Rabee and J. M. Sanz-Serna. Randomized Hamiltonian Monte Carlo. The Annals of Applied Probability, 27(4):2159–2194, 2017.
- [8] N. Bou-Rabee and J. M. Sanz-Serna. Geometric integrators and the Hamiltonian Monte Carlo method. Acta Numerica, 27:113–206, 2018.
- [9] S. Brooks, A. Gelman, G. Jones, and X.-L. Meng. Handbook of Markov Chain Monte Carlo. CRC press, 2011.
- [10] Y. Cao, J. Lu, and L. Wang. On explicit -convergence rate estimate for underdamped Langevin dynamics. arXiv preprint arXiv:1908.04746, 2019.
- [11] J. Carrillo, F. Hoffmann, A. Stuart, and U. Vaes. Consensus-based sampling. Studies in Applied Mathematics, 148(3):1069–1140, 2022.
- [12] J. A. Carrillo, Y.-P. Choi, C. Totzeck, and O. Tse. An analytical framework for consensus-based global optimization method. Mathematical Models and Methods in Applied Sciences, 28(06):1037–1066, 2018.
- [13] J. A. Carrillo, M. Fornasier, G. Toscani, and F. Vecil. Particle, kinetic, and hydrodynamic models of swarming. In Mathematical modeling of collective behavior in socio-economic and life sciences, pages 297–336. Springer, 2010.
- [14] M. Chak, N. Kantas, T. Lelièvre, and G. A. Pavliotis. Optimal friction matrix for underdamped Langevin sampling. arXiv preprint arXiv:2112.06844, 2021.
- [15] E. Cleary, A. Garbuno-Inigo, S. Lan, T. Schneider, and A. M. Stuart. Calibrate, emulate, sample. Journal of Computational Physics, 424:109716, 2021.
- [16] S. L. Cotter, G. O. Roberts, A. M. Stuart, and D. White. MCMC methods for functions: modifying old algorithms to make them faster. Statistical Science, 28(3):424–446, 2013.
- [17] P. Del Moral, A. Kurtzmann, and J. Tugaut. On the stability and the uniform propagation of chaos of a class of extended ensemble Kalman–Bucy filters. SIAM Journal on Control and Optimization, 55(1):119–155, 2017.
- [18] Z. Ding and Q. Li. Ensemble Kalman sampler: Mean-field limit and convergence analysis. SIAM Journal on Mathematical Analysis, 53(2):1546–1578, 2021.
- [19] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216–222, 1987.
- [20] A. B. Duncan, N. Nüsken, and G. A. Pavliotis. Using perturbed underdamped Langevin dynamics to efficiently sample from probability distributions. Journal of Statistical Physics, 169(6):1098–1131, 2017.
- [21] A. Eberle, A. Guillin, and R. Zimmer. Couplings and quantitative contraction rates for Langevin dynamics. The Annals of Probability, 47(4):1982–2010, 2019.
- [22] G. Evensen et al. Data assimilation: the ensemble Kalman filter, volume 2. Springer, 2009.
- [23] A. Garbuno-Inigo, F. Hoffmann, W. Li, and A. M. Stuart. Interacting Langevin diffusions: Gradient structure and ensemble Kalman sampler. SIAM Journal on Applied Dynamical Systems, 19(1):412–441, 2020.
- [24] A. Garbuno-Inigo, N. Nüsken, and S. Reich. Affine invariant interacting Langevin dynamics for Bayesian inference. SIAM Journal on Applied Dynamical Systems, 19(3):1633–1658, 2020.
- [25] O. Gonzalez and A. M. Stuart. A first course in continuum mechanics, volume 42. Cambridge University Press, 2008.
- [26] J. Goodman and J. Weare. Ensemble samplers with affine invariance. Communications in Applied Mathematics and Computational Science, 5(1):65–80, 2010.
- [27] R. Graham. Covariant formulation of non-equilibrium statistical thermodynamics. Zeitschrift für Physik B Condensed Matter, 26(4):397–405, 1977.
- [28] E. Haber, F. Lucka, and L. Ruthotto. Never look back - A modified EnKF method and its application to the training of neural networks without back propagation. arXiv preprint arXiv:1805.08034, 2018.
- [29] D. Z. Huang, J. Huang, S. Reich, and A. M. Stuart. Efficient derivative-free Bayesian inference for large-scale inverse problems. arXiv preprint arXiv:2204.04386, 2022.
- [30] D. Z. Huang, T. Schneider, and A. M. Stuart. Unscented Kalman inversion. arXiv preprint arXiv:2102.01580, 2021.
- [31] M. A. Iglesias, K. J. Law, and A. M. Stuart. Ensemble Kalman methods for inverse problems. Inverse Problems, 29(4):045001, 2013.
- [32] P.-E. Jabin and Z. Wang. Mean field limit for stochastic particle systems. In Active Particles, Volume 1, pages 379–402. Springer, 2017.
- [33] D.-Q. Jiang and D. Jiang. Mathematical theory of nonequilibrium steady states: on the frontier of probability and dynamical systems. Springer Science & Business Media, 2004.
- [34] J. Kaipio and E. Somersalo. Statistical and computational inverse problems, volume 160. Springer Science & Business Media, 2006.
- [35] N. B. Kovachki and A. M. Stuart. Ensemble Kalman inversion: a derivative-free technique for machine learning tasks. Inverse Problems, 35(9):095005, 2019.
- [36] S. Lan, T. Bui-Thanh, M. Christie, and M. Girolami. Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian inverse problems. Journal of Computational Physics, 308:81–101, 2016.
- [37] B. Leimkuhler, C. Matthews, and J. Weare. Ensemble preconditioning for Markov chain Monte Carlo simulation. Statistics and Computing, 28(2):277–290, 2018.
- [38] M. Lindsey, J. Weare, and A. Zhang. Ensemble Markov chain Monte Carlo with teleporting walkers. arXiv preprint arXiv:2106.02686, 2021.
- [39] S. Livingstone, M. Betancourt, S. Byrne, and M. Girolami. On the geometric ergodicity of Hamiltonian Monte Carlo. Bernoulli, 25(4A):3109–3138, 2019.
- [40] Y.-A. Ma, T. Chen, and E. Fox. A complete recipe for stochastic gradient MCMC. In Advances in Neural Information Processing Systems, pages 2917–2925, 2015.
- [41] R. M. Neal. An improved acceptance procedure for the hybrid Monte Carlo algorithm. Journal of Computational Physics, 111(1):194–203, 1994.
- [42] N. Nüsken and S. Reich. Note on interacting Langevin diffusions: Gradient structure and ensemble Kalman sampler by Garbuno-Inigo, Hoffmann, Li and Stuart. arXiv preprint arXiv:1908.10890, 2019.
- [43] M. Ottobre and G. Pavliotis. Asymptotic analysis for the generalized Langevin equation. Nonlinearity, 24(5):1629, 2011.
- [44] G. A. Pavliotis. Stochastic processes and applications: diffusion processes, the Fokker-Planck and Langevin equations, volume 60. Springer, 2014.
- [45] S. Reich. A dynamical systems framework for intermittent data assimilation. BIT Numerical Mathematics, 51(1):235–249, 2011.
- [46] H. Risken. Fokker-Planck equation. In The Fokker-Planck Equation, pages 63–95. Springer, 1996.
- [47] G. O. Roberts and J. S. Rosenthal. Optimal scaling for various Metropolis-Hastings algorithms. Statistical Science, 16(4):351–367, 2001.
- [48] J.-M. Sanz-Serna and M.-P. Calvo. Numerical Hamiltonian Problems. Courier Dover Publications, 2018.
- [49] C. Schillings and A. M. Stuart. Analysis of the ensemble Kalman filter for inverse problems. SIAM Journal on Numerical Analysis, 55(3):1264–1290, 2017.
- [50] A.-S. Sznitman. Topics in propagation of chaos. In Ecole d’été de probabilités de Saint-Flour XIX—1989, pages 165–251. Springer, 1991.
- [51] A. Taghvaei, J. De Wiljes, P. G. Mehta, and S. Reich. Kalman filter and its modern extensions for the continuous-time nonlinear filtering problem. Journal of Dynamic Systems, Measurement, and Control, 140(3), 2018.
- [52] C. Villani. Hypocoercivity. arXiv preprint math/0609050, 2006.