Review
Computational statistics
Zheng Zhao
Conditional sampling within generative diffusion models
Abstract
Generative diffusions are a powerful class of Monte Carlo samplers that leverage bridging Markov processes to approximate complex, high-dimensional distributions, such as those found in image processing and language models. Despite their success in these domains, an important open challenge remains: extending these techniques to sample from conditional distributions, as required in, for example, Bayesian inverse problems. In this paper, we present a comprehensive review of existing computational approaches to conditional sampling within generative diffusion models. Specifically, we highlight key methodologies that either utilise the joint distribution, or rely on (pre-trained) marginal distributions with explicit likelihoods, to construct conditional generative samplers.
This article is part of the theme issue “Bayesian inverse problems with generative models”.
keywords:
Generative diffusions, stochastic differential equations, conditional sampling, Bayesian inference
1 Introduction
Consider the conditional probability distribution π(x | y) of a random variable X with condition y. Sampling this distribution is a fundamental question in computational statistics, and a plethora of sampling schemes have been developed, the choice among which depends on what we know about π. As an example, when the density function of π is available (up to a constant), Markov chain Monte Carlo (MCMC, Meyn and Tweedie, 2009) methods are popular and widely used generic algorithms. MCMC algorithms simulate a Markov chain that leaves the target distribution invariant. The drawback is that this often makes the algorithms computationally and statistically inefficient for high-dimensional problems.
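To make the MCMC baseline concrete, here is a minimal random-walk Metropolis sketch for a target density known up to a constant (the unit-Gaussian target, step size, and chain length are our own toy choices, not from the paper):

```python
import math
import random

def log_density(x):
    # Unnormalised log-density of a standard Gaussian target.
    return -0.5 * x * x

def random_walk_metropolis(log_p, x0, n_steps, step=1.0, seed=0):
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.gauss(0.0, 1.0)      # symmetric proposal
        lp_prop = log_p(prop)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

chain = random_walk_metropolis(log_density, 3.0, 20000)
```

The chain leaves the target invariant regardless of the (deliberately poor) initial value, but consecutive samples are correlated, which is exactly the inefficiency alluded to above.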
In this article, we discuss an emerging class of samplers that leverage generative diffusions (see, e.g., Benton et al., 2024; Song et al., 2021), which have empirically worked well for many Bayesian inverse problems. At their heart, generative diffusions aim to find a continuous-time Markov process (e.g., a stochastic differential equation) that bridges the target distribution and a reference measure, so that sampling the target simplifies to sampling the reference and simulating the Markov process. In contrast to traditional samplers such as MCMC, which use the target's density function to build statistically exact samplers, generative diffusions use data to approximate a sampler, akin to normalising flows (Chen et al., 2018; Papamakarios et al., 2021) and flow matching (Lipman et al., 2023). This comes with at least three benefits compared to MCMC: 1) scalability in the problem dimension (after the training time), 2) no need to explicitly know the target density function, and 3) the resulting samplers are embarrassingly differentiable (see a use case in Watson et al., 2022).
However, the generative diffusion framework (for unconditional sampling) is not immediately applicable to conditional sampling, since we do not have the conditional data samples from π(x | y) required to train the generative samplers. This article thus presents a class of generative samplers that utilise data samples from the joint π(x, y), or from the marginal π(x) with an explicit likelihood π(y | x), which are typically more accessible than the conditional samples. In particular, we focus on describing generic methodologies and show how they can inspire new ones, although plenty of generative conditional samplers have been specifically developed within their respective applications, for instance, the guided diffusions in computer vision (Luo et al., 2023; Kawar et al., 2022; Lugmayr et al., 2022).
The article is organised as follows. In the following section, we briefly explain the core of generative diffusions to set up the preliminaries of generative conditional samplers. Then, in Section 3 we show how to approximate the samplers based on data sampled from the joint distribution, called the joint bridging methods. In Section 4 we show another important approach using Feynman–Kac models that leverages data from the marginal when the likelihood model is accessible, followed by a pedagogical example.
Notation
If (X_t) is a forward process, then we abbreviate p_{t | s} as the distribution of X_t conditioned on X_s for t > s. If the process is further correlated with another random variable Y, then we use p_{t, Y} as the joint distribution of (X_t, Y), and p_{t | Y} as the conditional one. We apply this notation analogously for the reverse process (U_t), but exchange p to q, e.g., q_{t | s} for U_t conditioned on U_s. The path measures of (X_t) and (U_t) are denoted by P and Q, respectively, with interchangeable marginal notation P_t = p_t.
2 Generative diffusion sampling
We begin by explaining how generative diffusions are applied to solve unconditional sampling. Let π be the distribution of a random variable that we aim to sample from. The generative diffusions start with a (forward-time) stochastic differential equation (SDE)

dX_t = a(X_t, t) dt + b(t) dB_t, X_0 ∼ π, (1)

that continuously transports π to another reference distribution p_T at time T, where a and b are regular drift and dispersion functions, and B is a Brownian motion. Then, the gist is to find a reverse correspondence of Equation (1) such that if the reversal initialises at the distribution p_T then it ends up with π at time T. Namely, we look for a reversal
dU_t = η(U_t, t) dt + σ(U_t, t) dB_t, U_0 ∼ p_T, (2)

and from now on we use q_t to denote the marginal law of U_t. If the forward SDE in Equation (1) is designed in such a way that p_T is easy to sample (e.g., a Gaussian), then sampling the hard target π reduces to simulating the simpler reversal in Equation (2). (The Brownian motions in Equations (1) and (2) are not necessarily the same. To avoid clutter, we throughout the paper apply the same notation for all Brownian motions when we are not concerned with distinguishing them.)
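As a minimal illustration of this forward transport, the sketch below runs Euler–Maruyama on an Ornstein–Uhlenbeck SDE, dX_t = −X_t dt + √2 dB_t (our own toy choice of drift and dispersion, not the paper's), whose reference distribution is the unit Gaussian; samples started far from the origin end up approximately standard normal:

```python
import math
import random

def forward_ou(x0, T=5.0, n_steps=500, seed=0):
    # Euler–Maruyama simulation of dX_t = -X_t dt + sqrt(2) dB_t,
    # whose stationary (reference) distribution is the unit Gaussian.
    rng = random.Random(seed)
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + (-x) * dt + math.sqrt(2.0 * dt) * rng.gauss(0.0, 1.0)
    return x

# Push many samples from a "target" (here a point mass at 4.0) through
# the forward SDE; the terminal values are nearly unit-Gaussian.
ends = [forward_ou(4.0, seed=s) for s in range(2000)]
```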
There are infinitely many such reversals if we only require that the forward and reversal match their marginals at times 0 and T. But if we also constrain the time evolution to match, namely q_t = p_{T−t} for all t ∈ [0, T], then we arrive at the classical result by Anderson (1982) who explicitly writes the reversal as

dU_t = [−a(U_t, T−t) + b(T−t) b(T−t)ᵀ ∇ log p_{T−t}(U_t)] dt + b(T−t) dB_t, U_0 ∼ p_T, (3)

where p_t here stands for the probability density function of X_t. In other words, Anderson's reversal here means that it solves the same Kolmogorov forward equation (in reverse time) as that of Equation (1).
However, it is known that Anderson's construction comes with two computational challenges. The first challenge lies in the intractability of the score function ∇ log p_t (since the density of X_t is unknown). To handle this, the community has developed a number of numerical techniques to approximate the score. A notable example in this context is denoising score matching (Song et al., 2021). It works by parametrising the score function (with, e.g., a neural network) and then estimating the parameters by optimising a loss function over the expectation of Equation (1) (see also the convergence analysis in De Bortoli et al., 2021, Thm. 1). The second challenge is that it is non-trivial to design a forward SDE that exactly terminates at an easy-to-sample p_T. In practice, we often specify the reference as a unit Gaussian and then choose the associated Langevin dynamics as the forward equation, at the cost of assuming a large enough horizon T.
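The closed-form Gaussian transition of an Ornstein–Uhlenbeck forward process makes denoising score matching easy to check in one dimension. The sketch below (a toy of our own, not the paper's estimator) fits a linear score model s(x) = θx at one fixed time by minimising the denoising objective in closed form, recovering the true marginal score slope (which is −1 for a unit-Gaussian target, since the unit Gaussian is stationary here):

```python
import math
import random

rng = random.Random(1)
t = 0.5
m = math.exp(-t)              # conditional mean factor of the OU transition
v = 1.0 - math.exp(-2.0 * t)  # conditional variance of the OU transition

# Draw (x0, xt) pairs from the forward process started at a unit-Gaussian target.
pairs = []
for _ in range(20000):
    x0 = rng.gauss(0.0, 1.0)
    xt = m * x0 + math.sqrt(v) * rng.gauss(0.0, 1.0)
    pairs.append((x0, xt))

# Denoising score matching with a linear score model s(x) = theta * x:
# minimise E[(s(xt) - grad_xt log p(xt | x0))^2], where
# grad_xt log p(xt | x0) = -(xt - m * x0) / v.
# For this linear parametrisation the minimiser is available in closed form.
num = sum(xt * (-(xt - m * x0) / v) for x0, xt in pairs)
den = sum(xt * xt for _, xt in pairs)
theta = num / den  # estimates the marginal score slope of p_t, here -1
```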
The so-called dynamic Schrödinger bridge (DSB) is gaining traction as an alternative to Anderson's construction (De Bortoli et al., 2021). In this setting, Equation (1) serves as a reference process instead of the forward process. Suppose that we aim to bridge between π and any specified reference measure π_ref. The DSB constructs Equation (1) (and its reversal) via the unique solution

P^DSB = argmin_{P ∈ D(π, π_ref)} KL(P ∥ R), (4)

where R is the path measure of the reference process, and D(π, π_ref) stands for the collection of all SDE solutions that bridge π and π_ref. Namely, DSB searches for diffusion processes that exactly bridge π and the given π_ref, and then DSB chooses the unique one that is closest to the reference process in the sense of Kullback–Leibler divergence. The solution of Equation (4) is typically not in closed form, and it has to be computed numerically. Compared to Anderson's construction, DSB usually comes with smaller sampling error, since it does not make asymptotic assumptions on the horizon T. However, the current approaches (see, e.g., methods in De Bortoli et al., 2021; Shi et al., 2023; Chen et al., 2022) for solving Equation (4) are computationally more demanding than the commonly used denoising score matching, leading to potentially higher training cost.
3 Conditional sampling with joint samples
In the previous section we have seen the gist of generative sampling as well as two constructions of the generative samplers. However, they cannot be directly used to sample the conditional distribution π(x | y), since they need samples of π(· | y) to estimate Equations (2) and (4). Recently, Vargas et al. (2023) and Phillips et al. (2024) have developed generative samplers that do not use samples of the target, but when applied to conditional sampling, we would have to re-train their models every time the condition changes. In this section we show how to apply the two constructions (i.e., Anderson and DSB) to train conditional samplers with samples from the joint distribution π(x, y), which is a reasonable assumption for many applications (e.g., inpainting). We start with a heuristic idea, called Doob's bridging, and then we show how to extend it to a generic framework that covers many other conditional generative samplers.
3.1 Doob’s bridging
Recall that we aim to sample π(x | y), and assume that we can sample the joint π(x, y). Denote by q_{t | 0} the conditional distribution of U_t conditioned on U_0 for t > 0. The idea is now to construct the reversal in such a way that

q_{T | 0}(x | y) = π(x | y) (5)

for all y. If we in addition assume that the dimensions of x and y match, then an immediate construction that satisfies the requirement above is

dU_t = η(U_t, t) dt + σ(U_t, t) dB_t, U_0 = y, (6)

where the terminal U_T and initial U_0 jointly follow π(x, y) by marginalising out the intermediate path. Thus, sampling π(x | y) simplifies to simulating the SDE in Equation (6) with initial value y. Directly finding such an SDE is hard, but we can leverage Anderson's construction to form it as the reversal of a forward equation
dX_t = a(X_t, t) dt + b(t) dB_t, (X_0, X_T) ∼ π(x, y). (7)

Now we have a few handy methods to explicitly construct the forward equation above. One notable example is via Doob's h-transform (Rogers and Williams, 2000) by choosing X_T = y and

a(x_t, t) = f(x_t, t) + b(t) b(t)ᵀ ∇ log h(x_t, t; y), (8)

where f is the drift function of another (arbitrary) reference SDE

dZ_t = f(Z_t, t) dt + b(t) dB_t,

and the h-function h(x, t; y) is the conditional density of Z_T = y with condition Z_t = x at time t. With this choice, we immediately have X_T = y, and thus the desired Equation (5) is satisfied. However, we have to remark a caveat that the process in Equation (7) marginally is not a Markov process, since its SDE now invokes y, which is correlated with the process across time. But on the other hand, this problem can be solved by an extension (X_t, Y) with Y = X_T, such that the pair is jointly Markov. We can then indeed learn the reversal in Equation (6) by Anderson's construction. This approach was exploited by Somnath et al. (2023) who coined the term "aligned Schrödinger bridge". But we note that the approach does not solve the DSB problem in Equation (4): in DSB we aim to find the optimal coupling under a path constraint, but in this approach, the coupling is already fixed by the given π(x, y).
The generative conditional sampling with Doob’s bridging is summarised as follows.
Algorithm 3.1 (Doob’s bridging conditional sampling).
The Doob’s bridging conditional sampling consists in training and sampling a generative diffusion as follows.
- Training. Simulate the forward Equation (7) using samples from the joint π(x, y), and estimate its reversal in Equation (6) (e.g., by Anderson's construction).
- Sampling. After the reversal is estimated, for any given condition y, simulate Equation (6) with initial value U_0 = y. Then it follows that U_T ∼ π(· | y).
From the algorithm above we see that the approach is as straightforward as the standard generative sampling for unconditional distributions, with an additional involvement of y in the forward and reverse equations; it adds almost no computational cost. Moreover, the algorithm does not assume a large horizon T, unlike many other generative samplers. However, the algorithm suffers from a few non-trivial problems due to the use of Doob's transform to exactly pin two points. First, Doob's h-function is mostly not available in closed form when the reference process is nonlinear. Although it is still possible to simulate Equation (7) without knowing h explicitly (Schauer et al., 2017), and to approximate h (Bågmark et al., 2022; Baker et al., 2024), this hinders the use of denoising score matching, which is a computationally efficient estimator, as the conditional density of the forward equation is consequently intractable. Second, the h-function at time T is not defined, and moreover, the forward drift is rather stiff around that time. This induces numerical errors and instabilities that are hard to eliminate. Finally, the approach assumes that the dimensions of x and y are the same, which limits the range of applications. Recently, Denker et al. (2024) generalised the method by exploiting the likelihood in such a way that the two points do not have to be exactly pinned, which can potentially solve the problems mentioned above.
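For a Brownian reference process, the h-function is a Gaussian density and Doob's drift is explicit, (y − x)/(T − t). The toy sketch below (our own choice of reference, for illustration only) simulates this bridge and stops one step before T, where the drift blows up, illustrating the stiffness discussed above:

```python
import math
import random

def doob_bridge(x0, y, T=1.0, n_steps=1000, seed=0):
    # Doob's h-transform of a Brownian motion pinned at y: the h-function is
    # the Gaussian transition density, giving the bridge drift (y - x)/(T - t).
    rng = random.Random(seed)
    dt = T / n_steps
    x, t = x0, 0.0
    for _ in range(n_steps - 1):  # stop one step early: the drift blows up at t = T
        x = x + (y - x) / (T - t) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x

# All paths started at 0 end (one step before T) very close to the pin y = 2.
ends = [doob_bridge(0.0, 2.0, seed=s) for s in range(200)]
```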
3.2 Generalised joint bridging
Recall again that we aim to find a pair of forward and reversal SDEs such that the reversal satisfies Equation (5). In the previous section, we exploited Doob's transform for this purpose, but the approach comes with several problems due to the use of Doob's h-transform for exactly pinning the data random variables x and y. In this section, we present a generalised forward-reverse SDE that does not require exact pinning.

The core of the idea is to introduce an auxiliary variable Y in the joint (U_t, Y), such that it facilitates sampling π(x | y), and such that the marginal at T satisfies Equation (5). More precisely, the reversal that we desire is

dU_t = η(U_t, y, t) dt + σ(U_t, t) dB_t, U_0 ∼ q_ref(· | y), (9)

starting at any reference q_ref(· | y), so that q_{T | Y}(· | y) = π(· | y). With this reversal, if we can sample the reference, then U_T consequently follows the target through the SDE. Define the forward equation that is paired with the reversal by

dX_t = a(X_t, y, t) dt + b(t) dB_t, (X_0, Y) ∼ π(x, y), (10)

where the initial distribution is the joint π(x, y). Doob's bridging is a special case, in the way that we have coerced the reference to be the point mass at y. Also note that we can extend Equations (9) and (10) to Markov processes with (X_t, Y) in the same way as in Doob's bridging. To ease later discussions, let us first summarise conditional sampling by the joint bridging method as follows.
Algorithm 3.2 (Joint bridging conditional sampling).
The joint bridging conditional sampling consists in training and sampling a generative diffusion as follows.
- Training. Estimate the reversal in Equation (9) of the forward Equation (10), using samples from the joint π(x, y).
- Sampling. For any given condition y, sample the reference U_0 ∼ q_ref(· | y) and simulate Equation (9). Then it follows that U_T ∼ π(· | y).
The remaining question is how to identify the generative sampler to use in Algorithm 3.2. Looking back at Equations (1) and (2), we see that the algorithm is essentially an unconditional generative sampler that operates on the joint space of x and y. Hence, we can straightforwardly choose either Anderson's construction or DSB to obtain the required reversal in Equation (9). With Anderson's approach, the reversal's drift function becomes

η(u, y, t) = −a(u, y, T−t) + b(T−t) b(T−t)ᵀ ∇_u log p_{T−t}(u | y), (11)

where we see that the marginal score in Equation (3) now turns into a conditional one. This conditional score function is the major hurdle of the construction, as the conditional score is often intractable. Song et al. (2021) utilise the decomposition ∇_x log p_t(x | y) = ∇_x log p_t(x) + ∇_x log p_t(y | x), where the two terms on the right-hand side can be more accessible. Take image classification for example, where x and y stand for the image and label, respectively. The generative score is learnt via standard score matching, while the likelihood score can be estimated by training an image classifier with categorical labels for all times t. However, this is highly application dependent, and developing a generic and efficient estimator for the conditional score remains an active research question (see, e.g., approximations in Chung et al., 2023; Song et al., 2023).
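The decomposition can be exercised on a conjugate toy model where both scores are explicit. Below, a prior N(0, 1) and likelihood y | x ∼ N(x, 1) (our own choices, not from the paper) give the posterior N(y/2, 1/2); an unadjusted Langevin iteration driven by the summed scores samples it:

```python
import math
import random

# Conditional score via grad_x log p(x | y) = grad_x log p(x) + grad_x log p(y | x),
# for a tractable pair: prior x ~ N(0, 1), likelihood y | x ~ N(x, 1),
# so the posterior is N(y/2, 1/2).
def cond_score(x, y):
    prior_score = -x       # grad_x log N(x; 0, 1)
    lik_score = y - x      # grad_x log N(y; x, 1)
    return prior_score + lik_score

def langevin(y, n_steps=20000, step=0.05, seed=0):
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n_steps):
        x = x + step * cond_score(x, y) + math.sqrt(2 * step) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

samples = langevin(y=2.0)  # targets (approximately) N(1, 0.5)
```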
Another problem of using Anderson's construction is again the common assumption of a large horizon T, as we need to sample the conditional reference p_T(· | y). For example, Luo et al. (2023) choose the forward equation such that p_T(· | y) is approximately a stationary Gaussian. Under this choice, the resulting forward equation is akin to that of Doob's bridging, but instead pins between x and a Gaussian-noised y.
As an alternative to Anderson's construction, we can employ DSB to search for a pair of Equations (9) and (10) that exactly bridges between π(x, y) and a given reference, within a finite horizon T. Formally, we are looking for a process

dX_t = a(X_t, y, t) dt + b(t) dB_t, (X_0, Y) ∼ π(x, y), X_T | y ∼ π_ref(· | y), (12)

whose path measure P minimises KL(P ∥ R), where R is the measure of a reference process

dZ_t = f(Z_t, t) dt + b(t) dB_t. (13)

Due to the fact that the reference π_ref is now allowed to be arbitrary, we can choose one such that it is a reasonable approximation to π(· | y) that is easy to sample from (e.g., a Laplace approximation).
Currently there are two (asymptotic) numerical approaches to solve for the (conditional) DSB in the two equations above. The approach developed by De Bortoli et al. (2021) aims to find a sequence {P^i} that converges to the DSB solution as i → ∞ via the half-bridges

P^{2i+1} = argmin_{P : P_T = π_ref} KL(P ∥ P^{2i}), (14)
P^{2i+2} = argmin_{P : P_0 = π} KL(P ∥ P^{2i+1}), (15)

starting at P^0 = R. The solution of each half-bridge can be approximated by conventional parameter-estimation methods in SDEs. As an example, in Equation (15), suppose that P^{2i+1} is obtained and that we can sample from it; then we can approximate P^{2i+2} by parametrising the SDE in Equation (12) and estimating its parameters via drift matching against P^{2i+1}. However, a downside of this approach is that the solution does not exactly bridge π and π_ref at any finite iteration despite the asymptotic convergence: it is either the initial or the terminal marginal that is preserved, not both. This problem is solved by Shi et al. (2023). They utilise the decomposition P = π_{0,T} R_{| 0, T}, where π_{0,T} solves the static optimal transport between π and π_ref, and R_{| 0, T} stands for the measure of the reference process conditioned at times 0 and T (e.g., by Doob's h-transform). The resulting algorithm then alternates between
P^{2i+1} = proj_Markov(P^{2i}), P^{2i+2} = proj_reciprocal(P^{2i+1}), (16)

starting at P^0 = π^0_{0,T} R_{| 0, T}, where π^0_{0,T} is any coupling of π and π_ref, for instance, the trivial product measure π ⊗ π_ref. In the iteration above, the Markovian projection approximates its argument as a Markov process, since the previous iterate is not necessarily Markovian. Moreover, Shi et al. (2023) choose particular projections such that the time-marginal distributions match, therefore preserving the marginals π and π_ref at every iteration. In essence, this algorithm builds a sequence of couplings that converges to the Schrödinger bridge coupling, in conjunction with Doob's bridging. Although the method has to simulate Doob's h-transform, which can be hard for complex reference processes, the cost is marginal in this context.
Compared to the iteration in Equation (15), this projection-bridging method is particularly useful in the generative diffusion scenario, since we mainly care about the marginal distribution and not so much about the actual interpolation path. That said, at any iteration i, the solution P^i is already a valid bridge between π and π_ref at our disposal for generative sampling, even though it is not yet a solution to the Schrödinger bridge.
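The alternating structure of the half-bridge iterations has a well-known static, discrete analogue: Sinkhorn scaling, which alternately rescales a reference kernel so that the resulting coupling matches the two prescribed marginals. The sketch below (a static illustration of the alternating-projection idea only, not the dynamic SDE algorithm, with marginals and kernel of our own choosing) shows both marginals being matched:

```python
import math

# Marginals to bridge and a reference kernel K (discrete analogue of R).
mu = [0.5, 0.5]
nu = [0.25, 0.75]
K = [[math.exp(-abs(i - j)) for j in range(2)] for i in range(2)]

# Alternately rescale rows (to match mu) and columns (to match nu).
u, v = [1.0, 1.0], [1.0, 1.0]
for _ in range(200):
    u = [mu[i] / sum(K[i][j] * v[j] for j in range(2)) for i in range(2)]
    v = [nu[j] / sum(K[i][j] * u[i] for i in range(2)) for j in range(2)]

coupling = [[u[i] * K[i][j] * v[j] for j in range(2)] for i in range(2)]
row = [sum(coupling[i]) for i in range(2)]                  # should equal mu
col = [sum(coupling[i][j] for i in range(2)) for j in range(2)]  # should equal nu
```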
3.3 Forward-backward path bridging
The previous joint bridging method (including Doob's bridging as a special case) aims to find a reversal such that the conditional terminal is π(· | y). In essence, it views sampling the target as simulating a reverse SDE. In this section, we show another useful insight, by viewing sampling the target as simulating a stochastic filtering distribution (Corenflos et al., 2024; Dou and Song, 2024; Trippe et al., 2023). Formally, this approach is based on a forward equation over the pair

d(X_t, Y_t) = a(X_t, Y_t, t) dt + b(t) dB_t, (X_0, Y_0) ∼ π(x, y), (17)

and the core is the identity

π(x | y) = ∫ p(x | y_{[0,T]}) p(y_{[0,T]} | y_0 = y) dy_{[0,T]}, (18)

where the integrand p(x | y_{[0,T]}) is the filtering distribution (in the reverse-time direction) with initial value y_T and measurements y_{[0,T]}. The identity describes an ancestral sampling: for any given condition y, we first sample a path y_{[0,T]} and then use it to solve the filtering problem of the reversal, hence the name "forward-backward path bridging". However, since these two steps cannot be computed exactly, we show how to approximate them in practice in the following.
The distribution p(y_{[0,T]} | y_0 = y) is in general not tractable. In Dou and Song (2024) and Trippe et al. (2023), assumptions are made so as to facilitate the approximation

p(y_{[0,T]} | y_0 = y) ≈ law of the Y-part of Equation (17) initialised at Y_0 = y. (19)

More specifically, they assume that 1) the SDE in Equation (17) is separable (i.e., the equation for X_t does not depend on Y_t, and that for Y_t does not depend on X_t), so that they can independently simulate the Y-part leaving away the part that depends on X_t, and that 2) the SDE is stationary enough for large T such that p_T ≈ p_ref. Furthermore, the (continuous-time) filtering distribution is intractable too. In practice we apply sequential Monte Carlo (SMC, Chopin and Papaspiliopoulos, 2020) to approximately sample from it. This induces errors due to 3) using a finite number of particles. Since continuous-time particle filtering (Bain and Crisan, 2009) is particularly hard in the generative context, we often have to 4) discretise the SDE beforehand, which again introduces errors, though simulating SDEs without discretisation is possible (Beskos and Roberts, 2005; Blanchet and Zhang, 2020). These four points constitute the main sources of errors of this method.
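The filtering step can be illustrated with a generic bootstrap particle filter on a toy linear-Gaussian state-space model (a stand-in of our own; the paper's setting instead filters the discretised reversal of Equation (17)):

```python
import math
import random

# Bootstrap particle filter for a toy model of our own:
# state x_k = 0.9 x_{k-1} + N(0, 0.5^2), observation y_k = x_k + N(0, 1).
def bootstrap_pf(ys, n_particles=2000, seed=0):
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # propagate particles through the transition
        xs = [0.9 * x + rng.gauss(0.0, 0.5) for x in xs]
        # weight by the observation likelihood
        ws = [math.exp(-0.5 * (y - x) ** 2) for x in xs]
        s = sum(ws)
        ws = [w / s for w in ws]
        means.append(sum(w * x for w, x in zip(ws, xs)))
        # multinomial resampling
        xs = rng.choices(xs, weights=ws, k=n_particles)
    return means

means = bootstrap_pf([1.0, 1.2, 0.8, 1.1, 0.9])
```

The weight-resample-propagate loop is the same mechanism used for the reverse-time filtering above; only the transition and likelihood change.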
The work in Corenflos et al. (2024) eliminates all the errors except for that of the SDE discretisation. The idea is based on extending the identity in Equation (18) to an invariant Markov chain Monte Carlo (MCMC) kernel

K(x, dx′) = ∫ p(dx′ | y_{[0,T]}) p(dy_{[0,T]} | x_0 = x, y_0 = y), (20)

where we see that the kernel is indeed π(· | y)-invariant: π(· | y) K = π(· | y). Moreover, we can replace the filtering distribution with another MCMC kernel by, for instance, gradient-based Metropolis–Hastings (Titsias and Papaspiliopoulos, 2018), or, more suitable in this context, the conditional sequential Monte Carlo (CSMC, Andrieu et al., 2010). The algorithm finally works as a Metropolis-within-Gibbs sampler as follows.
Algorithm 3.3 (Gibbs filtering of conditional generative diffusion).
Let X^0 be any initial sample, and let Q be the law of the reversal of Equation (17). For i = 1, 2, …, do
- 1. simulate the forward path of Equation (17) starting at (X^{i−1}, y),
- 2. set the reverse path by reversing the simulated forward path in time,
- 3. simulate X^i from the filtering distribution of the reversal using CSMC.
Then the Markov chain {X^i} leaves π(· | y) invariant for any i.
The Gibbs filtering-based conditional sampler above has significantly fewer errors compared to Trippe et al. (2023) and Dou and Song (2024), as empirically shown in Corenflos et al. (2024). In a sense, the Gibbs sampler has transformed the two major errors, due to the approximation p_T ≈ p_ref and to using a finite number of particles, into increased autocorrelation in Algorithm 3.3. When the number of particles is small, the samples are more correlated, but still statistically exact nonetheless. Moreover, the sampler does not assume separability of the diffusion model, meaning that in the algorithm we are free to use a reversal construction of any kind (e.g., Anderson or DSB). On the other hand, the approach of Trippe et al. (2023) and Dou and Song (2024) does not instantly apply to DSBs, which are rarely separable. The cost we pay for using the Gibbs sampler is mainly the implementation and statistical efficiency of the filtering MCMC in step 3 of Algorithm 3.3. It also remains an open question how to simulate the filtering MCMC in continuous time without discretising the SDE.
Comparing the filtering-based approaches to the joint bridging in Section 3.2.
These two methods mainly differ in the utilisation of their reversals, and they shine in different applications. More precisely, the joint bridging method aims to specifically construct a pair of forward and backward models that serves the conditional sampling, whereas the filtering-based methods work on any given forward-backward pair. This grants the filtering methods an upper hand in some applications as training-free samplers. Take image inpainting for example, where y and x stand for the observed and to-paint parts, respectively. The joint bridging method has to specifically train a pair of forward and backward models for this inpainting problem. On the other hand, the filtering-based methods can take any pre-trained unconditional diffusion model for the images (see Section 2) and run plug-and-play without any training (see, e.g., Corenflos et al., 2024, Sec. 4.3). This is because in the inpainting application the unconditional diffusion model completely factorises over x and y, and Dou and Song (2024) recently show that this can further generalise to linear inverse problems.

However, for applications where the training-free feature cannot be utilised, the joint bridging method is easier to implement and usually runs with lower computational cost. The joint bridging method does not keep a path of Y_t, and thus it does not need to solve a (reverse) filtering problem, which is significantly harder than simulating a reverse SDE. Furthermore, the joint bridging method immediately allows for a discrete condition y, while this is yet an interesting open question for the filtering-based methods. The discrete extension of generative diffusions in Campbell et al. (2022); Benton et al. (2024) may provide insights that enable such an extension.
4 Conditional sampling with explicit likelihood
In Section 3 we investigated a class of methods that leverage information from the joint distribution to build conditional samplers for π(x | y). However, for some applications we may only be able to sample the marginal π(x). For instance, in image classification we usually have a vast amount of images at hand, but pairing them with labels requires substantially more effort. Thus, in this section we show how to construct conditional samplers within prior generative diffusions, based on exploiting the additional information from the likelihood π(y | x). In particular, we focus on methods based on Feynman–Kac models with sequential Monte Carlo.
4.1 Conditional generative Feynman–Kac models
Let us assume that we are given a pair of forward-backward diffusion models (using any reversal construction) that targets π(x) as in Equations (1) and (2). For simplicity, let us change to work with the models at discrete times via

q(x_{0:N}) = q_0(x_0) ∏_{k=1}^{N} q_k(x_k | x_{k−1}),

where x_k denotes the value of the reversal at time t_k, 0 = t_0 < t_1 < ⋯ < t_N = T, and q_N ≈ π. Since we can sample q and evaluate the likelihood π(y | x), it is natural to think of using importance sampling to sample π(x | y). That is, if x_{0:N} ∼ q, then x_N is approximately a sample of π(· | y) with weight π(y | x_N). However, the weights will barely be informative, as the prior samples are very unlikely samples from the posterior distribution in generative applications. Hence, in practice we generalise importance sampling within Feynman–Kac models (Chopin and Papaspiliopoulos, 2020), allowing us to effectively sample the target with sequential Monte Carlo (SMC) samplers. Formally, we define the generative Feynman–Kac model by

Q_N(x_{0:N}) = (1 / ℓ_N(y)) G_0(x_0) ∏_{k=1}^{N} M_k(x_k | x_{k−1}) G_k(x_k, x_{k−1}), (21)

where {M_k} and {G_k} are Markov kernels and potential functions, respectively, and ℓ_N(y) is the marginal likelihood. Instead of applying naive importance sampling, we apply SMC to simulate the model and, crucially, the target π(x | y), as in the following algorithm.
Algorithm 4.4 (Generative SMC sampler).
The sequential Monte Carlo sampler simulates the Feynman–Kac model in Equation (21) as follows. First, draw x_0^j ∼ q_0 and set w_0^j ∝ G_0(x_0^j) for j = 1, 2, …, J. Then, for k = 1, 2, …, N do
- 1. Resample {(x_{k−1}^j, w_{k−1}^j)}_{j=1}^J if needed.
- 2. Draw x_k^j ∼ M_k(· | x_{k−1}^j) and compute w_k^j ∝ w_{k−1}^j G_k(x_k^j, x_{k−1}^j) for j = 1, 2, …, J.
- 3. Normalise the weights {w_k^j}_{j=1}^J.
At the final step N, the tuples {(x_N^j, w_N^j)}_{j=1}^J are weighted samples of π(x | y).
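In the shape of Algorithm 4.4, the sketch below runs an annealed SMC sampler on a conjugate toy model (our own choices, not the paper's: prior N(0, 1), likelihood y | x ∼ N(x, 1), tempered likelihood increments as the potentials G_k, and a Metropolis move as the kernel M_k), whose target posterior is N(y/2, 1/2):

```python
import math
import random

def annealed_smc(y, n_particles=2000, n_temps=20, seed=0):
    rng = random.Random(seed)
    def log_lik(x):
        return -0.5 * (y - x) ** 2
    def log_prior(x):
        return -0.5 * x * x
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]  # samples of the prior
    betas = [k / n_temps for k in range(n_temps + 1)]
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # potentials: incremental importance weights between temperatures
        ws = [math.exp((b - b_prev) * log_lik(x)) for x in xs]
        s = sum(ws)
        ws = [w / s for w in ws]
        # resample
        xs = rng.choices(xs, weights=ws, k=n_particles)
        # Markov kernel: one Metropolis step targeting prior * lik^b
        new = []
        for x in xs:
            prop = x + 0.5 * rng.gauss(0.0, 1.0)
            la = (log_prior(prop) + b * log_lik(prop)
                  - log_prior(x) - b * log_lik(x))
            new.append(prop if math.log(rng.random()) < la else x)
        xs = new
    return xs

samples = annealed_smc(y=2.0)  # approximately N(1, 0.5)
```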
Given only the marginal constraints (i.e., at k = 0 and N), the generative Feynman–Kac model is not unique. A trivial example is letting M_k = q_k, G_0 ≡ 1, G_k ≡ 1 for k < N, and G_N(x_N) = π(y | x_N), whereby Algorithm 4.4 recovers the naive importance sampling, which is not useful in reality. The purpose is thus to design {M_k} and {G_k} so that the model continuously anneals to π(x | y) as k goes from 0 to N, while keeping the importance samples in Algorithm 4.4 effective. Wu et al. (2023) and Janati et al. (2024) show a particularly useful system of Markov kernels and potentials by setting

G_0(x_0) = ĝ_0(y | x_0), G_k(x_k, x_{k−1}) = [ĝ_k(y | x_k) q_k(x_k | x_{k−1})] / [ĝ_{k−1}(y | x_{k−1}) M_k(x_k | x_{k−1})], (22)

where ĝ_k(y | x_k) approximates the likelihood of y given x_k, and we define ĝ_N = π(y | ·), so that it reduces to the target's likelihood. Indeed, if we substitute Equation (22) back into Equation (21), the marginal constraint is satisfied. This choice has at least two advantages that make it valuable in the generative diffusion context. First, the construction is locally optimal, in the sense that if we also fix M_k(x_k | x_{k−1}) ∝ ĝ_k(y | x_k) q_k(x_k | x_{k−1}), then G_k ≡ 1, which is exactly the Markov transition of the conditional reversal in Equation (11). Consequently, all samples in Algorithm 4.4 are equally weighted, and hence the SMC sampler is perfect. However, simulating the optimal Markov kernel is hard, as the exact ĝ_k is intractable (except when k = N). This gives rise to another advantage: even if we replace the exact ĝ_k by any approximation for k < N, the Feynman–Kac model remains valid. Namely, the weighted samples of Algorithm 4.4 converge to π(x | y) in distribution as J → ∞ even if ĝ_k is approximate. However, the quality of the approximation significantly impacts the effective sample size of the algorithm, and recently there have been numerous approximations proposed.
Following Equation (22), Wu et al. (2023) choose to approximate ĝ_k by linearisation, and let the Markov kernel be that of a Langevin dynamics that approximately leaves the annealed target invariant. More specifically, they choose

M_k(x_k | x_{k−1}) = N(x_k | x_{k−1} + δ ∇ log(ĝ_{k−1}(y | x_{k−1}) q_{k−1}(x_{k−1})), 2 δ), (23)

where ĝ_{k−1}(y | x_{k−1}) ≈ π(y | x̂_N(x_{k−1})), with x̂_N(x_{k−1}) standing for any approximate sample of the denoised terminal given x_{k−1}, and δ is the Langevin step size. Originally, Wu et al. (2023) use x̂_N as a sample of the reversal at the terminal starting at x_{k−1}, which often experiences high variance, but it can be improved by using x̂_N as the conditional mean via Tweedie's formula (Song et al., 2023). See also Cardoso et al. (2024) for an alternative construction of the proposal. While Wu et al. (2023)'s construction of the Feynman–Kac model is empirically successful in a number of applications, its main problem lies in its demanding computation. In particular, the proposal asks to differentiate through x̂_N, which usually contains a denoising neural network.
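Tweedie's formula can be verified numerically in an Ornstein–Uhlenbeck toy case (all values below are our own choices), where the transition is x_t | x_0 ∼ N(m x_0, v) and, for a unit-Gaussian target, the marginal score is simply −x:

```python
import math
import random

# Tweedie's formula for the transition x_t | x_0 ~ N(m x_0, v):
# E[x_0 | x_t] = (x_t + v * score_t(x_t)) / m.
rng = random.Random(0)
t = 0.5
m = math.exp(-t)
v = 1.0 - math.exp(-2.0 * t)

def tweedie(xt):
    score = -xt                  # exact marginal score in this toy case (p_t = N(0, 1))
    return (xt + v * score) / m  # simplifies to m * xt here

# Monte Carlo check: average x0 over pairs whose xt lands near a probe point.
probe, tol = 0.8, 0.05
hits = []
for _ in range(200000):
    x0 = rng.gauss(0.0, 1.0)
    xt = m * x0 + math.sqrt(v) * rng.gauss(0.0, 1.0)
    if abs(xt - probe) < tol:
        hits.append(x0)
cond_mean = sum(hits) / len(hits)  # empirical E[x0 | xt ≈ probe]
```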
Janati et al. (2024) follow up the construction in Equation (22) and propose an efficient sampler based on partitioning the Feynman–Kac model into several contiguous sub-models, where each sub-model targets its next. As an example, with two blocks, the model over 0, 1, …, N is divided into 0, …, m and m, …, N for some m, where the first sub-model starts at the reference and targets the intermediate marginal at m, whereas the second starts at that marginal and targets π(x | y). Although the division looks trivial at first glance, the catch is that within each sub-model we can form more efficient approximations to the Feynman–Kac Markov transition. More precisely, recall that in Equation (23), ĝ_k aims to approximate the likelihood of y given x_k, the accuracy of which depends on the distance between k and the terminal. Inside the first sub-model, the terminal is m instead of N, and this distance is mostly smaller. In addition, instead of running the SMC in Algorithm 4.4, they also propose a variational approximation to the Markov transition of the Feynman–Kac model in order to directly sample the model. Note that although Janati et al. (2024) set roots on a linear Gaussian likelihood, their method can straightforwardly generalise to other likelihoods as well, with suitable Gaussian quadratures applied.
A computationally simpler bootstrap construction of the generative Feynman–Kac model is

M_k(x_k | x_{k−1}) = q_k(x_k | x_{k−1}), G_k(x_k, x_{k−1}) = ĝ_k(y | x_k) / ĝ_{k−1}(y | x_{k−1}), (24)

where we simplify the Markov kernel to that of the unconditional reversal, which is easier compared to Equation (23). Additionally, the approximate ĝ_k is also simplified to the target likelihood evaluated at x_k with a suitably scaled variance, justified by the heuristics in Janati et al. (2024, Sec. 4.1). However, since the Markov proposal is farther from the optimal one, the resulting algorithm is statistically less efficient. When the problem of interest is not difficult, this bootstrap model may indeed be useful in practice (see our example in Section 5).
Comparing the Feynman–Kac-based approaches to the joint bridging in Section 3.2.
As we have already pointed out, the Feynman–Kac model using Equation (22) indeed recovers Anderson's conditional SDE in Equation (11). The difference is that the Feynman–Kac model describes the evolution of the probability distribution (using an ensemble of weighted samples) between the reference and target, while the conditional SDE formulates the individual path. Essentially, they resemble the Eulerian and Lagrangian specifications, respectively, of a flow field (Santambrogio, 2015). This gives the Feynman–Kac formalism an advantage in that it does not need to precisely compute the conditional score in Equation (11), which is the main blocker of the conditional SDE approach. In other words, Algorithm 4.4 transforms the errors in approximating the conditional score into the effective sample size of the weighted samples. However, the Feynman–Kac model consequently pays additional computational costs, as it needs to store and process an ensemble of samples to describe the distribution. Another advantage of the Feynman–Kac model is that it does not need to train a dedicated conditional model, leveraging an evaluable likelihood and a pre-trained reversal of π(x). Although the filtering-based method in Section 3.3 can also work on pre-trained generative models, its range of applications is more limited than that of Feynman–Kac. It would be an interesting future question to ask whether the method can extend to unknown likelihoods, for instance, by applying the methods in Sisson et al. (2007); Beaumont et al. (2009); Papamakarios et al. (2019); Middleton et al. (2019).
5 Pedagogical example


In this section we present an example to illustrate 1) the joint bridging method with the Schrödinger bridge construction in Equation (12) and 2) the Feynman–Kac method using the construction in Equation (24). Importantly, we release our implementations, written with pedagogical intent (see the code at https://github.com/spdes/gdcs, implemented in JAX).
The conditional sampling task we test is a two-dimensional distribution given as follows. Let be a two-dimensional Gaussian mixture with covariances and , and define the likelihood as . For any given , we compute the posterior distribution by brute force via trapezoidal integration to benchmark the conditional samplers.
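As a hedged sketch of this brute-force benchmark, the following NumPy snippet evaluates a posterior on a grid and normalises it with the two-dimensional trapezoidal rule. The mixture means, the observation model, and the grid limits are illustrative placeholders; the exact parameters are given in the released code.

```python
import numpy as np

# NumPy renamed trapz to trapezoid in 2.0; pick whichever is available.
trapz = getattr(np, "trapezoid", getattr(np, "trapz", None))

def log_prior(x):
    # Equal-weight mixture of two isotropic 2D Gaussians (illustrative values).
    c1 = -0.5 * np.sum((x - np.array([-2.0, 0.0])) ** 2, axis=-1)
    c2 = -0.5 * np.sum((x - np.array([2.0, 0.0])) ** 2, axis=-1)
    return np.logaddexp(c1, c2) - np.log(2.0) - np.log(2.0 * np.pi)

def log_lik(y, x, sigma=0.5):
    # Hypothetical scalar observation model y | x ~ N(x_1 + x_2, sigma^2).
    return -0.5 * ((y - x.sum(axis=-1)) / sigma) ** 2

def posterior_on_grid(y, lim=6.0, n=400):
    """Unnormalised posterior on an n-by-n grid, normalised by the
    trapezoidal rule applied along each axis in turn."""
    g = np.linspace(-lim, lim, n)
    xx, yy = np.meshgrid(g, g, indexing="ij")
    pts = np.stack([xx, yy], axis=-1)           # (n, n, 2) grid points
    logp = log_prior(pts) + log_lik(y, pts)
    p = np.exp(logp - logp.max())               # stabilise before exponentiating
    z = trapz(trapz(p, g, axis=1), g, axis=0)   # 2D trapezoidal normaliser
    return g, p / z

grid, post = posterior_on_grid(y=0.0)
```

Such a grid-based reference is only feasible in low dimensions, which is precisely why a two-dimensional example is convenient for benchmarking.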
To apply the joint bridging method with the Schrödinger bridge construction (which we refer to as CDSB), the first step is to identify the forward Equation (12) and its reversal, which constitutes the training part of the algorithm. We choose a Brownian motion as the reference process and apply the numerical method in Equation (15) to solve for the forward and the reversal with 15 iterations. After training, conditional sampling amounts to running Algorithm 3.2. For the Feynman–Kac method using the heuristic Equation (24), we choose , stratified resampling, and train a DSB for with settings identical to those of CDSB. The same neural network is used for both methods, with time span , Euler discretisation steps, and reference distribution . We also use Hamiltonian Monte Carlo (HMC) as a baseline for sampling the posterior distribution, with step size 0.35, unit mass, and 100 leapfrog steps. Finally, we draw 10,000 Monte Carlo samples for the test.
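The stratified resampling scheme used here admits a compact implementation. The following is a standard textbook version (see, e.g., Chopin and Papaspiliopoulos, 2020) in NumPy, not the released JAX code:

```python
import numpy as np

def stratified_resample(weights, rng):
    """Stratified resampling: draw one uniform inside each of the N equal
    strata [i/N, (i+1)/N) and invert the empirical CDF of the normalised
    weights. Lower variance than multinomial resampling at the same
    O(N log N) cost (O(N) with a sequential CDF scan)."""
    n = len(weights)
    u = (np.arange(n) + rng.uniform(size=n)) / n
    cdf = np.cumsum(weights / np.sum(weights))
    cdf[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cdf, u)
```

The returned indices select the surviving particles, after which all weights are reset to uniform.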
The results, tested with conditions , , and that lead to challenging distribution shapes, are shown in Figures 1 and 2. We see that both the CDSB and the Feynman–Kac methods generally recover the true distribution to a good extent, and that they are comparable to the MCMC baseline HMC. However, we note that the two generative methods are biased, as is particularly evident from their histograms. This makes sense, as the training of generative models can hardly be done exactly. Moreover, CDSB with is particularly erroneous between the two modes: the training of CDSB uses joint samples of , while the likelihood of is relatively low. This reflects that CDSB (or any other joint bridging method) may fail under extreme conditions. The Feynman–Kac method with SMC may also fail under extreme conditions, but it can be improved by using more particles (perhaps inefficiently).
6 Conclusion
In this article we have reviewed a class of conditional sampling schemes built upon generative diffusion models. We started with two foundational constructions for unconditional diffusion models: Anderson's reversal and the Schrödinger bridge. From there we explored two practical scenarios (i.e., whether the joint , or and of the data distribution is accessible) to formulate generative conditional samplers. In summary, we have described three methodologies in detail. The first, called joint bridging, trains a dedicated generative diffusion that directly targets the conditional distribution , framing conditional sampling as simulating a generative reverse diffusion. The second, including diffusion Gibbs, also employs a generative diffusion but targets the joint , casting conditional sampling as a stochastic filtering problem. The last leverages the likelihood and a (pre-trained) generative diffusion that targets , representing conditional sampling as simulating a Feynman–Kac model. We have demonstrated the connections among these methodologies and highlighted their respective strengths in various applications. In the final section, we provided a pedagogical example illustrating the joint bridging and Feynman–Kac methods.
Finally, we close this paper with a few remarks and ideas for future work. 1) Akin to MCMC, generative diffusion samplers also need convergence diagnostics. The difference is that for MCMC we check whether the sampler has converged to its stationary phase, whereas for generative diffusions we are instead interested in measuring the bias. For instance, in our experiment the training of the generative samplers is assessed empirically by an expert, whereas MCMC is backed by ergodic theory. To the best of our knowledge, it is currently unclear how to gauge in a principled way whether a trained generative sampler can be trusted, in particular for high-dimensional problems. 2) As illustrated in our experiment, sampling with extreme conditions can be problematic for generative diffusions. Handling such outliers is an important direction for improving generative models. 3) Apart from the base methodologies of generative diffusions, deep learning techniques (e.g., U-nets and time embeddings; see Song and Ermon, 2020, for a longer list) are in our opinion the core of their success. Improving the structure and training of the involved neural networks is essential for making generative diffusion models work in complex tasks, and we believe this will remain a central focus of future developments.
This work was partially supported by the Kjell och Märta Beijer Foundation; by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation; and by the project Deep probabilistic regression – new models and learning algorithms (contract number: 2021-04301), funded by the Swedish Research Council.
References
- Anderson (1982) Brian D. O. Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313–326, 1982.
- Andrieu et al. (2010) Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342, 2010. With discussion.
- Bågmark et al. (2022) Kasper Bågmark, Adam Andersson, and Stig Larsson. An energy-based deep splitting method for the nonlinear filtering problem. Partial Differential Equations and Applications, 4(2):14, 2022.
- Bain and Crisan (2009) Alan Bain and Dan Crisan. Fundamentals of stochastic filtering. Springer-Verlag New York, 2009.
- Baker et al. (2024) Elizabeth L. Baker, Moritz Schauer, and Stefan Sommer. Score matching for bridges without time-reversals. arXiv preprint arXiv:2407.15455, 2024.
- Beaumont et al. (2009) Mark A. Beaumont, Jean-Marie Cornuet, Jean-Michel Marin, and Christian P. Robert. Adaptive approximate Bayesian computation. Biometrika, 96(4):983–990, 2009.
- Benton et al. (2024) Joe Benton, Yuyang Shi, Valentin De Bortoli, George Deligiannidis, and Arnaud Doucet. From denoising diffusions to denoising Markov models. Journal of the Royal Statistical Society Series B: Statistical Methodology, 86(2):286–301, 2024.
- Beskos and Roberts (2005) Alexandros Beskos and Gareth O. Roberts. Exact simulation of diffusions. The Annals of Applied Probability, 15(4):2422–2444, 2005.
- Blanchet and Zhang (2020) Jose Blanchet and Fan Zhang. Exact simulation for multivariate Itô diffusions. Advances in Applied Probability, 52(4):1003–1034, 2020.
- Campbell et al. (2022) Andrew Campbell, Joe Benton, Valentin De Bortoli, Thomas Rainforth, George Deligiannidis, and Arnaud Doucet. A continuous time framework for discrete denoising models. In Advances in Neural Information Processing Systems, volume 35, pages 28266–28279. Curran Associates, Inc., 2022.
- Cardoso et al. (2024) Gabriel Cardoso, Yazid Janati, Sylvain Le Corff, and Eric Moulines. Monte Carlo guided denoising diffusion models for Bayesian linear inverse problems. In The 12th International Conference on Learning Representations, 2024.
- Chen et al. (2018) Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
- Chen et al. (2022) Tianrong Chen, Guan-Horng Liu, and Evangelos Theodorou. Likelihood training of Schrödinger bridge using forward-backward SDEs theory. In Proceedings of the 10th International Conference on Learning Representations, 2022.
- Chopin and Papaspiliopoulos (2020) Nicolas Chopin and Omiros Papaspiliopoulos. An introduction to sequential Monte Carlo. Springer Series in Statistics. Springer, 2020.
- Chung et al. (2023) Hyungjin Chung, Jeongsol Kim, Michael Thompson Mccann, Marc Louis Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In The 11th International Conference on Learning Representations, 2023.
- Corenflos et al. (2024) Adrien Corenflos, Zheng Zhao, Simo Särkkä, Jens Sjölund, and Thomas B Schön. Conditioning diffusion models by explicit forward-backward bridging. arXiv preprint arXiv:2405.13794, 2024.
- De Bortoli et al. (2021) Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion Schrödinger bridge with applications to score-based generative modeling. In Advances in Neural Information Processing Systems, volume 34, pages 17695–17709, 2021.
- Denker et al. (2024) Alexander Denker, Francisco Vargas, Shreyas Padhy, Kieran Didi, Simon Mathis, Vincent Dutordoir, Riccardo Barbano, Emile Mathieu, Urszula Julia Komorowska, and Pietro Lio. DEFT: Efficient finetuning of conditional diffusion models by learning the generalised h-transform. arXiv preprint arXiv:2406.01781, 2024.
- Dou and Song (2024) Zehao Dou and Yang Song. Diffusion posterior sampling for linear inverse problem solving: A filtering perspective. In Proceedings of the 12th International Conference on Learning Representations, 2024.
- Janati et al. (2024) Yazid Janati, Alain Durmus, Eric Moulines, and Jimmy Olsson. Divide-and-conquer posterior sampling for denoising diffusion priors. arXiv preprint arXiv:2403.11407, 2024.
- Kawar et al. (2022) Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. Denoising diffusion restoration models. In Advances in Neural Information Processing Systems, volume 35, pages 23593–23606. Curran Associates, Inc., 2022.
- Lipman et al. (2023) Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In Proceedings of the 11th International Conference on Learning Representations, 2023.
- Lugmayr et al. (2022) Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. RePaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11461–11471, 2022.
- Luo et al. (2023) Ziwei Luo, Fredrik K. Gustafsson, Zheng Zhao, Jens Sjölund, and Thomas B. Schön. Image restoration with mean-reverting stochastic differential equations. In Proceedings of the 40th International Conference on Machine Learning, volume 202, pages 23045–23066. PMLR, 2023.
- Meyn and Tweedie (2009) Sean P. Meyn and Richard L. Tweedie. Markov chains and stochastic stability. Cambridge University Press, 2nd edition, 2009.
- Middleton et al. (2019) Lawrence Middleton, George Deligiannidis, Arnaud Doucet, and Pierre E. Jacob. Unbiased smoothing using particle independent Metropolis-Hastings. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019.
- Papamakarios et al. (2019) George Papamakarios, David Sterratt, and Iain Murray. Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, volume 89, pages 837–848. PMLR, 2019.
- Papamakarios et al. (2021) George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, 22(57):1–64, 2021.
- Phillips et al. (2024) Angus Phillips, Hai-Dang Dau, Michael John Hutchinson, Valentin De Bortoli, George Deligiannidis, and Arnaud Doucet. Particle denoising diffusion sampler. arXiv preprint arXiv:2402.06320, 2024.
- Rogers and Williams (2000) Chris Rogers and David Williams. Diffusions, Markov Processes, and Martingales. Cambridge University Press, 2nd edition, 2000.
- Santambrogio (2015) Filippo Santambrogio. Optimal transport for applied mathematicians, volume 87 of Progress in Nonlinear Differential Equations and Their Applications. Birkhäuser Cham, 2015.
- Schauer et al. (2017) Moritz Schauer, Frank van der Meulen, and Harry van Zanten. Guided proposals for simulating multi-dimensional diffusion bridges. Bernoulli, 23(4A):2917–2950, 2017.
- Shi et al. (2023) Yuyang Shi, Valentin De Bortoli, Andrew Campbell, and Arnaud Doucet. Diffusion Schrödinger bridge matching. In Advances in Neural Information Processing Systems, volume 36, 2023.
- Sisson et al. (2007) Scott A. Sisson, Y. Fan, and Mark M. Tanaka. Sequential Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 104(6):1760–1765, 2007.
- Somnath et al. (2023) Vignesh Ram Somnath, Matteo Pariset, Ya-Ping Hsieh, Maria Rodriguez Martinez, Andreas Krause, and Charlotte Bunne. Aligned diffusion Schrödinger bridges. In Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, volume 216, pages 1985–1995. PMLR, 2023.
- Song et al. (2023) Jiaming Song, Arash Vahdat, Morteza Mardani, and Jan Kautz. Pseudoinverse-guided diffusion models for inverse problems. In Proceedings of the 11th International Conference on Learning Representations, 2023.
- Song and Ermon (2020) Yang Song and Stefano Ermon. Improved techniques for training score-based generative models. In Advances in Neural Information Processing Systems, volume 33, pages 12438–12448. Curran Associates, Inc., 2020.
- Song et al. (2021) Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In Proceedings of the 9th International Conference on Learning Representations, 2021.
- Titsias and Papaspiliopoulos (2018) Michalis K. Titsias and Omiros Papaspiliopoulos. Auxiliary gradient-based sampling algorithms. Journal of the Royal Statistical Society Series B: Statistical Methodology, 80(4):749–767, 2018.
- Trippe et al. (2023) Brian L. Trippe, Jason Yim, Doug Tischer, David Baker, Tamara Broderick, Regina Barzilay, and Tommi S. Jaakkola. Diffusion probabilistic modeling of protein backbones in 3D for the motif scaffolding problem. In Proceedings of The 11th International Conference on Learning Representations, 2023.
- Vargas et al. (2023) Francisco Vargas, Will Sussman Grathwohl, and Arnaud Doucet. Denoising diffusion samplers. In Proceedings of the 11th International Conference on Learning Representations, 2023.
- Watson et al. (2022) Daniel Watson, William Chan, Jonathan Ho, and Mohammad Norouzi. Learning fast samplers for diffusion models by differentiating through sample quality. In Proceedings of the 10th International Conference on Learning Representations, 2022.
- Wu et al. (2023) Luhuan Wu, Brian L. Trippe, Christian A. Naesseth, David Blei, and John Patrick Cunningham. Practical and asymptotically exact conditional sampling in diffusion models. In ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling, 2023.