
State-observation augmented diffusion model for nonlinear assimilation

Zhuoyuan Li [email protected] School of Mathematical Sciences, Peking University, Beijing 100871, China Bin Dong Corresponding author. [email protected] Beijing International Center for Mathematical Research, Peking University, Beijing 100871, China Center for Machine Learning Research, Peking University, Beijing 100871, China Pingwen Zhang Corresponding author. [email protected] School of Mathematical Sciences, Peking University, Beijing 100871, China School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China
Abstract

Data assimilation has become a crucial technique for combining physical models with observational data to estimate state variables. Traditional assimilation algorithms often struggle with the high nonlinearity of both the physical and observational models. In this work, we propose a novel data-driven assimilation algorithm based on generative models to address such concerns. Our State-Observation Augmented Diffusion (SOAD) model is designed to handle nonlinear physical and observational models more effectively. We derive the marginal posterior associated with SOAD and prove that it matches the real posterior under mild assumptions, which shows theoretical superiority over previous score-based assimilation works. Experimental results also indicate that our SOAD model may offer improved accuracy over existing data-driven methods.

1 Introduction

The advent of modern observational facilities and devices has led to an exponential increase in the amount of data collected from the physical world. Consequently, data assimilation (DA), as an uncertainty quantification technique capable of fusing observations with physical models, has gained significant attention. Over the past few decades, the DA community has developed a series of methods to address the assimilation problem under various assumptions and constraints. Beginning with early approaches such as the nudging method and optimal interpolation, modern assimilation techniques, including the Kalman filter (KF) [26] and its variants [25, 24], variational methods, and their hybrid versions [12, 60], have demonstrated both theoretical and practical effectiveness. Nonetheless, the high dimensionality and the intrinsic nonlinearity of both the physical and the observational models have consistently rendered the assimilation problem a challenging task.

In this work, we aim to reframe the assimilation problem through the lens of generative modeling. The connection between assimilation and generative models is evident. Any assimilation algorithm focuses on combining information from both the physical models and the observational data, essentially performing posterior estimation of the state variables given the observations. The physical models contribute to the prior knowledge, which is calibrated by the likelihood probability induced by the observational operators and data. Once the posterior distribution is obtained, the mean field and other uncertainty estimates can be derived. Generative models, which have been extensively studied for decades, strive to learn data distributions in a statistical manner. They typically rely on large datasets sampled from the target distribution without any physical knowledge, assuming that all hidden patterns are embedded within the data distribution. These patterns can be extracted by generative models given sufficient data. Combining the prior knowledge learned by a generative model with appropriate likelihood modeling offers an alternative approach to solving assimilation tasks, particularly when the underlying physics are not fully understood.

As pointed out by [45, 18], the massive influx of observational data, such as satellite observations and in-situ scientific measurements, has encouraged researchers to explore the potential of data-driven approaches. Machine learning (ML), and more specifically, deep learning (DL), has shown its exceptional capability in learning nonlinear correlations and extracting high-dimensional features from training data. Inspired by the success of DL in fields such as computer vision (CV) [55, 29] and natural language processing (NLP) [14, 52, 59], scientists are increasingly applying DL techniques within the earth and atmospheric sciences community. Recent successes in training large foundation models for weather forecasting, such as FourCastNet [37], Pangu [4], GraphCast [32], and FengWu [7], have demonstrated the potential of data-driven models in handling complex global atmospheric physics. Readers may refer to [8] for a comprehensive review of other advanced work. These advancements indicate a shift towards data-driven methodologies, which may enhance our understanding and predictive capabilities in atmospheric sciences.

However, unlike forecasting or nowcasting tasks that can be directly managed by common auto-regressive models such as RNNs [23, 10, 46] and Transformers [53, 34], the assimilation problem is inherently more complex. The core of assimilation involves merging observations into the physical prior rather than just predicting future states, which necessitates a more sophisticated model to handle the intricate interactions between physical models and observations. In this study, we propose a novel data-driven DL approach specifically designed for posterior estimation in the assimilation problem. With the establishment of state-observation augmented dynamics, our approach also relaxes the linear constraints typically imposed on physical and observational models by most existing assimilation methods, thereby enhancing flexibility and adaptability in real-world applications.

2 Related works

The general idea of our work is to train a deep generative model from the training data {𝒙i}i\{\bm{x}_{i}\}_{i} to provide an estimation of the physical prior, which is then used for assimilation of the state variables. Generative models have been consistently studied in the field of DL. As one of the earliest generative models with DL techniques, Variational AutoEncoders (VAEs) [28, 17] learn the target data distribution by maximizing the evidence lower bound (ELBO). Generative Adversarial Networks (GANs) [19, 13] are another kind of generative models, which learn the distribution through a zero-sum game between generators and discriminators. Alternatively, normalizing flows are employed not only for generating samples but also for explicitly evaluating probability densities by ensuring that each layer is invertible [16, 41, 30]. Inspired by [48], the most recent diffusion models (DMs) [35, 56, 57] have achieved impressive performance across various tasks such as image/video generation, restoration, and inpainting [21, 49, 42, 47]. A DM treats the data generation process as the reversal of a forward diffusion process that sends clean states to Gaussian noise. To sample 𝒙𝒟\bm{x}\sim\mathscr{D} from a target distribution 𝒟\mathscr{D}, a denoising model is used to recover the unperturbed states from randomly generated Gaussian noise by solving a backward stochastic differential equation [50]. Thus, DMs need to learn the score function, defined as the gradient of the marginal log-likelihood 𝒙tlogpt(𝒙t)\nabla_{\bm{x}_{t}}\log p_{t}(\bm{x}_{t}), for each time step tt during training. In the context of conditional data generation, the gradient 𝒙tlogpt(𝒙t𝒚)\nabla_{\bm{x}_{t}}\log p_{t}(\bm{x}_{t}\mid\bm{y}) can be decomposed into the sum of the prior score function 𝒙tlogpt(𝒙t)\nabla_{\bm{x}_{t}}\log p_{t}(\bm{x}_{t}) and the noised likelihood score function 𝒙tlogpt(𝒚𝒙t)\nabla_{\bm{x}_{t}}\log p_{t}(\bm{y}\mid\bm{x}_{t}) by Bayes’ rule, and a DM should focus on establishing surrogate models for both terms. Some works [15, 22] directly replace the noised likelihood term 𝒙tlogpt(𝒚𝒙t)\nabla_{\bm{x}_{t}}\log p_{t}(\bm{y}\mid\bm{x}_{t}) with a neural network. Alternatively, other works like DPS [11] and DMPS [36] have proposed deterministic models in a zero-shot setting for lower computational cost and better performance.

Data assimilation with DL enhancements has received increasing attention in recent years. Some works treat neural networks as fast emulators [6] or correctors [20] for physical models, followed by a hybrid assimilation process. Another approach addresses the high-dimensionality problem using latent assimilation (LA) [2], which seeks a lower-dimensional representation of the state variables to be assimilated using either linear Reduced-Order Models (ROMs) [38, 9] or data-driven neural networks [40, 39]. Although the LAINR [33] method has pushed the LA approach towards a much more flexible setting, these methods still rely on well-designed classical assimilation algorithms for the latent space. Deep Kalman Filter (DKF) [31] is one of the first works that solves the assimilation problem directly, independent of any existing classical assimilation algorithms; it employs autoregressive models to represent the joint distribution of the state variables at multiple time steps. Score-based Data Assimilation (SDA) [43, 44] introduces the idea of using score-based generative models, demonstrating effective handling of assimilation with subsampling observational models. However, as will be shown in our experiments, SDA is still limited to simple types of observations, and the stability of the reverse-time evolution is not guaranteed with highly nonlinear observational models.

To address the challenges of nonlinearity, we propose our State-Observation Augmented Diffusion (SOAD) model to handle nonlinear assimilation tasks. Similar to previous DL-based assimilation methods, our SOAD model is fully data-driven and does not require any knowledge of the physical models. Additionally, the state-observation augmented structure not only enables us to assimilate observations from multiple sources simultaneously, but also removes the need for explicit physical knowledge given sufficient training data. Such flexibility makes our approach more adaptable to real-world applications.

The remainder of this paper is organized as follows. First, we present the mathematical formulation of the assimilation task we focus on. Next, we briefly introduce some basic concepts of conditional score-based generative models, followed by a detailed derivation of our SOAD model. A theoretical analysis of SOAD is provided as well. Finally, experiments on a two-layer quasi-geostrophic model demonstrate the effectiveness of our SOAD model.

3 Methodology

3.1 Problem settings

Consider a discrete dynamical system combined with an observational model, described by the following equations

{𝑿k=kX(𝑿k1)+𝜺kM,𝜺kM𝒟M,𝒀k=k(𝑿k)+𝜺kO,𝜺kO𝒟O,\begin{cases}\bm{X}_{k}=\mathcal{M}_{k}^{X}(\bm{X}_{k-1})+\bm{\varepsilon}_{k}^{M},\quad\bm{\varepsilon}_{k}^{M}\sim\mathscr{D}^{M},\\ \bm{Y}_{k}=\mathcal{H}_{k}(\bm{X}_{k})+\bm{\varepsilon}_{k}^{O},\quad\bm{\varepsilon}_{k}^{O}\sim\mathscr{D}^{O},\end{cases} (1)

where kk represents the time index. To maintain generality, we only make the following assumptions:

  • The forward propagation operator kX\mathcal{M}_{k}^{X} is either time-independent or exhibits periodic behavior over time.

  • The observation operator k\mathcal{H}_{k} can vary with time but can be decomposed as k=𝑺k\mathcal{H}_{k}=\bm{S}_{k}\circ\mathcal{H}, where 𝑺k\bm{S}_{k} is a time-varying linear operator and \mathcal{H} remains fixed.

These assumptions are pragmatic in many practical contexts. For instance, in numerical weather prediction, the forward propagation operator kX\mathcal{M}_{k}^{X} is commonly assumed to have daily or seasonal periodicity. The observation operator k\mathcal{H}_{k} may vary at different time steps due to limitations of observational facilities, yet the set of all possible observational variables typically remains constant. This suggests that the variation in observational operators can be adequately captured by a subsampling matrix 𝑺k\bm{S}_{k} when \mathcal{H} outputs all possible variables everywhere. While traditional assimilation methods often presume Gaussian noise distributions 𝒟M\mathscr{D}^{M} and 𝒟O\mathscr{D}^{O}, we will demonstrate later that these assumptions can be relaxed as well.

Given a sequence of observations {𝒀k}k=SS+L\{\bm{Y}_{k}\}_{k=S}^{S+L}, our objective is to obtain the posterior distribution of the state variables {𝑿k}k=SS+L\{\bm{X}_{k}\}_{k=S}^{S+L}, denoted by p({𝑿k}k=SS+L{𝒀k}k=SS+L)p\left(\{\bm{X}_{k}\}_{k=S}^{S+L}\mid\{\bm{Y}_{k}\}_{k=S}^{S+L}\right). Next, we will detail the methodology we propose to estimate the posterior distribution effectively.
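For concreteness, the following minimal sketch shows how synthetic trajectories and partial observations following (1) could be generated; the propagation map toy_M, the element-wise observation map toy_H, and all noise scales are illustrative placeholders rather than the models used later in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_M(x):
    # placeholder forward propagation operator M_k^X (time-independent here)
    return x + 0.01 * np.sin(x)

def toy_H(x):
    # placeholder nonlinear observation operator H
    return np.arctan(3.0 * x)

def simulate(x0, n_steps, sigma_m=0.02, sigma_o=0.1, obs_ratio=0.25):
    """Generate states X_k and partial observations Y_k following Eq. (1),
    with Gaussian model/observation noise and a random subsampling S_k."""
    xs, ys, masks = [x0], [], []
    x = x0
    for _ in range(n_steps):
        x = toy_M(x) + sigma_m * rng.standard_normal(x.shape)   # model step plus eps^M
        mask = rng.random(x.shape) < obs_ratio                   # S_k as a random mask
        y = toy_H(x) + sigma_o * rng.standard_normal(x.shape)    # full noisy observation
        xs.append(x); ys.append(np.where(mask, y, np.nan)); masks.append(mask)
    return np.stack(xs), np.stack(ys), np.stack(masks)

X, Y, S = simulate(x0=rng.standard_normal(64), n_steps=8)
```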

3.2 Score-based data assimilation

Score-based generative models [50] have received wide attention in the field of machine learning, offering an alternative perspective to diffusion models [21] when the number of time steps and noise levels are infinite. Here, we will briefly review the fundamental concepts behind score-based generative models.

3.2.1 score-based generative models

We start by considering the data distribution we aim to learn, denoted as 𝒙0p(𝒙0)\bm{x}_{0}\sim p(\bm{x}_{0}). By progressively adding Gaussian noise, the distribution undergoes an approximate transformation into a standard normal distribution, and the whole process can be modeled as a forward-time diffusion process. Mathematically, following the notation in [50], we describe the diffusion process as a linear stochastic differential equation (SDE) given by

d𝒙t=f(t)𝒙tdt+g(t)d𝒘t,t[0,1],\mathrm{d}\bm{x}_{t}=f(t)\bm{x}_{t}\mathrm{d}t+g(t)\mathrm{d}\bm{w}_{t},\,t\in[0,1], (2)

where f(t)f(t) and g(t)g(t) represent the drift term and diffusion coefficient, respectively, and 𝒘t\bm{w}_{t} signifies the standard Wiener process. Denoting by pst(𝒙t𝒙s)p_{st}(\bm{x}_{t}\mid\bm{x}_{s}) the transition kernel from 𝒙s\bm{x}_{s} to 𝒙t\bm{x}_{t} for 0s<t10\leq s<t\leq 1, we can derive the conditional distribution

p0t(𝒙t𝒙0)=𝒩(𝒙t;μt𝒙0,σt2𝑰)p_{0t}(\bm{x}_{t}\mid\bm{x}_{0})=\mathcal{N}(\bm{x}_{t};\mu_{t}\bm{x}_{0},\sigma_{t}^{2}\bm{I}) (3)

with the mean μt\mu_{t} and the variance σt2\sigma_{t}^{2} explicitly [51] given by

μt=exp(0tf(s)ds),σt2=0texp(2stf(u)du)g(s)2ds.\mu_{t}=\exp\left(\int_{0}^{t}f(s)\mathrm{d}s\right),\quad\sigma_{t}^{2}=\int_{0}^{t}\exp\left(2\int_{s}^{t}f(u)\mathrm{d}u\right)g(s)^{2}\mathrm{d}s. (4)
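As one illustrative choice (not prescribed by the text), taking f(t) = -β(t)/2 and g(t) = √β(t) with a linear schedule β(t) gives the familiar variance-preserving process, for which (4) evaluates in closed form to μ_t = exp(-½∫₀ᵗβ) and σ_t² = 1 - μ_t²; the sketch below checks this closed form against a direct quadrature of (4).

```python
import numpy as np

beta_min, beta_max = 0.1, 20.0                       # assumed linear noise schedule
beta = lambda t: beta_min + t * (beta_max - beta_min)
B = lambda t: beta_min * t + 0.5 * (beta_max - beta_min) * t**2   # B(t) = int_0^t beta(s) ds

def mu_sigma(t):
    """Closed-form mean/std of p_{0t}(x_t | x_0) in Eqs. (3)-(4) for the VP choice."""
    return np.exp(-0.5 * B(t)), np.sqrt(1.0 - np.exp(-B(t)))

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# sanity check of Eq. (4) by numerical quadrature at t = 0.7
t = 0.7
s = np.linspace(0.0, t, 20001)
mu_num = np.exp(trapezoid(-0.5 * beta(s), s))
var_num = trapezoid(np.exp(-(B(t) - B(s))) * beta(s), s)
mu_t, sigma_t = mu_sigma(t)
assert abs(mu_num - mu_t) < 1e-4 and abs(var_num - sigma_t**2) < 1e-4
```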

With appropriate choices of f(t)f(t) and g(t)g(t), the forward diffusion process (2) transforms the data distribution into standard Gaussian noise p(𝒙1)𝒩(0,𝑰)p(\bm{x}_{1})\approx\mathcal{N}(0,\bm{I}). Under such circumstances, the reverse-time evolution, characterized by

\mathrm{d}\bm{x}_{t}=\left(f(t)\bm{x}_{t}-g(t)^{2}\nabla_{\bm{x}_{t}}\log p_{t}(\bm{x}_{t})\right)\mathrm{d}t+g(t)\mathrm{d}\bar{\bm{w}}_{t}, (5)

is also a diffusion process [3]. Consequently, the reverse-time evolution can be used to generate samples by transforming noise 𝒙1𝒩(0,𝑰)\bm{x}_{1}\sim\mathcal{N}(0,\bm{I}) back into data distribution 𝒙0p(𝒙0)\bm{x}_{0}\sim p(\bm{x}_{0}), provided the score function of each marginal distribution 𝒙tlogpt(𝒙t)\nabla_{\bm{x}_{t}}\log p_{t}(\bm{x}_{t}) is available for all tt.

To develop an effective score-based generative model, the key lies in minimizing the denoising score matching loss

L(θ):=𝔼t𝒰(0,1)𝒙tp0t(𝒙t𝒙0)𝒙0p(𝒙0)σt2𝒔θ(𝒙t,t)𝒙tlogp(𝒙t𝒙0)2L(\theta):=\operatorname{\mathbb{E}}_{\begin{subarray}{c}t\sim\mathcal{U}(0,1)\\ \bm{x}_{t}\sim p_{0t}(\bm{x}_{t}\mid\bm{x}_{0})\\ \bm{x}_{0}\sim p(\bm{x}_{0})\end{subarray}}\sigma_{t}^{2}\left\|\bm{s}_{\theta}(\bm{x}_{t},t)-\nabla_{\bm{x}_{t}}\log p(\bm{x}_{t}\mid\bm{x}_{0})\right\|^{2} (6)

as discussed in [54]. In practice, 𝒔θ(𝒙t,t)\bm{s}_{\theta}(\bm{x}_{t},t) is usually parameterized as a denoising network

𝜺θ(𝒙t,t)=σt𝒔θ(𝒙t,t),\bm{\varepsilon}_{\theta}(\bm{x}_{t},t)=-\sigma_{t}\bm{s}_{\theta}(\bm{x}_{t},t), (7)

and we can rewrite the optimization problem as

minθ𝔼t𝒰(0,1)𝜺𝒩(0,𝑰)𝒙0p(𝒙0)𝜺θ(μt𝒙0+σt𝜺,t)𝜺2.\min_{\theta}\operatorname{\mathbb{E}}_{\begin{subarray}{c}t\sim\mathcal{U}(0,1)\\ \bm{\varepsilon}\sim\mathcal{N}(0,\bm{I})\\ \bm{x}_{0}\sim p(\bm{x}_{0})\end{subarray}}\left\|\bm{\varepsilon}_{\theta}(\mu_{t}\bm{x}_{0}+\sigma_{t}\bm{\varepsilon},t)-\bm{\varepsilon}\right\|^{2}. (8)
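A minimal PyTorch sketch of the training objective (8) is given below; the small fully connected TinyDenoiser, the variance-preserving schedule from the previous sketch, and the random placeholder data are illustrative assumptions standing in for the actual denoiser and dataset.

```python
import torch, torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for eps_theta(x_t, t): a small MLP taking (x_t, t) and predicting the noise."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(),
                                 nn.Linear(128, 128), nn.SiLU(),
                                 nn.Linear(128, dim))
    def forward(self, x, t):
        return self.net(torch.cat([x, t[:, None]], dim=-1))

def mu_sigma(t, beta_min=0.1, beta_max=20.0):
    B = beta_min * t + 0.5 * (beta_max - beta_min) * t**2
    return torch.exp(-0.5 * B), torch.sqrt(1.0 - torch.exp(-B))

dim, model = 16, TinyDenoiser(16)
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
data = torch.randn(1024, dim)                        # placeholder for training samples x_0 ~ p(x_0)

for step in range(100):
    x0 = data[torch.randint(0, len(data), (64,))]
    t = torch.rand(64)
    eps = torch.randn_like(x0)
    mu_t, sigma_t = mu_sigma(t)
    x_t = mu_t[:, None] * x0 + sigma_t[:, None] * eps   # sample from p_{0t}(x_t | x_0)
    loss = ((model(x_t, t) - eps) ** 2).mean()           # denoising objective, Eq. (8)
    opt.zero_grad(); loss.backward(); opt.step()
```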

Upon successful training of the denoising network, we are able to generate samples 𝒙0p0(𝒙0)\bm{x}_{0}\sim p_{0}(\bm{x}_{0}) by evolving the reverse-time process from an initial random sample 𝒙1𝒩(0,𝑰)\bm{x}_{1}\sim\mathcal{N}(0,\bm{I}). Classical numerical solvers for general SDEs, such as the Euler-Maruyama and stochastic Runge-Kutta methods, as well as ancestral sampling [21, 50], are applicable. Recently, to improve sampling efficiency, the exponential integrator (EI) discretization scheme [58]

𝒙tμtμt𝒙t+(σtσtμtμt)σt𝜺θ(𝒙t,t)\bm{x}_{t-}\leftarrow\frac{\mu_{t-}}{\mu_{t}}\bm{x}_{t}+\left(\frac{\sigma_{t-}}{\sigma_{t}}-\frac{\mu_{t-}}{\mu_{t}}\right)\sigma_{t}\bm{\varepsilon}_{\theta}(\bm{x}_{t},t) (9)

has also been proposed. These schemes often work together with the predictor-corrector sampling strategy by performing a few steps of Langevin Monte Carlo (LMC) sampling [50]

𝒙t𝒙t+δ2𝒔θ(𝒙t,t)+δ𝒛,𝒛𝒩(0,𝑰)\bm{x}_{t}\leftarrow\bm{x}_{t}+\frac{\delta}{2}\bm{s}_{\theta}(\bm{x}_{t},t)+\sqrt{\delta}\bm{z},\quad\bm{z}\sim\mathcal{N}(0,\bm{I}) (10)

for better performance.
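The following sketch combines the EI predictor step (9) with a few Langevin corrector steps (10) into an unconditional sampler; eps_theta and mu_sigma are assumed to come from the training sketch above, and the step sizes are illustrative.

```python
import torch

@torch.no_grad()
def sample(eps_theta, mu_sigma, dim, n_steps=256, n_corrector=2, delta=1e-3):
    """Unconditional sampling: EI predictor (9) followed by LMC corrector steps (10)."""
    x = torch.randn(1, dim)                                   # x_1 ~ N(0, I)
    ts = torch.linspace(1.0, 1e-3, n_steps + 1)
    for t, t_prev in zip(ts[:-1], ts[1:]):
        mu_t, sigma_t = mu_sigma(t.expand(1))
        mu_p, sigma_p = mu_sigma(t_prev.expand(1))
        eps = eps_theta(x, t.expand(1))
        # exponential-integrator step, Eq. (9)
        x = (mu_p / mu_t) * x + (sigma_p / sigma_t - mu_p / mu_t) * sigma_t * eps
        # Langevin Monte Carlo corrector at t_prev, Eq. (10), with score s = -eps / sigma
        for _ in range(n_corrector):
            score = -eps_theta(x, t_prev.expand(1)) / sigma_p
            x = x + 0.5 * delta * score + delta**0.5 * torch.randn_like(x)
    return x
```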

3.2.2 conditional score models

While the score-based generative models introduced above can be directly used for unconditional sampling with well-trained denoising models, assimilation tasks require estimating the conditional distribution p(𝒙𝒚)p(\bm{x}\mid\bm{y}), where we employ the abbreviations 𝒙=𝒙0={𝑿k}k=SS+L\bm{x}=\bm{x}_{0}=\{\bm{X}_{k}\}_{k=S}^{S+L} and 𝒚={𝒀k}k=SS+L\bm{y}=\{\bm{Y}_{k}\}_{k=S}^{S+L}. Note that the conditional score function can be decomposed as

𝒙tlogpt(𝒙t𝒚)\displaystyle\nabla_{\bm{x}_{t}}\log p_{t}(\bm{x}_{t}\mid\bm{y}) =𝒙tlogp(𝒙t𝒙0)p(𝒙0𝒚)d𝒙0\displaystyle=\nabla_{\bm{x}_{t}}\log\int p(\bm{x}_{t}\mid\bm{x}_{0})p(\bm{x}_{0}\mid\bm{y})\mathrm{d}\bm{x}_{0} (11)
=𝒙tlogp(𝒙t𝒙0)p(𝒚𝒙0)p0(𝒙0)p(𝒚)d𝒙0\displaystyle=\nabla_{\bm{x}_{t}}\log\int\frac{p(\bm{x}_{t}\mid\bm{x}_{0})p(\bm{y}\mid\bm{x}_{0})p_{0}(\bm{x}_{0})}{p(\bm{y})}\mathrm{d}\bm{x}_{0}
=𝒙tlogp(𝒙0𝒙t)p(𝒚𝒙0)pt(𝒙t)p(𝒚)d𝒙0\displaystyle=\nabla_{\bm{x}_{t}}\log\int\frac{p(\bm{x}_{0}\mid\bm{x}_{t})p(\bm{y}\mid\bm{x}_{0})p_{t}(\bm{x}_{t})}{p(\bm{y})}\mathrm{d}\bm{x}_{0}
=𝒙tlogpt(𝒙t)+𝒙tlogp(𝒚𝒙0)p(𝒙0𝒙t)d𝒙0\displaystyle=\nabla_{\bm{x}_{t}}\log p_{t}(\bm{x}_{t})+\nabla_{\bm{x}_{t}}\log\int p(\bm{y}\mid\bm{x}_{0})p(\bm{x}_{0}\mid\bm{x}_{t})\mathrm{d}\bm{x}_{0}

by Bayes’ formula, where the additional term is usually called the adversarial gradient

𝒙tlogpt(𝒚𝒙t)=𝒙tlogp(𝒚𝒙0)p(𝒙0𝒙t)d𝒙0.\nabla_{\bm{x}_{t}}\log p_{t}(\bm{y}\mid\bm{x}_{t})=\nabla_{\bm{x}_{t}}\log\int p(\bm{y}\mid\bm{x}_{0})p(\bm{x}_{0}\mid\bm{x}_{t})\mathrm{d}\bm{x}_{0}. (12)

To incorporate conditional information, one may introduce a separate network for the adversarial gradient alongside the original denoising network as suggested in [15, 22]. Alternatively, a conditional denoising network 𝜺θ(𝒙t,t;𝒚)\bm{\varepsilon}_{\theta}(\bm{x}_{t},t;\bm{y}) can be trained directly to approximate σt𝒙tlogpt(𝒙t𝒚)-\sigma_{t}\nabla_{\bm{x}_{t}}\log p_{t}(\bm{x}_{t}\mid\bm{y}), and such an idea has been widely used in many class-conditional generation and text-to-image/video generation tasks [42, 47].

Instead of training a different denoising network, some other approaches focus on estimating the reverse-time transition kernel p(𝒙0𝒙t)p(\bm{x}_{0}\mid\bm{x}_{t}) in (12) to compute the time-dependent likelihood pt(𝒚𝒙t)p_{t}(\bm{y}\mid\bm{x}_{t}) explicitly. By Tweedie’s formula, DPS [11] proposes a delta-function approximation

pDPS(𝒙0𝒙t)δ(𝒙0𝒙^0(𝒙t,t)),p_{\mathrm{DPS}}(\bm{x}_{0}\mid\bm{x}_{t})\approx\delta(\bm{x}_{0}-\hat{\bm{x}}_{0}(\bm{x}_{t},t)), (13)

where the estimator 𝒙^0(𝒙t,t)\hat{\bm{x}}_{0}(\bm{x}_{t},t) is defined as

𝒙^0(𝒙t,t)=μt1(𝒙tσt𝜺θ(𝒙t,t))\hat{\bm{x}}_{0}(\bm{x}_{t},t)=\mu_{t}^{-1}\left(\bm{x}_{t}-\sigma_{t}\bm{\varepsilon}_{\theta}(\bm{x}_{t},t)\right) (14)

to approximate the conditional mean

𝔼𝒙p(𝒙0𝒙t)[𝒙]=μt1(𝒙t+σt2𝒙tlogpt(𝒙t)).\operatorname{\mathbb{E}}_{\bm{x}\sim p(\bm{x}_{0}\mid\bm{x}_{t})}[\bm{x}]=\mu_{t}^{-1}\left(\bm{x}_{t}+\sigma_{t}^{2}\nabla_{\bm{x}_{t}}\log p_{t}(\bm{x}_{t})\right). (15)

Based on the uninformative-prior assumption, DMPS [36] replaces the transition kernel with a Gaussian distribution

pDMPS(𝒙0𝒙t)=𝒩(𝒙0;μt1𝒙t,σt2μt2𝑰)p_{\mathrm{DMPS}}(\bm{x}_{0}\mid\bm{x}_{t})=\mathcal{N}\left(\bm{x}_{0};\mu_{t}^{-1}\bm{x}_{t},\frac{\sigma_{t}^{2}}{\mu_{t}^{2}}\bm{I}\right) (16)

to take the uncertainty into consideration. In particular, for a linear Gaussian observational model

p(𝒚𝒙0)=p(𝒚𝒙)=𝒩(𝒚;𝑯𝒙,𝚺O),p(\bm{y}\mid\bm{x}_{0})=p(\bm{y}\mid\bm{x})=\mathcal{N}\left(\bm{y};{\bm{H}}\bm{x},{\bm{\Sigma}}^{O}\right), (17)

the DPS and DMPS estimators become

pDPS(𝒚𝒙t)=𝒩(𝒚;𝑯𝒙^0,𝚺O),pDMPS(𝒚𝒙t)=𝒩(𝒚;μt1𝑯𝒙t,𝚺O+σt2μt2𝑯𝑯𝖳)p_{\mathrm{DPS}}(\bm{y}\mid\bm{x}_{t})=\mathcal{N}\left(\bm{y};{\bm{H}}\hat{\bm{x}}_{0},{\bm{\Sigma}}^{O}\right),\quad p_{\mathrm{DMPS}}(\bm{y}\mid\bm{x}_{t})=\mathcal{N}\left(\bm{y};\mu_{t}^{-1}{\bm{H}}\bm{x}_{t},{\bm{\Sigma}}^{O}+\frac{\sigma_{t}^{2}}{\mu_{t}^{2}}{\bm{H}}{\bm{H}}^{\mathsf{T}}\right) (18)

by substituting (14), (16) and (17) into (12). SDA [43, 44] suggests the approximation

pSDA(𝒚𝒙t)=𝒩(𝒚;𝑯𝒙^0,𝚺O+σt2μt2γ𝑰)p_{\mathrm{SDA}}(\bm{y}\mid\bm{x}_{t})=\mathcal{N}\left(\bm{y};{\bm{H}}\hat{\bm{x}}_{0},{\bm{\Sigma}}^{O}+\frac{\sigma_{t}^{2}}{\mu_{t}^{2}}\gamma\bm{I}\right) (19)

to seek a balance between stability and accuracy, where the hyperparameter γ\gamma related to the variance of the data distribution p(𝒙0)p(\bm{x}_{0}) is tuned empirically. Nonetheless, such an approximation is still based on the linearity assumption for the observational operator, which has shown limitations in our experiments.
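For a quick comparison of the three surrogates, the snippet below evaluates the covariance terms appearing in (18) and (19) for an illustrative linear Gaussian observation model; the operator H, the noise level, and γ are made-up values only meant to show how the SDA covariance grows with r_t = σ_t/μ_t.

```python
import numpy as np

H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])    # illustrative linear observation operator
Sigma_o = 0.1**2 * np.eye(2)                         # observation noise covariance
gamma = 1.0                                          # SDA hyperparameter (illustrative)

for r_t in (0.1, 1.0, 10.0):                         # r_t = sigma_t / mu_t grows toward t = 1
    cov_dps = Sigma_o                                # Eq. (18), DPS: observation noise only
    cov_dmps = Sigma_o + r_t**2 * H @ H.T            # Eq. (18), DMPS: adds r_t^2 H H^T
    cov_sda = Sigma_o + r_t**2 * gamma * np.eye(2)   # Eq. (19), SDA: isotropic, grows without bound
    print(r_t, cov_dps[0, 0], cov_dmps[0, 0], cov_sda[0, 0])
```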

3.3 State-Observation Augmented Diffusion model

3.3.1 augmented dynamical system

To handle the potential nonlinearity of the observational model, we propose to consider an augmented version of (1). Let 𝑾k\bm{W}_{k} denote the concatenation of the state variables 𝑿k\bm{X}_{k} and all possible observation variables (𝑿k)\mathcal{H}(\bm{X}_{k}), then we have

𝑾k=(𝑿k(𝑿k))=(kX(𝑿k1)+𝜺kM(kX(𝑿k1)+𝜺kM))=:kW(𝑾k1,𝜺kM),𝜺kM𝒟M.\bm{W}_{k}=\begin{pmatrix}\bm{X}_{k}\\ \mathcal{H}(\bm{X}_{k})\end{pmatrix}=\begin{pmatrix}\mathcal{M}_{k}^{X}(\bm{X}_{k-1})+\bm{\varepsilon}_{k}^{M}\\ \mathcal{H}\left(\mathcal{M}_{k}^{X}(\bm{X}_{k-1})+\bm{\varepsilon}_{k}^{M}\right)\end{pmatrix}=:\mathcal{M}_{k}^{W}\left(\bm{W}_{k-1},\bm{\varepsilon}_{k}^{M}\right),\quad\bm{\varepsilon}_{k}^{M}\sim\mathscr{D}^{M}. (20)

For the observed states, there exists 𝑻k\bm{T}_{k} linked to k\mathcal{H}_{k} such that

𝒀k=k(𝑿k)+𝜺kO=𝑺k(𝑿k)+𝜺kO=𝑻k𝑾k+𝜺kO,𝜺kO𝒟O.\bm{Y}_{k}=\mathcal{H}_{k}(\bm{X}_{k})+\bm{\varepsilon}_{k}^{O}=\bm{S}_{k}\mathcal{H}(\bm{X}_{k})+\bm{\varepsilon}_{k}^{O}=\bm{T}_{k}\bm{W}_{k}+\bm{\varepsilon}_{k}^{O},\quad\bm{\varepsilon}_{k}^{O}\sim\mathscr{D}^{O}. (21)

In this way, we have essentially established an equivalent augmented system

{𝑾k=kW(𝑾k1,𝜺kM),𝜺kM𝒟M,𝒀k=𝑻k𝑾k+𝜺kO,𝜺kO𝒟O.\begin{cases}\bm{W}_{k}=\mathcal{M}_{k}^{W}\left(\bm{W}_{k-1},\bm{\varepsilon}_{k}^{M}\right),&\bm{\varepsilon}_{k}^{M}\sim\mathscr{D}^{M},\\ \bm{Y}_{k}=\bm{T}_{k}\bm{W}_{k}+\bm{\varepsilon}_{k}^{O},&\bm{\varepsilon}_{k}^{O}\sim\mathscr{D}^{O}.\end{cases} (22)

Clearly, the forward propagation operator kW\mathcal{M}_{k}^{W} inherits the periodicity of kX\mathcal{M}_{k}^{X}. The observational operator 𝑻k\bm{T}_{k} corresponding to 𝑾k\bm{W}_{k} is now a time-dependent subsampling matrix, which simplifies the assimilation process.

By concatenating 𝑾\bm{W} and 𝒀\bm{Y} as 𝑾~\tilde{\bm{W}} and 𝒀~\tilde{\bm{Y}} over consecutive time steps, respectively, the assimilation problem we focus on is transformed into a posterior estimation for p(𝑾~𝒀~)p\left(\tilde{\bm{W}}\mid\tilde{\bm{Y}}\right). The prior p(𝑾~)p\left(\tilde{\bm{W}}\right) implicitly encodes the physical dynamics as well as the model error, and the observational model becomes a linear Gaussian model

𝒀~=𝑻~𝑾~+𝜺~O,𝜺~O𝒟~O\tilde{\bm{Y}}=\tilde{\bm{T}}\tilde{\bm{W}}+\tilde{\bm{\varepsilon}}^{O},\quad\tilde{\bm{\varepsilon}}^{O}\sim\tilde{\mathscr{D}}^{O} (23)

with 𝑻~\tilde{\bm{T}} and 𝜺~O\tilde{\bm{\varepsilon}}^{O} representing the concatenated versions of the subsampling operator 𝑻\bm{T} and the observation noise 𝜺O\bm{\varepsilon}^{O}, respectively.
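The augmentation itself is mechanical; a small sketch of (20)-(22) is given below, where the nonlinear part H_full, the state dimension, and the observed indices are illustrative, and make_T builds the 0/1 subsampling matrix whose rows pick entries of the \mathcal{H}(\bm{X}_{k}) block.

```python
import numpy as np

def H_full(x):
    # fixed nonlinear part H of the observation operator (illustrative choice)
    return np.arctan(3.0 * x)

def augment(x):
    """Augmented state W_k = (X_k, H(X_k)) from Eq. (20)."""
    return np.concatenate([x, H_full(x)])

def make_T(dim_x, obs_idx):
    """Subsampling T_k acting on W_k, Eq. (21): selects rows of the H(X_k) block
    indexed by obs_idx, so that Y_k = T_k W_k + eps^O."""
    T = np.zeros((len(obs_idx), 2 * dim_x))
    T[np.arange(len(obs_idx)), dim_x + np.asarray(obs_idx)] = 1.0
    return T

x = np.random.default_rng(0).standard_normal(8)
W = augment(x)
T = make_T(8, obs_idx=[0, 3, 5])
y = T @ W + 0.1 * np.random.default_rng(1).standard_normal(3)   # noisy partial observation
assert np.allclose(T @ T.T, np.eye(3))                           # orthonormal rows, used in Eq. (33)
```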

3.3.2 estimation for the time-dependent likelihood

In general, our State-Observation Augmented Diffusion (SOAD) model follows the principles of score-based generative models described above. Our SOAD model estimates the joint unconditional distribution for 𝑾~\tilde{\bm{W}}, which includes the physical states to be assimilated, the clean observation outputs as well as their interactions induced by the observational operator. For notation convenience, we alias 𝑾~\tilde{\bm{W}} and 𝒀~\tilde{\bm{Y}} with 𝒙~\tilde{\bm{x}} and 𝒚~\tilde{\bm{y}}, respectively, and we use p(𝒙~)p(\tilde{\bm{x}}) and p(𝒚~)p(\tilde{\bm{y}}) for the probability densities of the augmented 𝑾~\tilde{\bm{W}} and the noised observation 𝒀~\tilde{\bm{Y}}.

To establish the conditional diffusion model, we continue from the Bayes’ rule

𝒙~tlogpt(𝒙~t𝒚~)=𝒙~tlogpt(𝒙~t)+𝒙~tlogpt(𝒚~𝒙~t)\nabla_{\tilde{\bm{x}}_{t}}\log p_{t}(\tilde{\bm{x}}_{t}\mid\tilde{\bm{y}})=\nabla_{\tilde{\bm{x}}_{t}}\log p_{t}(\tilde{\bm{x}}_{t})+\nabla_{\tilde{\bm{x}}_{t}}\log p_{t}(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{t}) (24)

adapted from (11)(12) in the current context, where we still use the notation

𝒙~tlogpt(𝒚~𝒙~t)=𝒙~tlogp(𝒚~𝒙~0)p(𝒙~0𝒙~t)d𝒙~0.\nabla_{\tilde{\bm{x}}_{t}}\log p_{t}(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{t})=\nabla_{\tilde{\bm{x}}_{t}}\log\int p(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{0})p(\tilde{\bm{x}}_{0}\mid\tilde{\bm{x}}_{t})\mathrm{d}\tilde{\bm{x}}_{0}. (25)

The prior component logpt(𝒙~t)\log p_{t}(\tilde{\bm{x}}_{t}) can be provided by a trained diffusion model for p(𝒙~)p(\tilde{\bm{x}}) as

\bm{\varepsilon}_{\theta}(\tilde{\bm{x}}_{t},t)\approx-\sigma_{t}\nabla_{\tilde{\bm{x}}_{t}}\log p_{t}(\tilde{\bm{x}}_{t}) (26)

according to (6)(7)(8), so it suffices to obtain a good estimate of the time-dependent likelihood component 𝒙~tlogpt(𝒚~𝒙~t)\nabla_{\tilde{\bm{x}}_{t}}\log p_{t}(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{t}).

By applying Bayes’ rule again, we have

logp(𝒙~0𝒙~t)logp(𝒙~0)+logp(𝒙~t𝒙~0)\log p(\tilde{\bm{x}}_{0}\mid\tilde{\bm{x}}_{t})\simeq\log p(\tilde{\bm{x}}_{0})+\log p(\tilde{\bm{x}}_{t}\mid\tilde{\bm{x}}_{0}) (27)

up to a constant. If we consider a Gaussian prior for p(𝒙~0)p(\tilde{\bm{x}}_{0}) with a covariance matrix 𝚺x\bm{\Sigma}_{x}, then p(𝒙~0𝒙~t)p(\tilde{\bm{x}}_{0}\mid\tilde{\bm{x}}_{t}) is also Gaussian, whose covariance matrix has a closed form

𝚺~0t=(𝚺x1+(σt2/μt2)1𝑰)1.\tilde{\bm{\Sigma}}_{0\mid t}=\left(\bm{\Sigma}_{x}^{-1}+(\sigma_{t}^{2}/\mu_{t}^{2})^{-1}\bm{I}\right)^{-1}. (28)

Similar to [11], the mean of p(𝒙~0𝒙~t)p(\tilde{\bm{x}}_{0}\mid\tilde{\bm{x}}_{t}) may be provided by Tweedie’s formula (15). Therefore, we use the following estimation for the reverse-time transition kernel

pSOAD(𝒙~0𝒙~t)=𝒩(𝒙~0;𝒙~^0(𝒙~t,t),𝚺~0t)=𝒩(𝒙~0;μt1(𝒙~tσt𝜺θ(𝒙~t,t)),(𝚺x1+(σt2/μt2)1𝑰)1).p_{\mathrm{SOAD}}(\tilde{\bm{x}}_{0}\mid\tilde{\bm{x}}_{t})=\mathcal{N}\left(\tilde{\bm{x}}_{0};\hat{\tilde{\bm{x}}}_{0}(\tilde{\bm{x}}_{t},t),\tilde{\bm{\Sigma}}_{0\mid t}\right)=\mathcal{N}\left(\tilde{\bm{x}}_{0};\mu_{t}^{-1}\left(\tilde{\bm{x}}_{t}-\sigma_{t}\bm{\varepsilon}_{\theta}(\tilde{\bm{x}}_{t},t)\right),\left(\bm{\Sigma}_{x}^{-1}+(\sigma_{t}^{2}/\mu_{t}^{2})^{-1}\bm{I}\right)^{-1}\right). (29)

It follows that pt(𝒚~𝒙~t)p_{t}(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{t}) can be computed analytically once the observation likelihood p(𝒚~𝒙~0)p(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{0}) is provided explicitly. For instance, given the observational model with Gaussian noise

p(𝒚~𝒙~0)=p(𝒚~𝒙~)=𝒩(𝒚~;𝑻~𝒙~,𝚺~O),p(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{0})=p(\tilde{\bm{y}}\mid\tilde{\bm{x}})=\mathcal{N}\left(\tilde{\bm{y}};\tilde{\bm{T}}\tilde{\bm{x}},\tilde{\bm{\Sigma}}^{O}\right), (30)

the time-dependent likelihood becomes

pSOAD(𝒚~𝒙~t)=𝒩(𝒚~;𝑻~𝒙~^0(𝒙~t,t),𝚺~O+𝑻~𝚺~0t𝑻~𝖳).p_{\mathrm{SOAD}}(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{t})=\mathcal{N}\left(\tilde{\bm{y}};\tilde{\bm{T}}\hat{\tilde{\bm{x}}}_{0}(\tilde{\bm{x}}_{t},t),\tilde{\bm{\Sigma}}^{O}+\tilde{\bm{T}}\tilde{\bm{\Sigma}}_{0\mid t}\tilde{\bm{T}}^{\mathsf{T}}\right). (31)

To sum up, we have

Theorem 3.1.

Under a Gaussian assumption for the prior p(𝐱~)p(\tilde{\bm{x}}) with covariance 𝚺x\bm{\Sigma}_{x}, our estimator given by (31) equals the time-dependent likelihood pt(𝐲~𝐱~t)p_{t}(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{t}) for the Gaussian observational model (21), or equivalently, its augmented formulation (30). Furthermore, for a scalar prior covariance 𝚺x=σx2𝐈\bm{\Sigma}_{x}=\sigma_{x}^{2}\bm{I}, our estimator becomes

pSOAD(𝒚~𝒙~t)=𝒩(𝒚~;𝑻~𝒙~^0(𝒙~t,t),𝚺~O+σx2rt2σx2+rt2𝑰),p_{\mathrm{SOAD}}(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{t})=\mathcal{N}\left(\tilde{\bm{y}};\tilde{\bm{T}}\hat{\tilde{\bm{x}}}_{0}(\tilde{\bm{x}}_{t},t),\tilde{\bm{\Sigma}}^{O}+\frac{\sigma_{x}^{2}r_{t}^{2}}{\sigma_{x}^{2}+r_{t}^{2}}\bm{I}\right), (32)

where we define σt/μt=rt\sigma_{t}/\mu_{t}=r_{t}.

Proof.

The first part is evident from the preceding derivation. For the second part, note that 𝚺~0t\tilde{\bm{\Sigma}}_{0\mid t} is a scalar multiple of the identity as long as 𝚺x\bm{\Sigma}_{x} is, so it follows that

𝑻~𝚺~0t𝑻~𝖳=σx2rt2σx2+rt2𝑻~𝑻~𝖳=σx2rt2σx2+rt2𝑰,\tilde{\bm{T}}\tilde{\bm{\Sigma}}_{0\mid t}\tilde{\bm{T}}^{\mathsf{T}}=\frac{\sigma_{x}^{2}r_{t}^{2}}{\sigma_{x}^{2}+r_{t}^{2}}\tilde{\bm{T}}\tilde{\bm{T}}^{\mathsf{T}}=\frac{\sigma_{x}^{2}r_{t}^{2}}{\sigma_{x}^{2}+r_{t}^{2}}\bm{I}, (33)

where we use the fact that 𝑻~\tilde{\bm{T}} is a subsampling matrix and thus has orthonormal rows. ∎

In practice, we treat σx\sigma_{x} as a hyperparameter to be tuned, as it is hard to evaluate explicitly for a trained diffusion model. Meanwhile, a clipping mechanism for the covariance is required to stabilize the generation process when the diffusion step tt is near 1. Fortunately, the model performance is sensitive to neither the choice of σx\sigma_{x} nor the clipping mechanism. See Algorithm 1 for one possible implementation.
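A few lines suffice to see the practical difference discussed in the remark below: the SOAD variance term σ_x²r_t²/(σ_x²+r_t²) in (32) is bounded above by σ_x², whereas the SDA term r_t²γ in (19) grows without bound as t approaches 1 (the values of σ_x and γ here are illustrative).

```python
import numpy as np

sigma_x, gamma = 1.0, 1.0                                 # illustrative hyperparameters
r_t = np.logspace(-2, 3, 6)                               # r_t = sigma_t / mu_t, diverging as t -> 1
soad_var = sigma_x**2 * r_t**2 / (sigma_x**2 + r_t**2)    # Eq. (32): bounded above by sigma_x^2
sda_var = gamma * r_t**2                                  # Eq. (19): unbounded
print(np.c_[r_t, soad_var, sda_var])
```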

Remark.

We need to clarify that one should not confuse our SOAD estimator with the SDA estimator (19) proposed by [43, 44]. First of all, we only apply our estimator to the augmented model (20), where the observational operator 𝑻~\tilde{\bm{T}} is linear. The linearity makes the time-dependent marginal likelihood pt(𝒚~𝒙~t)p_{t}(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{t}) a Gaussian distribution under mild assumptions (see Theorem 3.1). In contrast, the SDA estimator is only a Gaussian approximation of the real pt(𝒚~𝒙~t)p_{t}(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{t}), and the linearization of \mathcal{H} may introduce non-negligible error since the observational operator \mathcal{H} may be highly nonlinear. Besides, the rt2γ𝑰r_{t}^{2}\gamma\bm{I} approximation term in SDA does not prevent the covariance from exploding as the diffusion step tt approaches 1, resulting in a prior for p(𝒚)p(\bm{y}) with an unreasonable, unbounded covariance.

3.3.3 forward-diffusion corrector

Since in our augmented model, 𝒀~\tilde{\bm{Y}} is a linear transformation of 𝑾~\tilde{\bm{W}}, or more specifically, a subsampling version of 𝑾~\tilde{\bm{W}} up to an observational error, part of the denoised results can be directly replaced by a perturbed version of 𝒀~\tilde{\bm{Y}} to stabilize the reverse-time diffusion process. To be more precise, let 𝒙~t\tilde{\bm{x}}_{t} be the diffusion state at time tt with the initial state 𝒙~0=𝑾~\tilde{\bm{x}}_{0}=\tilde{\bm{W}}, then

𝑻~𝒙~t=𝑻~(μt𝒙~0+σt𝜺)𝒩(μt𝑻~𝑾~,σt2𝑰)\tilde{\bm{T}}\tilde{\bm{x}}_{t}=\tilde{\bm{T}}(\mu_{t}\tilde{\bm{x}}_{0}+\sigma_{t}\bm{\varepsilon})\sim\mathcal{N}\left(\mu_{t}\tilde{\bm{T}}\tilde{\bm{W}},\sigma_{t}^{2}\bm{I}\right) (34)

with a standard Gaussian noise 𝜺\bm{\varepsilon}. We propose to substitute 𝑻~𝒙~t\tilde{\bm{T}}\tilde{\bm{x}}_{t} at diffusion time tt with μt(𝒀~+𝜺t)\mu_{t}(\tilde{\bm{Y}}+\bm{\varepsilon}_{t}^{\prime}), where 𝜺t𝒟t\bm{\varepsilon}_{t}^{\prime}\sim\mathscr{D}_{t}^{\prime}. To align their means and covariances, we set

𝒟t=𝒩(𝔼𝒟~O,rt2𝑰Cov𝒟~O)\mathscr{D}_{t}^{\prime}=\mathcal{N}\left(-\operatorname{\mathbb{E}}\tilde{\mathscr{D}}^{O},r_{t}^{2}\bm{I}-\operatorname{\mathrm{Cov}}\tilde{\mathscr{D}}^{O}\right) (35)

whenever σt2𝑰μt2Cov𝒟~O\sigma_{t}^{2}\bm{I}-\mu_{t}^{2}\operatorname{\mathrm{Cov}}\tilde{\mathscr{D}}^{O} is positive-semidefinite.

We emphasize that such a condition fails only at small diffusion times, provided the observation noise is not too large, since limt1rt=+\lim_{t\to 1^{-}}r_{t}=+\infty under typical choices of diffusion schedulers. In other words, our replacement provides a reliable estimate of 𝑻~𝒙~t\tilde{\bm{T}}\tilde{\bm{x}}_{t} most of the time.
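A sketch of this replacement step for isotropic Gaussian observation noise is given below; it simply overwrites the observed entries of the diffusion state according to (34)-(35) whenever the substitute covariance is nonnegative, and all variable names are illustrative.

```python
import numpy as np

def forward_diffusion_corrector(x_t, T, y, mu_t, sigma_t, sigma_o, rng):
    """Replace T x_t by mu_t (Y + eps'), Eqs. (34)-(35), assuming D^O = N(0, sigma_o^2 I).
    Only applied when r_t >= sigma_o so that the substitute covariance is nonnegative."""
    r_t = sigma_t / mu_t
    if r_t < sigma_o:
        return x_t                                   # condition of Eq. (35) fails, skip
    eps = rng.standard_normal(y.shape)
    x_t = x_t.copy()
    rows, cols = np.nonzero(T)                       # T is a 0/1 subsampling matrix
    x_t[cols] = mu_t * (y[rows] + np.sqrt(r_t**2 - sigma_o**2) * eps[rows])
    return x_t
```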

3.3.4 implementation

We describe the detailed denoising procedure in this subsection, where we set 𝒟~O=𝒩(0,(σo)2𝑰)\tilde{\mathscr{D}}^{O}=\mathcal{N}(0,(\sigma^{o})^{2}\bm{I}) for simplicity. A similar derivation applies to other, more complex settings.

Let rσ=σt/σtr_{\sigma}=\sigma_{t_{-}}/\sigma_{t} and rμ=μt/μtr_{\mu}=\mu_{t_{-}}/\mu_{t} be the ratios of the diffusion parameters at adjacent time steps tt_{-} and tt, respectively. We choose the EI discretization scheme (9)

𝒙~t=rμ𝒙~t+(rσrμ)𝝅θ(𝒙~t,t)\tilde{\bm{x}}_{t-}=r_{\mu}\tilde{\bm{x}}_{t}+\left(r_{\sigma}-r_{\mu}\right)\bm{\pi}_{\theta}(\tilde{\bm{x}}_{t},t) (36)

for the reverse-time stepper, where we define 𝝅θ(𝒙~t,t)=σt2𝒙~tlogpt(𝒙~t𝒚~)\bm{\pi}_{\theta}(\tilde{\bm{x}}_{t},t)=-\sigma_{t}^{2}\nabla_{\tilde{\bm{x}}_{t}}\log p_{t}(\tilde{\bm{x}}_{t}\mid\tilde{\bm{y}}). By plugging (24), (26) and (33) into the definition of 𝝅θ\bm{\pi}_{\theta}, we have

\bm{\pi}_{\theta}(\tilde{\bm{x}}_{t},t)=-\sigma_{t}^{2}\nabla_{\tilde{\bm{x}}_{t}}\log p_{t}(\tilde{\bm{x}}_{t}\mid\tilde{\bm{y}}) (37)
=-\sigma_{t}^{2}\nabla_{\tilde{\bm{x}}_{t}}\log p_{t}(\tilde{\bm{x}}_{t})-\sigma_{t}^{2}\nabla_{\tilde{\bm{x}}_{t}}\log p_{t}(\tilde{\bm{y}}\mid\tilde{\bm{x}}_{t})
=\sigma_{t}\bm{\varepsilon}_{\theta}(\tilde{\bm{x}}_{t},t)+\frac{\sigma_{t}^{2}/2}{(\sigma^{o})^{2}+\frac{\sigma_{x}^{2}r_{t}^{2}}{\sigma_{x}^{2}+r_{t}^{2}}}\nabla_{\tilde{\bm{x}}_{t}}\left\|\tilde{\bm{y}}-\tilde{\bm{T}}\hat{\tilde{\bm{x}}}_{0}(\tilde{\bm{x}}_{t},t)\right\|^{2}
=\sigma_{t}\bm{\varepsilon}_{\theta}(\tilde{\bm{x}}_{t},t)+\frac{r_{t}^{2}/2}{(\sigma^{o})^{2}+\frac{\sigma_{x}^{2}r_{t}^{2}}{\sigma_{x}^{2}+r_{t}^{2}}}\nabla_{\tilde{\bm{x}}_{t}}\left\|\mu_{t}\tilde{\bm{y}}-\tilde{\bm{T}}\left(\tilde{\bm{x}}_{t}-\sigma_{t}\bm{\varepsilon}_{\theta}(\tilde{\bm{x}}_{t},t)\right)\right\|^{2}
=\sigma_{t}\bm{\varepsilon}_{\theta}(\tilde{\bm{x}}_{t},t)+\frac{r_{t}^{2}/2}{(\sigma^{o})^{2}+\frac{\sigma_{x}^{2}r_{t}^{2}}{\sigma_{x}^{2}+r_{t}^{2}}}\nabla_{\tilde{\bm{x}}_{t}}\left\|\mu_{t}\tilde{\bm{y}}-\tilde{\bm{T}}\tilde{\bm{x}}_{t}+\tilde{\bm{T}}\sigma_{t}\bm{\varepsilon}_{\theta}(\tilde{\bm{x}}_{t},t)\right\|^{2}.

The whole denoising process is summarized in Algorithm 1.

Algorithm 1 Assimilation with the SOAD model for Gaussian noise
1:Input: The subsampling matrix 𝑻~\tilde{\bm{T}}, the observation 𝒀~\tilde{\bm{Y}}, and the observation noise distribution 𝒟~O=𝒩(0,(σo)2𝑰)\tilde{\mathscr{D}}^{O}=\mathcal{N}(0,(\sigma^{o})^{2}\bm{I});
2:Input: Trained noise estimator 𝜺θ(𝒙~t,t)\bm{\varepsilon}_{\theta}(\tilde{\bm{x}}_{t},t), the denoising diffusion time step δt\delta t;
3:Hyperparameters: σx=1.0\sigma_{x}=1.0, δ=0.25\delta=0.25, Nc=5N_{c}=5.
4:function AssimilationWithSOAD
5:    t1t\leftarrow 1; 𝒙~t𝒩(0,𝑰)\tilde{\bm{x}}_{t}\sim\mathcal{N}(0,\bm{I});
6:    while t>0t>0 do
7:        ttδtt_{-}\leftarrow t-\delta t; rσσt/σtr_{\sigma}\leftarrow\sigma_{t_{-}}/\sigma_{t}; rμμt/μtr_{\mu}\leftarrow\mu_{t_{-}}/\mu_{t};
8:        Calculate the gradient term and its coefficient in (37)
ct=rt2/2(σo)2+σx2rt2σx2+rt2;𝑸t=𝒙~tμt𝒚~𝑻~𝒙~t+𝑻~σt𝜺θ(𝒙~t,t)2;c_{t}=\frac{r_{t}^{2}/2}{(\sigma^{o})^{2}+\frac{\sigma_{x}^{2}r_{t}^{2}}{\sigma_{x}^{2}+r_{t}^{2}}};\quad\bm{Q}_{t}=\nabla_{\tilde{\bm{x}}_{t}}\left\|\mu_{t}\tilde{\bm{y}}-\tilde{\bm{T}}\tilde{\bm{x}}_{t}+\tilde{\bm{T}}\sigma_{t}\bm{\varepsilon}_{\theta}(\tilde{\bm{x}}_{t},t)\right\|^{2};
9:        Clip for the coefficient:
\hat{c}_{t}=\min\left(c_{t},1/\|\bm{Q}_{t}\|_{\infty}\right);
10:        Compute 𝝅θ(𝒙~t,t)\bm{\pi}_{\theta}(\tilde{\bm{x}}_{t},t) by (37):
𝝅θ(𝒙~t,t)=σt𝜺θ(𝒙~t,t)+c^t𝑸t;\bm{\pi}_{\theta}(\tilde{\bm{x}}_{t},t)=\sigma_{t}\bm{\varepsilon}_{\theta}(\tilde{\bm{x}}_{t},t)+\hat{c}_{t}\bm{Q}_{t};
11:        Reverse-time evolution:
𝒙~trμ𝒙~t+(rσrμ)𝝅θ(𝒙~t,t);\tilde{\bm{x}}_{t-}\leftarrow r_{\mu}\tilde{\bm{x}}_{t}+\left(r_{\sigma}-r_{\mu}\right)\bm{\pi}_{\theta}(\tilde{\bm{x}}_{t},t);
12:        𝒙~t\tilde{\bm{x}}_{t-}\leftarrow ForwardDiffusionCorrector(𝒙~t\tilde{\bm{x}}_{t-}, tt_{-});
13:    end while
14:    /// Apply the corrector by making a few steps of LMC sampling (10).
15:    Cache the conditional score function:
\bm{s}_{t-}=-\sigma_{t-}^{-2}\bm{\pi}_{\theta}(\tilde{\bm{x}}_{t-},t_{-});
16:    for i=1i=1 to NcN_{c} do
17:        Sample 𝜺′′𝒩(0,𝑰)\bm{\varepsilon}^{\prime\prime}\sim\mathcal{N}(0,\bm{I});
18:        Update
𝒙~t𝒙~t+δ2𝒔t+δ𝜺′′;\tilde{\bm{x}}_{t-}\leftarrow\tilde{\bm{x}}_{t-}+\frac{\delta}{2}\bm{s}_{t-}+\sqrt{\delta}\bm{\varepsilon}^{\prime\prime};
19:        𝒙~t\tilde{\bm{x}}_{t-}\leftarrow ForwardDiffusionCorrector(𝒙~t\tilde{\bm{x}}_{t-}, tt_{-});
20:    end for
21:end function
22:function ForwardDiffusionCorrector(𝒙~t\tilde{\bm{x}}_{t}, tt)
23:    rtσt/μtr_{t}\leftarrow\sigma_{t}/\mu_{t};
24:    if σort\sigma^{o}\leq r_{t} then
25:        Sample 𝜺𝒩(0,𝑰)\bm{\varepsilon}^{\prime}\sim\mathcal{N}(0,\bm{I});
26:        Update
\tilde{\bm{T}}\tilde{\bm{x}}_{t}\leftarrow\mu_{t}\left(\tilde{\bm{y}}+\sqrt{r_{t}^{2}-(\sigma^{o})^{2}}\bm{\varepsilon}^{\prime}\right);
27:    end if
28:    return 𝒙~t\tilde{\bm{x}}_{t}
29:end function
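For readers who prefer code over pseudocode, a compact PyTorch sketch of Algorithm 1 follows. It assumes isotropic Gaussian observation noise, the variance-preserving schedule from the earlier sketches, and a differentiable denoiser eps_theta on augmented states; the gradient term Q_t is obtained via automatic differentiation, the clipping follows line 9 of the listing, and the Langevin corrector is applied at every predictor step in line with the predictor-corrector strategy of Section 3.2.1, with the conditional score taken as -π_θ/σ_t² as implied by the definition of π_θ.

```python
import torch

def mu_sigma(t, beta_min=0.1, beta_max=20.0):
    """Variance-preserving schedule (illustrative), giving mu_t and sigma_t of Eq. (4)."""
    t = torch.as_tensor(t, dtype=torch.float32)
    B = beta_min * t + 0.5 * (beta_max - beta_min) * t**2
    return torch.exp(-0.5 * B), torch.sqrt(1.0 - torch.exp(-B))

def soad_assimilate(eps_theta, T, y, sigma_o, dim, n_steps=256,
                    sigma_x=1.0, delta=0.25, n_corrector=5):
    """Sketch of Algorithm 1: T is a 0/1 subsampling matrix, y the flattened observations,
    eps_theta(x, t) the trained denoiser on augmented states."""
    x = torch.randn(dim)                                      # x_1 ~ N(0, I)
    ts = torch.linspace(1.0, 1e-3, n_steps + 1)

    def pi_theta(x, t):                                       # Eq. (37)
        mu_t, sigma_t = mu_sigma(t)
        r_t = sigma_t / mu_t
        x = x.detach().requires_grad_(True)
        eps = eps_theta(x, t)
        resid = mu_t * y - T @ x + sigma_t * (T @ eps)
        Q = torch.autograd.grad((resid ** 2).sum(), x)[0]     # gradient term Q_t (line 8)
        c = 0.5 * r_t**2 / (sigma_o**2 + sigma_x**2 * r_t**2 / (sigma_x**2 + r_t**2))
        c_hat = torch.minimum(c, 1.0 / Q.abs().max().clamp(min=1e-12))   # clipping (line 9)
        return (sigma_t * eps + c_hat * Q).detach()

    def corrector(x, t):                                      # forward-diffusion corrector
        mu_t, sigma_t = mu_sigma(t)
        r_t = sigma_t / mu_t
        if r_t >= sigma_o:
            rows, cols = torch.nonzero(T, as_tuple=True)
            x = x.clone()
            x[cols] = mu_t * (y[rows] + torch.sqrt(r_t**2 - sigma_o**2) * torch.randn(len(rows)))
        return x

    for t, t_prev in zip(ts[:-1], ts[1:]):
        mu_t, sigma_t = mu_sigma(t)
        mu_p, sigma_p = mu_sigma(t_prev)
        pi = pi_theta(x, t)
        x = (mu_p / mu_t) * x + (sigma_p / sigma_t - mu_p / mu_t) * pi   # EI step, Eq. (36)
        x = corrector(x, t_prev)
        score = -pi_theta(x, t_prev) / sigma_p**2             # conditional score for LMC, Eq. (10)
        for _ in range(n_corrector):
            x = x + 0.5 * delta * score + delta**0.5 * torch.randn_like(x)
            x = corrector(x, t_prev)
    return x
```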

4 Experiments

To show the advantages of our proposed SOAD model, we conduct experiments on the two-layer quasi-geostrophic model. The Score-based Data Assimilation (SDA) method [43, 44] is used as the baseline due to its effectiveness, particularly for linear observations. Our experimental results indicate that our SOAD model is more suitable for dealing with highly nonlinear observations.

This section is structured as follows. We start by outlining the dataset and the network training procedures for the experiments. Next, we detail the observation operators as well as the assimilation settings used for testing. A comparative analysis of our SOAD model and the SDA baseline with various observational operators is then presented. We also explore the long-term behavior of our model and its ability to handle multiple observations simultaneously.

4.1 dataset

Consider the two-layer quasi-geostrophic (QG) evolution equation

tq1+J(ψ1,q1)+β1ψ1x\displaystyle\partial_{t}q_{1}+J(\psi_{1},q_{1})+\beta_{1}\psi_{1x} =ssd,\displaystyle=\mathrm{ssd}, (38)
tq2+J(ψ2,q2)+β2ψ2x\displaystyle\partial_{t}q_{2}+J(\psi_{2},q_{2})+\beta_{2}\psi_{2x} =rek2ψ2+ssd\displaystyle=-r_{ek}\nabla^{2}\psi_{2}+\mathrm{ssd}

with the potential vorticities

q1=2ψ1+F1(ψ2ψ1),q2=2ψ2+F2(ψ1ψ2)q_{1}=\nabla^{2}\psi_{1}+F_{1}(\psi_{2}-\psi_{1}),\quad q_{2}=\nabla^{2}\psi_{2}+F_{2}(\psi_{1}-\psi_{2}) (39)

and the mean potential vorticity gradients

β1=β+F1(U1U2),β2=βF2(U1U2).\beta_{1}=\beta+F_{1}(U_{1}-U_{2}),\quad\beta_{2}=\beta-F_{2}(U_{1}-U_{2}). (40)

The horizontal Jacobian is defined as

J(ψ,q)=ψxqyψyqx,J(\psi,q)=\psi_{x}q_{y}-\psi_{y}q_{x}, (41)

and “ssd\mathrm{ssd}” indicates the small-scale dissipation. To accelerate the preparation process, the dataset has been generated with the Python package pyqg-jax (https://github.com/karlotness/pyqg-jax), a JAX [5] implementation of the original pyqg [1] library, and all other equation parameters follow the default configurations (see Appendix A).

To create our dataset, we set the solution domain as 512×512512\times 512 with 15-minute time steps. The QG equation is evolved from 10 different random initial states separately. After a warm-up period of 5 years, the reference states are downsampled from the trajectories to a spatial resolution of 256×256256\times 256 with a temporal resolution of Δt=24\Delta t=24 hours for 100,000 model days, which leads to a dataset of size 10×100000×2×256×25610\times 100000\times 2\times 256\times 256 in total.

Since the evolution of the two-layer QG model does not depend on time, we assume that the vorticity snapshots (q1,q2)𝖳(q_{1},q_{2})^{\mathsf{T}} of shape (2,256,256)(2,256,256) within a fixed time range follow the same data distribution. Consequently, we divide each trajectory into 1,000 windows and retain only the first 32 snapshots from every 100 snapshots to avoid temporal correlations. Such a process transforms the dataset into 10×100010\times 1000 windows of size 32×2×256×25632\times 2\times 256\times 256. Figure 1 visualizes the generated data for a single time step.

Figure 1: Visualization of the vorticities for the two-layer quasi-geostrophic model.

4.2 network structures and training procedures

To ensure a fair comparison with the SDA baseline, we adopt the same network structure for the denoising network 𝜺θ(𝒙t,t)\bm{\varepsilon}_{\theta}(\bm{x}_{t},t), which utilizes a U-Net structure mixed with temporal convolutions and embeddings for the diffusion time step tt. The only difference between our network and the SDA baseline lies in the input channels, because we require additional channels to concatenate the state variables with the clean observations to form the augmented states 𝑾~\tilde{\bm{W}}.

We use the first 8,000 windows for training and reserve the remaining 2,000 windows for validation and testing. During training, a chunk of length 9 from each window of length 32 is selected randomly to create a mini-batch for the networks. All networks are trained with the Adam optimizer [27], employing a learning rate of 2×1042\times 10^{-4} and a weight decay of 1×1051\times 10^{-5}. To stabilize the training process, all inputs are normalized using the empirical mean and variance of the entire training dataset.

4.3 observations

To test our SOAD model and make comparisons with the baseline, we consider the following observational model

𝒀k=𝑺k(𝑿k)+𝜺kO,𝜺kO𝒩(0,0.12𝑰),\bm{Y}_{k}=\bm{S}_{k}\circ\mathcal{H}(\bm{X}_{k})+\bm{\varepsilon}_{k}^{O},\quad\bm{\varepsilon}_{k}^{O}\sim\mathcal{N}(0,0.1^{2}\bm{I}), (42)

where kk denotes the time step index. 𝑿k\bm{X}_{k} consists of the two-layer vorticities at time step kk, and 𝒀k\bm{Y}_{k} stands for the associated noisy observations.

We consider two different types of element-wise observations defined as

e:𝑿arctan3𝑿,h:𝑿32sin3𝑿.\mathcal{H}^{e}:\bm{X}\mapsto\arctan 3\bm{X},\quad\mathcal{H}^{h}:\bm{X}\mapsto\frac{3}{2}\sin 3\bm{X}. (43)

We refer to e\mathcal{H}^{e} as the “easy” case since it is injective and monotonic, while h\mathcal{H}^{h} is termed the “hard” case due to its highly nonlinear behavior. Apart from these straightforward choices, we also consider the vorticity-to-velocity mapping

𝗏𝟤𝗏:𝑿=(q1,q2)𝖳C(u1,v1,u2,v2)𝖳\mathcal{H}^{\mathsf{v2v}}:\bm{X}=(q_{1},q_{2})^{\mathsf{T}}\mapsto C\odot(u_{1},v_{1},u_{2},v_{2})^{\mathsf{T}} (44)

to explore the potential of our model for real applications. Here, (ui,vi)(u_{i},v_{i}) are the velocity components for the ii-th layer and CC\odot denotes component-wise multiplication by constants chosen so that the outputs approximately have zero mean and unit variance. Such a choice is inspired by the fact that, in the real world, velocity components are often observed directly rather than the related vorticity. To compute 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}, (39) is solved for the streamfunctions (ψ1,ψ2)𝖳(\psi_{1},\psi_{2})^{\mathsf{T}} in spectral space, and the velocity components are then calculated from the streamfunctions. Note that we cannot conclude whether 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}} is easier or harder, as the operator is linear but the mapping rule is much more complex.

We test with subsampling matrices 𝑺k\bm{S}_{k} of two different types: random sampling from the grid with ratio pp, and uniform strided observation with stride ss, corresponding to a (1/s)2(1/s)^{2} observational ratio (for spatial stride s=10s=10, the actual observational ratio is slightly above 1% since the snapshot size is not divisible by 10, but we omit this minor difference for simplicity). To study assimilation performance using observations with various time intervals, 𝑺k\bm{S}_{k} is applied only every NΔtN\Delta t time steps. See Figure 2 for detailed illustrations.
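A small sketch of the element-wise operators in (43) and the two subsampling patterns is given below (the vorticity-to-velocity mapping (44) is omitted here since it requires the spectral inversion of (39)); the array shapes and noise level follow (42), while the random seed and example stride are arbitrary.

```python
import numpy as np

def H_easy(X):
    return np.arctan(3.0 * X)                        # "easy" observation, Eq. (43)

def H_hard(X):
    return 1.5 * np.sin(3.0 * X)                     # "hard" observation, Eq. (43)

def random_mask(shape, p, rng):
    return rng.random(shape) < p                     # random subsampling with ratio p

def strided_mask(shape, s):
    m = np.zeros(shape, dtype=bool)
    m[..., ::s, ::s] = True                          # uniform strided observation, ratio ~ (1/s)^2
    return m

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 256, 256))               # one vorticity snapshot (q1, q2)
Y = H_hard(X) + 0.1 * rng.standard_normal(X.shape)   # noisy observation, Eq. (42)
S = strided_mask(X.shape, s=4)
Y_obs = np.where(S, Y, np.nan)                        # only ~6.25% of the grid points are observed
```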

Figure 2: Illustration of the data provided for the assimilation processes (zoom in for better visualization).

4.4 assimilation

In the assimilation experiments, we fix our assimilation window to 9 time steps. This means we have observations subsampled from 9 consecutive steps, and our goal is to assimilate all the state variables within this time interval. We choose random ratios p{1.0,0.25,0.0625,0.01}p\in\{1.0,0.25,0.0625,0.01\} and strides s{1,2,4,10}s\in\{1,2,4,10\} for the subsampling 𝑺k\bm{S}_{k} with time intervals N{1,2,4,8}N\in\{1,2,4,8\}.

To make quantitative comparisons, we use the root mean-square error (RMSE)

RMSE(𝑿r,𝑿a)=(12HresWresi=12j=1Hresk=1Wres(q¯ir(j,k)q¯ia(j,k))2)1/2\mathrm{RMSE}(\bm{X}^{r},\bm{X}^{a})=\left(\frac{1}{2H_{\mathrm{res}}W_{\mathrm{res}}}\sum_{i=1}^{2}\sum_{j=1}^{H_{\mathrm{res}}}\sum_{k=1}^{W_{\mathrm{res}}}(\bar{q}_{i}^{r}(j,k)-\bar{q}_{i}^{a}(j,k))^{2}\right)^{1/2} (45)

to evaluate the assimilated state 𝑿a\bm{X}^{a}. Recall that 𝑿\bm{X} consists of vorticities (q1,q2)𝖳(q_{1},q_{2})^{\mathsf{T}} on two levels, and here we use (q¯1,q¯2)𝖳(\bar{q}_{1},\bar{q}_{2})^{\mathsf{T}} for their normalized versions. 𝑿r\bm{X}^{r} is the reference solution, and (Hres,Wres)(H_{\mathrm{res}},W_{\mathrm{res}}) stands for the spatial resolution. Since all the data have been normalized in advance, the RMSE results from different models and various assimilation settings are comparable. To decrease randomness, all the following experiments are run 5 times with different random seeds, and the averaged RMSEs are reported unless specified otherwise.
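Since all fields are normalized beforehand, (45) reduces to a plain root mean square over all layers and grid points; a direct transcription is given below for reference.

```python
import numpy as np

def rmse(X_ref, X_assim):
    """Root mean-square error of Eq. (45) for normalized snapshots of shape (2, H_res, W_res)."""
    return float(np.sqrt(np.mean((np.asarray(X_ref) - np.asarray(X_assim)) ** 2)))
```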

4.4.1 assimilation with prior

In practical applications, assimilation is usually performed sequentially once new observations become available, so we start with the classical assimilation settings that additionally provide the models with a background prior for the first step. We choose the background prior as an ideal Gaussian perturbation with a known covariance (σb)2=0.12(\sigma^{b})^{2}=0.1^{2}. Formally, the observational model for the first step is modified as

𝒀0=𝑿0+𝜺0O,𝜺0O𝒩(0,(σb)2𝑰),\bm{Y}_{0}=\bm{X}_{0}+\bm{\varepsilon}_{0}^{O},\quad\bm{\varepsilon}_{0}^{O}\sim\mathcal{N}(0,(\sigma^{b})^{2}\bm{I}), (46)

where 𝒀0\bm{Y}_{0} is the background prior with noise variance 0.120.1^{2}. See Figure 2 (3rd and 4th rows) for visualization.

Our experiments start with the element-wise observations e\mathcal{H}^{e} and h\mathcal{H}^{h}. The averaged RMSEs over all the 9 time steps for the assimilated states are summarized in Figure 3.

(a) SOAD (ours) for e\mathcal{H}^{e}
(b) SDA (baseline) for e\mathcal{H}^{e}
(c) SOAD (ours) for h\mathcal{H}^{h}
(d) SDA (baseline) for h\mathcal{H}^{h}
Figure 3: Averaged RMSEs for the assimilated states by using background prior.

Our SOAD model performs slightly worse as the time interval NN increases and the observational ratio pp or (1/s)2(1/s)^{2} decreases, which aligns with our intuition that the assimilation task becomes more difficult with sparser observations. In the most challenging case, where the observations are available only every 8 time steps with a ratio of 1%1\%, the RMSEs are around 0.2 and 0.3 for the easy and hard observations, respectively. This indicates that our method is still able to extract the physical features from rare observations to some extent. In contrast, the SDA baseline shows a significant performance drop even when the observations are available everywhere for each time step. The inferior performance is likely due to the linearity assumption for the observation operator in the SDA formulations, which is unsuitable for highly nonlinear observations.

Additionally, the step-wise RMSEs for our SOAD model are exhibited in Figure 4. In both observation cases, the increase in RMSE at observational ratios of 100% and 25% is mild, and no significant changes in RMSE are observed when the easier e\mathcal{H}^{e} is replaced with the harder h\mathcal{H}^{h}. Moreover, relatively lower assimilation errors are obtained whenever observational data are available, supporting the idea that more observations enhance the assimilation process. By comparing results with the same observational ratios (p=(1/s)2p=(1/s)^{2}), we may conclude that regular (uniform) observations are more beneficial for the assimilation process than irregular (random) ones.

(a) the easy observation e\mathcal{H}^{e}
(b) the hard observation h\mathcal{H}^{h}
Figure 4: Step-wise RMSEs of the assimilated states via our SOAD approach.

To explore the potential of our SOAD model in real applications, we also test it with the vorticity-to-velocity mapping 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}, and the results are shown in Figure 5. We do not include the performance of the SDA baseline since it diverges in all cases. The averaged RMSEs in Figure 5(a) are lower than those in Figure 3(a) and Figure 3(c), which may be attributed to the linearity of the observational operator 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}. The results indicate that our SOAD approach is capable of handling the vorticity-to-velocity mapping, a more practical observational model for real applications. Although SOAD shows faster-increasing RMSEs with higher uncertainty for 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}} (Figure 5(b)) compared with e\mathcal{H}^{e} and h\mathcal{H}^{h} (Figure 4), the subsequent experiments offer some evidence that the assimilation process is unlikely to diverge.

(a) averaged RMSEs for SOAD with 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}
(b) step-wise RMSEs for SOAD with 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}
Figure 5: Performances of our SOAD for assimilation with prior when the observational model is 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}.

In practice, we cannot always expect the exact probability distribution of the background prior to be available. Background errors may arise from many sources, such as unresolved sub-scale features, temporal interpolations and physical parameterizations. Therefore, we investigate whether the discrepancy between our background prior model (46) and the real distribution affects the assimilation performance. We conduct experiments with the same settings as before but feed a background prior with unknown noise

𝒀0=(𝑿1+𝜼),𝜼𝒩(0,0.012𝑰).\bm{Y}_{0}^{\prime}=\mathcal{M}(\bm{X}_{-1}+\bm{\eta}),\quad\bm{\eta}\sim\mathcal{N}(0,0.01^{2}\bm{I}). (47)

into the model instead of 𝒀0\bm{Y}_{0}. Here, \mathcal{M} is the forward propagation model of the QG equations, and 𝑿1\bm{X}_{-1} stands for the snapshot at the previous time step. Comparisons between the step-wise RMSEs are displayed in Figure 6, where we fix N=1N=1 and p=0.25p=0.25. Even with incorrect background prior assumptions, our SOAD model is able to correct itself within a few time steps once observations are assimilated. Another interesting observation is that the RMSEs for the hard observation h\mathcal{H}^{h} are lower than those for e\mathcal{H}^{e} and 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}. One possible explanation is that when the observational data is sufficient for the model to capture the dynamics, observations from a harder operator h\mathcal{H}^{h} may provide more detailed information since h\mathcal{H}^{h} is much more sensitive to the input.

Figure 6: Step-wise RMSEs for SOAD with different data for background prior.

4.4.2 assimilation without background prior

To explore the assimilation performance in the absence of any background prior, we remove the modification (46) and conduct the assimilation process solely from observational data. Even if starting with a background prior, any assimilation method will gradually forget the initial state information as more observations are assimilated. Therefore, the results shown in this section are likely to reveal the long-term behaviors of the assimilation methods.

All assimilation results are recorded as heatmaps of RMSEs in Figure 7. In general, our SOAD model performs well for both the e\mathcal{H}^{e} and the 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}} observations, with slightly higher accuracy for the latter. The observation h\mathcal{H}^{h} appears to be the most challenging among all the tested observational operators.

To visualize the assimilated states, we have plotted the assimilated vorticities with random observational ratios p{0.25,0.0625,0.01}p\in\{0.25,0.0625,0.01\} in Figure 8, Figure 9 and Figure 10 for e\mathcal{H}^{e}, h\mathcal{H}^{h} and 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}, respectively. For clarity, we only include the vorticities and the associated observations for the first layer, as they contain more small-scale details. For the e\mathcal{H}^{e} observation, our SOAD is capable of recovering most of the physical features until pp drops to 1%1\%, in which case the assimilated states still share much similarity with the ground truth. With the harder h\mathcal{H}^{h}, the SOAD model fails to reconstruct the real system states when p0.0625=(1/4)2p\leq 0.0625=(1/4)^{2}, consistent with the previous results shown in Figure 7(b). Surprisingly, the assimilated states with rare observations do not collapse and still follow certain dynamics, suggesting that the lack of observational data, rather than an inability to learn the dynamics, leads to the failure. Finally, when the observation is the vorticity-to-velocity mapping 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}, the assimilated states are closer to the ground truth even when observations are available at only 1%1\% of the grid points. We believe that this success may result from the linearity of 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}. The results indicate that our SOAD approach is stable when handling highly nonlinear observations in the long term and performs well on non-element-wise observations such as the vorticity-to-velocity mapping 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}, which is a promising sign for real applications.

(a) the easy observation e\mathcal{H}^{e}
(b) the hard observation h\mathcal{H}^{h}
(c) the vorticity-to-velocity 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}}
Figure 7: Averaged RMSEs of our SOAD without background prior.
Figure 8: Visualization of the assimilated vorticities by our SOAD model. The reference vorticities are attached at the first row for comparisons. The last column contains the observations from e\mathcal{H}^{e} with different random mask ratios used for assimilation for the last time step.
Figure 9: Visualization of the assimilated vorticities by our SOAD model. The reference vorticities are attached at the first row for comparisons. The last column contains the observations from h\mathcal{H}^{h} with different random mask ratios used for assimilation for the last time step.
Figure 10: Visualization of the assimilated vorticities by our SOAD model. The reference vorticities are attached at the first row for comparisons. The last column contains the uu-components from 𝗏𝟤𝗏\mathcal{H}^{\mathsf{v2v}} with different random mask ratios used for assimilation for the last time step.

4.4.3 Assimilation with multiple observations

In this subsection, we discuss another important feature of our SOAD approach: its ability to handle observations from different modalities. Such a capability is crucial for real applications, since observations are usually collected from various sources such as satellites, weather radars, and in-situ measurements, and each source may provide different physical variables in different formats. We have tested our SOAD approach with various combinations of the three observation operators $\mathcal{H}^{e}$, $\mathcal{H}^{h}$ and $\mathcal{H}^{\mathsf{v2v}}$. As in the previous subsection, we do not add any background prior at the first step in order to study the long-term behavior. The results are displayed in Figure 11.
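To make the multi-modal setting concrete, here is a minimal sketch of how observations from several operators could be merged into a single observation vector before conditioning. The function name, the NaN convention for missing entries, and the additive-noise handling are illustrative assumptions rather than the interface used in our code.

```python
import numpy as np

def stack_observations(x, operators, masks, noise_std=0.0, rng=None):
    """Illustrative multi-modal observation handling: each operator is applied
    independently, restricted to its own availability mask, and the pieces are
    concatenated into a single flat observation vector."""
    rng = np.random.default_rng() if rng is None else rng
    pieces = []
    for H, mask in zip(operators, masks):
        y = np.asarray(H(x), dtype=float)
        if noise_std > 0:
            y = y + noise_std * rng.standard_normal(y.shape)
        pieces.append(np.where(mask, y, np.nan).ravel())  # NaN marks "not observed"
    return np.concatenate(pieces)
```

In this picture, adding a new modality simply appends another operator and its availability mask; the assimilation step then conditions on the longer, partially observed vector.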

Refer to caption
(a) averaged RMSEs for collections containing $\mathcal{H}^{e}$
Refer to caption
(b) averaged RMSEs for collections containing $\mathcal{H}^{h}$
Refer to caption
(c) averaged RMSEs for collections containing $\mathcal{H}^{\mathsf{v2v}}$
Refer to caption
(d) the best RMSEs for different combinations of observations
Figure 11: RMSEs for multi-modal assimilation with our SOAD model. The error uncertainties introduced by different random seeds are marked at the top of each bar. Duplicate results may appear in multiple subplots for illustration purposes.

As the collection of observations expands, the uncertainties of the assimilation errors shrink, and the averaged RMSEs decrease as well thanks to the additional information inferred from the observations. This behavior shows that our SOAD approach follows the intuitive principle that more observations benefit the assimilation process. Figure 11(d) gathers the best performance for each observation collection when $s=p=1$, with overlapping areas marked by the corresponding observation combinations.

5 Conclusion

In this study, we have proposed a novel data assimilation approach based on the State-Observation Augmented Diffusion (SOAD) model, which does not rely on any traditional assimilation algorithm. Our SOAD model does not assume linearity for either the physical or the observational model, yet it can still recover the true posterior under mild conditions, potentially offering an advantage over most traditional algorithms. The flexibility and applicability of our SOAD have been demonstrated in various test settings on the two-layer quasi-geostrophic model, indicating its promise for future real-world applications.

One of the most significant features of our SOAD is the use of an augmented dynamical model. The augmented model is essentially a linearized equivalent form of the original assimilation task, which allows us to work with linear observational models even if the original problem is highly nonlinear. By using a score-based generative model as the backbone, our SOAD learns the prior distribution of the physical states. Additionally, a deterministic estimate of the time-dependent marginal likelihood, together with a forward-diffusion corrector, is proposed to help fuse the physical information embedded in the available observations with the state variables, which is crucial for the assimilation process.
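As a toy illustration of why the augmentation linearizes the observation step (hypothetical names and shapes, not the actual SOAD implementation), the sketch below appends the observables to the state; any subsequent measurement then reduces to selecting entries of the augmented variable, which is a linear operation no matter how nonlinear each operator is.

```python
import numpy as np

def augment(x, observation_ops):
    """Stack the state with its (possibly nonlinear) observables H_k(x)."""
    return np.concatenate([x.ravel()] + [H(x).ravel() for H in observation_ops])

def observe(z_aug, mask):
    """On the augmented variable, taking an observation is a plain selection
    of entries (a linear operator), regardless of the nonlinearity of each H_k."""
    return z_aug[mask]
```

Roughly speaking, the score-based model learns the joint prior over the augmented variable, so the conditioning step only ever sees a linear, masked measurement of it.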

Further improvements and extensions of our approach are also necessary. One possible direction is to analyze the convergence rates and conditions when training errors are taken into account. The stability of the assimilation process is another major concern, particularly in real-world applications. Moreover, the current SOAD approach has only been tested on the two-layer quasi-geostrophic model, so comprehensive tests on other idealized and real-world models are required to fully evaluate its effectiveness. Generalizing our SOAD approach to more complex tasks beyond assimilation could also be a promising direction for future research.

Data availability

All code for data generation, network training and the assimilation experiments is available at https://github.com/zylipku/SOAD.

References

  • [1] Ryan Abernathey, rochanotes, Andrew Ross, Malte Jansen, Ziwei Li, Francis J. Poulin, Navid C. Constantinou, Anirban Sinha, Dhruv Balwada, SalahKouhen, Spencer Jones, Cesar B Rocha, Christopher L. Pitt Wolfe, Chuizheng Meng, Hugo van Kemenade, James Bourbeau, James Penn, Julius Busecke, Mike Bueti, and Tobias. pyqg/pyqg: v0.7.2, may 2022.
  • [2] Maddalena Amendola, Rossella Arcucci, Laetitia Mottet, Cesar Quilodran Casas, Shiwei Fan, Christopher Pain, Paul Linden, and Yi-Ke Guo. Data assimilation in the latent space of a neural network. arXiv preprint arXiv:2012.12056, 2020.
  • [3] Brian D.O. Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313–326, 1982.
  • [4] Kaifeng Bi, Lingxi Xie, Hengheng Zhang, Xin Chen, Xiaotao Gu, and Qi Tian. Pangu-weather: A 3d high-resolution model for fast and accurate global weather forecast. arXiv preprint arXiv:2211.02556, 2022.
  • [5] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018.
  • [6] Ashesh Chattopadhyay, Ebrahim Nabizadeh, Eviatar Bach, and Pedram Hassanzadeh. Deep learning-enhanced ensemble-based data assimilation for high-dimensional nonlinear dynamical systems. Journal of Computational Physics, 477:111918, 2023.
  • [7] Kang Chen, Tao Han, Junchao Gong, Lei Bai, Fenghua Ling, Jing-Jia Luo, Xi Chen, Leiming Ma, Tianning Zhang, Rui Su, et al. Fengwu: Pushing the skillful global medium-range weather forecast beyond 10 days lead. arXiv preprint arXiv:2304.02948, 2023.
  • [8] Shengchao Chen, Guodong Long, Jing Jiang, Dikai Liu, and Chengqi Zhang. Foundation models for weather and climate data understanding: A comprehensive survey. arXiv preprint arXiv:2312.03014, 2023.
  • [9] Sibo Cheng, Jianhua Chen, Charitos Anastasiou, Panagiota Angeli, Omar K. Matar, Yi-Ke Guo, Christopher C. Pain, and Rossella Arcucci. Generalised latent assimilation in heterogeneous reduced spaces with machine learning surrogate models. J. Sci. Comput., 94(1), jan 2023.
  • [10] Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
  • [11] Hyungjin Chung, Jeongsol Kim, Michael Thompson Mccann, Marc Louis Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In International Conference on Learning Representations, 2023.
  • [12] A. M. Clayton, A. C. Lorenc, and D. M. Barker. Operational implementation of a hybrid ensemble/4d-var global data assimilation system at the met office. Quarterly Journal of the Royal Meteorological Society, 139(675):1445–1461, 2013.
  • [13] Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, and Anil A. Bharath. Generative adversarial networks: An overview. IEEE Signal Processing Magazine, 35(1):53–65, 2018.
  • [14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
  • [15] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780–8794, 2021.
  • [16] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
  • [17] Carl Doersch. Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908, 2016.
  • [18] A. J. Geer. Learning earth system models from observations: machine learning or data assimilation? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 379(2194):20200089, 2021.
  • [19] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Commun. ACM, 63(11):139–144, oct 2020.
  • [20] Mohamad Abed El Rahman Hammoud, Naila Raboudi, Edriss S. Titi, Omar Knio, and Ibrahim Hoteit. Data assimilation in chaotic systems using deep reinforcement learning, 2024.
  • [21] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arxiv:2006.11239, 2020.
  • [22] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
  • [23] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, nov 1997.
  • [24] Brian R Hunt, Eric J Kostelich, and Istvan Szunyogh. Efficient data assimilation for spatiotemporal chaos: A local ensemble transform kalman filter. Physica D: Nonlinear Phenomena, 230(1-2):112–126, 2007.
  • [25] Andrew H Jazwinski. Stochastic processes and filtering theory. Courier Corporation, 2007.
  • [26] Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. 1960.
  • [27] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
  • [28] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • [29] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015–4026, 2023.
  • [30] Ivan Kobyzev, Simon JD Prince, and Marcus A Brubaker. Normalizing flows: An introduction and review of current methods. IEEE transactions on pattern analysis and machine intelligence, 43(11):3964–3979, 2020.
  • [31] Rahul G. Krishnan, Uri Shalit, and David Sontag. Deep kalman filters, 2015.
  • [32] Remi Lam, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Alexander Pritzel, Suman Ravuri, Timo Ewalds, Ferran Alet, Zach Eaton-Rosen, et al. Graphcast: Learning skillful medium-range global weather forecasting. arXiv preprint arXiv:2212.12794, 2022.
  • [33] Zhuoyuan Li, Bin Dong, and Pingwen Zhang. Latent assimilation with implicit neural representations for unknown dynamics. Journal of Computational Physics, 506:112953, 2024.
  • [34] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, pages 10012–10022, 2021.
  • [35] Calvin Luo. Understanding diffusion models: A unified perspective. arXiv preprint arXiv:2208.11970, 2022.
  • [36] Xiangming Meng and Yoshiyuki Kabashima. Diffusion model based posterior sampling for noisy linear inverse problems. arXiv preprint arXiv:2211.12343, 2022.
  • [37] Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, et al. Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators. arXiv preprint arXiv:2202.11214, 2022.
  • [38] Suraj Pawar and Omer San. Equation-free surrogate modeling of geophysical flows at the intersection of machine learning and data assimilation. Journal of Advances in Modeling Earth Systems, 14(11):e2022MS003170, 2022.
  • [39] S. G. Penny, T. A. Smith, T.-C. Chen, J. A. Platt, H.-Y. Lin, M. Goodliff, and H. D. I. Abarbanel. Integrating recurrent neural networks with data assimilation for scalable data-driven state estimation. Journal of Advances in Modeling Earth Systems, 14(3):e2021MS002843, 2022.
  • [40] Mathis Peyron, Anthony Fillion, Selime Gürol, Victor Marchais, Serge Gratton, Pierre Boudier, and Gael Goret. Latent space data assimilation by using deep learning. Quarterly Journal of the Royal Meteorological Society, 147(740):3759–3777, 2021.
  • [41] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In International conference on machine learning, pages 1530–1538. PMLR, 2015.
  • [42] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022.
  • [43] François Rozet and Gilles Louppe. Score-based data assimilation. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
  • [44] François Rozet and Gilles Louppe. Score-based data assimilation for a two-layer quasi-geostrophic model. 2023.
  • [45] M. G. Schultz, C. Betancourt, B. Gong, F. Kleinert, M. Langguth, L. H. Leufen, A. Mozaffari, and S. Stadtler. Can deep learning beat numerical weather prediction? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 379(2194):20200097, 2021.
  • [46] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems, 28, 2015.
  • [47] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792, 2022.
  • [48] Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. Advances in neural information processing systems, 29, 2016.
  • [49] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
  • [50] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.
  • [51] Simo Särkkä and Arno Solin. Applied Stochastic Differential Equations. Institute of Mathematical Statistics Textbooks. Cambridge University Press, 2019.
  • [52] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
  • [53] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
  • [54] Pascal Vincent. A connection between score matching and denoising autoencoders. Neural computation, 23(7):1661–1674, 2011.
  • [55] Athanasios Voulodimos, Nikolaos Doulamis, Anastasios Doulamis, and Eftychios Protopapadakis. Deep learning for computer vision: A brief review. Computational Intelligence and Neuroscience, 2018(1):7068349, 2018.
  • [56] Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Wentao Zhang, Bin Cui, and Ming-Hsuan Yang. Diffusion models: A comprehensive survey of methods and applications. ACM Computing Surveys, 56(4):1–39, 2023.
  • [57] Yiyuan Yang, Ming Jin, Haomin Wen, Chaoli Zhang, Yuxuan Liang, Lintao Ma, Yi Wang, Chenghao Liu, Bin Yang, Zenglin Xu, et al. A survey on diffusion models for time series and spatio-temporal data. arXiv preprint arXiv:2404.18886, 2024.
  • [58] Qinsheng Zhang and Yongxin Chen. Fast sampling of diffusion models with exponential integrator. arXiv preprint arXiv:2204.13902, 2022.
  • [59] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
  • [60] Shujun Zhu, Bin Wang, Lin Zhang, Juanjuan Liu, Yongzhu Liu, Jiandong Gong, Shiming Xu, Yong Wang, Wenyu Huang, Li Liu, Yujun He, Xiangjun Wu, Bin Zhao, and Fajing Chen. A 4denvar-based ensemble four-dimensional variational (en4dvar) hybrid data assimilation system for global nwp: System description and primary tests. Journal of Advances in Modeling Earth Systems, 14(8):e2022MS003023, 2022. e2022MS003023 2022MS003023.

Appendix A Configurations of the two-layer quasi-geostrophic model

We list the parameters (as defaults in pyqg-jax, https://pyqg-jax.readthedocs.io/en/latest/reference.models.qg_model.html) that we used to generate the QG test dataset; an illustrative constructor call is given after the list.

  • Domain size: $10^{6}\times 10^{6}$;

  • Linear drag in lower layer: $r_{ek}=5.767\times 10^{-7}$;

  • Amplitude of the spectral spherical filter with parameter $23.6$ to obtain the small-scale dissipation;

  • Gravity: $g=9.81$;

  • Gradient of Coriolis parameter: $\beta=1.5\times 10^{-11}$;

  • Deformation radius: $r_{d}=1.5\times 10^{4}$;

  • Layer thickness: $H_{1}=500$, $H_{2}=2000$;

  • Upper/Lower layer flow: $U_{1}=0.025$, $U_{2}=0$.
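For readers who wish to regenerate a comparable dataset, the snippet below sketches how these values could be passed to the pyqg-jax model constructor. The keyword names follow the pyqg/pyqg-jax conventions as we understand them and are not guaranteed to match the exact signature; the documentation linked above is authoritative, and the grid resolution, time stepping, and output handling are omitted here.

```python
# Illustrative reconstruction of the configuration listed above (assumed API).
from pyqg_jax.qg_model import QGModel

model = QGModel(
    nx=64,             # illustrative grid resolution; set separately in the experiments
    L=1.0e6,           # domain size: 1e6 x 1e6
    rek=5.767e-7,      # linear drag in the lower layer
    filterfac=23.6,    # amplitude of the small-scale spectral filter
    g=9.81,            # gravity
    beta=1.5e-11,      # gradient of the Coriolis parameter
    rd=1.5e4,          # deformation radius
    delta=0.25,        # layer thickness ratio H1/H2 = 500/2000
    H1=500,            # upper-layer thickness (H2 = 2000 follows from delta)
    U1=0.025,          # upper-layer background flow
    U2=0.0,            # lower-layer background flow
)
```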

Readers may refer to pyqg (https://pyqg.readthedocs.io/en/latest/equations/notation_twolayer_model.html) for the detailed numerical stepper.