Recurrent Interpolants for Probabilistic Time Series Prediction
Abstract
Sequential models like recurrent neural networks and transformers have become standard for probabilistic multivariate time series forecasting across various domains. Despite their strengths, they struggle with capturing high-dimensional distributions and cross-feature dependencies. Recent work explores generative approaches using diffusion or flow-based models, extending to time series imputation and forecasting. However, scalability remains a challenge. This work proposes a novel method combining recurrent neural networks’ efficiency with diffusion models’ probabilistic modeling, based on stochastic interpolants and conditional generation with control features, offering insights for future developments in this dynamic field.
1 Introduction
Autoregressive models [Box et al., 2015], such as recurrent neural networks [Graves, 2013, Sutskever et al., 2014, Hochreiter and Schmidhuber, 1997] or transformer models [Vaswani et al., 2017], have been the go-to methods for neural time series forecasting. They are widely applied in finance, biological statistics, medicine, geophysical applications, etc., effectively showcasing their ability to capture short-term and long-term sequential dependencies [Morrill et al., 2021]. These methods can also provide an assessment of prediction uncertainty through probabilistic forecasting by incorporating specific parametric probabilistic models into the output layer of the neural network. For instance, a predictor can model a Gaussian distribution by predicting both its mean and covariance. However, the probabilistic output layer is confined to a simple probability family because the density needs to be parameterized by neural networks and the loss must be differentiable with respect to the neural network parameters.
To better capture sophisticated distributions in time series modeling and learn both the temporal and cross-feature dependencies, a common strategy involves exploring the generative modeling of time series using efficient distribution transportation plans, especially via diffusion or flow-based models. For example, recent works such as Li et al. [2020] propose using latent neural SDEs as latent states for modeling time series in a stochastic manner, while Spantini et al. [2022] summarize non-linear extensions of state space models using both deterministic and stochastic transformation plans. Tashiro et al. [2021], Biloš et al. [2023], Chen et al. [2023a], Miguel et al. [2022], Li et al. [2022] studied the application of diffusion models in probabilistic time series imputation and forecasting. In these approaches, the generative model is trained to learn the joint density of an entire multivariate time series window, consisting of a context window and the subsequent prediction window. During inference, the model performs conditional generation given only the context, similar to the inpainting task in computer vision [Song et al., 2021]. Compared to a recurrent model, whose size is proportional only to the number of features and not to the length of the time window, such generative model predictors may suffer from scalability issues because the model size depends on both the feature dimension and the window size. A more computationally friendly framework is needed for large-scale generative model-based time series prediction problems.
Generative modeling excels at modeling complicated high-dimensional distributions, but most models require learning a mapping from a noise distribution to the data distribution. If the generative procedure starts from an initial distribution proximate to the terminal data distribution, it can remarkably alleviate learning challenges, reduce inference complexity, and enhance the quality of generated samples, which is also supported by previous studies [Rubanova et al., 2019, Rasul et al., 2021a, b, Chen et al., 2023b, Deng et al., 2024a, b, Chen et al., 2024]. Time series data is typically continuous, and neighboring time points exhibit strong correlations, indicating that the distribution of future time points is close to that of the current time point.
These observations inspire the creation of a time series prediction model under the generative framework that maps between dependent data points: initiating the prediction of future time point’s distribution with the current time point is more straightforward and yields better quality; meanwhile, the longer temporal dependency is encoded by a recurrent neural network and the embedded history is passed to the generative model as the guidance of the prediction for the future time points. The new framework benefits from the efficient training and computation inherited from the recurrent neural network, while enjoying the high quality of probabilistic modeling empowered by the diffusion model.
Our contributions include:
- extending the theory of stochastic interpolants to a more general conditional generation framework with extra control features;
- adopting a conditional stochastic interpolants module for sequential modeling and multivariate probabilistic time series prediction tasks, which is computationally friendly and achieves high-quality modeling of the future time point's distribution.
2 Background
As we formalize probabilistic time series forecasting within the generative framework in Section 4, this section is dedicated to reviewing commonly used generative methods and their extensions for conditional generation. These models will serve as baseline models in subsequent sections. For a broader overview of time series forecasting problems, refer to Salinas et al. [2019], Alexandrov et al. [2020], and the references therein.
2.1 Denoising Diffusion Probabilistic Model (DDPM)
DDPM [Sohl-Dickstein et al., 2015, Ho et al., 2020] adds Gaussian noise to the observed data point at different scales, indexed by $n = 1, \dots, N$, such that the first noisy value $x^1$ is close to the clean data $x^0$, and the final value $x^N$ is indistinguishable from noise. The generative model learns to revert this process, allowing new points to be sampled starting from pure noise.
Following previous convention, we define $\alpha_n = 1 - \beta_n$ and $\bar\alpha_n = \prod_{i=1}^{n}\alpha_i$, with $\beta_n \in (0, 1)$ the noise level at step $n$. Then, when the transition kernel is Gaussian, $q(x^n \mid x^0)$ can be computed directly from $x^0$:

$$q(x^n \mid x^0) = \mathcal{N}\big(x^n;\ \sqrt{\bar\alpha_n}\,x^0,\ (1-\bar\alpha_n)\,I\big) \qquad (1)$$

The posterior distribution $q(x^{n-1} \mid x^n, x^0)$ is available in closed form:

$$q(x^{n-1} \mid x^n, x^0) = \mathcal{N}\big(x^{n-1};\ \tilde\mu_n(x^n, x^0),\ \tilde\beta_n I\big) \qquad (2)$$

where $\tilde\mu_n$ and $\tilde\beta_n$ depend on $x^n$, $x^0$, and the choice of $\beta$-scheduler. The generative model $p_\theta(x^{n-1} \mid x^n)$ approximates the reverse process. The actual model $\epsilon_\theta$ is usually reparameterized to predict the noise $\epsilon$ added to a clean data point $x^0$ from the noisy data point $x^n$. The loss function can be simply written as:

$$\mathcal{L}(\theta) = \mathbb{E}_{n,\,x^0,\,\epsilon}\Big[\big\|\epsilon - \epsilon_\theta(x^n, n)\big\|^2\Big], \qquad x^n = \sqrt{\bar\alpha_n}\,x^0 + \sqrt{1-\bar\alpha_n}\,\epsilon \qquad (3)$$

Sampling new data is performed by first sampling a point $x^N$ from pure noise and then gradually denoising it using the above model to get a sample from the data distribution via $N$ calls of the model [Ho et al., 2020].
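As an illustration of equations 1 and 3, here is a minimal sketch of the DDPM noising step and the noise-prediction loss; the linear β-schedule and the placeholder network `eps_net` are assumptions for the example, not the exact setup used in our experiments.

```python
import torch

N = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, N)       # assumed linear beta-schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # \bar{alpha}_n from equation 1

def ddpm_loss(eps_net, x0):
    """Equation 3: predict the injected noise from the noisy sample x^n."""
    n = torch.randint(0, N, (x0.shape[0],))             # a random step per sample
    a_bar = alpha_bars[n].unsqueeze(-1)
    eps = torch.randn_like(x0)
    xn = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # sample from q(x^n | x^0), equation 1
    return ((eps - eps_net(xn, n)) ** 2).mean()

# smoke test with a trivial, untrained "network"
print(ddpm_loss(lambda x, n: torch.zeros_like(x), torch.randn(8, 2)))
```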
2.2 Score-based Generative Model (SGM)
SGM [Song et al., 2021], like DDPM, considers a pair of forward and backward dynamics between $t = 0$ and $t = 1$:

$$\mathrm{d}x_t = f(x_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}W_t \qquad (4)$$

$$\mathrm{d}x_t = \big[f(x_t, t) - g(t)^2\,\nabla_x\log p_t(x_t)\big]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar W_t \qquad (5)$$

where $\nabla_x\log p_t(x_t)$ is the so-called score function. The forward process is usually scheduled as a simple process, such as Brownian motion or an Ornstein–Uhlenbeck process, which can transport the data distribution to a standard Gaussian distribution. The generative process is achieved by the backward process that walks from the Gaussian prior distribution to the data distribution of interest. Equation 5 thus gives a way to generate new points by starting at $x_1 \sim \mathcal{N}(0, I)$ and solving the SDE backward in time, giving $x_0$ as a sample from the data distribution. In practice, the only missing piece is obtaining the score. A standard approach is to approximate the score with a neural network $s_\theta$.
Since during training we have access to clean data, the conditional score $\nabla_{x_t}\log p_t(x_t \mid x_0)$ is available in closed form. The model learns to approximate the score from noisy data only, resulting in a loss function similar to Equation 3:

$$\mathcal{L}(\theta) = \mathbb{E}_{t,\,x_0,\,x_t \sim p_t(\cdot \mid x_0)}\Big[\lambda(t)\,\big\|s_\theta(x_t, t) - \nabla_{x_t}\log p_t(x_t \mid x_0)\big\|^2\Big] \qquad (6)$$
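A minimal sketch of the denoising score-matching loss in equation 6 for a simple variance-exploding forward process; the noise scales and the placeholder `score_net` are assumptions for the example.

```python
import torch

sigma_min, sigma_max = 0.01, 10.0      # assumed noise scales for a VE-style forward process

def sigma(t):
    return sigma_min * (sigma_max / sigma_min) ** t

def dsm_loss(score_net, x0):
    """Equation 6: regress onto the conditional score -eps/sigma(t), weighted by sigma(t)^2."""
    t = torch.rand(x0.shape[0], 1)
    eps = torch.randn_like(x0)
    xt = x0 + sigma(t) * eps                 # sample from p_t(x_t | x_0) = N(x_0, sigma(t)^2 I)
    target = -eps / sigma(t)                 # closed-form conditional score
    return (sigma(t) ** 2 * (score_net(xt, t) - target) ** 2).mean()

print(dsm_loss(lambda x, t: torch.zeros_like(x), torch.randn(8, 2)))
```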
2.3 Flow Matching (FM)
Flow matching [Lipman et al., 2023] constructs a probability path by learning the vector field that generates it. Given a data point $x_1$, the conditional probability path is denoted by $p_t(x \mid x_1)$ for $t \in [0, 1]$. We put constraints on $p_t$ such that $p_0(x \mid x_1) = \mathcal{N}(x; 0, I)$ and $p_1(x \mid x_1) = \mathcal{N}(x; x_1, \sigma_{\min}^2 I)$, with small $\sigma_{\min}$. That is, the distribution at $t = 0$ corresponds to the noise distribution and the distribution at $t = 1$ is centered around the data point with small variance.
Then there exists a conditional vector field $u_t(x \mid x_1)$ which generates $p_t(x \mid x_1)$. Our goal is to learn the vector field with a neural network $v_\theta$, which amounts to learning the generative process. This can be done by minimizing the flow matching objective:

$$\mathcal{L}(\theta) = \mathbb{E}_{t,\,x_1,\,x_t \sim p_t(\cdot \mid x_1)}\Big[\big\|v_\theta(x_t, t) - u_t(x_t \mid x_1)\big\|^2\Big] \qquad (7)$$

Going back to Equation 6, we notice that the two approaches have similarities. Flow matching differs in the path construction, and it learns the vector field directly instead of learning the score, potentially offering a more stable alternative.
One choice for the noising function transports the values into noise as a linear function of transport time:

$$p_t(x \mid x_1) = \mathcal{N}\big(x;\ t\,x_1,\ (1 - (1 - \sigma_{\min})\,t)^2 I\big) \qquad (8)$$

This probability path is generated by the following conditional vector field, which is available in closed form:

$$u_t(x \mid x_1) = \frac{x_1 - (1 - \sigma_{\min})\,x}{1 - (1 - \sigma_{\min})\,t} \qquad (9)$$

By learning the field with a neural network $v_\theta$, we can sample new points by drawing an initial value $x_0$ from the noise distribution and solving the ODE $\mathrm{d}x_t = v_\theta(x_t, t)\,\mathrm{d}t$ forward in time to obtain the new sample $x_1$.
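A minimal sketch of the conditional flow-matching loss in equation 7 with the linear path of equations 8 and 9, together with Euler ODE sampling; `v_net` is a placeholder vector-field network.

```python
import torch

sigma_min = 1e-4    # assumed small terminal standard deviation

def fm_loss(v_net, x1):
    """Equation 7 with the linear path of equation 8 (conditional field of equation 9)."""
    t = torch.rand(x1.shape[0], 1)
    x0 = torch.randn_like(x1)                           # noise end of the path
    xt = (1 - (1 - sigma_min) * t) * x0 + t * x1        # a sample from p_t(x | x1)
    ut = x1 - (1 - sigma_min) * x0                      # the conditional vector field at xt
    return ((v_net(xt, t) - ut) ** 2).mean()

@torch.no_grad()
def fm_sample(v_net, shape, steps=100):
    """Euler integration of dx/dt = v(x, t) from noise at t=0 to a sample at t=1."""
    x = torch.randn(shape)
    dt = 1.0 / steps
    for k in range(steps):
        t = torch.full((shape[0], 1), k * dt)
        x = x + v_net(x, t) * dt
    return x
```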
2.4 Stochastic Interpolants (SI)
Stochastic interpolants [Albergo et al., 2024] aim to model the dependent coupling between $x_0$ and $x_1$ with their joint density $\rho(x_0, x_1)$, and to establish two-way generative SDEs mapping from one data distribution to another. The method constructs a straightforward stochastic mapping from $t = 0$ to $t = 1$ given the values at the two ends $x_0$ and $x_1$, which provides a means of transport between the two marginal densities $\rho_0$ and $\rho_1$ while maintaining the dependency between $x_0$ and $x_1$:

$$x_t = \alpha(t)\,x_0 + \beta(t)\,x_1 + \gamma(t)\,z, \qquad z \sim \mathcal{N}(0, I), \qquad (10)$$

where $x_t \sim \rho(t, \cdot)$ and $\rho(t, \cdot)$ is the marginal density of $x_t$ at diffusion time $t$. Such a stochastic mapping is characterized by a pair of functions: the velocity function $b(t, x)$ and the score function $s(t, x)$:

$$b(t, x) = \mathbb{E}\big[\dot x_t \mid x_t = x\big] \qquad (11)$$

$$s(t, x) = \nabla\log\rho(t, x) \qquad (12)$$

The interpolant $x_t$, the velocity $b$, and the score $s$ satisfy the equalities below,

$$b(t, x) = \mathbb{E}\big[\dot\alpha(t)\,x_0 + \dot\beta(t)\,x_1 + \dot\gamma(t)\,z \mid x_t = x\big] \qquad (13)$$

$$s(t, x) = -\gamma(t)^{-1}\,\mathbb{E}\big[z \mid x_t = x\big] \qquad (14)$$

where $\alpha(t)$ and $\beta(t)$ schedule the deterministic interpolant. We set $\alpha(0) = \beta(1) = 1$ and $\alpha(1) = \beta(0) = 0$. $\gamma(t)$ schedules the variance of the stochastic component $z$. We set $\gamma(0) = \gamma(1) = 0$, so the two ends of the interpolant are fixed at $x_0$ and $x_1$. Figure 1 shows one example of the interpolant schedule.
[Figure 1: An example interpolant schedule, showing the deterministic weights α(t), β(t) and the noise scale γ(t) over diffusion time.]
The velocity function and the score function can be modeled by a rich family of functions, such as deep neural networks. The model is trained to match the above equalities by minimizing the mean squared error loss functions,

$$\mathcal{L}_b = \int_0^1 \mathbb{E}\Big[\big\|b_\theta(t, x_t) - \big(\dot\alpha(t)\,x_0 + \dot\beta(t)\,x_1 + \dot\gamma(t)\,z\big)\big\|^2\Big]\,\mathrm{d}t \qquad (15)$$

$$\mathcal{L}_s = \int_0^1 \mathbb{E}\Big[\big\|\gamma(t)\,s_\theta(t, x_t) + z\big\|^2\Big]\,\mathrm{d}t \qquad (16)$$
More details of training will be shown in section 4.
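As an illustration, here is a minimal sketch of the interpolant in equation 10 and the losses in equations 15 and 16, assuming the linear schedules α(t)=1−t, β(t)=t and a γ(t)=√(t(1−t)) noise schedule; these specific schedules and the networks `b_net`, `s_net` are placeholder choices, not necessarily the ones used in our experiments.

```python
import torch

def alpha(t):   return 1 - t                    # deterministic weights: alpha(0)=1, alpha(1)=0
def beta(t):    return t                        # beta(0)=0, beta(1)=1
def gamma(t):   return (t * (1 - t)).sqrt()     # assumed noise schedule with gamma(0)=gamma(1)=0

def d_alpha(t): return -torch.ones_like(t)
def d_beta(t):  return torch.ones_like(t)
def d_gamma(t): return (1 - 2 * t) / (2 * (t * (1 - t)).sqrt())

def si_losses(b_net, s_net, x0, x1, eps=1e-3):
    """Equations 15 and 16: match the velocity and the (rescaled) score of the interpolant."""
    t = eps + (1 - 2 * eps) * torch.rand(x0.shape[0], 1)    # stay away from the endpoints
    z = torch.randn_like(x0)
    xt = alpha(t) * x0 + beta(t) * x1 + gamma(t) * z        # interpolant, equation 10
    velocity = d_alpha(t) * x0 + d_beta(t) * x1 + d_gamma(t) * z
    loss_b = ((b_net(xt, t) - velocity) ** 2).mean()        # equation 15
    loss_s = ((gamma(t) * s_net(xt, t) + z) ** 2).mean()    # equation 16
    return loss_b, loss_s
```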
During inference, usually one side of the diffusion trajectory, at $t = 0$ or $t = 1$, is given, and the goal is to infer the sample distribution on the other side. The interpolant in equation 10 results in elegant forward and backward SDEs and corresponding Fokker–Planck equations, which offer convenient tools for inference. The SDEs are composed of $b$ and $s$, which are learned from the data. For any $\epsilon(t) \ge 0$, define the forward and backward SDEs

$$\mathrm{d}x_t = \big[b(t, x_t) + \epsilon(t)\,s(t, x_t)\big]\,\mathrm{d}t + \sqrt{2\epsilon(t)}\,\mathrm{d}W_t \qquad (17)$$

$$\mathrm{d}x_t = \big[b(t, x_t) - \epsilon(t)\,s(t, x_t)\big]\,\mathrm{d}t + \sqrt{2\epsilon(t)}\,\mathrm{d}\bar W_t \qquad (18)$$

where $\bar W_t$ is the backward Brownian motion. The SDEs satisfy the forward and backward Fokker–Planck equations,

$$\partial_t\rho(t, x) + \nabla\cdot\Big[\big(b(t, x) + \epsilon(t)\,s(t, x)\big)\,\rho(t, x)\Big] = \epsilon(t)\,\Delta\rho(t, x) \qquad (19)$$

$$\partial_t\rho(t, x) + \nabla\cdot\Big[\big(b(t, x) - \epsilon(t)\,s(t, x)\big)\,\rho(t, x)\Big] = -\epsilon(t)\,\Delta\rho(t, x) \qquad (20)$$

These properties imply that one can draw samples from the conditional density $\rho(x_1 \mid x_0)$ by following the forward SDE in equation 17 starting from $x_0$ at $t = 0$. One can also draw samples from the joint density $\rho(x_0, x_1)$ by initially drawing a sample $x_0 \sim \rho_0$ (if feasible, for example, picking one sample from the dataset), then using the forward SDE to generate samples at $t = 1$. The method guarantees that $x_1$ follows the marginal distribution $\rho_1$ and that the sample pair $(x_0, x_1)$ satisfies the joint density $\rho(x_0, x_1)$. Drawing samples using the backward SDE is similar: one can draw samples from $\rho(x_0 \mid x_1)$ and from the joint density as well. Details of inference will be shown in section 4.
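A minimal Euler–Maruyama sketch of the forward SDE in equation 17 (the backward SDE in equation 18 is analogous, with the sign of the score term flipped and time reversed); the constant noise level `eps_sde` and the endpoint truncation `t_min` are assumptions for the example.

```python
import torch

@torch.no_grad()
def si_forward_sample(b_net, s_net, x0, steps=100, eps_sde=1.0, t_min=1e-3):
    """Euler-Maruyama for equation 17: dx = [b + eps*s] dt + sqrt(2*eps) dW, from x0 toward t=1.

    The interval is truncated to [t_min, 1 - t_min] because the score can blow up
    as gamma(t) -> 0 at the endpoints.
    """
    x = x0.clone()
    dt = (1 - 2 * t_min) / steps
    for k in range(steps):
        t = torch.full((x.shape[0], 1), t_min + k * dt)
        drift = b_net(x, t) + eps_sde * s_net(x, t)
        x = x + drift * dt + (2 * eps_sde * dt) ** 0.5 * torch.randn_like(x)
    return x
```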
3 Conditional generation with extra features
All the aforementioned methods can be adapted for conditional generation with additional features. The conditions may range from simple categorical values [Song et al., 2021] to complex prompts involving multiple data types, including partial observations of a sample’s entries (e.g., image inpainting, time series imputation) [Tashiro et al., 2021, Song et al., 2021], images [Zheng et al., 2023, Rombach et al., 2022], text [Rombach et al., 2022, Zhang et al., ], etc. A commonly employed technique to handle diverse conditions is to integrate condition information through feature embedding, where the embedding is injected into various layers of neural networks [Song et al., 2021, Rombach et al., 2022]. For instance, conditional SGM can be trained with
$$\mathcal{L}(\theta) = \mathbb{E}_{t,\,(x_0, c),\,x_t \sim p_t(\cdot \mid x_0)}\Big[\lambda(t)\,\big\|s_\theta(x_t, t, c) - \nabla_{x_t}\log p_t(x_t \mid x_0)\big\|^2\Big] \qquad (21)$$

where the data is given by pairs $(x_0, c)$ of a sample $x_0$ and the corresponding condition $c$. This simple scheme showcases its effectiveness in various tasks, achieving state-of-the-art performance [Rombach et al., 2022, Zhang et al.].
Likewise, SI can be expanded for conditional generation by substituting the velocity function $b(t, x)$ and score function $s(t, x)$ with $b(t, x, c)$ and $s(t, x, c)$ [Albergo et al., 2024]. The model is trained using samples of tuples $(x_0, x_1, c)$, where $c$ is the extra condition feature. Consequently, the inference using forward or backward SDEs becomes

$$\mathrm{d}x_t = \big[b(t, x_t, c) + \epsilon(t)\,s(t, x_t, c)\big]\,\mathrm{d}t + \sqrt{2\epsilon(t)}\,\mathrm{d}W_t \qquad (22)$$

$$\mathrm{d}x_t = \big[b(t, x_t, c) - \epsilon(t)\,s(t, x_t, c)\big]\,\mathrm{d}t + \sqrt{2\epsilon(t)}\,\mathrm{d}\bar W_t \qquad (23)$$

where both the velocity and score functions depend on the condition $c$. The loss functions are similar to equation 15 and equation 16.
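As an illustration of how the condition $c$ can enter the networks, here is a minimal sketch of a conditional field $b(t, x, c)$ or $s(t, x, c)$ that concatenates the state, the diffusion time, and the condition embedding; this simple MLP is a placeholder rather than the architecture used in our experiments.

```python
import torch
import torch.nn as nn

class CondField(nn.Module):
    """A field b(t, x, c) or s(t, x, c): input = [x, t, c], output has the shape of x."""
    def __init__(self, x_dim, c_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 1 + c_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, x_dim),
        )

    def forward(self, x, t, c):
        # t has shape (batch, 1); c is the condition embedding
        return self.net(torch.cat([x, t, c], dim=-1))
```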
Regarding the time series prediction task, we will encode a large context window as the conditional information, and the prediction or generation of future time points will rely on such a conditional generation mechanism.
Next, we demonstrate that the probability distribution of $x_t$ as simulated by equation 24 results in a dynamic density function. This density serves as a solution to the transport equation 25, which smoothly transitions between $\rho_0(\cdot \mid c)$ and $\rho_1(\cdot \mid c)$.
Theorem 1.
(Extension of Stochastic Interpolants to Arbitrary Joint Distributions). Let $\rho(x_0, x_1 \mid c)$ be the joint distribution given the extra information $c$ and let the stochastic interpolant be

$$x_t = \alpha(t)\,x_0 + \beta(t)\,x_1 + \gamma(t)\,z, \qquad (x_0, x_1) \sim \rho(\cdot, \cdot \mid c),\ z \sim \mathcal{N}(0, I), \qquad (24)$$

where $\alpha(0) = \beta(1) = 1$, $\alpha(1) = \beta(0) = 0$, $\gamma(0) = \gamma(1) = 0$, and $\gamma(t) > 0$ for all $t \in (0, 1)$. We define $\rho(t, x \mid c)$ to be the noise-dependent density of $x_t$, which satisfies the boundary conditions $\rho(0, \cdot \mid c) = \rho_0(\cdot \mid c)$ and $\rho(1, \cdot \mid c) = \rho_1(\cdot \mid c)$ at $t = 0, 1$, and the transport equation follows that

$$\partial_t\rho(t, x \mid c) + \nabla\cdot\big(b(t, x, c)\,\rho(t, x \mid c)\big) = 0 \qquad (25)$$

for all $t \in [0, 1]$ with the velocity defined as

$$b(t, x, c) = \mathbb{E}\big[\dot\alpha(t)\,x_0 + \dot\beta(t)\,x_1 + \dot\gamma(t)\,z \mid x_t = x,\ c\big] \qquad (26)$$

where the expectation is based on the density $\rho(x_0, x_1 \mid c)$ and $z \sim \mathcal{N}(0, I)$, given $x_t = x$ and the extra information $c$.
The score function follows the relation $\nabla\log\rho(t, x \mid c) = -\gamma(t)^{-1}\,\mathbb{E}\big[z \mid x_t = x,\ c\big]$ for $t \in (0, 1)$.
The proof is similar in spirit to that of Theorem 2 in Albergo et al. [2024] and is detailed in section B. The key difference is that we consider a continuous-time interpretation and avoid using characteristic functions, which makes the analysis more accessible. Additionally, the score function is optimized with a simple quadratic objective, as indicated in Theorem 2 in the Appendix.
4 Stochastic interpolants for time series prediction
We formulate the multivariate probabilistic time series prediction task through the conditional probability $p\big(x_{t+1:t+\tau} \mid x_{t-T+1:t}\big)$ for some chosen context length $T$ and prediction length $\tau$. The model diagram is illustrated in Figure 2. Here, $x_t \in \mathbb{R}^D$ represents the multivariate time series at date-time index $t$ with $D$ variates. $x_{t-T+1:t}$ is the context window, and during training the subsequent prediction window $x_{t+1:t+\tau}$ is available, typically sampled randomly from within the train split of a dataset.
For this problem, we employ the conditional Stochastic Interpolants (SI) method as follows. In the training phase, the generative model learns the joint distribution of the pair of adjacent observations $(x_t, x_{t+1})$ given the past observations, where the interpolant endpoints are set to $x_0 := x_t$ and $x_1 := x_{t+1}$ for every time step $t$, so the two marginal distributions are equal. The model aims to learn the coupling relation between $x_t$ and $x_{t+1}$ conditioning on the history. This is achieved by training the conditional velocity and score functions in equation 22.
Table 1: Wasserstein distance between the generated samples and the true distribution on the synthetic 2D datasets (lower is better).

| Model | 8gaussians | Circles | Moons | Rings | Swissroll |
|---|---|---|---|---|---|
| DDPM (cosine) | 2.58 | 0.20 | 0.20 | 0.12 | 0.24 |
| DDPM (linear) | 0.70 | 0.18 | 0.12 | 0.11 | 0.14 |
| SGM | 1.10 | 0.30 | 0.35 | 0.32 | 0.14 |
| FM | 0.58 | 0.10 | 0.11 | 0.09 | 0.15 |
| SI (quad, linear) | 0.52 | 0.15 | 0.32 | 0.12 | 0.16 |
| SI (sqrt, linear) | 0.59 | 0.29 | 0.51 | 0.22 | 0.37 |
| SI (sqrt, trig) | 0.75 | 0.25 | 0.50 | 0.48 | 0.36 |
| SI (trig, linear) | 0.52 | 0.13 | 0.29 | 0.21 | 0.16 |
As the sample spaces of $x_0$ and $x_1$ must be the same, the generative model cannot directly map the whole context window to the target time point due to the different tensor sizes. Instead, a recurrent neural network is used to encode the context into a history prompt vector $h_t$. Subsequently, the score and velocity functions perform conditional generation diffusing from $x_t$ with the condition input $c = h_t$, following equation 22.
The training loss is computed over tuples $(x_t, x_{t+1}, h_t)$ for each time step $t$. It is worth noting that the loss values become large when the diffusion time is close to the two ends. To address this, importance sampling is leveraged to better handle the integral over diffusion time in the loss functions equation 15 and equation 16 and to stabilize training, where we use a Beta distribution as the proposal distribution. The algorithm is outlined in Algorithm 1, and a simplified sketch is given below. Additional details can be found in Appendix C.
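A simplified sketch of one training step in the spirit of Algorithm 1, using the same interpolant schedules and conditional field networks as in the earlier sketches; the GRU-style encoder, the assumed schedules, and the uniform (rather than importance-sampled) diffusion time are simplifications.

```python
import torch

def training_step(rnn, b_net, s_net, optimizer, window):
    """window: tensor of shape (batch, T+1, D) holding T+1 consecutive observations."""
    h, _ = rnn(window[:, :-1])              # h[:, t] encodes the history up to time t
    x0 = window[:, :-1]                     # previous observations -> interpolant end at s=0
    x1 = window[:, 1:]                      # next observations     -> interpolant end at s=1
    b, T, d = x0.shape
    x0, x1 = x0.reshape(b * T, d), x1.reshape(b * T, d)
    c = h.reshape(b * T, -1)                # history prompt used as the condition

    s = 1e-3 + (1 - 2e-3) * torch.rand(b * T, 1)            # diffusion time (uniform here)
    z = torch.randn_like(x0)
    alpha, beta, gamma = 1 - s, s, (s * (1 - s)).sqrt()     # assumed schedules (see above)
    d_gamma = (1 - 2 * s) / (2 * gamma)
    xs = alpha * x0 + beta * x1 + gamma * z                 # conditional interpolant

    velocity = -x0 + x1 + d_gamma * z                       # d/ds of the interpolant
    loss = ((b_net(xs, s, c) - velocity) ** 2).mean() \
         + ((gamma * s_net(xs, s, c) + z) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `rnn` could be, for example, `nn.GRU(D, 128, batch_first=True)`, and `b_net`, `s_net` two instances of the `CondField` sketch above.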
In the inference phase, the RNN first encodes the context into the history prompt $h_t$, then SI transports the last observed value $x_t$ to the target distribution of $x_{t+1}$ with the condition $h_t$, following the forward SDE. For multiple-step prediction, we recursively run the step-by-step prediction in an autoregressive manner, as outlined in Algorithm 2 and sketched below. By repeating the inference loop (in the batch dimension) we obtain empirical samples from the predicted distribution, which are used to quantify uncertainty.
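A corresponding sketch of the autoregressive inference loop in the spirit of Algorithm 2, again under the assumptions above (conditional networks with signature `b_net(x, t, c)`, a GRU-style encoder, and a fixed SDE noise level).

```python
import torch

@torch.no_grad()
def forecast(rnn, b_net, s_net, context, horizon, steps=50, eps_sde=1.0, t_min=1e-3):
    """context: (batch, T, D). Returns one sampled trajectory of shape (batch, horizon, D)."""
    x = context[:, -1]                              # last observed value: start of the SI transport
    h, state = rnn(context)                         # encode the whole context window
    preds = []
    for _ in range(horizon):
        c = h[:, -1]                                # history prompt for this prediction step
        dt = (1 - 2 * t_min) / steps
        for k in range(steps):                      # conditional forward SDE, equation 22
            t = torch.full((x.shape[0], 1), t_min + k * dt)
            drift = b_net(x, t, c) + eps_sde * s_net(x, t, c)
            x = x + drift * dt + (2 * eps_sde * dt) ** 0.5 * torch.randn_like(x)
        preds.append(x)
        h, state = rnn(x.unsqueeze(1), state)       # feed the prediction back into the RNN
    return torch.stack(preds, dim=1)
```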
5 Experiments
We first verify the method on synthetic datasets and then apply it to the time series forecasting tasks with real data.
Baseline models such as DDPM, SGM, FM, and SI all involve modeling field functions, whose inputs are the state vector (in the same space as the data samples), the diffusion time, and the condition embedding, and whose output is used to generate samples. The field functions correspond to the noise prediction function in DDPM, the score function in SGM, the vector field in FM, and the velocity and score functions in SI. To make a fair comparison between these models, we use the same neural networks for all of them. Details of the models are discussed in Appendix C.
Table 2: CRPS-sum (mean ± standard deviation) on the real-world datasets (lower is better).

| Model | Exchange rate | Solar | Traffic | Wiki |
|---|---|---|---|---|
| Vec-LSTM | 0.008±0.001 | 0.391±0.017 | 0.087±0.041 | 0.133±0.002 |
| DDPM | 0.009±0.004 | 0.359±0.061 | 0.058±0.014 | 0.084±0.023 |
| FM | 0.009±0.001 | 0.419±0.027 | 0.038±0.002 | 64.256±62.596 |
| SGM | 0.008±0.002 | 0.364±0.029 | 0.071±0.05 | 0.108±0.026 |
| SI | 0.007±0.001 | 0.359±0.06 | 0.083±0.005 | 0.080±0.007 |
5.1 Synthetic datasets
We synthesize several two-dimensional datasets with regular patterns, such as Circles, Moons, etc. Details can be found in Chen et al. [2022], Lipman et al. [2023] and their published code repositories. The models introduced in section 2 are compared to SI as baselines. For diffusion-like models, we implement DDPM with a linear or cosine noise scheduler. We use the synthetic datasets to determine a good range of hyperparameters, which is then reused in the later time series experiments, and to investigate the behavior of the models with respect to varying data sizes, model sizes, and training lengths.
To fairly compare the generation quality, all models are assigned to generate data in the same setting by mapping from a standard Gaussian to the target distribution. The neural networks and hyperparameters are also set the same, such as batch size, training epochs, etc. The generated samples from the different methods are shown in Figure 3. Table 1 measures the sample quality with the Wasserstein distance [Ramdas et al., 2017]. It shows that all the models can capture the true distribution. The same holds when we use different metrics such as the Sliced Wasserstein Distance (SWD) [Rabin et al., 2012] and Maximum Mean Discrepancy (MMD) [Gretton et al., 2012].
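For reference, a minimal NumPy sketch of a sliced Wasserstein distance between two sample clouds; this is our own simple illustration of the metric, not the exact evaluation code used for the tables.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=256, seed=0):
    """Average 1-D Wasserstein-2 distance over random projections (equal-sized point clouds)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_proj, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit-norm projection directions
    px, py = x @ dirs.T, y @ dirs.T                       # project both samples to 1-D
    px.sort(axis=0)
    py.sort(axis=0)                                       # 1-D optimal transport = sorted matching
    return float(np.sqrt(((px - py) ** 2).mean()))

# example: two Gaussian clouds, one shifted by 1 in every coordinate
print(sliced_wasserstein(np.random.randn(1000, 2), np.random.randn(1000, 2) + 1.0))
```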
We also test different schedulers for the stochastic interpolant model. For example, “SI (sqrt, linear)” means we use the square-root gamma-function and a linear interpolant; the other gamma-functions that we consider are a quadratic (quad) and a trigonometric (trig) schedule. We show that most of the gamma-interpolant combinations achieve good results in modeling the target distribution.
5.2 Multivariate probabilistic forecasting
In this section, we empirically verify that: 1) SI is a suitable generative module for prediction compared with baselines that use different generative methods under the same framework; and 2) the whole framework achieves competitive performance in time series forecasting.
Models.
The baseline models include DDPM-, SGM-, and FM-based generative models adopted for step-by-step (autoregressive) prediction. DDPM- and SGM-based models can only generate samples by transporting a Gaussian noise distribution to the data distribution, so we modify the framework by replacing the context time point with Gaussian noise, as shown in Figure 5. Flow matching easily fits into this framework by replacing the denoising objective with the flow matching objective; the modified framework is shown in Figure 2. We model the map from the previous time series observation to the next (forecasted) value, and we argue this is a more natural choice than mapping from noise at each prediction step. Finally, Vec-LSTM from Salinas et al. [2019] is compared as a pure recurrent neural network model whose probabilistic output layer is a multivariate Gaussian.
Setup.
The real-world time series datasets include Solar [Lai et al., 2018], Exchange [Lai et al., 2018], Traffic (https://archive.ics.uci.edu/ml/datasets/PEMS-SF), and Wiki (https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release/datasets), which have been commonly used for probabilistic forecasting tasks. We follow the preprocessing steps in Salinas et al. [2019]. The probabilistic forecasts are evaluated by the Continuous Ranked Probability Score (CRPS-sum) [Koochali et al., 2022], the normalized root mean square error computed via the median of the samples (NRMSE), and the point-metric normalized deviation (ND). The metrics are computed with the gluonts package [Alexandrov et al., 2020]. In all cases, smaller values indicate better performance.
Results.
The results for CRPS-sum are shown in Table 2. The results for the other metrics are consistent with CRPS-sum and are shown in Tables 4 and 5 in Appendix C. We outperform or match the other models on three out of four datasets; only on Traffic does the FM model achieve better performance. Note that on the Wiki data FM cannot capture the data distribution; we ran a search over flow matching hyperparameters without being able to obtain satisfying results. Therefore, we conclude that stochastic interpolants are a strong candidate for conditional generation, in particular for multivariate probabilistic forecasting. Compared to the RNN-based model Vec-LSTM, our model and other baselines such as SGM and DDPM achieve better performance, which implies that carefully modeling the probability distribution is critical for high-dimensional time series prediction. Figure 4 demonstrates the quality of the forecast on the Solar dataset. We can see that our model makes precise predictions and captures the uncertainty, even when the scale of the different dimensions varies considerably.
6 Conclusions
This study presents a method that merges the computational efficiency of recurrent neural networks with the high-quality probabilistic modeling of diffusion models, applied to probabilistic time series forecasting. Grounded in stochastic interpolants and an extended conditional generation framework with control features, the method is empirically evaluated on both synthetic and real datasets, showing compelling performance.
References
- Albergo et al. [2023] Michael S. Albergo, Nicholas M. Boffi, and Eric Vanden-Eijnden. Stochastic Interpolants: A Unifying Framework for Flows and Diffusions. In arXiv:2303.08797v3, 2023.
- Albergo et al. [2024] Michael Samuel Albergo, Mark Goldstein, Nicholas Matthew Boffi, Rajesh Ranganath, and Eric Vanden-Eijnden. Stochastic interpolants with data-dependent couplings. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview.net/forum?id=FFILRGD0jG.
- Alexandrov et al. [2020] Alexander Alexandrov, Konstantinos Benidis, Michael Bohlke-Schneider, Valentin Flunkert, Jan Gasthaus, Tim Januschowski, Danielle C. Maddix, Syama Rangapuram, David Salinas, Jasper Schulz, Lorenzo Stella, Ali Caner Türkmen, and Yuyang Wang. Gluonts: Probabilistic and neural time series modeling in python. Journal of Machine Learning Research, 21(116):1–6, 2020. URL http://jmlr.org/papers/v21/19-820.html.
- Biloš et al. [2023] Marin Biloš, Kashif Rasul, Anderson Schneider, Yuriy Nevmyvaka, and Stephan Günnemann. Modeling Temporal Data as Continuous Functions with Process Diffusion. In Proc. of the International Conference on Machine Learning (ICML), 2023.
- Box et al. [2015] George E. P. Box, Gwilym M. Jenkins, Gregory C. Reinsel, and Greta M. Ljung. Time Series Analysis: Forecasting and Control. WILEY, 2015.
- Chen et al. [2022] Tianrong Chen, Guan-Horng Liu, and Evangelos Theodorou. Likelihood training of schrödinger bridge using forward-backward SDEs theory. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=nioAdKCEdXB.
- Chen et al. [2024] Yifan Chen, Mark Goldstein, Mengjian Hua, Michael S. Albergo, Nicholas M. Boffi, and Eric Vanden-Eijnden. Probabilistic Forecasting with Stochastic Interpolants and Föllmer Processes. In Proc. of the International Conference on Machine Learning (ICML), 2024.
- Chen et al. [2023a] Yu Chen, Wei Deng, Shikai Fang, Fengpei Li, Nicole Tianjiao Yang, Yikai Zhang, Kashif Rasul, Shandian Zhe, Anderson Schneider, and Yuriy Nevmyvaka. Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation. In International Conference on Machine Learning (ICML), 2023a.
- Chen et al. [2023b] Zehua Chen, Guande He, Kaiwen Zheng, Xu Tan, and Jun Zhu. Schrodinger Bridges Beat Diffusion Models on Text-to-Speech Synthesis. arXiv preprint arXiv:2312.03491, 2023b.
- Chen et al. [2023c] Zonglei Chen, Minbo Ma, Tianrui Li, Hongjun Wang, and Chongshou Li. Long Sequence Time-Series Forecasting with Deep Learning: A Survey. In Information Fusion, 2023c.
- Deng et al. [2024a] Wei Deng, Yu Chen, Nicole Tianjiao Yang, Hengrong Du, Qi Feng, and Ricky T. Q. Chen. Reflected Schrödinger Bridge for Constrained Generative Modeling. In Proc. of the Conference on Uncertainty in Artificial Intelligence (UAI), 2024a.
- Deng et al. [2024b] Wei Deng, Weijian Luo, Yixin Tan, Marin Biloš, Yu Chen, Yuriy Nevmyvaka, and Ricky T. Q. Chen. Variational Schrödinger Diffusion Models. In Proc. of the International Conference on Machine Learning (ICML), 2024b.
- Graves [2013] Alex Graves. Generating Sequences with Recurrent Neural Networks. arXiv preprint arXiv:1308.0850, 2013.
- Gretton et al. [2012] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012.
- Ho et al. [2020] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
- Hochreiter and Schmidhuber [1997] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8), 1997.
- Koochali et al. [2022] Alireza Koochali, Peter Schichtel, Andreas Dengel, and Sheraz Ahmed. Random noise vs. state-of-the-art probabilistic forecasting methods: A case study on crps-sum discrimination ability. Applied Sciences, 12(10):5104, 2022.
- Lai et al. [2018] Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long-and short-term temporal patterns with deep neural networks. In The 41st international ACM SIGIR conference on research & development in information retrieval, pages 95–104, 2018.
- Li et al. [2020] Xuechen Li, Ting-Kam Leonard Wong, Ricky T. Q. Chen, and David Duvenaud. Scalable Gradients for Stochastic Differential Equations. In Proc. of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
- Li et al. [2022] Yan Li, Xinjiang Lu, Yaqing Wang, and Dejing Dou. Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
- Lipman et al. [2023] Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=PqvMRDCJT9t.
- Miguel et al. [2022] Juan Miguel Lopez Alcaraz and Nils Strodthoff. Diffusion-based Time Series Imputation and Forecasting with Structured State Space Models. Transactions on Machine Learning Research, 2022.
- Morrill et al. [2021] James Morrill, Cristopher Salvi, Patrick Kidger, James Foster, and Terry Lyons. Neural Rough Differential Equations for Long Time Series. In Proc. of the International Conference on Machine Learning (ICML), 2021.
- Rabin et al. [2012] Julien Rabin, Gabriel Peyré, Julie Delon, and Marc Bernot. Wasserstein barycenter and its application to texture mixing. In Scale Space and Variational Methods in Computer Vision: Third International Conference, SSVM 2011, Ein-Gedi, Israel, May 29–June 2, 2011, Revised Selected Papers 3, pages 435–446. Springer, 2012.
- Ramdas et al. [2017] Aaditya Ramdas, Nicolás García Trillos, and Marco Cuturi. On wasserstein two-sample testing and related families of nonparametric tests. Entropy, 19(2):47, 2017.
- Rasul et al. [2021a] Kashif Rasul, Calvin Seward, Ingmar Schuster, and Roland Vollgraf. Autoregressive Denoising Diffusion Models for Multivariate Probabilistic Time Series Forecasting. In International Conference on Machine Learning, pages 8857–8868. PMLR, 2021a.
- Rasul et al. [2021b] Kashif Rasul, Abdul-Saboor Sheikh, Ingmar Schuster, Urs Bergmann, and Roland Vollgraf. Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows. In Proc. of the International Conference on Learning Representation (ICLR), 2021b.
- Rombach et al. [2022] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022.
- Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer, 2015.
- Rubanova et al. [2019] Yulia Rubanova, Ricky T. Q. Chen, and David Duvenaud. Latent ODEs for Irregularly-Sampled Time Series. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
- Salinas et al. [2019] David Salinas, Michael Bohlke-Schneider, Laurent Callot, Roberto Medico, and Jan Gasthaus. High-dimensional Multivariate Forecasting with Low-rank Gaussian Copula Processes. Advances in neural information processing systems, 32, 2019.
- Shishkov [2023] Vladislav Shishkov. TimeDiffusion. https://github.com/timetoai/TimeDiffusion, 2023.
- Sohl-Dickstein et al. [2015] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. In International Conference on Machine Learning (ICML), 2015.
- Song et al. [2021] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations (ICLR), 2021.
- Spantini et al. [2022] Alessio Spantini, Ricardo Baptista, and Youssef Marzouk. Coupling Techniques for Nonlinear Ensemble Filtering. SIAM Review, 64(4), 2022.
- Sutskever et al. [2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to Sequence Learning with Neural Networks. In NIPS, 2014.
- Tashiro et al. [2021] Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
- Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
- Wen et al. [2023] Qingsong Wen, Tian Zhou, Chaoli Zhang, Weiqi Chen, Ziqing Ma, Junchi Yan, and Liang Sun. Transformers in Time Series: a Survey. Thirty-Second International Joint Conference on Artificial Intelligence, 2023.
- Zhang et al. [2023] Chenshuang Zhang, Chaoning Zhang, Mengchun Zhang, and In So Kweon. Text-to-Image Diffusion Models in Generative AI: A Survey. arXiv preprint arXiv:2303.07909, 2023.
- Zheng et al. [2023] Guangcong Zheng, Xianpan Zhou, Xuewei Li, Zhongang Qi, Ying Shan, and Xi Li. Layoutdiffusion: Controllable diffusion model for layout-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22490–22499, 2023.
Appendix A Related Works
A plethora of papers focus on autoregressive models, particularly transformer-based models. For a more comprehensive review, we refer to Wen et al. [2023], Chen et al. [2023c]. While our work does not aim to replace RNN- or transformer-based architectures, we emphasize that one of its main motivations is to develop a probabilistic module building upon these recent advancements. Due to limited resources, we did not extensively explore all underlying temporal architectures but instead selected relatively simple models as defaults.
The authors were aware of other diffusion-based probabilistic models, as highlighted in the introduction. Unlike our lightweight model, which models the transition between adjacent time points, these selected works model the entire time window, requiring both high memory and computational complexity. With our computation budget restricted to a 32 GB GPU device, effectively training these diffusion models on large datasets with hundreds of features is challenging.
Additionally, several relevant works are related to our idea. For instance, Rasul et al. [2021b] incorporates the DDPM structure, aligning with our DDPM baseline structure. During inference, the prediction diffuses from pure noise to the target distribution. TimeDiff [Shishkov, 2023] introduces two modifications to the established diffusion model: during training it mixes target and context data and it adds an AR model for more precise initial prediction. Both of these can be incorporated into our model as well.
The existing probabilistic forecasters model the distribution of the next value from scratch, meaning they start from a normal distribution in the case of normalizing flows and diffusion models, or output a parametric distribution in the case of transformers and deep AR models. We propose modeling the transformation between the previously observed value and the next value we want to predict. We believe this is a more natural way to forecast, which is reflected in the smaller number of solver steps required to reach the target distribution.
The second row (DDPM) in Table 2 is an exact implementation of Rasul et al. [2021b]. The results might differ from the originally reported ones due to a slightly different training setup, but all of our models share the same training parameters, so the ranking should remain the same. We also include ND-sum and NRMSE-sum in the appendix for completeness.
Discussion of Vec-LSTM baseline
In terms of the neural network architecture, we use a similar LSTM encoder. To be clear, however, Vec-LSTM [Salinas et al., 2019] and our SI framework are not the same, mainly because of the different ways of probabilistic modeling. Vec-LSTM assumes a multivariate Gaussian distribution for the time points, where the mean and covariance matrices are modeled using separate LSTMs. In particular, the covariance matrix is modeled through a low-rank structure $\Sigma = \mathrm{diag}(d) + V V^\top$, where the parameters $d$ and $V$ are functions of the latent variable from the LSTM. The SI framework does not explicitly model the output distribution in any parametric form. Instead, the latent variable from the RNN output is used as the condition variable to guide the diffusion model in equation 22. Thus, the RNN architectures in the two frameworks are not strictly comparable.
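For concreteness, here is a sketch of a low-rank-plus-diagonal Gaussian output layer of the kind used by Vec-LSTM-style models, built on torch.distributions.LowRankMultivariateNormal; the rank and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.distributions import LowRankMultivariateNormal

class LowRankGaussianHead(nn.Module):
    """Map an LSTM state h to a Gaussian N(mu, diag(d) + V V^T) over D-dimensional observations."""
    def __init__(self, hidden_dim, data_dim, rank=10):
        super().__init__()
        self.mu = nn.Linear(hidden_dim, data_dim)
        self.log_diag = nn.Linear(hidden_dim, data_dim)
        self.factor = nn.Linear(hidden_dim, data_dim * rank)
        self.data_dim, self.rank = data_dim, rank

    def forward(self, h):
        V = self.factor(h).reshape(*h.shape[:-1], self.data_dim, self.rank)
        return LowRankMultivariateNormal(
            loc=self.mu(h),
            cov_factor=V,
            cov_diag=self.log_diag(h).exp() + 1e-5,   # keep the diagonal strictly positive
        )
```

Training such a head maximizes `dist.log_prob(x)` of the observed values, whereas the SI framework never forms an explicit parametric output distribution.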
Appendix B Proof: Conditional Stochastic Interpolant
The proof is similar in spirit to that of Theorem 2 in Albergo et al. [2024]. The key difference is that we consider a continuous-time interpretation, which makes the analysis more accessible.
Proof [Proof of Theorem 1]
Given the conditional information $c$ and the interpolant simulated from equation 24, the density of the conditional stochastic interpolant in equation 24 follows that (where the index $t$ is the diffusion-time index and not a date-time index):

$$\rho(t, x \mid c) = \mathbb{E}\big[\delta\big(x - \alpha(t)\,x_0 - \beta(t)\,x_1 - \gamma(t)\,z\big) \,\big|\, c\big], \qquad (27)$$

where the expectation is taken over the density $\rho(x_0, x_1 \mid c)$ for $x_0$ and $x_1$, and over $z \sim \mathcal{N}(0, I)$.
We next show that equation 27 is the time-marginal density of the solution of a stochastic differential equation as follows,

$$\mathrm{d}x_t = \big(\dot\alpha(t)\,x_0 + \dot\beta(t)\,x_1\big)\,\mathrm{d}t + \sigma(t)\,\mathrm{d}W_t, \qquad x_{t=0} = x_0, \qquad (28)$$

where $(x_0, x_1) \sim \rho(\cdot, \cdot \mid c)$ and the diffusion coefficient $\sigma(t)$ is chosen such that $\int_0^t \sigma^2(s)\,\mathrm{d}s = \gamma^2(t)$.
To prove the above argument, we proceed to verify the drift and diffusion terms respectively:

- Drift: It is straightforward to verify the drift by taking the derivative of the conditional expectation $\mathbb{E}[x_t \mid x_0, x_1, c] = \alpha(t)\,x_0 + \beta(t)\,x_1$ with respect to $t$.
- Diffusion: For the diffusion term, the proof hinges on showing $\int_0^t \sigma(s)\,\mathrm{d}W_s \overset{d}{=} \gamma(t)\,z$, which boils down to proving that the stochastic integral has variance $\gamma^2(t)$. Note that $\int_0^t \sigma^2(s)\,\mathrm{d}s = \gamma^2(t) - \gamma^2(0) = \gamma^2(t)$. Invoking the Itô isometry, we have $\mathrm{Var}\big(\int_0^t \sigma(s)\,\mathrm{d}W_s\big) = \int_0^t \sigma^2(s)\,\mathrm{d}s = \gamma^2(t)$ (given $\gamma(0) = 0$). In other words, $\int_0^t \sigma(s)\,\mathrm{d}W_s$ is a normal random variable with mean 0 and variance $\gamma^2(t)$, which proves that equation 27 is the time-marginal density of the solution of the stochastic differential equation 28.

Define $\rho(t, x \mid x_0, x_1, c)$ as the density of $x_t$ given $(x_0, x_1, c)$; the Fokker-Planck equation associated with equation 28 follows that

$$\partial_t\rho(t, x \mid x_0, x_1, c) + \nabla\cdot\Big[\big(\dot\alpha(t)\,x_0 + \dot\beta(t)\,x_1\big)\,\rho(t, x \mid x_0, x_1, c)\Big] = \tfrac{1}{2}\sigma^2(t)\,\Delta\rho(t, x \mid x_0, x_1, c), \qquad (29)$$

where $\tfrac{1}{2}\sigma^2(t) = \gamma(t)\,\dot\gamma(t)$.
Further setting $\Delta\rho = \nabla\cdot(\nabla\rho)$ and rewriting $\nabla\rho = \rho\,\nabla\log\rho$, we have $\tfrac{1}{2}\sigma^2(t)\,\Delta\rho = \gamma(t)\,\dot\gamma(t)\,\nabla\cdot\big(\rho\,\nabla\log\rho\big)$.
Further define $\rho(t, x \mid c) = \mathbb{E}_{(x_0, x_1)\sim\rho(\cdot,\cdot\mid c)}\big[\rho(t, x \mid x_0, x_1, c)\big]$, where $\rho(t, x \mid x_0, x_1, c)$ is Gaussian with mean $\alpha(t)\,x_0 + \beta(t)\,x_1$ and covariance $\gamma^2(t)\,I$, so that this average coincides with equation 27. We have that

$$\partial_t\rho(t, x \mid c) = -\nabla\cdot\Big(\mathbb{E}\big[\dot\alpha(t)\,x_0 + \dot\beta(t)\,x_1 + \dot\gamma(t)\,z \,\big|\, x_t = x,\ c\big]\,\rho(t, x \mid c)\Big) = -\nabla\cdot\big(b(t, x, c)\,\rho(t, x \mid c)\big),$$

where the first equality follows by averaging equation 29 over $(x_0, x_1)$, or equivalently by taking the derivative of equation 27 with respect to the index $t$, and the last one follows by the definition of $b$ in equation 26. This is the transport equation 25.
We also observe that $\nabla\log\rho(t, x \mid c) = -\gamma(t)^{-1}\,\mathbb{E}\big[z \mid x_t = x,\ c\big]$, which gives the score relation in Theorem 1.
∎
Theorem 2.
The loss functions used for estimating the velocity field and the score follow that

$$\mathcal{L}_b(\hat b) = \int_0^1 \mathbb{E}\Big[\big\|\hat b(t, x_t, c) - \big(\dot\alpha(t)\,x_0 + \dot\beta(t)\,x_1 + \dot\gamma(t)\,z\big)\big\|^2\Big]\,\mathrm{d}t, \qquad \mathcal{L}_s(\hat s) = \int_0^1 \mathbb{E}\Big[\big\|\gamma(t)\,\hat s(t, x_t, c) + z\big\|^2\Big]\,\mathrm{d}t,$$

and are minimized by the velocity $b$ in equation 26 and the score relation in Theorem 1, respectively, where $x_t = \alpha(t)\,x_0 + \beta(t)\,x_1 + \gamma(t)\,z$ and the expectation is taken over the density $\rho(x_0, x_1 \mid c)$ for $x_0$ and $x_1$, and over $z \sim \mathcal{N}(0, I)$.
Proof To show the loss is effective for estimating $b$, it suffices to show

$$\mathbb{E}\Big[\big\|\hat b(t, x_t, c) - \big(\dot\alpha(t)\,x_0 + \dot\beta(t)\,x_1 + \dot\gamma(t)\,z\big)\big\|^2\Big] = \mathbb{E}\Big[\big\|\hat b(t, x_t, c) - b(t, x_t, c)\big\|^2\Big] + \text{const},$$

where the equality follows because, by definition, $b(t, x, c)$ is the conditional expectation of $\dot\alpha(t)\,x_0 + \dot\beta(t)\,x_1 + \dot\gamma(t)\,z$ given $x_t = x$ and $c$, so the cross term vanishes. The unique minimizer is attainable by setting $\hat b = b$.
The proof for the score loss follows in a similar fashion.
Appendix C Experiment details
C.1 Time series data
The time series datasets include: Solar [Lai et al., 2018], Exchange [Lai et al., 2018], Traffic (https://archive.ics.uci.edu/ml/datasets/PEMS-SF), and Wikipedia (https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release/datasets). We follow the preprocessing steps as in Salinas et al. [2019]. Details of the datasets are listed in Table 3.
Datasets | Dimension | Frequency | Total time points | Prediction length |
---|---|---|---|---|
Exchange | 8 | Daily | 6,071 | 30 |
Solar | 137 | Hourly | 7,009 | 24 |
Traffic | 963 | Hourly | 4,001 | 24 |
Wiki | 2000 | Daily | 792 | 30 |
The probabilistic forecasting is evaluated by Continuous Ranked Probability Score (CRPS-sum) [Koochali et al., 2022], normalized root mean square error via the median of the samples (NRMSE), and point-metrics normalized deviance (ND). The metrics calculation is provided by gluonts package [Alexandrov et al., 2020] by calling module gluonts.evaluation.MultivariateEvaluator.
C.2 Models and hyperparameters
Baseline models such as DDPM, SGM, FM, and SI all involve modeling field functions, where the inputs are the state vector (in the same space as the data samples), diffusion time, and condition embedding, and the output is the generated sample. The field functions correspond to the “noise prediction” function in DDPM; the score function in SGM; the vector field in FM; the velocity and score functions in SI. To make a fair comparison between these models, we use the same neural networks for these models.
In the synthetic dataset experiments, we model the field functions with a 4-layer ResNet, where each layer has 256 hidden units. All models use the same batch size of 10,000 and the same learning rate, and each model is trained for 20,000 iterations.
In the time series forecasting experiments, the RNN history encoder has 1 layer and 128 latent dimensions. The field function is modeled with a U-Net-like structure [Ronneberger et al., 2015] with 8 residual blocks, each with 64 dimensions. To stabilize training, we also use the paired sampling scheme for stochastic interpolants introduced by Albergo et al. [2023, Appendix C].
The baseline models are trained for 200 epochs with a batch size of 64; the SI model is trained for 100 epochs with a batch size of 128. We find that if the learning rate is too large, SI may not converge properly.
C.3 Importance sampling
The loss functions for training the conditional velocity and score functions are

$$\mathcal{L}_b = \int_0^1 \mathbb{E}\Big[\big\|b_\theta(t, x_t, c) - \big(\dot\alpha(t)\,x_0 + \dot\beta(t)\,x_1 + \dot\gamma(t)\,z\big)\big\|^2\Big]\,\mathrm{d}t, \qquad \mathcal{L}_s = \int_0^1 \mathbb{E}\Big[\big\|\gamma(t)\,s_\theta(t, x_t, c) + z\big\|^2\Big]\,\mathrm{d}t. \qquad (30)$$

Both loss functions involve an integral over diffusion time of the form

$$\mathcal{L} = \int_0^1 \ell(t)\,\mathrm{d}t, \qquad (31)$$

where $\ell(t)$ denotes the expected per-time loss.
However, the Monte Carlo estimate of the loss has a large variance, especially when $t$ is near 0 or 1. Figure 6 shows an example of the distribution of $\ell(t)$ across multiple $t$. The large variance slows down the convergence of training. To overcome this issue, we apply importance sampling, a technique similar to the one used by Song et al. [2021, Sec. 5.1], to stabilize training. Instead of drawing the diffusion time from a uniform distribution, importance sampling considers

$$\mathcal{L} = \int_0^1 \ell(t)\,\mathrm{d}t = \mathbb{E}_{t\sim q}\Big[\frac{\ell(t)}{q(t)}\Big], \qquad (32)$$

where $q$ is a proposal distribution on $[0, 1]$.
Ideally, one wants to keep $\ell(t)/q(t)$ as constant as possible so that the variance of the estimate is minimal. The loss value is very large when $t$ is close to 0 or 1, is relatively flat in the middle, and the domain of $t$ is $[0, 1]$, so we choose a Beta distribution as the proposal distribution $q$. As shown in Figure 6, after reweighting, the values of $\ell(t)/q(t)$ become concentrated in a small range.
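A sketch of the importance-sampled time integral in equation 32 with a Beta proposal; the Beta(0.5, 0.5) parameters and the toy loss in the example are illustrative assumptions, not the values used in our experiments.

```python
import torch
from torch.distributions import Beta

proposal = Beta(torch.tensor(0.5), torch.tensor(0.5))   # U-shaped: more mass near t=0 and t=1

def importance_sampled_loss(per_time_loss, batch_size):
    """Estimate the integral in equation 32 as E_{t~q}[ l(t) / q(t) ] with a Beta proposal q."""
    t = proposal.sample((batch_size, 1)).clamp(1e-3, 1 - 1e-3)   # avoid the exact endpoints
    weight = 1.0 / proposal.log_prob(t).exp()                    # 1 / q(t)
    return (weight * per_time_loss(t)).mean()

# toy check: for l(t) = 1/sqrt(t(1-t)) the integral is pi, and l(t)/q(t) is exactly constant
print(importance_sampled_loss(lambda t: 1.0 / (t * (1 - t)).sqrt(), 4096))
```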
[Figure 6: Distribution of the per-diffusion-time loss values, before and after reweighting with the Beta proposal.]
C.4 Additional forecasting results
Table 4: ND-sum (mean ± standard deviation) on the real-world datasets (lower is better).

| Model | Exchange rate | Solar | Traffic | Wiki |
|---|---|---|---|---|
| DDPM | 0.011±0.004 | 0.377±0.061 | 0.064±0.014 | 0.093±0.023 |
| FM | 0.011±0.001 | 0.445±0.031 | 0.041±0.002 | 80.624±89.804 |
| SGM | 0.01±0.002 | 0.388±0.026 | 0.08±0.053 | 0.122±0.026 |
| SI | 0.008±0.002 | 0.399±0.065 | 0.089±0.006 | 0.091±0.011 |
Table 5: NRMSE-sum (mean ± standard deviation) on the real-world datasets (lower is better).

| Model | Exchange rate | Solar | Traffic | Wiki |
|---|---|---|---|---|
| DDPM | 0.013±0.005 | 0.72±0.08 | 0.094±0.029 | 0.123±0.026 |
| FM | 0.014±0.002 | 0.849±0.072 | 0.059±0.007 | 165.128±147.682 |
| SGM | 0.019±0.004 | 0.76±0.066 | 0.109±0.064 | 0.164±0.03 |
| SI | 0.01±0.003 | 0.722±0.132 | 0.127±0.003 | 0.117±0.011 |
C.5 Baseline Model using Unconditional SI
Additionally, we conducted an experiment to verify the necessity of conditional SI over unconditional SI. The unconditional (vanilla) SI diffuses from pure noise and does not use the previous time point as the prior; the context for the prediction is provided exclusively by the RNN encoder. The results are shown in the following tables. Compared with the conditional SI framework, the unconditional model shows slightly inferior performance.
CRPS-sum (mean ± standard deviation): conditional SI vs. unconditional (vanilla) SI.

| Model | Exchange rate | Solar | Traffic |
|---|---|---|---|
| SI | 0.007±0.001 | 0.359±0.06 | 0.083±0.005 |
| Vanilla SI | 0.010±0.001 | 0.383±0.010 | 0.082±0.006 |
ND-sum (mean ± standard deviation): conditional SI vs. unconditional (vanilla) SI.

| Model | Exchange rate | Solar | Traffic |
|---|---|---|---|
| SI | 0.008±0.002 | 0.399±0.065 | 0.089±0.006 |
| Vanilla SI | 0.010±0.003 | 0.430±0.113 | 0.093±0.007 |
NRMSE-sum (mean ± standard deviation): conditional SI vs. unconditional (vanilla) SI.

| Model | Exchange rate | Solar | Traffic |
|---|---|---|---|
| SI | 0.010±0.003 | 0.722±0.132 | 0.127±0.003 |
| Vanilla SI | 0.012±0.003 | 0.815±0.135 | 0.132±0.015 |