
Learning Stochastic Dynamics from Data

Ziheng Guo Department of Applied Mathematics, Illinois Institute of Technology Igor Cialenco Department of Applied Mathematics, Illinois Institute of Technology Ming Zhong Department of Applied Mathematics, Illinois Institute of Technology
Abstract

We present a noise-guided, trajectory-based system identification method for inferring the dynamical structure of systems governed by stochastic differential equations from observed trajectories. Our method can handle various kinds of noise, including the case in which the components of the noise are correlated. Our method can also learn the noise level and the drift term jointly from trajectory data. We present various numerical tests showcasing the superior performance of our learning algorithm.

1 Introduction

The Stochastic Differential Equation (SDE) is a fundamental modeling tool in various science and engineering fields Evans (2013); Särkkä & Solin (2019). Compared with traditional deterministic models, which often fall short in capturing the stochasticity in nature, the significance of SDEs lies in their ability to model complex systems influenced by random perturbations. Hence, SDEs provide insight into the behavior of such systems under uncertainty. By incorporating a random component, typically through a Brownian motion, SDEs provide a more realistic and flexible framework for simulating and predicting the behavior of these complex and dynamic systems.

We consider models governed by an SDE of the following form, $d\mathbf{x}_t = \bm{f}(\mathbf{x}_t)\,dt + d\mathbf{w}_t$, where the state vector $\mathbf{x}_t \in \mathbb{R}^d$, the drift term $\bm{f}: \mathbb{R}^d \rightarrow \mathbb{R}^d$, and the stochastic noise $\mathbf{w}_t$ is a Brownian motion with a covariance matrix $\bm{D}(\bm{x})$ which also depends on the state. In the field of mathematical finance, several widely used models are of this type, for example, the Black-Scholes model Black & Scholes (1973) for option pricing, the Vasicek model Vasicek (1977) for analyzing interest rate dynamics, and the Heston model Heston (1993) for volatility studies. Apart from finance, SDEs find applications in physics Coffey et al. (2004); Ebeling et al. (2008) and biology Székely & Burrage (2014); Dingli & Pacheco (2011), where they are used to study the evolution of particles subject to random forces and to model physical and biological systems.

The calibration of these models is crucial for their effective application in areas like derivative pricing and asset allocation in finance, as well as in the analysis of particle systems in quantum mechanics and the study of biological system behaviors. This requires the use of diverse statistical and mathematical techniques to ensure that the models' outputs align with empirical data, thereby enhancing their predictive and explanatory power. Since these SDE models have explicit functional forms for both the drift and diffusion terms, the calibration or estimation of parameters is mostly done by minimizing the least squares error between observations and model predictions Mrázek & Pospíšil (2017); Abu-Mostafa (2001). SDEs have also been studied in more general settings where the drift term does not have an explicit form but instead satisfies $\bm{f} = \bm{f}(\mathbf{x}_t, \theta)$, with $\theta$ a vector of unknown parameters. Such models can be estimated with the maximum likelihood approach presented in Phillips (1972). Another approach is to obtain the likelihood function via the Radon-Nikodym derivative over the whole observation $\{\mathbf{x}_t\}_{t=0}^{T}$. Inspired by Theorem 7.4 in Särkkä & Solin (2019) and the Girsanov theorem, we derive a likelihood function that incorporates state-dependent correlated noise. By utilizing this likelihood function, we are able to capture the essential structure of $\bm{f}$ from data with a complex noise structure.

The remainder of the paper is structured as follows: section 2 outlines the framework we use to learn the drift term and the noise level (when it is not state dependent); we demonstrate the effectiveness of our approach on various test cases summarized in section 4, with additional examples presented in Appendix B; we conclude in section 5 with a few pointers to ongoing and future developments.

2 Learning Framework

We consider the following SDE

d\mathbf{x}_t = \bm{f}(\mathbf{x}_t)\,dt + d\mathbf{w}_t, \quad \mathbf{x}_t, \mathbf{w}_t \in \mathbb{R}^d,   (1)

where $\bm{f}: \mathbb{R}^d \rightarrow \mathbb{R}^d$ is the drift term, and $\mathbf{w}$ represents Brownian noise with a symmetric positive definite covariance matrix $\bm{D} = \bm{D}(\bm{x}): \mathbb{R}^d \rightarrow \mathbb{R}^{d \times d}$.

We consider the scenario in which we are given continuous observation data in the form $\{\mathbf{x}_t, d\mathbf{x}_t\}_{t \in [0,T]}$ with $\mathbf{x}_0 \sim \mu_0$. We seek the minimizer of the following loss functional

\mathcal{E}_{\mathcal{H}}(\tilde{\bm{f}})=\mathbb{E}_{\mathbf{x}_0\sim\mu_0}\Big[\frac{1}{2T}\Big(\int_{0}^{T}\langle\tilde{\bm{f}}(\mathbf{x}_t),\,\bm{D}^{-1}(\mathbf{x}_t)\tilde{\bm{f}}(\mathbf{x}_t)\rangle\,dt-2\int_{0}^{T}\langle\tilde{\bm{f}}(\mathbf{x}_t),\,\bm{D}^{-1}(\mathbf{x}_t)\,d\mathbf{x}_t\rangle\Big)\Big],   (2)

for $\tilde{\bm{f}} \in \mathcal{H}$; the function space $\mathcal{H}$ is designed to be convex and compact, and it is also data-driven. This loss functional is derived from the Girsanov theorem, inspired by Theorem 7.4 of Särkkä & Solin (2019).
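
As a concrete illustration, the sketch below (our own minimal Python example, not the authors' implementation) evaluates a discretized version of equation 2 on sampled trajectories, assuming observations on a uniform time grid, a candidate drift f_tilde, and a user-supplied covariance function D; the discretization actually used in the paper is spelled out in Appendix A.

```python
import numpy as np

def discretized_loss(f_tilde, D, X, dt):
    """Riemann-sum / Monte Carlo approximation of the loss in equation 2.

    f_tilde : callable mapping (M, d) states to (M, d) drift values
    D       : callable mapping (M, d) states to (M, d, d) covariance matrices
    X       : array of shape (M, L, d), M trajectories observed at L time steps
    dt      : uniform time step, so that T = (L - 1) * dt
    """
    M, L, d = X.shape
    T = (L - 1) * dt
    total = 0.0
    for l in range(L - 1):
        x = X[:, l, :]                                           # states at time t_l
        dx = X[:, l + 1, :] - x                                  # increments dx_t
        f = f_tilde(x)                                           # candidate drift values
        Dinv_f = np.linalg.solve(D(x), f[..., None])[..., 0]     # D^{-1} f
        Dinv_dx = np.linalg.solve(D(x), dx[..., None])[..., 0]   # D^{-1} dx
        quad = np.sum(f * Dinv_f, axis=1) * dt                   # <f, D^{-1} f> dt
        lin = np.sum(f * Dinv_dx, axis=1)                        # <f, D^{-1} dx>
        total += np.mean(quad - 2.0 * lin)                       # average over trajectories
    return total / (2.0 * T)
```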

In the case of uncorrelated noise, i.e. $\bm{D}(\bm{x}) = \sigma^2(\bm{x})\mathbf{I}$, where $\mathbf{I}$ is the $d \times d$ identity matrix and $\sigma: \mathbb{R}^d \rightarrow \mathbb{R}^+$ is a scalar function of the state representing the noise level, equation 2 simplifies to

\mathcal{E}_{\mathcal{H}}^{\text{Sim}}(\tilde{\bm{f}})=\mathbb{E}_{\mathbf{x}_0\sim\mu_0}\Big[\frac{1}{2T}\Big(\int_{0}^{T}\frac{\|\tilde{\bm{f}}(\mathbf{x}_t)\|^2}{\sigma^2(\mathbf{x}_t)}\,dt-2\int_{0}^{T}\frac{\langle\tilde{\bm{f}}(\mathbf{x}_t),\,d\mathbf{x}_t\rangle}{\sigma^2(\mathbf{x}_t)}\Big)\Big].   (3)

Moreover, we provide three different performance measures for our estimated drift. First, if we have access to the original drift function $\bm{f}$, we compute the difference between $\hat{\bm{f}}$ (our estimator) and $\bm{f}$ in the following norm

\|\bm{f}-\hat{\bm{f}}\|_{L^2(\rho)}^2=\frac{1}{T}\int_{\mathbf{x}\in\Omega}\|\bm{f}(\mathbf{x})-\hat{\bm{f}}(\mathbf{x})\|_{\ell^2(\mathbb{R}^d)}^2\,d\rho(\mathbf{x}).   (4)

Here the weighting measure $\rho$ is defined on $\Omega$, the region of $\mathbf{x}$ explored by the dynamics of equation 1; $\rho$ is given as follows

\rho(\mathbf{x})=\mathbb{E}_{\mathbf{x}_0\sim\mu_0}\Big[\frac{1}{T}\int_{0}^{T}\delta_{\mathbf{x}_t}(\mathbf{x})\,dt\Big],\quad\text{where }\mathbf{x}_t\text{ evolves from }\mathbf{x}_0\text{ by equation 1}.   (5)
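
In practice $\rho$ can be approximated by the empirical measure of all observed states, so equation 4 reduces to an average over trajectory samples; the following sketch (ours, with assumed array shapes) computes the relative $L^2(\rho)$ error this way.

```python
import numpy as np

def relative_l2_rho_error(f_true, f_hat, X):
    """Relative L^2(rho) error of equation 4, with rho replaced by the
    empirical measure of all observed states.

    f_true, f_hat : callables mapping (N, d) states to (N, d) drift values
    X             : array of shape (M, L, d) of observed trajectories
    """
    states = X.reshape(-1, X.shape[-1])           # pool all observed states
    diff = f_true(states) - f_hat(states)         # pointwise drift error
    num = np.mean(np.sum(diff**2, axis=1))        # ||f - f_hat||_{L^2(rho)}^2
    den = np.mean(np.sum(f_true(states)**2, axis=1))
    return np.sqrt(num / den)
```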

The norm given by equation 4 is useful mainly from a theoretical perspective, i.e. for showing convergence. In real-life situations, $\bm{f}$ is most likely inaccessible. Hence we also consider a performance measure that compares $\mathbf{X}(\bm{f}, \mathbf{x}_0, T) = \{\mathbf{x}_t\}_{t \in [0,T]}$ (the observed trajectory evolving from $\mathbf{x}_0 \sim \mu_0$ under the unknown $\bm{f}$) with $\hat{\mathbf{X}}(\hat{\bm{f}}, \mathbf{x}_0, T) = \{\hat{\mathbf{x}}_t\}_{t \in [0,T]}$ (the estimated trajectory evolving from the same $\mathbf{x}_0$ under the learned $\hat{\bm{f}}$, driven by the same realization of the noise as the original dynamics). The difference between the two trajectories is measured as

\|\mathbf{X}-\hat{\mathbf{X}}\|=\mathbb{E}_{\mathbf{x}_0\sim\mu_0}\Big[\frac{1}{T}\int_{0}^{T}\|\mathbf{x}_t-\hat{\mathbf{x}}_t\|_{\ell^2(\mathbb{R}^d)}^2\,dt\Big].   (6)

However, comparing two sets of trajectories (even with the same initial condition) under the same realization of the noise is not always realistic. We therefore also compare the distributions of the trajectories, over different initial conditions and all possible noise realizations, at a few chosen time snapshots using the Wasserstein distance.
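
A minimal sketch of these last two measures, assuming one-dimensional states, paired trajectories on a uniform grid, and scipy's univariate Wasserstein distance for the snapshot comparison (the exact normalization of the relative trajectory error reported in the tables is our assumption; for $d > 1$ a multivariate optimal-transport solver would be needed instead):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def relative_trajectory_error(X, X_hat):
    """Relative, per-trajectory version of equation 6 on a uniform time grid.

    X, X_hat : arrays of shape (M, L, d), trajectories driven by the true
               and the estimated drift under the same noise realization.
    """
    err = np.mean(np.sum((X - X_hat) ** 2, axis=2), axis=1)   # time-averaged squared error
    ref = np.mean(np.sum(X ** 2, axis=2), axis=1)             # per-trajectory normalization
    rel = np.sqrt(err / ref)
    return rel.mean(), rel.std()                              # reported as mean +/- std

def snapshot_wasserstein(X, X_hat, dt, times=(0.25, 0.50, 1.00)):
    """1D Wasserstein distance between observed and estimated states
    at selected time snapshots (assumes d = 1)."""
    return {t: wasserstein_distance(X[:, int(round(t / dt)), 0],
                                    X_hat[:, int(round(t / dt)), 0])
            for t in times}
```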

3 Comparison to Others

When the noise level is a constant, i.e. $\sigma(\mathbf{x}) = \sigma > 0$, we end up with a much simpler loss

\mathcal{E}_{\mathcal{H}}^{\text{Simpler}}(\tilde{\bm{f}})=\mathbb{E}_{\mathbf{x}_0\sim\mu_0}\Big[\frac{1}{2T\sigma^2}\Big(\int_{0}^{T}\|\tilde{\bm{f}}(\mathbf{x}_t)\|^2\,dt-2\int_{0}^{T}\langle\tilde{\bm{f}}(\mathbf{x}_t),\,d\mathbf{x}_t\rangle\Big)\Big],

which has been investigated in Lu et al. (2022). System identification of the drift term has been studied in many different scenarios, e.g. identification by enforcing sparsity such as SINDy Brunton et al. (2016), neural-network-based methods such as NeuralODE Chen et al. (2018) and PINN Raissi et al. (2019), regression-based approaches Cucker & Smale (2002), and high-dimensional reduction frameworks Lu et al. (2019). The uniqueness of our method is that we incorporate the covariance matrix into the learning, which improves the estimation especially when the noise is correlated.

4 Examples

In this section, we demonstrate the application of our trajectory-based method for estimating drift functions on a variety of examples. We explore drift functions ranging from polynomials in one and two dimensions to combinations of polynomials and trigonometric functions, as well as a deep learning approach in one dimension. The observations, serving as the input dataset for testing our method, are generated by the Euler-Maruyama scheme using the aforementioned drift functions. The basis space $\mathcal{H}$ is constructed from either B-splines or piecewise polynomials with maximum degree p-max equal to 2. For higher dimensions, $d \geq 2$, each basis function is obtained as a tensor-grid product of one-dimensional bases defined by knots that partition the domain in each dimension. The parameters for the following examples are listed in Table 1. Due to space constraints, additional examples are provided in Appendix B.
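
As an illustration of the basis construction, the sketch below (our own hedged example; the paper does not specify the exact knot placement) builds a clamped quadratic B-spline basis on the observed range of the data using scipy; for $d \geq 2$, tensor products of such one-dimensional functions would give the multivariate basis.

```python
import numpy as np
from scipy.interpolate import BSpline

def clamped_bspline_basis(x_min, x_max, n_interior, degree=2):
    """Return a list of clamped B-spline basis functions of the given degree
    on [x_min, x_max], with n_interior equally spaced interior knots."""
    interior = np.linspace(x_min, x_max, n_interior + 2)[1:-1]
    # clamped knot vector: repeat each endpoint (degree + 1) times
    knots = np.concatenate([[x_min] * (degree + 1), interior, [x_max] * (degree + 1)])
    n_basis = len(knots) - degree - 1
    basis = []
    for i in range(n_basis):
        coeffs = np.zeros(n_basis)
        coeffs[i] = 1.0
        basis.append(BSpline(knots, coeffs, degree, extrapolate=False))
    return basis

# e.g. 8 basis functions of degree 2, as reported in Table 2, would correspond
# to n_interior = 5 interior knots: 5 + 2*(2+1) - 2 - 1 = 8
```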

The estimation results are evaluated using several different metrics. We record the noise increments $d\mathbf{w}_t$ from the trajectory generation process and compare the trajectories produced by the estimated drift function $\hat{\bm{f}}$ under identical noise. We examine trajectory-wise errors using equation 6, reported as a relative trajectory error, and we plot both $\bm{f}$ and $\hat{\bm{f}}$ and compute the relative $L^2$ error using equation 4, where $\rho$ is given by equation 5. Additionally, we assess the distribution-wise discrepancy between observed and estimated results by computing the Wasserstein distance at various time steps.

Table 1: Parameter Setup for the Examples
Simulation scheme: Euler-Maruyama
$T$: 1
$dt$: 0.001
$\mathbf{x}_0$: Uniform(0, 10)
$M$ (number of trajectories): 10000
$D$ ($d = 1$): 0.6
$D$ ($d = 2$): $\mathrm{diag}(0.6,\, 0.8)$
p-max (maximum degree): 2
Basis type: B-spline / piecewise polynomial

4.1 Example: sine/cosine drift

We initiate our numerical study with a one-dimensional ($d = 1$) drift function that incorporates both polynomial and trigonometric components, given by $\bm{f}(\mathbf{x}) = 2 + 0.08\mathbf{x} - 0.05\sin(\mathbf{x}) + 0.02\cos^2(\mathbf{x})$.
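
A minimal sketch of how such observations could be generated with the Euler-Maruyama scheme and the parameters of Table 1 ($T = 1$, $dt = 0.001$, $D = 0.6$, $\mathbf{x}_0 \sim \text{Uniform}(0, 10)$, $M = 10000$); the actual data-generation code used for the paper may differ in its details.

```python
import numpy as np

def drift(x):
    # drift of Section 4.1: f(x) = 2 + 0.08 x - 0.05 sin(x) + 0.02 cos^2(x)
    return 2.0 + 0.08 * x - 0.05 * np.sin(x) + 0.02 * np.cos(x) ** 2

def euler_maruyama(f, D=0.6, T=1.0, dt=0.001, M=10000, rng=None):
    """Simulate M one-dimensional trajectories of dx = f(x) dt + dw,
    where w is Brownian motion with variance parameter D."""
    rng = np.random.default_rng() if rng is None else rng
    L = int(round(T / dt)) + 1
    X = np.empty((M, L))
    X[:, 0] = rng.uniform(0.0, 10.0, size=M)                 # x_0 ~ Uniform(0, 10)
    dW = np.sqrt(D * dt) * rng.standard_normal((M, L - 1))   # Brownian increments
    for l in range(L - 1):
        X[:, l + 1] = X[:, l] + f(X[:, l]) * dt + dW[:, l]
    return X, dW   # the increments dW are kept for trajectory-wise comparison

X, dW = euler_maruyama(drift)
```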

Figure 1 illustrates the comparison between the true drift function $\bm{f}$ and the estimated drift function $\hat{\bm{f}}$, alongside a comparison of trajectories. Notably, Figure 1(a) on the left includes a background region depicting the histogram of $\mathbf{x}_t$, which represents the distribution of observations over the domain of $\mathbf{x}$. This visualization reveals that in regions where the observations are denser (higher histogram values), the estimate $\hat{\bm{f}}$ tends to be more accurate. Conversely, in the less dense regions of the dataset (the two ends of the domain), the estimation accuracy of $\hat{\bm{f}}$ diminishes.

Table 2 presents a detailed quantitative analysis of the estimation results, including the relative $L^2(\rho)$ difference between $\bm{f}$ and $\hat{\bm{f}}$ as well as the trajectory error. The table also reports the Wasserstein distance between the distributions of $\mathbf{x}_t$ and $\hat{\mathbf{x}}_t$ at selected time steps.

Figure 1: Left: Comparison of $\bm{f}$ and $\hat{\bm{f}}$. Middle: 5 trajectories generated by $\bm{f}$. Right: 5 trajectories generated by $\hat{\bm{f}}$ with the same noise.
Table 2: One-dimensional Drift Function Estimation Summary
Number of basis functions: 8
Maximum degree: 2
Relative $L^2(\rho)$ error: 0.007935
Relative trajectory error: 0.0020239 ± 0.002046
Wasserstein distance: 0.0291 ($t = 0.25$), 0.0319 ($t = 0.50$), 0.0403 ($t = 1.00$)

4.2 Example: Deep Neural Networks

We continue our numerical investigation with a one-dimensional ($d = 1$) drift function given by $\bm{f}(\mathbf{x}) = 0.08\mathbf{x}$, estimated here with a data-driven neural-network hypothesis space (see Appendix A). Figure 2 illustrates the comparison between the true drift function $\bm{f}$ and the estimated drift function $\hat{\bm{f}}$, alongside a comparison of trajectories. The setup of the figures is similar to that of the previous section.

Figure 2: Left: Comparison of $\bm{f}$ and $\hat{\bm{f}}$. Middle: 5 trajectories generated by $\bm{f}$. Right: 5 trajectories generated by $\hat{\bm{f}}$ with the same noise.

The error in learning $\bm{f}$ turns out to be larger in this case, especially towards the two end points of the data interval, where the data density is low, i.e. few observations are available for learning. Nevertheless, we are able to recover most of the trajectory.
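
A hedged sketch of how such a neural-network drift could be trained by minimizing the discretized loss of equation 7 with Adam, for one-dimensional data with the constant noise variance $D = 0.6$ of Table 1; the network width, learning rate, and number of epochs below are illustrative assumptions, not the settings used in the paper.

```python
import torch

def train_nn_drift(X, dt, sigma2=0.6, epochs=500, lr=1e-3):
    """Fit a small MLP drift f_hat by minimizing the discretized loss (7)
    for one-dimensional data with constant noise variance sigma2.

    X : tensor of shape (M, L), observed trajectories on a uniform grid.
    """
    f_hat = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(f_hat.parameters(), lr=lr)
    M, L = X.shape
    T = (L - 1) * dt
    x = X[:, :-1].reshape(-1, 1)                 # states x_l^m
    dx = (X[:, 1:] - X[:, :-1]).reshape(-1, 1)   # increments x_{l+1}^m - x_l^m
    for _ in range(epochs):
        opt.zero_grad()
        f = f_hat(x)
        # (1 / (2 T M sigma^2)) * sum( f^2 dt - 2 f dx )
        loss = (f**2 * dt - 2.0 * f * dx).sum() / (2.0 * T * M * sigma2)
        loss.backward()
        opt.step()
    return f_hat
```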

5 Conclusion

We have presented a novel approach for learning the drift and diffusion from a noise-guided likelihood function. This approach can handle general noise structures as long as the covariance information of the noise is known. An unknown noise structure whose covariance is a simple scalar can still be learned through the quadratic variation; however, learning state-dependent covariance noise is ongoing work, as is testing on high-dimensional drift terms. Our loss only incorporates the likelihood of the observation data; it is possible to add various penalty terms to enhance model performance, such as the sparsity assumption enforced in SINDy or the regularizations used in related deep learning methods such as NeuralODE and PINN.

Acknowledgments

ZG developed the algorithm, analyzed the data, and implemented the software package. ZG developed the theory with IC and MZ. MZ designed the research. All authors wrote the manuscript. MZ is partially supported by NSF-2225507 and the startup fund provided by Illinois Tech.

References

  • Abu-Mostafa (2001) Y.S. Abu-Mostafa. Financial model calibration using consistency hints. IEEE Transactions on Neural Networks, 12(4):791–808, 2001. doi: 10.1109/72.935092.
  • Black & Scholes (1973) Fischer Black and Myron Scholes. The pricing of options and corporate liabilities. Journal of Political Economy, 81(3):637–654, 1973. ISSN 00223808, 1537534X. URL http://www.jstor.org/stable/1831029.
  • Brunton et al. (2016) S. L. Brunton, J. L. Proctor, and J. N. Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. PNAS, 113(15):3932–3937, 2016.
  • Chen et al. (2018) Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/file/69386f6bb1dfed68692a24c8686939b9-Paper.pdf.
  • Coffey et al. (2004) William Coffey, Yuri Kalmykov, and John Waldron. The Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry and Electrical Engineering. 09 2004. ISBN 978-981-238-462-1. doi: 10.1142/9789812795090.
  • Cucker & Smale (2002) Felipe Cucker and Steve Smale. On the mathematical foundations of learning. Bulletin of the American mathematical society, 39(1):1–49, 2002.
  • Dingli & Pacheco (2011) D. Dingli and J.M. Pacheco. Stochastic dynamics and the evolution of mutations in stem cells. BMC Biol, 9:41, 2011.
  • Ebeling et al. (2008) Werner Ebeling, Ewa Gudowska-Nowak, and Igor M. Sokolov. On Stochastic Dynamics in Physics - Remarks on History and Terminology. Acta Physica Polonica B, 39(5):1003–1018, 2008.
  • Evans (2013) Lawrence C. Evans. An Introduction to Stochastic Differential Equations. American Mathematical Society, 2013.
  • Heston (1993) Steven L. Heston. A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options. The Review of Financial Studies, 6(2):327–343, 1993. ISSN 0893-9454. doi: 10.1093/rfs/6.2.327. URL https://doi.org/10.1093/rfs/6.2.327.
  • Lu et al. (2019) Fei Lu, Ming Zhong, Sui Tang, and Mauro Maggioni. Nonparametric inference of interaction laws in systems of agents from trajectory data. Proceedings of the National Academy of Sciences, 116(29):14424–14433, 2019.
  • Lu et al. (2022) Fei Lu, Mauro Maggioni, and Sui Tang. Learning interaction kernels in stochastic systems of interacting particles from multiple trajectories. Foundations of Computational Mathematics, 22:1013–1067, 2022.
  • Mrázek & Pospíšil (2017) Milan Mrázek and Jan Pospíšil. Calibration and simulation of Heston model. Open Mathematics, 15(1):679–704, 2017. doi: 10.1515/math-2017-0058. URL https://doi.org/10.1515/math-2017-0058.
  • Phillips (1972) P. C. B. Phillips. The structural estimation of a stochastic differential equation system. Econometrica, 40(6):1021–1041, 1972. ISSN 00129682, 14680262. URL http://www.jstor.org/stable/1913853.
  • Raissi et al. (2019) M. Raissi, P. Perdikaris, and G.E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019. ISSN 0021-9991. doi: https://doi.org/10.1016/j.jcp.2018.10.045. URL https://www.sciencedirect.com/science/article/pii/S0021999118307125.
  • Särkkä & Solin (2019) Simo Särkkä and Arno Solin. Applied Stochastic Differential Equations. Institute of Mathematical Statistics Textbooks. Cambridge University Press, 2019.
  • Székely & Burrage (2014) Tamás Székely and Kevin Burrage. Stochastic simulation in systems biology. Computational and Structural Biotechnology Journal, 12(20):14–25, 2014. ISSN 2001-0370. doi: https://doi.org/10.1016/j.csbj.2014.10.003. URL https://www.sciencedirect.com/science/article/pii/S2001037014000403.
  • Vasicek (1977) Oldrich Vasicek. An equilibrium characterization of the term structure. Journal of Financial Economics, 5(2):177–188, 1977. ISSN 0304-405X. doi: https://doi.org/10.1016/0304-405X(77)90016-2. URL https://www.sciencedirect.com/science/article/pii/0304405X77900162.

Appendix A Implementation

In this section, we discuss in detail how the algorithm implementing the learning framework of section 2 is carried out. In real applications, continuous data is rarely obtainable, so we have to deal with discrete observation data, i.e. $\{\mathbf{x}_l^m\}_{l,m=1}^{L,M}$ with $\mathbf{x}_0^m$ being i.i.d. samples from $\mu_0$, where $\mathbf{x}_l^m = \mathbf{x}^{(m)}(t_l)$ and $0 = t_1 < \cdots < t_L = T$. We use a discretized version of equation 2,

\mathcal{E}_{L,M,\mathcal{H}}(\tilde{\bm{f}})=\frac{1}{2TM}\sum_{l,m=1}^{L-1,M}\Big(\langle\tilde{\bm{f}}(\mathbf{x}_l^m),\,\bm{D}^{-1}(\mathbf{x}_l^m)\tilde{\bm{f}}(\mathbf{x}_l^m)\rangle(t_{l+1}-t_l)-2\langle\tilde{\bm{f}}(\mathbf{x}_l^m),\,\bm{D}^{-1}(\mathbf{x}_l^m)(\mathbf{x}_{l+1}^m-\mathbf{x}_l^m)\rangle\Big),   (7)

for $\tilde{\bm{f}} \in \mathcal{H}$. Moreover, we assume that $\mathcal{H}$ is a finite-dimensional function space, i.e. $\operatorname{dim}(\mathcal{H}) = n < \infty$. Then any $\tilde{\bm{f}} \in \mathcal{H}$ can be written as $\tilde{\bm{f}}(\bm{x}) = \sum_{i=1}^n \bm{a}_i\psi_i(\bm{x})$, where each $\bm{a}_i \in \mathbb{R}^d$ is a constant coefficient vector, the functions $\psi_i: \Omega \rightarrow \mathbb{R}$ form a basis of $\mathcal{H}$, and the domain $\Omega$ is constructed from the componentwise min/max of $\mathbf{x}_t \in \mathbb{R}^d$ over $t \in [0, T]$. We consider two scenarios for constructing the $\psi_i$: one is to use a pre-determined basis such as piecewise polynomials, clamped B-splines, a Fourier basis, or a mixture of the aforementioned; the other is to use neural networks, where the basis functions are also trained from data. Substituting the basis representation of $\tilde{\bm{f}}$ into equation 7, we obtain the following loss in terms of the coefficients

\mathcal{E}_{L,M,\mathcal{H}}(\{\bm{a}_i\}_{i=1}^{n})=\frac{1}{2TM}\sum_{l,m=1}^{L-1,M}\Big(\sum_{i=1}^{n}\sum_{j=1}^{n}\langle\bm{a}_i\psi_i(\mathbf{x}_l^m),\,\bm{D}^{-1}(\mathbf{x}_l^m)\bm{a}_j\psi_j(\mathbf{x}_l^m)\rangle(t_{l+1}-t_l)-2\sum_{i=1}^{n}\langle\bm{a}_i\psi_i(\mathbf{x}_l^m),\,\bm{D}^{-1}(\mathbf{x}_l^m)(\mathbf{x}_{l+1}^m-\mathbf{x}_l^m)\rangle\Big).   (8)

In the case where the covariance matrix $\bm{D}$ is diagonal, i.e.

\bm{D}(\bm{x})=\begin{bmatrix}\sigma_1^2(\bm{x})&0&\cdots&0\\ 0&\sigma_2^2(\bm{x})&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\sigma_d^2(\bm{x})\end{bmatrix}\in\mathbb{R}^{d\times d},\quad\sigma_i>0,\quad i=1,\cdots,d.

Then equation 8 can be re-written as

\mathcal{E}_{L,M,\mathcal{H}}(\{\bm{a}_i\}_{i=1}^{n})=\frac{1}{2TM}\sum_{l,m=1}^{L-1,M}\Big(\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{d}\frac{(\bm{a}_i)_k(\bm{a}_j)_k}{\sigma_k^2(\mathbf{x}_l^m)}\psi_i(\mathbf{x}_l^m)\psi_j(\mathbf{x}_l^m)(t_{l+1}-t_l)-2\sum_{i=1}^{n}\sum_{k=1}^{d}\frac{(\bm{a}_i)_k(\mathbf{x}_{l+1}^m-\mathbf{x}_l^m)_k}{\sigma_k^2(\mathbf{x}_l^m)}\psi_i(\mathbf{x}_l^m)\Big).

Here $(\bm{x})_k$ denotes the $k$-th component of a vector $\bm{x} \in \mathbb{R}^d$. We define $\bm{\alpha}_k = \begin{bmatrix}(\bm{a}_1)_k & \cdots & (\bm{a}_n)_k\end{bmatrix}^\top \in \mathbb{R}^n$, the matrix $A_k \in \mathbb{R}^{n \times n}$ as

A_k(i,j)=\frac{1}{2TM}\sum_{l,m=1}^{L-1,M}\frac{\psi_i(\mathbf{x}_l^m)\psi_j(\mathbf{x}_l^m)}{\sigma_k^2(\mathbf{x}_l^m)}(t_{l+1}-t_l),

and the vector $\bm{b}_k \in \mathbb{R}^n$ as

\bm{b}_k(i)=\frac{1}{2TM}\sum_{l,m=1}^{L-1,M}\frac{(\mathbf{x}_{l+1}^m-\mathbf{x}_l^m)_k}{\sigma_k^2(\mathbf{x}_l^m)}\psi_i(\mathbf{x}_l^m).

Then equation 8 can be re-written as

\mathcal{E}_{L,M,\mathcal{H}}(\{\bm{a}_i\}_{i=1}^{n})=\sum_{k=1}^{d}\big(\bm{\alpha}_k^\top A_k\bm{\alpha}_k-2\bm{\alpha}_k^\top\bm{b}_k\big).

Since each term $\bm{\alpha}_k^\top A_k\bm{\alpha}_k - 2\bm{\alpha}_k^\top\bm{b}_k$ is decoupled from the others, we just need to solve the $d$ independent linear systems

A_k\hat{\bm{\alpha}}_k-\bm{b}_k=0,\quad k=1,\cdots,d.

Then we obtain $\hat{\bm{f}}(\bm{x}) = \sum_{i=1}^n \hat{\bm{a}}_i\psi_i(\bm{x})$. However, when $\bm{D}$ does not have a diagonal structure, we have to resort to gradient-descent methods to minimize equation 8 in order to find the coefficients $\{\bm{a}_i\}_{i=1}^n$, for a total of $nd$ parameters. If a data-driven basis is desired, we instead take $\mathcal{H}$ to be a space of neural networks with the same depth, the same number of neurons, and the same activation functions on the hidden layers, and we find $\hat{\bm{f}}$ by minimizing equation 7 using any deep learning optimizer, such as stochastic gradient descent or Adam, from well-known deep learning packages.
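
For the diagonal-covariance case, the sketch below (our own illustrative Python, assuming one-dimensional data, a constant noise variance sigma2, and a user-supplied list of basis functions such as the clamped B-splines of section 4) assembles $A$ and $\bm{b}$ as above and solves the linear system for the coefficients of $\hat{\bm{f}}$.

```python
import numpy as np

def fit_drift_coefficients(X, dt, basis, sigma2=0.6):
    """Assemble A and b as in Appendix A (d = 1, constant noise variance
    sigma2) and solve A alpha = b for the basis coefficients of f_hat.

    X     : array of shape (M, L), observed trajectories on a uniform grid
    basis : list of n callables psi_i mapping arrays to arrays
    """
    M, L = X.shape
    T = (L - 1) * dt
    x = X[:, :-1].ravel()                       # states x_l^m
    dx = (X[:, 1:] - X[:, :-1]).ravel()         # increments x_{l+1}^m - x_l^m
    # evaluate all basis functions on all states; nan_to_num guards against
    # B-splines that return NaN outside their base interval
    Psi = np.stack([np.nan_to_num(psi(x)) for psi in basis], axis=1)
    c = 1.0 / (2.0 * T * M * sigma2)
    A = c * dt * (Psi.T @ Psi)                  # A(i, j) from the quadratic term
    b = c * (Psi.T @ dx)                        # b(i) from the linear term
    alpha = np.linalg.solve(A, b)               # coefficients of f_hat
    f_hat = lambda s: np.stack([np.nan_to_num(psi(s)) for psi in basis], axis=1) @ alpha
    return alpha, f_hat
```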

Appendix B Additional Examples

For $d = 1$, we also consider the polynomial drift function $\bm{f}(\mathbf{x}) = 2 + 0.08\mathbf{x} - 0.01\mathbf{x}^2$. The estimation results are depicted in Figure 3 and detailed in Table 3.

Figure 3: One-dimensional Polynomial Drift Comparison Summary
Table 3: One-dimensional Polynomial Drift Function Estimation Summary
Number of basis functions: 10
Maximum degree: 2
Relative $L^2(\rho)$ error: 0.0087649
Relative trajectory error: 0.00199719 ± 0.00682781
Wasserstein distance: 0.0153 ($t = 0.25$), 0.0154 ($t = 0.50$), 0.0278 ($t = 1.00$)

For $d = 2$, we examine two types of drift function $\bm{f}$: polynomial and trigonometric. Denote

\bm{f}=\begin{pmatrix}\bm{f}_1\\ \bm{f}_2\end{pmatrix}\quad\text{and}\quad\mathbf{x}=\begin{pmatrix}\mathbf{x}_1\\ \mathbf{x}_2\end{pmatrix}

where $\bm{f}_i: \mathbb{R}^2 \rightarrow \mathbb{R}$ and $\mathbf{x}_i \in \mathbb{R}$ for $i \in \{1, 2\}$.

For the polynomial drift function $\bm{f}$, we set

\bm{f}_1=0.4\mathbf{x}_1-0.1\mathbf{x}_1\mathbf{x}_2,\qquad\bm{f}_2=-0.8\mathbf{x}_2+0.2\mathbf{x}_1^2.
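
A short sketch (ours, hedged) of simulating this two-dimensional system with the Euler-Maruyama scheme and the diagonal covariance $D = \mathrm{diag}(0.6, 0.8)$ of Table 1; initializing each component from Uniform(0, 10) is our assumption.

```python
import numpy as np

def drift_2d(x):
    # x has shape (M, 2); returns the polynomial drift of Appendix B
    f1 = 0.4 * x[:, 0] - 0.1 * x[:, 0] * x[:, 1]
    f2 = -0.8 * x[:, 1] + 0.2 * x[:, 0] ** 2
    return np.stack([f1, f2], axis=1)

def euler_maruyama_2d(f, T=1.0, dt=0.001, M=10000, D_diag=(0.6, 0.8), rng=None):
    """Simulate M trajectories of dx = f(x) dt + dw, where the components
    of w are independent with variance parameters D_diag."""
    rng = np.random.default_rng() if rng is None else rng
    L = int(round(T / dt)) + 1
    X = np.empty((M, L, 2))
    X[:, 0, :] = rng.uniform(0.0, 10.0, size=(M, 2))   # assumed componentwise Uniform(0, 10)
    scale = np.sqrt(np.asarray(D_diag) * dt)
    for l in range(L - 1):
        dW = scale * rng.standard_normal((M, 2))
        X[:, l + 1, :] = X[:, l, :] + f(X[:, l, :]) * dt + dW
    return X
```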

Figure 4, Figures 5(a) and 5(b), and Table 4 show the evaluation of the polynomial drift function estimation results.

For the trigonometric drift function $\bm{f}$, we set

\bm{f}_1=2\sin(0.2\mathbf{x}_1)+1.5\cos(0.1\mathbf{x}_2),\qquad\bm{f}_2=3\sin(0.3\mathbf{x}_1)\cos(0.1\mathbf{x}_2).

Figure 6, Figures 7(a) and 7(b), and Table 5 show the evaluation of the trigonometric drift function estimation results.

Figure 4: Two-dimensional Polynomial Trajectory Comparison
Figure 5: Two-dimensional Polynomial Comparison of $\bm{f}$ and $\hat{\bm{f}}$. (a) Surface of Dimension 1. (b) Surface of Dimension 2.
Table 4: Two-dimensional Polynomial Drift Function Estimation Summary
Number of basis functions: 36
Maximum degree: 2
Relative $L^2(\rho)$ error: 0.02118531
Relative trajectory error: 0.00306613 ± 0.00375144
Wasserstein distance: 0.0891 ($t = 0.25$), 0.0872 ($t = 0.50$), 0.0853 ($t = 1.00$)
Figure 6: Two-dimensional Trigonometric Trajectory Comparison
Figure 7: Two-dimensional Trigonometric Comparison of $\bm{f}$ and $\hat{\bm{f}}$. (a) Surface of Dimension 1. (b) Surface of Dimension 2.
Table 5: Two-dimensional Trigonometric Drift Function Estimation Summary
Number of basis functions: 36
Maximum degree: 2
Relative $L^2(\rho)$ error: 0.02734505
Relative trajectory error: 0.0041613 ± 0.0079917
Wasserstein distance: 0.1011 ($t = 0.25$), 0.1119 ($t = 0.50$), 0.1293 ($t = 1.00$)