Learning governing physics from output-only measurements
Abstract
Extracting governing physics from data is a key challenge in many areas of science and technology. Existing techniques for equation discovery depend on both input and state measurements; in practice, however, we often have access to the output measurements only. We here propose a novel framework for learning the governing physics of a dynamical system from output-only measurements; this essentially transfers the physics discovery problem from the deterministic to the stochastic domain. The proposed approach models the input as a stochastic process and blends concepts of stochastic calculus, sparse learning algorithms, and Bayesian statistics. In particular, we combine the sparsity-promoting spike and slab prior, Bayes' law, and the Euler–Maruyama scheme to identify the governing physics from data. The resulting model is highly efficient and works with sparse, noisy, and incomplete output measurements. The efficacy and robustness of the proposed approach are illustrated on several numerical examples involving both complete and partial state measurements. The results obtained indicate the potential of the proposed approach in identifying governing physics from output-only measurements.
Keywords: Sparse Bayesian learning · explainable artificial intelligence · stochastic differential equation · equation discovery · probabilistic machine learning
1 Introduction
Physical systems are governed by physical laws, often represented either in the form of ordinary differential equations (ODEs) or partial differential equations (PDEs). The literature on techniques for the solution of ODEs/PDEs is quite mature, and given sufficient computational resources, it is possible to solve the governing ODE/PDE with sufficient accuracy. Modern techniques such as physics-informed deep learning [1, 2, 3, 4] and neural network based operator learning [5, 6] can also be used. However, due to various assumptions and approximations, the governing equations often fail to represent the exact physics of the system and result in modelling and prediction errors. With advancements in sensor technology and the internet of things (IoT), we have access to data in abundance, whereas the underlying governing physics often remains elusive, specifically in domains such as climate science [7], biology [8], physics [9], chemistry [10] and finance [11], to name a few. One obvious alternative is to train a conventional data-driven machine learning model to obtain an input-output mapping; however, such models often do not generalize to unseen environments and out-of-distribution inputs. To address this issue, methods for data-driven equation discovery have recently been proposed. A seminal work in this area was carried out by [12], wherein symbolic regression was utilized for discovering the underlying structure of the data. Another important work in this area is that of Brunton et al. [13]. However, most of the work in this area is limited by the fact that information on both the input and the state vector is needed. In practice, the input is often not measurable and only a few states are accurately observable; this greatly limits the applicability of the available physics discovery techniques.
To address this issue, we propose a novel approach that leverages sparse learning, Bayesian statistics, and stochastic calculus and enables discovery of governing physics from output only measurements.
Equation discovery techniques have progressed significantly, from the early 1970s, when equation discovery was mostly dependent on expertise in the area of interest, to modern machine learning techniques that can extract the physical law of the underlying process with only little human supervision. In the early 1970s, models were selected by maintaining a balance between model complexity and fitness. The fitness of the models was measured using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) [14, 15]. Later advances in machine learning tools, in combination with new sophisticated data measurement instruments, gave rise to data-driven techniques for the discovery of governing physics. Towards this, the first attempt was made by combining symbolic regression with genetic programming to select the best combination of functions from candidate functional forms that accurately represent the data [12, 16]. There are also equation-free methods that bypass the need to formulate constitutive equations to track the time evolution of dynamical systems in closed form. These methods provide a practical recipe for multiscale simulations through local microscopic simulations in time and space [17].
Following the work on symbolic regression [12, 16], Sparse Identification of Nonlinear Dynamics (SINDy) was proposed for discovery of the governing physics of non-linear dynamical systems [13]. In terms of accuracy and interpretability of the discovered physics, the SINDy framework overcame the shortcomings of previous discovery techniques. The idea behind SINDy is to use sparse linear regression, in the purview of least-squares fitting, to select the most dominant candidate functions that best represent the data. Due to its sparsity-promoting approach, SINDy proved to be computationally efficient and scalable with increasing input dimension. In later years, SINDy has found numerous applications and area-specific developments of sparse linear regression in various fields. A few examples are sparse identification of biological networks in biology [18], sparse identification of chemical reactions in chemistry [19, 20], sparse modelling for state estimation in fluid mechanics [21, 22], system identification of structures with hysteresis and sparse learning of the aerodynamics of bridges in structural systems [23, 24], sparse model selection using an integral formulation [25], sparse model selection of dynamical systems using information criteria [26], sparse identification for predictive control [27], identification of structured dynamical systems with limited data [28], model identification through recovery of differential equations from short impulse response time-series data [29], identification of stochastic dynamic equations [30], and discovery of partial differential equations [31, 32], among others.
Apart from SINDy, identification of non-linear dynamical systems using deep neural networks has also been proposed [33, 2, 34]. Deep learning approaches for non-linear dynamical systems act as black boxes that take data as input and provide predictions as output. Later, a different approach within the Bayesian framework was proposed for the discovery of governing physics from noisy and/or limited data [35, 36, 37]. In this approach, the least-squares based sparse linear regression is replaced by the more accurate sparse Bayesian linear regression. Due to the ability of Bayesian inference to perform simultaneous model selection and parameter estimation, this approach eliminates the possibility of overfitting the governing model. Sparsity in the solution is promoted by assigning appropriate sparsity-promoting priors over the weight vector of the sparse regression problem. This provides a natural elimination of the basis functions that do not represent the data accurately. As output, this framework provides the posterior distributions of the weight vector; this means one can accurately identify the uncertainty in the parameters and construct confidence intervals for future predictions.
Despite the recent progress in equation discovery from data, almost all the approaches require measurements of both state and input variables; the only exception is perhaps stochastic SINDy [30]. Unfortunately, we often have access only to the state variables; this significantly limits the applicability of the available approaches. We hereby propose an equation discovery framework, rooted in stochastic calculus, sparse learning, and Bayesian statistics, that requires only the state estimates. The basic premise is to treat the unknown input as a stochastic process, model it as Gaussian white noise, and identify the stochastic differential equation (SDE); this essentially leads to identifying the drift and diffusion components of a stochastic differential equation. We utilize stochastic calculus to decouple the drift and diffusion parts; this allows parallel identification of the two components. Sparse learning and Bayesian statistics, on the other hand, ensure that the identified governing equation is sparse and interpretable; this is achieved by using the spike and slab prior [38, 39]. The proposed approach has several key features that can be encapsulated in the following points:
• First and foremost, unlike other approaches, the proposed approach requires only the state measurements; no input measurements are needed.
• The proposed approach, being Bayesian in nature, estimates the posterior distribution. This allows computation of the epistemic uncertainty (aka predictive uncertainty) due to limited data. This feature is particularly important in design and decision making.
• Unlike non-Bayesian approaches, the proposed approach is autonomous in the sense that it doesn't require any human calibration or cross-validation.
• The proposed framework falls under the broad umbrella of interpretable machine learning and explainable AI. In other words, the identified model generalizes to unseen environments and out-of-distribution data.
The remainder of the paper is structured as follows: in Section 2, the proposed data-driven framework for learning governing physics without measurement of the input is briefly presented. In Section 3, numerical experiments using fully and partially observed output measurements are undertaken to demonstrate the robustness of the proposed framework. In Section 4, the key contributions of the proposed framework are summarised and the paper is concluded.

2 Discovery of governing physics without input measurement
The overarching framework of the proposed Bayesian physics discovery approach is presented in Fig. 1. The proposed framework treats the state measurements from the sensors as a stochastic process and applies the Kramers–Moyal formula to obtain the target vectors for sparse linear regression. In the sparse regression, the library is obtained by taking the ensemble mean over all the libraries constructed from the collected data set. The proposed framework then constructs two sparse regression problems from these libraries and target vectors for identification of the governing physics; the two problems are independent and can be solved simultaneously. The discovery of governing physics here involves identification of the system dynamics and of the volatility arising due to environmental uncertainty. The system dynamics and the volatility are represented in this framework as the drift and diffusion components of an SDE.
Before formulating the mathematical backbone of the proposed framework, let us formalize the problem through the following terminology. Let $\mathbf{L} \in \mathbb{R}^{N \times K}$ be the library of candidate functions, where $K$ is the total number of independent basis functions in the library and $N$ is the sample length. The key idea of sparse Bayesian linear regression is to select $k$ basis functions from the $K$ candidate library functions such that $k \ll K$. If $\mathbf{Y}$ is the target vector, then a linear combination of the identified basis functions should represent the target vector in the best possible way; the coefficients of this linear regression are denoted by a weight vector $\boldsymbol{\theta} \in \mathbb{R}^{K}$. For Bayesian model discovery, the condition $k \ll K$ is maintained by assigning certain types of prior distributions to the weight vector $\boldsymbol{\theta}$, such that they favour sparsity in the solution. In this context, the spike and slab (SS) prior has shown a strong shrinkage property due to its sharp spike at zero and a diffused density spanning a large range of possible parameter values. The Dirac-delta spike concentrates most of the probability mass at zero, thus allowing most of the samples to take the value zero. On the other hand, the diffused tail distributes a small amount of probability mass over a large range of possible values, allowing only very few samples with very high probability to escape the shrinkage. The alternative flavours of the SS-prior, constructed from various combinations of candidate spike and slab functions, are well documented in the literature [38, 39, 40]. In this work, the authors have considered the discontinuous spike and slab prior (DSS-prior), where the spike at zero is modelled as a Dirac-delta function and the tail distribution as an independent Student's-t distribution. Next, a sparse regression framework is formulated for SDEs.
2.1 Sparse learning of stochastic differential equations
Dynamical processes are generally expressed in terms of higher-order differential equations. However, in most data-driven machine learning techniques the actual system is not observed through its original space but identified in a projected space. The projected space contains all the states of a system that are directly observable. For instance, second-order dynamical systems are often expressed in terms of their displacement and velocity components. Let the projection be realized by a map between the original and the observable space. If such a mapping exists, then any higher-order system can be reduced to a set of SDEs of the form:
$\dot{\mathbf{X}}(t) = \boldsymbol{f}(\mathbf{X}_t, t) + \boldsymbol{g}(\mathbf{X}_t, t)\,\mathbf{W}(t)$ (1)
where $\boldsymbol{f}(\mathbf{X}_t, t)$ is the deterministic dynamics of the underlying process, $\boldsymbol{g}(\mathbf{X}_t, t)$ is the volatility associated with the dynamics arising due to the stochastic external input, and $\mathbf{W}(t)$ is the white noise. A more appropriate representation of the above equation can be derived through first-order Itô SDEs, which arise naturally in non-linear dynamical systems subjected to stochastic excitations such as earthquake, wind, and wave forces [41, 42]. In non-linear stochastic dynamical systems, SDEs provide an elegant approach for expressing the behaviour of the underlying dynamical system. SDEs are generally defined in terms of their deterministic drift and additive stochastic diffusion components, where both components are allowed to depend on time and the system states. Let $(\Omega, \mathcal{F}, P)$ be the probability space and $\{\mathcal{F}_t, t \geq 0\}$ be the natural filtration constructed from sub-$\sigma$-algebras of $\mathcal{F}$. Under this probability space, an $m$-dimensional $N$-factor SDE driven by the $N$-dimensional Brownian motion $\{B_j(t); j = 1, \ldots, N\}$ is defined as,
$dX_i(t) = f_i(\mathbf{X}_t, t)\,dt + \sum_{j=1}^{N} g_{ij}(\mathbf{X}_t, t)\,dB_j(t), \quad i = 1, \ldots, m$ (2)
where $\mathbf{X}_t \in \mathbb{R}^{m}$ denotes the $\mathcal{F}_t$-measurable state vector, $\boldsymbol{f}(\mathbf{X}_t, t) \in \mathbb{R}^{m}$ is the drift vector, $\boldsymbol{g}(\mathbf{X}_t, t) \in \mathbb{R}^{m \times N}$ is the diffusion matrix, and $\mathbf{B}(t) \in \mathbb{R}^{N}$ is the Brownian motion. The white noises are the generalized derivatives of Brownian motion, i.e. $W_j(t) = \dot{B}_j(t)$; this ensures that there exists a corresponding SDE of the form of Eq. (2). In compact matrix notation, the SDEs can be expressed as,
$d\mathbf{X}_t = \boldsymbol{f}(\mathbf{X}_t, t)\,dt + \boldsymbol{g}(\mathbf{X}_t, t)\,d\mathbf{B}(t)$ (3)
The solution to Eq. (3) can be obtained using various stochastic integration schemes [43, 44, 45, 46]. The discovery of governing physics in terms of an SDE requires the identification of the drift and diffusion components from output measurements. One of the challenges arising in the identification of SDEs from output data is the non-differentiability of the Brownian motion. In general, Brownian motion is not a well-defined mathematical function, since it is nowhere differentiable; however, its continuity is assumed to exist in the mean-square sense. Formally, if the interval $[0, t]$ is partitioned into $n$ parts as $0 = t_0 < t_1 < \ldots < t_n = t$, then for a random process $X(t)$ the quadratic variation is defined as the limit of $\sum_{k=1}^{n} |X(t_k) - X(t_{k-1})|^2$ as the partition is refined. From the properties of Brownian motion, it can be found that the quadratic variation of $B(t)$ over $[0, t]$ converges to $t$; this states that $B(t)$ has unbounded total variation but a non-vanishing, finite quadratic variation. On the other hand, smooth deterministic functions have finite variation but zero quadratic variation. Under these properties, the Kramers–Moyal formula suggests that the drift and diffusion components of an SDE can be extracted from the sample time history in terms of their linear and quadratic variations, respectively. Since the continuity of Brownian motion is defined mathematically in the mean-square sense, the diffusion components do not have finite variation but are bounded by their quadratic variation. Thus, the diffusion components, unlike the drifts of an SDE, are recoverable only through the covariation terms. The linear and quadratic variations denote the first and second moments of the sample increments. Leveraging these facts, the drift and diffusion components of the SDE given in Eq. (3) can be expressed in terms of the sample time history as follows (a simplified detailed derivation is presented in A):
$f_i(\mathbf{X}_t) = \lim_{\Delta t \to 0} \dfrac{1}{\Delta t} \mathbb{E}\left[ X_i(t + \Delta t) - X_i(t) \mid \mathbf{X}_t \right]$; $\quad \Gamma_{ij}(\mathbf{X}_t) = \lim_{\Delta t \to 0} \dfrac{1}{\Delta t} \mathbb{E}\left[ \left( X_i(t + \Delta t) - X_i(t) \right)\left( X_j(t + \Delta t) - X_j(t) \right) \mid \mathbf{X}_t \right]$ (4)
where $f_i$ is the $i$-th drift component and $\Gamma_{ij}$ is the $(i,j)$-th component of the diffusion covariance matrix $\boldsymbol{\Gamma} = \boldsymbol{g}\boldsymbol{g}^{T}$. We assume that the analytical forms of the drift and diffusion components can be obtained from measured sample paths in terms of some basis functions. Let $\{\ell_1(\mathbf{X}), \ldots, \ell_K(\mathbf{X})\}$ represent various linear and non-linear mathematical functions evaluated on the system states, and let $N$ be the length of the sample path. Also, let $\mathbf{L}^{f} \in \mathbb{R}^{N \times K}$ and $\mathbf{L}^{\Gamma} \in \mathbb{R}^{N \times K}$ be the libraries of candidate functions constructed from this set. The candidate functions can take several functional forms, such as polynomial, trigonometric, etc. Then, the drift components and the terms of the diffusion covariance matrix can be expressed as linear combinations of the library functions; briefly,
$f_i = \mathbf{L}^{f} \boldsymbol{\theta}^{f}_{i}; \quad \Gamma_{ij} = \mathbf{L}^{\Gamma} \boldsymbol{\theta}^{\Gamma}_{ij}$ (5)
where $\boldsymbol{\theta}^{f}_{i}$ and $\boldsymbol{\theta}^{\Gamma}_{ij}$ are the weights associated with the basis functions of the drift and diffusion covariance components, respectively. In general, the libraries $\mathbf{L}^{f}$ and $\mathbf{L}^{\Gamma}$ can be different, but one can construct a single library $\mathbf{L}$ by taking into account all the basis functions for the discovery of both the drift and diffusion terms. In a general setting, the discovery of the drift terms culminates in the following regression problem,
$\left[ \mathbf{Y}^{f}_{1}, \ldots, \mathbf{Y}^{f}_{m} \right] = \mathbf{L} \left[ \boldsymbol{\theta}^{f}_{1}, \ldots, \boldsymbol{\theta}^{f}_{m} \right] + \left[ \boldsymbol{\epsilon}^{f}_{1}, \ldots, \boldsymbol{\epsilon}^{f}_{m} \right]$ (6)
The problem for the discovery of the diffusion components can be formulated in the same way as above and is thus omitted here for brevity. As a subset of the above regression problem, one can solve the following equations and identify the SDE components independently,
$\mathbf{Y}^{f}_{i} = \mathbf{L} \boldsymbol{\theta}^{f}_{i} + \boldsymbol{\epsilon}^{f}_{i}$ (7a)
$\mathbf{Y}^{\Gamma}_{ij} = \mathbf{L} \boldsymbol{\theta}^{\Gamma}_{ij} + \boldsymbol{\epsilon}^{\Gamma}_{ij}$ (7b)
where $\mathbf{Y}^{f}_{i}$ and $\mathbf{Y}^{\Gamma}_{ij}$ are the target vectors of the sparse regression problems associated with the $i$-th drift component and the $(i,j)$-th diffusion covariance term, respectively, and $\boldsymbol{\epsilon}^{f}_{i}$ and $\boldsymbol{\epsilon}^{\Gamma}_{ij}$ are the corresponding residual error vectors. For the discovery of the drift component in Eq. (7a), the target vector is constructed from the linear variation of the sample path, $Y^{f}_{i}(t_k) = \left( X_i(t_{k+1}) - X_i(t_k) \right)/\Delta t$, and in the formulation of the sparse regression problem for the diffusion components in Eq. (7b), the target vector is constructed from the quadratic covariation, $Y^{\Gamma}_{ij}(t_k) = \left( X_i(t_{k+1}) - X_i(t_k) \right)\left( X_j(t_{k+1}) - X_j(t_k) \right)/\Delta t$. The weight vectors for the $i$-th drift component and the $(i,j)$-th term of the diffusion covariance matrix are $\boldsymbol{\theta}^{f}_{i} \in \mathbb{R}^{K}$ and $\boldsymbol{\theta}^{\Gamma}_{ij} \in \mathbb{R}^{K}$, respectively. The straightforward application of the Gibbs sampling described in Eqs. (17)–(22) can be performed to solve the regression problems in Eq. (7). Noting that an $m$-dimensional process has $m$ drift components, $m$ independent regressions are required to identify all the drifts. Further, it can be noted that for an $m$-dimensional process with an $m \times N$ diffusion matrix, the diffusion covariance matrix has dimension $m \times m$. However, due to the symmetry of the covariance matrix, only $m(m+1)/2$ independent regressions need to be performed to completely identify the diffusion space.
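To make the construction concrete, the following is a minimal sketch, assuming an illustrative one-dimensional linear SDE $dX = -aX\,dt + s\,dB$ (not one of the paper's case studies): it simulates a sample path with the Euler–Maruyama scheme, builds the linear- and quadratic-variation target vectors of Eq. (7), and recovers the weights by ordinary least squares as a plain stand-in for the sparse Bayesian regression developed in Section 2.2.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 1-D linear SDE, dX = -a*X dt + s dB (not one of the paper's
# case studies), simulated with the Euler-Maruyama scheme.
a, s, dt, n = 2.0, 0.5, 1e-3, 100_000
X = np.empty(n)
X[0] = 1.0
for k in range(n - 1):
    X[k + 1] = X[k] - a * X[k] * dt + s * np.sqrt(dt) * rng.normal()

# Kramers-Moyal target vectors of Eq. (7), built from the sample increments:
dX = np.diff(X)
Yf = dX / dt           # linear variation  -> drift target
Yg = dX ** 2 / dt      # quadratic variation -> diffusion-variance target

# Small candidate library evaluated along the path: [1, X, X^2]
L = np.column_stack([np.ones(n - 1), X[:-1], X[:-1] ** 2])
theta_f, *_ = np.linalg.lstsq(L, Yf, rcond=None)
theta_g, *_ = np.linalg.lstsq(L, Yg, rcond=None)
print(np.round(theta_f[1], 1))   # weight on X: close to -a = -2
print(np.round(theta_g[0], 2))   # constant weight: close to s**2 = 0.25
```

The drift weight on $X$ should recover $-a$, and the constant weight of the quadratic-variation regression should recover $s^2$, up to a sampling error that shrinks with the sample length.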
2.2 Discovery of SDE by sparse Bayesian regression
Since Eqs. (7a) and (7b) reflect the same sparse regression problem, for brevity they can be represented using the following one-dimensional regression equation:
$\mathbf{Y} = \mathbf{L}\boldsymbol{\theta} + \boldsymbol{\epsilon}$ (8)
where $\mathbf{Y} \in \mathbb{R}^{N}$ denotes the $N$-dimensional target vector, $\mathbf{L} \in \mathbb{R}^{N \times K}$ denotes the library of candidate functions, $\boldsymbol{\theta} \in \mathbb{R}^{K}$ is the weight vector, and $\boldsymbol{\epsilon} \in \mathbb{R}^{N}$ is the residual error vector representing the model mismatch error. In Bayesian inference, given the output measurements, the aim is to find the posterior distribution of the weight vector $\boldsymbol{\theta}$. For estimating this posterior distribution, one can apply Bayes' formula to Eq. (8), which yields,
$p(\boldsymbol{\theta} \mid \mathbf{Y}) = \dfrac{p(\mathbf{Y} \mid \boldsymbol{\theta})\, p(\boldsymbol{\theta})}{p(\mathbf{Y})}$ (9)
Modelling the mismatch error as an i.i.d. Gaussian random variable with zero mean and variance $\sigma^2$, the likelihood function is written as,
$p(\mathbf{Y} \mid \boldsymbol{\theta}, \sigma^2) = \mathcal{N}(\mathbf{L}\boldsymbol{\theta}, \sigma^2 \mathbf{I}_{N})$ (10)
where $\mathbf{I}_{N}$ denotes the $N \times N$ identity matrix. The discovery of governing physics demands that the resulting model be interpretable, i.e. the solution for the weight vector should contain only a few terms in the final model. In this work, the discontinuous spike and slab (DSS) distribution is used to promote sparsity in the solution. Equation discovery using the DSS-prior requires the classification of the weights of the basis functions into either the spike or the slab component. This is done by introducing a latent indicator variable $Z_k$ for each component $\theta_k$ of the weight vector $\boldsymbol{\theta}$. The latent indicator variable is assigned the value 1 if the weight corresponds to the slab component; otherwise, it takes the value 0 when the weight belongs to the spike component. It is to be noted that the components of the weight vector that belong to the spike do not contribute to the discovery of governing physics. Therefore, denoting by $\boldsymbol{\theta}_Z$ the weight vector containing only those variables from $\boldsymbol{\theta}$ for which $Z_k = 1$, the DSS-prior is defined as [36, 38],
$p(\theta_k \mid Z_k) = (1 - Z_k)\, p_{\mathrm{spike}}(\theta_k) + Z_k\, p_{\mathrm{slab}}(\theta_k)$ (11)
where the spike and slab distributions are defined as $p_{\mathrm{spike}}(\theta_k) = \delta_0(\theta_k)$ and $p_{\mathrm{slab}}(\boldsymbol{\theta}_Z) = \mathcal{N}(\mathbf{0}, \sigma^2 \vartheta_s \mathbf{I})$, with $\vartheta_s$ the slab variance. It can be observed that the spike and slab distributions are assumed to be independent. To increase the effectiveness and the speed of convergence to the sparse solution, the noise variance $\sigma^2$ and the slab variance $\vartheta_s$ are treated as random variables. The noise variance is assigned an Inverse-Gamma distribution with hyperparameters $\alpha_\sigma$ and $\beta_\sigma$, and the slab variance is assigned an Inverse-Gamma prior with hyperparameters $\alpha_\vartheta$ and $\beta_\vartheta$. The latent variables take values from the set $\{0, 1\}$; thus each element of the latent vector $\mathbf{Z}$ is assigned a Bernoulli prior with common hyperparameter $p_0$. The hyperparameter $p_0$ is simulated from a Beta prior with hyperparameters $a_p$ and $b_p$. This can be summarized as,
$p(\sigma^2) = IG(\alpha_\sigma, \beta_\sigma)$ (12)
$p(\vartheta_s) = IG(\alpha_\vartheta, \beta_\vartheta)$ (13)
$p(Z_k \mid p_0) = \mathrm{Bern}(p_0), \quad k = 1, \ldots, K$ (14)
$p(p_0) = \mathrm{Beta}(a_p, b_p)$ (15)
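The hierarchy of Eqs. (12)-(15), combined with the DSS-prior, can be exercised by a simple forward draw; the hyperparameter values below are illustrative placeholders, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Forward draw through the prior hierarchy of Eqs. (12)-(15) combined with
# the DSS prior of Eq. (11). All hyperparameter values are illustrative.
K = 10                                  # number of candidate basis functions
ap, bp = 0.1, 1.0                       # Beta hyperparameters for p0
av, bv = 0.5, 0.5                       # Inverse-Gamma hyperparameters (slab)
an, bn = 2.0, 1.0                       # Inverse-Gamma hyperparameters (noise)

p0 = rng.beta(ap, bp)                   # success probability, Beta prior
Z = rng.random(K) < p0                  # Bernoulli latent indicators
vs = 1.0 / rng.gamma(av, 1.0 / bv)      # slab variance (Inverse-Gamma draw)
sig2 = 1.0 / rng.gamma(an, 1.0 / bn)    # noise variance (Inverse-Gamma draw)

theta = np.zeros(K)                     # spike component: exact zeros
theta[Z] = rng.normal(0.0, np.sqrt(sig2 * vs), Z.sum())  # slab component

print(int(Z.sum()), "of", K, "weights fall in the slab")
```

The draw makes the shrinkage mechanism visible: weights attached to the spike are exactly zero, while the few slab weights are diffuse Gaussians whose scale is controlled by $\sigma^2 \vartheta_s$.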

With all the random variables, we construct a larger Bayesian hierarchical model, as shown in Fig. 2, where the hyperparameters $\alpha_\sigma$, $\beta_\sigma$, $\alpha_\vartheta$, $\beta_\vartheta$, $a_p$, and $b_p$ are provided as deterministic constants. From the DAG structure in Fig. 2, the joint distribution of the random variables $\boldsymbol{\theta}$, $\mathbf{Z}$, $\vartheta_s$, $\sigma^2$, $p_0$ is obtained as,
$p(\boldsymbol{\theta}, \mathbf{Z}, \vartheta_s, \sigma^2, p_0 \mid \mathbf{Y}) = \dfrac{p(\mathbf{Y} \mid \boldsymbol{\theta}, \sigma^2)\, p(\boldsymbol{\theta} \mid \mathbf{Z}, \vartheta_s, \sigma^2)\, p(\mathbf{Z} \mid p_0)\, p(\vartheta_s)\, p(\sigma^2)\, p(p_0)}{p(\mathbf{Y})}$ (16)
where $p(\boldsymbol{\theta}, \mathbf{Z}, \vartheta_s, \sigma^2, p_0 \mid \mathbf{Y})$ denotes the joint distribution of the random variables, $p(\mathbf{Y} \mid \boldsymbol{\theta}, \sigma^2)$ denotes the likelihood function, $p(\boldsymbol{\theta} \mid \mathbf{Z}, \vartheta_s, \sigma^2)$ is the prior distribution for the weight vector, $p(\mathbf{Z} \mid p_0)$ is the prior distribution for the latent vector, $p(\vartheta_s)$ is the prior distribution for the slab variance, $p(\sigma^2)$ is the prior distribution for the noise variance, $p(p_0)$ is the prior distribution for the success probability, and $p(\mathbf{Y})$ is the marginal likelihood. Direct sampling from the joint distribution is intractable in this case due to the spike and slab distribution. Thus, the Gibbs sampling technique is used to draw random samples from the joint distribution [47]. For the Gibbs sampling, the conditional distributions of the random variables are derived in B. The weights corresponding to the spike distribution do not contribute to the selection of the correct basis functions; thus, only the weights corresponding to $Z_k = 1$, i.e. the vector $\boldsymbol{\theta}_Z$, are sampled using Gibbs sampling. Referring to Eqs. (12), (13), (14), and (15), the sequence of random variables in the Gibbs sampler is obtained by the following steps,
1. The weight vector $\boldsymbol{\theta}_Z$ is sampled from the Gaussian distribution with mean $\boldsymbol{\mu}_\theta$ and covariance $\boldsymbol{\Sigma}_\theta$ as,
$\boldsymbol{\theta}_Z \sim \mathcal{N}(\boldsymbol{\mu}_\theta, \boldsymbol{\Sigma}_\theta)$ (17)
where the mean and covariance are defined as $\boldsymbol{\mu}_\theta = \boldsymbol{\Sigma}_\theta \mathbf{L}_Z^{T} \mathbf{Y} / \sigma^2$ and $\boldsymbol{\Sigma}_\theta = \sigma^2 \left( \mathbf{L}_Z^{T}\mathbf{L}_Z + \vartheta_s^{-1}\mathbf{I} \right)^{-1}$, respectively, with $\mathbf{L}_Z$ the library restricted to the active basis functions.
2. The latent variable $Z_k$ is assigned a value from the set $\{0, 1\}$ using the Bernoulli distribution as,
$Z_k \sim \mathrm{Bern}(u_k)$ (18)
where $u_k = \dfrac{p_0\, p(\mathbf{Y} \mid Z_k = 1, \mathbf{Z}_{-k}, \vartheta_s)}{p_0\, p(\mathbf{Y} \mid Z_k = 1, \mathbf{Z}_{-k}, \vartheta_s) + (1 - p_0)\, p(\mathbf{Y} \mid Z_k = 0, \mathbf{Z}_{-k}, \vartheta_s)}$. Here, $\mathbf{Z}_{-k}$ denotes the latent vector consisting of all elements except the $k$-th component. The probability that the latent variable takes the value 0 or 1 is estimated from the marginal likelihood with the weights integrated out,
$p(\mathbf{Y} \mid \mathbf{Z}, \vartheta_s) = \int p(\mathbf{Y} \mid \boldsymbol{\theta}_Z, \sigma^2)\, p(\boldsymbol{\theta}_Z \mid \vartheta_s, \sigma^2)\, p(\sigma^2)\, d\boldsymbol{\theta}_Z\, d\sigma^2$ (19)
3. The noise variance $\sigma^2$ is simulated from the Inverse-Gamma distribution as,
$\sigma^2 \sim IG\!\left( \alpha_\sigma + \tfrac{N}{2},\; \beta_\sigma + \tfrac{1}{2}\left( \mathbf{Y} - \mathbf{L}_Z \boldsymbol{\theta}_Z \right)^{T}\left( \mathbf{Y} - \mathbf{L}_Z \boldsymbol{\theta}_Z \right) \right)$ (20)
4. The slab variance $\vartheta_s$ is sampled from the Inverse-Gamma distribution as,
$\vartheta_s \sim IG\!\left( \alpha_\vartheta + \tfrac{h_z}{2},\; \beta_\vartheta + \tfrac{\boldsymbol{\theta}_Z^{T}\boldsymbol{\theta}_Z}{2\sigma^2} \right)$ (21)
5. The success rate $p_0$ is sampled from the Beta distribution as,
$p_0 \sim \mathrm{Beta}(a_p + h_z,\; b_p + K - h_z)$ (22)
where $h_z = \sum_{k=1}^{K} Z_k$.
6. The weight vector $\boldsymbol{\theta}_Z$ is updated using step 1.
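The six steps can be sketched as a runnable collapsed Gibbs sampler on a synthetic sparse regression problem. Everything below (problem size, data, hyperparameter values) is illustrative rather than the paper's configuration, and the latent indicators are updated with the weights integrated out, which is one common way of evaluating the Bernoulli probabilities of Eqs. (18)-(19).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse regression: only basis functions 1 and 3 are active.
N, K = 500, 6
L = rng.normal(size=(N, K))
theta_true = np.array([0.0, 2.0, 0.0, -1.5, 0.0, 0.0])
Y = L @ theta_true + 0.1 * rng.normal(size=N)

# Illustrative hyperparameters (the paper fixes similar constants).
ap, bp = 0.1, 1.0        # Beta prior on the success probability p0
av, bv = 0.5, 0.5        # Inverse-Gamma prior on the slab variance
an, bn = 1e-4, 1e-4      # Inverse-Gamma prior on the noise variance

Z = np.ones(K, dtype=int)
sig2, vs, p0 = 1.0, 10.0, 0.5
n_iter, burn = 1500, 500
pip = np.zeros(K)

def log_evidence(Zmask, sig2, vs):
    """log p(Y | Z, sig2, vs) with theta integrated out; terms common to
    all Z configurations are dropped."""
    idx = np.flatnonzero(Zmask)
    if idx.size == 0:
        return 0.0
    Lz = L[:, idx]
    A = Lz.T @ Lz + np.eye(idx.size) / vs
    b = Lz.T @ Y
    _, logdet = np.linalg.slogdet(A)
    return 0.5 * b @ np.linalg.solve(A, b) / sig2 \
        - 0.5 * logdet - 0.5 * idx.size * np.log(vs)

for it in range(n_iter):
    # Steps 2 and (19): latent indicators with weights collapsed out
    for k in range(K):
        z1, z0 = Z.copy(), Z.copy()
        z1[k], z0[k] = 1, 0
        lp1 = np.log(p0) + log_evidence(z1, sig2, vs)
        lp0 = np.log(1.0 - p0) + log_evidence(z0, sig2, vs)
        d = np.clip(lp0 - lp1, -50.0, 50.0)
        Z[k] = rng.random() < 1.0 / (1.0 + np.exp(d))
    idx = np.flatnonzero(Z)
    hz = idx.size
    # Step 1 (Eq. 17): Gaussian conditional for the active weights
    if hz > 0:
        Lz = L[:, idx]
        A = Lz.T @ Lz + np.eye(hz) / vs
        mu = np.linalg.solve(A, Lz.T @ Y)
        theta_z = rng.multivariate_normal(mu, sig2 * np.linalg.inv(A))
        resid = Y - Lz @ theta_z
    else:
        theta_z, resid = np.zeros(0), Y
    # Step 3 (Eq. 20): noise variance
    sig2 = 1.0 / rng.gamma(an + N / 2, 1.0 / (bn + resid @ resid / 2))
    # Step 4 (Eq. 21): slab variance
    vs = 1.0 / rng.gamma(av + hz / 2, 1.0 / (bv + theta_z @ theta_z / (2 * sig2)))
    # Step 5 (Eq. 22): success probability
    p0 = rng.beta(ap + hz, bp + K - hz)
    if it >= burn:
        pip += Z

pip /= n_iter - burn
print(np.round(pip, 2))   # high PIP only for the two active basis functions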
This MCMC is performed for a total of 3000 simulations, out of which the initial 1000 samples are discarded as burn-in. Let $N_s$ denote the number of MCMC samples retained after the burn-in samples are discarded. Then the marginal posterior inclusion probability (PIP) $:= p(Z_k = 1 \mid \mathbf{Y})$ for each basis function can be estimated by taking the mean over the Gibbs samples of the latent vector [36] as,
$p(Z_k = 1 \mid \mathbf{Y}) \approx \dfrac{1}{N_s} \sum_{j=1}^{N_s} Z_k^{(j)}$ (23)
The basis functions whose corresponding PIP values exceed 0.5, i.e. $p(Z_k = 1 \mid \mathbf{Y}) > 0.5$, are included in the final model of the discovered equation. A PIP value greater than 0.5 indicates that the selected basis function is observed in more than half of the MCMC simulations. A higher PIP value suggests that, in unseen scenarios, the corresponding basis function is highly likely to be required to represent the target vector. The mean and covariance of the weight vector give the expected values and standard deviations of the system parameters. The standard deviation provides the confidence interval of the parameters, which can be used to design the lower and upper bounds of a random variable in an unseen environment. A pseudo code for the proposed framework is provided in Algorithm 1.
The Gibbs sampler is initialised with the following values: $p_0 = 0.1$, $\vartheta_s = 10$, and $\sigma^2$ set equal to the residual variance from ordinary least-squares regression. The initial vector of binary latent variables is computed by setting $\mathbf{Z}$ to zero and then activating those components of $\mathbf{Z}$ that reduce the mean-squared error between the training data and the model obtained from ordinary least squares. For this purpose, a forward search followed by a backward search is devised, where the backward search iterates through the activated components of the initial latent vector in a fashion similar to the forward search. Given all the other parameters, the initial value of $\boldsymbol{\theta}$ is obtained from Eq. (16). For the commencement of the algorithm, the deterministic prior parameters are set as follows: $a_p = 0.1$ and $b_p = 1$ for the Beta prior on $p_0$; $\alpha_\vartheta = 0.5$ and $\beta_\vartheta = 0.5$ for the Inverse-Gamma prior on the slab variance; and small non-informative values of $\alpha_\sigma$ and $\beta_\sigma$ for the Inverse-Gamma prior on the measurement noise. A Markov chain with 3000 samples is used for Gibbs sampling. The first 1000 samples are discarded as burn-in, and the remaining 2000 samples are used for posterior computation.
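The forward-backward initialisation of the latent vector can be sketched as follows. Since the plain training mean-squared error never increases when a column is added, this sketch assumes a BIC-style complexity penalty as the acceptance criterion; that penalty, and the helper name `init_latent`, are assumptions rather than the authors' exact routine.

```python
import numpy as np

def init_latent(L, Y):
    """Greedy forward-then-backward search for the initial latent vector Z.
    A BIC-style score (fit + complexity) is assumed here, because the raw
    training MSE is non-increasing in the number of active columns and
    would otherwise activate every component."""
    N, K = L.shape

    def score(mask):
        idx = np.flatnonzero(mask)
        if idx.size == 0:
            mse = np.mean(Y ** 2)
        else:
            theta, *_ = np.linalg.lstsq(L[:, idx], Y, rcond=None)
            mse = np.mean((Y - L[:, idx] @ theta) ** 2)
        return N * np.log(mse) + np.log(N) * idx.size

    Z = np.zeros(K, dtype=int)
    for k in range(K):                 # forward pass: try activating each
        trial = Z.copy(); trial[k] = 1
        if score(trial) < score(Z):
            Z = trial
    for k in np.flatnonzero(Z):        # backward pass: try deactivating
        trial = Z.copy(); trial[k] = 0
        if score(trial) < score(Z):
            Z = trial
    return Z

rng = np.random.default_rng(5)
L = rng.normal(size=(200, 5))
Y = 3.0 * L[:, 2] - 1.0 * L[:, 4] + 0.1 * rng.normal(size=200)
print(init_latent(L, Y))
```

On this synthetic problem the search should activate the two truly contributing columns and reject the spurious ones, giving the sampler a sensible starting point.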
For all the demonstrations in this work, the data are simulated using the Euler–Maruyama (EM) scheme at a sampling frequency of 1000 Hz, using the parameters listed in Table 1. The noise in the measurements is modelled as a sequence of zero-mean Gaussian white noise with a standard deviation equal to 5% of the standard deviation of the simulated quantities. In this work, the dictionary is constructed from 5 types of mathematical functions, each representing a mapping of the $m$-dimensional state vector $\mathbf{X}$:
$\mathbf{L}(\mathbf{X}) = \left[ \mathbf{1} \;\; \mathbf{P}^{\ell}(\mathbf{X}) \;\; \mathrm{sgn}(\mathbf{X}) \;\; |\mathbf{X}| \;\; \mathbf{X} \otimes |\mathbf{X}| \right]$ (24)
Here $\mathbf{1}$ denotes the $N$-dimensional vector of ones, $\mathbf{P}^{\ell}(\mathbf{X})$ denotes the set of terms present in the multinomial expansion $(X_1 + \ldots + X_m)^{\ell}$, $\mathrm{sgn}(\mathbf{X})$ represents the signum functions of the form $\mathrm{sgn}(X_i)$, and $|\mathbf{X}|$ denotes the absolute mapping of the states $|X_i|$. The tensor product term $\mathbf{X} \otimes |\mathbf{X}|$ represents the set of functions $\{X_i |X_j|\}$. For this study, the order of the polynomial is chosen as $\ell = 6$. The cardinality of the library follows from the sizes of these groups, where the number of terms in the multinomial expansion up to order $\ell$ in $m$ variables is $\binom{m + \ell}{\ell}$. The complete architecture of the proposed framework is illustrated in Fig. 1.
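A sketch of the dictionary construction of Eq. (24), assuming the five groupings described above (constant, multinomial terms, signum, absolute value, and the tensor product $X_i|X_j|$); the helper name `build_library` is hypothetical.

```python
import itertools
import numpy as np

def build_library(X, order=6):
    """Candidate dictionary of Eq. (24): constant, multinomial terms up to
    the given order, signum, absolute value, and X_i*|X_j| products."""
    N, m = X.shape
    cols, names = [np.ones(N)], ["1"]
    # multinomial terms of (x1 + ... + xm)^p for p = 1..order
    for p in range(1, order + 1):
        for combo in itertools.combinations_with_replacement(range(m), p):
            cols.append(np.prod(X[:, combo], axis=1))
            names.append("*".join(f"x{i+1}" for i in combo))
    for i in range(m):                       # signum terms
        cols.append(np.sign(X[:, i])); names.append(f"sgn(x{i+1})")
    for i in range(m):                       # absolute-value terms
        cols.append(np.abs(X[:, i])); names.append(f"|x{i+1}|")
    for i in range(m):                       # tensor products x_i*|x_j|
        for j in range(m):
            cols.append(X[:, i] * np.abs(X[:, j]))
            names.append(f"x{i+1}|x{j+1}|")
    return np.column_stack(cols), names

X = np.random.default_rng(0).normal(size=(10, 2))
Lib, names = build_library(X, order=2)
print(Lib.shape)   # (10, 14): 1 constant + 5 multinomial + 2 sgn + 2 abs + 4 tensor
```

For $m = 2$ and order 2, the constant plus multinomial block has $\binom{2+2}{2} = 6$ columns, consistent with the counting formula quoted in the text.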
Simulated systems | Drift parameters (linear) | Drift parameters (non-linear) | Diffusion parameters
Black-Scholes | = 2 | - | = 1 |
Duffing-Van der pol | = 1, = , = 2 | = | = 10 |
2-DOF non-linear | = = 1, = | = 1, = 9.81 | = 10, = 10 |
= , = = 2 | |||
Bouc-Wen | = 1, = , = 20, = 0.5 | = = 0.5, = 1, = 3 | = 2 |
3 Discovery of governing physics of example problems
In this section, the efficacy and robustness of the proposed Bayesian physics discovery framework is demonstrated on a variety of representative stochastic non-linear dynamical systems. The examples include: (a) the Black–Scholes SDE, (b) the Duffing–Van der pol oscillator, and (c) a two degree-of-freedom (DOF) base-isolated shear building. For the equation discovery, it is assumed that the input forces are not measurable and that only noisy measurements of the displacements are available. The velocity component is obtained from the displacement vector through numerical differentiation, namely the fourth-order central finite difference formula. Additionally, a case study using the Bouc–Wen oscillator is undertaken, where it is assumed that the hysteresis measurement is also unavailable. The proposed framework therefore treats it as a partially observed system and tries to estimate the effect of the unobserved state without explicitly considering it in the library of candidate functions. The results of the case studies undertaken in this work are presented in Figs. 3, 4 and Table 2. These results demonstrate sufficient accuracy of the discovered physics and the robustness of the proposed framework in learning the governing law from limited and noisy displacement measurements only (without any input measurements).



Correct SDEs:
Black–Scholes:
Duff.–Van der pol:
2-DOF non-linear:
Identified SDEs:
Black–Scholes:
Duff.–Van der pol:
2-DOF non-linear:
3.1 Black–Scholes SDE
The Black–Scholes SDE (formulated as a geometric Brownian motion) describes a continuous-time stochastic process in which the logarithm of the randomly varying quantity follows a Brownian motion. The Black–Scholes SDE is frequently used in the stock market for modelling the evolution of the stock price of an underlying asset [48]. The Black–Scholes SDE has drift $\lambda X_t$ and diffusion $\mu X_t$, with $\lambda$ and $\mu$ being real constants. Towards this, the Black–Scholes SDE is defined as a geometric Brownian motion as follows:
$dX_t = \lambda X_t\, dt + \mu X_t\, dB_t$ (25)
where $B_t$ is the Brownian motion. Data is generated by simulating Eq. (25) and then corrupting it with Gaussian white noise. Identifying the diffusion term as $\mu X_t$, the variance of the diffusion is obtained as $\mu^2 X_t^2$. The results of this case study are plotted in Figs. 3(a) and 4(a). Fig. 3(a) depicts that there are two identified basis functions, $X$ and $|X|$. However, from Eq. (25), it is obvious that only one function should have been identified as the driving function of the Black–Scholes SDE. To understand this discrepancy, one needs to consider that the random variable in the Black–Scholes SDE evolves as a geometric Brownian motion (GBM), and GBMs always assume positive values (e.g. real stock prices). Since the characteristics of the functions $X$ and $|X|$ are therefore identical in this case, the algorithm identifies both functions with almost equal probability. A similar phenomenon is observed in the identification of the diffusion term: Fig. 4(a) depicts almost equal contributions from the functions $X^2$ and $X|X|$, although only the $X^2$ term should have been identified. Upon combining the corresponding parameter values of the basis functions $X$ and $|X|$ for the drift component, and of $X^2$ and $X|X|$ for the diffusion component, the Black–Scholes SDE in Eq. (25) can easily be simulated for any unseen environmental condition.
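The ambiguity between the two similar candidate functions can be reproduced directly: a geometric Brownian motion started from a positive value remains positive, so the candidate columns $X$ and $|X|$ coincide and the combined library matrix is rank-deficient. The snippet below simulates Eq. (25) with the Euler–Maruyama scheme using $\lambda = 2$ and $\mu = 1$, consistent with Table 1's Black–Scholes entry.

```python
import numpy as np

rng = np.random.default_rng(42)

# Euler-Maruyama simulation of the geometric Brownian motion of Eq. (25);
# lam = 2 and mu = 1 follow Table 1's Black-Scholes entry.
lam, mu, dt, n = 2.0, 1.0, 1e-3, 1000
X = np.empty(n)
X[0] = 1.0
for k in range(n - 1):
    dB = np.sqrt(dt) * rng.normal()
    X[k + 1] = X[k] + lam * X[k] * dt + mu * X[k] * dB

# A GBM started from a positive value stays positive, so the candidate
# columns X and |X| coincide and no regression can separate them.
assert np.all(X > 0)
print(bool(np.allclose(X, np.abs(X))))                       # True
rank = np.linalg.matrix_rank(np.column_stack([X, np.abs(X)]))
print(rank)                                                  # 1
```

The rank-1 library explains why the sampler spreads the posterior inclusion probability across both columns: the data cannot distinguish them.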
To verify that this is not a limitation of the proposed scheme, the numerical demonstration is repeated in Figs. 3(b) and 4(b) without the redundant absolute-value counterparts in the library. The parameters of the basis functions for the drift and diffusion components are portrayed in Figs. 5(c) and 5(d), respectively. The results clearly show that the proposed scheme is able to identify the basis functions, along with their associated parameters, without any significant error. As a consequence of the above case study, one can ask which basis functions should be considered in the library. One possibility is to visualize the time-series data and then apply scientific and engineering judgement. For example, if the time history shows no zero crossing, one can opt against simultaneously considering functions that share similar properties on such data (e.g. $X$ and $|X|$). Alternatively, one can retain both functions, since similar terms will occur with equal probability and, in combination, will reproduce the observed behaviour; however, for future prediction such a model will be applicable only to processes exhibiting no zero crossings.
Finally, to illustrate the robustness of the proposed approach and simulate a realistic scenario, we consider a case where the data are generated using the strong Taylor 1.5 stochastic integration scheme. The results of this study are provided in Fig. 6. From Figs. 6(a) and (b), it is evident that the proposed framework correctly identifies the drift and diffusion components of the Black-Scholes equation. Subplots (c) and (d) further show the expected values of the parameters of the identified basis functions, which match the parameters listed in Table 1 almost exactly. This shows that, despite its lower-order formulation, the proposed framework is robust and well equipped.
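As a concrete illustration of the data-generation step described above, the Euler-Maruyama simulation of a geometric-Brownian-motion SDE of the Black-Scholes type can be sketched in a few lines of Python. This is a minimal sketch: the parameter values, step size, function name, and the additive measurement-noise level are illustrative assumptions, not the settings used in this study.

```python
import numpy as np

def euler_maruyama_gbm(x0, lam, sigma, dt, n_steps, rng):
    """Simulate one path of dX = lam*X dt + sigma*X dB with Euler-Maruyama."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + lam * x[k] * dt + sigma * x[k] * dB
    return x

rng = np.random.default_rng(0)
# illustrative parameters (not the paper's): lam = 2, sigma = 1, dt = 1e-3
path = euler_maruyama_gbm(x0=1.0, lam=2.0, sigma=1.0, dt=1e-3, n_steps=1000, rng=rng)
# corrupt the simulated path with Gaussian measurement noise, as in the text
noisy = path + 0.01 * path.std() * rng.normal(size=path.size)
```

Because every sample of such a path stays positive, basis functions that coincide on the positive half-line become indistinguishable from the data, which is consistent with the ambiguity discussed above.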

3.2 Duffing-Van der Pol oscillator
In this section, the second-order non-linear hardening Duffing-Van der Pol oscillator is considered. The Duffing-Van der Pol oscillator draws its physical relevance from models of flow-induced structural vibration problems. It has a cubic dissipating force and is driven by multiplicative white noise. The equation of motion of the system under consideration is expressed as:
(26)
where , , and are the mass, damping, and stiffness parameters of the oscillator. Here, is a real-valued parameter associated with the cubic non-linearity, is the strength of the multiplicative white noise, and is the Brownian motion. The time derivative of the Brownian motion here represents the white noise. With the state-space transformation and , where and represent the displacement and velocity, the corresponding first-order Itô SDEs for the dynamical system are derived as,
(27)
For estimating the drift and diffusion terms, only the evolution of the second variable is used to construct the target linear and quadratic variation vectors from the sample paths using the Kramers-Moyal formulae. Here, the variance of the diffusion term is identified as . The results for the basis function selection are displayed in Figs. 3(c) and 4(c), respectively, while the associated parameters are plotted in Fig. 7. From Figs. 3(c) and 4(c), it is evident that the proposed framework accurately discovers the basis functions for the drift and diffusion terms as and , respectively.
The pairwise joint posterior distributions of the weights associated with the discovered basis functions are presented in Fig. 7. In subplots (a), (b), and (c), the weights of the discovered drift functions are shown. It is evident from these figures that the actual parameters associated with the drift component of the Duffing-Van der Pol oscillator, as listed in Table 1, are correctly identified with very small relative error. Subplot (d) depicts the posterior distribution of the identified diffusion basis function. The mean value of the posterior distribution represents the term , which in this case is = . The mean value here is obtained as approximately 106.47; taking its square root yields the approximate diffusion value . These results verify that the proposed scheme can successfully discover the governing physics of non-linear oscillators subjected to parametric excitation.
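The construction of the regression targets used throughout these case studies can be sketched as follows: by the Kramers-Moyal formulae, the one-step linear and quadratic variations of a measured path, divided by the time step, serve as noisy targets for the drift and the squared diffusion. The ordinary least-squares fit below merely stands in for the paper's sparse Bayesian regression, and the Ornstein-Uhlenbeck test system, parameter values, and variable names are illustrative assumptions.

```python
import numpy as np

def km_targets(x, dt):
    """One-step linear and quadratic variation targets along a sample path."""
    dx = np.diff(x)
    return dx / dt, dx ** 2 / dt  # drift target, squared-diffusion target

# Synthetic path of the Ornstein-Uhlenbeck SDE dX = -2 X dt + 1 dB,
# simulated with the Euler-Maruyama scheme (illustrative stand-in system).
rng = np.random.default_rng(1)
dt, n = 1e-3, 200_000
dB = rng.normal(0.0, np.sqrt(dt), size=n)
x = np.zeros(n + 1)
for k in range(n):
    x[k + 1] = x[k] - 2.0 * x[k] * dt + dB[k]

drift_t, diff_t = km_targets(x, dt)
# Candidate library [1, x, x^2]; least squares stands in for sparse regression.
lib = np.column_stack([np.ones(n), x[:-1], x[:-1] ** 2])
theta_drift, *_ = np.linalg.lstsq(lib, drift_t, rcond=None)
theta_diff, *_ = np.linalg.lstsq(lib, diff_t, rcond=None)
# theta_drift should recover roughly [0, -2, 0]; theta_diff roughly [1, 0, 0].
```

The targets are individually very noisy (their variance scales like 1/dt), but the regression averages over the whole path, so the coefficients are still recovered.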

3.3 Two-story base-isolated shear building
The responses of civil engineering structures under external excitation often exhibit the characteristics of stochastic non-linear dynamical systems. Here, a two-story base-isolated structure is considered: a linear structure connected to the foundation through a Coulomb friction-type base isolator. Under these considerations, the governing equation of motion of the system is written as,
(28)
where is the Coulomb friction force arising due to the sliding of the isolator bearings, and , , and are the mass, damping, and stiffness of the -DOF. Here, and are the strengths of the white noises and , where and are independent Brownian motions. In this case study, the equation discovery involves the identification of two drift and two covariance terms (one for each DOF). Following the case study of the Duffing-Van der Pol oscillator, the target linear variation vector for the discovery of the drift term needs to be constructed from the velocity component of the corresponding DOF. For the discovery of the component of the covariance matrix, the quadratic variation matrix is constructed from the pointwise product of the and response vectors.
(29)
The results for the identification of the basis functions for both drift terms are presented in Figs. 3(d) and (e). From Fig. 3(d), it can be observed that the identified basis functions are , which correctly match the terms in the equation of motion of the first DOF. Similarly, from Fig. 3(e) it can be verified that the proposed scheme has correctly identified the basis functions of the second drift component as . Figs. 8 and 9 show the pairwise joint posterior distributions of the weights associated with the first and second identified drift basis functions, respectively. From these figures, it is evident that the proposed scheme identifies the parameters of the basis functions listed in Table 1 with sufficient accuracy. In order to find the explicit values of the system stiffness and damping parameters, the parameters of the second DOF can be identified from Figs. 8(h), 8(e), and 8(f).


The identification results for the basis functions, along with the posterior distributions of their parameters, for both diffusion terms are depicted in Fig. 10. Figs. 10(a) and 10(b) present the results of the first diffusion term, and Figs. 10(c) and 10(d) those of the second. In both cases, it is evident that the proposed scheme has correctly identified the basis function as . The posterior distributions of the covariance terms and are plotted in Figs. 10(b) and 10(d), and their mean values represent and . A summary of the results is presented in Table 2.

3.4 Case study: Non-linear system with partially observed state variables
In this section, a case study is undertaken where we do not have access to all the state variables. We consider a Bouc-Wen oscillator described by three states, namely the displacement and velocity of the main mass and the hysteresis displacement associated with the isolator [43, 49]. In this context, we note that measurement of the hysteresis displacement is practically intractable, and only the displacement and velocity states of the main system are available. Further, measuring the velocity of a mechanical system using sensors is a practically challenging task. Through this example, we illustrate the applicability of the proposed approach to such a system. The governing equation of motion of a single-degree-of-freedom Bouc-Wen oscillator is [43]:
(30)
where , , and are the mass, damping, and stiffness of the system, is the system state, is a factor that defines the participation of the elastic force and the hysteresis force , , , and is the hysteresis displacement. The evolution of the hysteresis parameter is defined by the non-linear differential equation:
(31)
The positive exponential parameter defines the smoothness of the transition from elastic to the post-elastic branch, and the parameters , , and control the size and shape of the hysteresis loop. The drift and diffusion terms are:
(32)
Then, the covariance of the diffusion matrix is obtained as,
(33)
For the identification of the system, only the (noisy) displacement measurements are provided to the algorithm. The system is simulated for = 1 s at a sampling frequency of 1000 Hz using the parameter values = 1, = 20, = 100000, = 0.5, = 0.5, = 0.5, = 1, , and = 2. The identified basis functions for the drift and diffusion terms are presented in Fig. 11. From sub-figure (a), it is evident that more basis functions are identified than appear in the original equation. It is straightforward to note that these extra basis functions arise to account for the unobserved hysteresis state . One can then identify the basis functions and for the drift and diffusion, respectively, that best represent the data as,
(34)


The joint posterior distributions of the basis functions retained in Figs. 11(a) and (b) are plotted in Fig. 12. Based on the basis selection in Fig. 11 and the joint posteriors of the parameters plotted in Fig. 12, the final drift and diffusion fields for the Bouc-Wen system can be identified as follows,
(35)
Correct SDEs:
Identified SDEs:
For further treatment, the term is neglected due to its insignificant parameter value. With this, the identified equation for the Bouc-Wen system is shown in Eq. (36):
(36)
In order to verify the accuracy of the identified system, the identified equation (Eq. (36)) is simulated using the EM scheme with a time step of = 0.001. To account for the fact that the future external excitation is uncertain and its intensity will not remain the same, we model the forcing function as a Brownian force of different intensity; specifically, a Brownian noise intensity of = 6 is used to simulate the new, unseen environment. The results are compared in Fig. 13. We observe that the results obtained using the identified physics match the ground truth obtained by solving the actual system almost exactly. It is worthwhile to note that the proposed approach requires only 1 s of data and is able to predict the responses for a stochastic force of different intensity in the distant future (500 s). This clearly indicates that the proposed approach, even with partially observed, displacement-only measurements, generalizes to unseen environments and provides accurate estimates even for out-of-distribution inputs. A summary of the case studies undertaken in this work is presented in Table 2.

4 Discussions
We proposed a novel data-driven framework for discovering the governing physics of multi-dimensional non-linear dynamical systems from limited and noisy output-only data. In many natural processes, estimating the external input is intractable due to limitations of present measurement technologies. Further, measurement of all the state variables is often not possible due to high operational cost; the proposed framework is developed to identify governing equations in such scenarios. In essence, the proposed approach identifies SDEs from displacement-only measurements; in other words, no measurement of the input is required. The deterministic drift component of the SDE captures the dynamics of the underlying dynamical system, whereas the external stochastic force is identified through the diffusion component. This is a key novelty of the proposed framework.
The proposed framework employs sparse Bayesian linear regression in conjunction with the Gibbs sampler to simultaneously obtain the basis functions of the model and their associated parameters. To that end, the Kramers-Moyal expansion is utilized to express the drift and diffusion dynamics of an SDE in terms of the measured sample paths. The drift and diffusion vectors obtained from the sample paths are then used as target vectors in the sparse regression. The use of the Kramers-Moyal expansion within the purview of sparse Bayesian linear regression for the discovery of explainable governing physics is another novelty of the proposed framework. The unified framework inherits from Bayesian inference the ability to update the probability of observing the correct basis functions as new information arrives; this ability is missing in recently published least-squares-based physics discovery schemes. In a noisy and unseen environment, one is more interested in the probability distribution of a random variable than in a point estimate; the proposed framework is thus advantageous in such situations, as it provides a probability distribution for each of the weights associated with the corresponding basis functions. The standard deviations of these distributions can then be used to define lower and upper bounds on the estimates of the unknown parameters. Another advantage of the proposed framework is over neural-network-based grey-box models, in which the explicit expression of the discovered governing physics is not known. In the case of partially observed processes, the proposed framework still provides an explainable equation for the governing physics. Although the basis functions in the obtained equation differ significantly from those determined from first-principles laws, the obtained model is able to predict the distant future without significant error.
To the knowledge of the authors, the discovery of physics in a partially observed process from the purview of probability theory is one of the key contributions of the proposed framework.
Despite the advantages of the proposed framework over presently available physics discovery techniques, certain issues require special attention for accurate identification of the governing physics. The first issue is the judicious selection of the candidate basis functions. The presence of basis functions with high correlation may sometimes yield a physical law different from the actual physics; this is evident from the identification example on the Black-Scholes equation. The second issue is the associated computational cost. Although the computational cost of the proposed scheme is significantly less than the time required to train its neural-network alternatives, further improvements can be made. One alternative is to use the marginal standard deviation criterion instead of taking the mean of the latent vector. Another improvement in computational efficiency can be made by devising an appropriate filter to remove the measurement noise concurrently with the sparse Bayesian linear regression. The third issue is related to the quality of the data. Low-fidelity data are inexpensive to obtain, but their representation of the physical model can be poor due to the presence of noise; high-fidelity data are highly accurate but expensive to obtain. The proposed framework in its present state is not equipped to make use of both low- and high-fidelity data. Thus, to further enhance the fidelity of the proposed framework for field applications, a more robust algorithm needs to be formulated: the use of low-fidelity data would ensure low operational cost, while the high-fidelity data would ensure the accuracy of the discovered physics. Lastly, the proposed framework leverages the Kramers-Moyal expansion to express the drift and diffusion components of an SDE in terms of the first and second order moments of the sample paths.
The moments are approximated in a limiting sense by making use of the absolute and quadratic identities of Brownian motion. In this context, future improvements to the proposed framework can be made by exploiting higher-order moments to retain higher-order terms from the stochastic Taylor expansions of the drift and diffusion terms. Overall, the proposed framework provides a novel methodology for discovering the governing physics from displacement-only measurements and is particularly useful whenever it is not possible to measure the input force. The learned equations have been shown to be accurate and to exhibit good predictive ability in the distant future.
Appendix A Kramers-Moyal expansion for estimation of the drift and diffusion terms of an SDE from sample paths
Let us consider = to be the transition probability density of the solution of the SDE in Eq. (3). The Kramers-Moyal expansion is then written as [50],
(37)
where the coefficients in the expansion are given as,
(38)
In order to determine how many terms in Eq. (37) will be active for the SDE in Eq. (3), a Fokker-Planck equation for the pdf of the solution of Eq. (3) needs to be constructed. Towards this, let us consider a well-behaved generic function that is at least twice differentiable. For this function, Itô's lemma is written as,
(39)
Substituting Eq. (3) in the above equation gives the following:
(40)
Taking the derivative with respect to on both sides yields:
(41)
Expanding the above equation using the expectation operator and noting that the last term in the expression is a stochastic integral, one can invoke the result , where denotes the expectation operator, to obtain the following,
(42)
By expanding the terms in the above equation using integration by parts and assuming that , the following weak form is obtained:
(43)
Since the function is arbitrary, the Fokker-Planck-Kolmogorov equation is obtained:
(44)
In a general setting, the above equation for a higher-dimensional diffusion process is expressed as,
(45)
where the random variable satisfies the SDE in Eq. (3). At this point, it is clear that only two terms will be active. Thus, for = 2, Eq. (37) yields,
(46)
On comparison of Eqs. (37) and (46), it is straightforward to note that, to estimate the drift and diffusion terms, it suffices to estimate the first and second order moments of the variations of the random variable and then calculate the coefficients and ; formally,
(47)
To derive these coefficients in the Kramers-Moyal expansion, it is imperative to understand the one-step Itô-Taylor expansion of the random variable . Referring to Eq. (40), the integral form of Itô's lemma can be expressed as follows,
(48)
where and denote the first and second order partial derivatives with respect to the system states. For simplicity, two stochastic operators are defined in the following form:
(49)
Using the operators, Eq. (48) can be rephrased as,
(50)
Then substituting , one verifies that , and . This yields the first iteration:
(51)
In order to perform the second iteration, it is required to find the stochastic expansions of the terms , and . Thus, expanding , and and using the operators in Eq. (49) yields,
(52)
On substituting the above result in Eq. (51) and further iterating,
(53)
where the remainder term is,
(54)
The above equation is infinitely expandable using Eq. (49). For further treatment, the first moment is taken on both sides. Noting the results and ,
(55)
If the number of iterations is , then the Brownian integrals of multiplicity have a contribution proportional to , the time integrals have a contribution proportional to , and the combinations share a contribution between and . Thus, it is easy to infer that as , the higher-order terms in the remainder will vanish. Invoking these facts in the above expression, the first coefficient in the Kramers-Moyal expansion is obtained as,
(56)
To derive the second coefficient, it is only required to find the quadratic variation of the increment process of . Upon taking the second moment on both sides of Eq. (53), the following is obtained,
(57)
In the above proof, the Itô identities are utilized, which state that, in the mean-square convergence sense, the quadratic variations of the time and Brownian increments are given as , , , . The deterministic time integrals vanish automatically, since they have finite variance and zero quadratic covariance, and the other higher-order Brownian integrals vanish as . Thus, the second coefficient in the Kramers-Moyal expansion is obtained as,
(58)
Under the ergodic assumption on the evolution of the process , the time average is often replaced by the expectation operator . For the -variables , the diffusion process has the form,
The properties of the Brownian motion are and . If the covariation matrix of the diffusion components is given by , then the drift of the -diffusion process and the -element of the covariation matrix can be estimated as,
(59)
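A quick numerical sanity check of these moment estimators can be written for a two-variable diffusion with constant coefficients. The drift vector and diffusion matrix below are illustrative assumptions chosen purely so that the recovered quantities can be compared against known values.

```python
import numpy as np

# Check: for dX = b dt + g dB with constant b and g, the drift is recovered
# from the first moment of the increments and the covariation matrix g g^T
# from their second moment, each divided by dt.
rng = np.random.default_rng(2)
dt, n = 1e-3, 200_000
b = np.array([0.3, -1.0])                        # assumed constant drift
g = np.array([[1.0, 0.0],
              [0.5, 2.0]])                       # assumed diffusion matrix

dB = rng.normal(0.0, np.sqrt(dt), size=(n, 2))   # independent Brownian increments
dX = b * dt + dB @ g.T                           # increments of the diffusion

b_hat = dX.mean(axis=0) / dt                     # first-moment estimator
gamma_hat = (dX.T @ dX) / (n * dt)               # second-moment estimator
gamma_true = g @ g.T                             # [[1, 0.5], [0.5, 4.25]]
```

The drift contribution to the second moment is of order dt and vanishes in the limit, which is why the covariation estimator needs no centering.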
Appendix B Conditional probability distributions of the discontinuous Spike and Slab prior (DSS)
The DSS prior model is described in the Methods section. Drawing samples from the joint distribution directly is intractable but is possible through Markov chain Monte Carlo methods. In the present work, the Gibbs sampling technique is utilized to draw samples of the random variables , , , , and . For drawing samples using Gibbs sampling, each random variable needs to be conditioned on the others. However, due to the DAG structure in Fig. 2, the dependencies of the conditional distributions on the other random variables can be relaxed to some extent as follows:
(60)
The elements of the weight vector for which the latent variable , become an absorbing state in the Markov chain. However, to achieve a stationary distribution, one needs to construct an irreducible Markov chain. Thus, the conditionals over the weight vector are eliminated by marginalizing the conditional distributions with respect to the vector .
(61)
The conditional distribution :
(62)
where . Given the latent vector , is sampled as, .
The conditional distribution :
(63)
where is the number of elements of the weight vector for which the latent variable . Note that and are constants when is sampled; this can be observed from the DAG structure in Fig. 2. Thus, given , and , is sampled as .
The conditional distribution :
(64)
From the DAG structure, it can be understood that and act as constants when is sampled. Since the -values corresponding to the spike distribution do not contribute to the basis function selection, the remaining weights belonging to the slab, denoted by , are used. One can then expand the integrand as,
(65)
where and . Upon integrating the above expression with respect to , one obtains:
(66)
The random variable is sampled as, .
The conditional distribution : The elements of the weight vector corresponding to the spike distribution, i.e., the weights for which the latent variable , are assigned the value 0. The conditional distribution for the weights belonging to the slab distribution, denoted by , is derived as follows:
(67)
where and . Then, the random values of can be sampled as .
The conditional distribution : The latent variables corresponding to the elements of the weight vector are assigned the value 0 or 1 independently. The conditional probability distributions of are estimated by comparing the probabilities that the latent variable assumes the value 1 or 0, given the values of , and the remaining values of the latent vector . Here, the term denotes the latent vector with its element removed. Let denote the probability with which the latent variable takes the value 1. This probability is found as,
(68)
where . The marginal likelihood function is derived by integrating out the random variables and from the original likelihood function, as follows,
(69)
where is obtained by marginalizing the likelihood function with respect to the variable . Considering only the weights whose corresponding latent variables , denoted by , the probability is obtained as,
(70)
where and . The operator denotes the determinant. With the above result, can be derived as,
(71)
where the operator denotes the Gamma function. To summarise, the random variables can be sampled from the Bernoulli distribution with parameter as , where the marginalised likelihood function is obtained as,
(72)
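Put together, the conditionals above define one sweep of the Gibbs sampler. A minimal, self-contained sketch for a generic linear-in-parameters model Y = L θ + ε is given below; the hyper-parameter values, the fixed slab variance vs, and all variable names are illustrative assumptions rather than the settings used in the paper, while the latent-variable update follows the marginal-likelihood-ratio structure of Eqs. (68)-(72).

```python
import numpy as np

def gibbs_sweep(Y, L, z, sigma2, p0, vs, rng):
    """One Gibbs sweep over (z, theta, sigma2, p0) for a DSS-type model."""
    n, k = L.shape

    def log_marginal(zv):
        # log p(Y | z, sigma2) up to a z-independent constant, with the slab
        # prior theta_r ~ N(0, sigma2 * vs * I) integrated out.
        idx = np.flatnonzero(zv)
        if idx.size == 0:
            return -0.5 * (Y @ Y) / sigma2
        Lr = L[:, idx]
        S = Lr.T @ Lr + np.eye(idx.size) / vs
        mu = np.linalg.solve(S, Lr.T @ Y)
        quad = Y @ Y - Y @ (Lr @ mu)
        logdet = np.linalg.slogdet(S)[1] + idx.size * np.log(vs)
        return -0.5 * logdet - 0.5 * quad / sigma2

    # latent variables: z_j ~ Bernoulli(u_j) via the marginal-likelihood ratio
    for j in range(k):
        lp = np.empty(2)
        for val in (0, 1):
            z[j] = val
            lp[val] = log_marginal(z) + (np.log(p0) if val else np.log(1 - p0))
        u = 1.0 / (1.0 + np.exp(np.clip(lp[0] - lp[1], -700.0, 700.0)))
        z[j] = rng.binomial(1, u)

    # slab weights: Gaussian conditional; spike weights are exactly zero
    theta = np.zeros(k)
    idx = np.flatnonzero(z)
    if idx.size:
        Lr = L[:, idx]
        Sinv = np.linalg.inv(Lr.T @ Lr + np.eye(idx.size) / vs)
        theta[idx] = rng.multivariate_normal(Sinv @ Lr.T @ Y, sigma2 * Sinv)

    # noise variance: inverse-gamma conditional (hyper-parameters 1, 1 assumed)
    resid = Y - L @ theta
    rate = 1.0 + 0.5 * (resid @ resid + theta[idx] @ theta[idx] / vs)
    sigma2 = 1.0 / rng.gamma(1.0 + 0.5 * (n + idx.size), 1.0 / rate)

    # inclusion probability: Beta conditional (hyper-parameters 1, 1 assumed)
    p0 = float(np.clip(rng.beta(1.0 + z.sum(), 1.0 + k - z.sum()),
                       1e-10, 1.0 - 1e-10))
    return z, theta, sigma2, p0

# Toy demonstration: 5 candidate basis functions, only column 1 active.
rng = np.random.default_rng(3)
n, k = 100, 5
L = rng.normal(size=(n, k))
Y = 3.0 * L[:, 1] + 0.1 * rng.normal(size=n)
z, sigma2, p0, vs = np.ones(k, dtype=int), 1.0, 0.5, 10.0
z_samples, theta_samples = [], []
for it in range(200):
    z, theta, sigma2, p0 = gibbs_sweep(Y, L, z, sigma2, p0, vs, rng)
    if it >= 100:                               # discard burn-in
        z_samples.append(z.copy())
        theta_samples.append(theta.copy())
incl = np.mean(z_samples, axis=0)               # posterior inclusion probabilities
theta_mean = np.mean(theta_samples, axis=0)
```

Note how the weights are marginalized out of the latent-variable update, in the spirit of Eq. (61): sampling z from the conditional given a point mass at zero would make that state absorbing and the chain reducible.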
Acknowledgements
T. Tripura acknowledges the financial support received from the Ministry of Human Resource Development (MHRD), India, in the form of the Prime Minister's Research Fellowship (PMRF). S. Chakraborty acknowledges the financial support received from the Science and Engineering Research Board (SERB) through grant no. SRG/2021/000467 and the seed grant received from IIT Delhi.
Declarations
Funding
The corresponding author received funding from IIT Delhi in the form of a seed grant.
Conflicts of interest
The authors declare that they have no conflict of interest.
Availability of data and material
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Code availability
The Python codes written for this work are available from the corresponding author on reasonable request.
References
- [1] Somdatta Goswami, Cosmin Anitescu, Souvik Chakraborty, and Timon Rabczuk. Transfer learning enhanced physics informed neural network for phase-field modeling of fracture. Theoretical and Applied Fracture Mechanics, 106:102447, 2020.
- [2] Maziar Raissi. Deep hidden physics models: Deep learning of nonlinear partial differential equations. The Journal of Machine Learning Research, 19(1):932–955, 2018.
- [3] Souvik Chakraborty. Transfer learning based multi-fidelity physics informed deep neural network. Journal of Computational Physics, 426:109942, 2021.
- [4] Souvik Chakraborty. Simulation free reliability analysis: A physics-informed deep learning based approach. arXiv preprint arXiv:2005.01302, 2020.
- [5] Lu Lu, Pengzhan Jin, and George Em Karniadakis. Deeponet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193, 2019.
- [6] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
- [7] Laure Zanna and Thomas Bolton. Data-driven equation discovery of ocean mesoscale closures. Geophysical Research Letters, 47(17):e2020GL088376, 2020.
- [8] Yayun Zheng, Fang Yang, Jinqiao Duan, Xu Sun, Ling Fu, and Jürgen Kurths. The maximum likelihood climate change for global warming under the influence of greenhouse effect and lévy noise. Chaos: An Interdisciplinary Journal of Nonlinear Science, 30(1):013132, 2020.
- [9] Chen Jia, Michael Q Zhang, and Hong Qian. Emergent lévy behavior in single-cell stochastic gene expression. Physical Review E, 96(4):040402, 2017.
- [10] Frank Noé and Cecilia Clementi. Collective variables for the study of long-time kinetics from molecular trajectories: theory and methods. Current Opinion in Structural Biology, 43:141–147, 2017.
- [11] X Frank Zhang. Information uncertainty and stock returns. The Journal of Finance, 61(1):105–137, 2006.
- [12] Josh Bongard and Hod Lipson. Automated reverse engineering of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 104(24):9943–9948, 2007.
- [13] Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, 2016.
- [14] Hirotugu Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716–723, 1974.
- [15] Gideon Schwarz. Estimating the dimension of a model. The Annals of Statistics, pages 461–464, 1978.
- [16] Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. Science, 324(5923):81–85, 2009.
- [17] Ioannis G Kevrekidis and Giovanni Samaey. Equation-free multiscale computation: Algorithms and applications. Annual Review of Physical Chemistry, 60:321–344, 2009.
- [18] Niall M Mangan, Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Inferring biological networks by sparse identification of nonlinear dynamics. IEEE Transactions on Molecular, Biological and Multi-Scale Communications, 2(1):52–63, 2016.
- [19] Moritz Hoffmann, Christoph Fröhner, and Frank Noé. Reactive sindy: Discovering governing reactions from concentration data. The Journal of Chemical Physics, 150(2):025101, 2019.
- [20] Bhavana Bhadriraju, Abhinav Narasingam, and Joseph Sang-Il Kwon. Machine learning-based adaptive model identification of systems: Application to a chemical process. Chemical Engineering Research and Design, 152:372–383, 2019.
- [21] Jean-Christophe Loiseau and Steven L Brunton. Constrained sparse galerkin regression. Journal of Fluid Mechanics, 838:42–67, 2018.
- [22] Jean-Christophe Loiseau, Bernd R Noack, and Steven L Brunton. Sparse reduced-order modelling: sensor-based dynamics to full-state estimation. Journal of Fluid Mechanics, 844:459–490, 2018.
- [23] Zhilu Lai and Satish Nagarajaiah. Sparse structural system identification method for nonlinear dynamic systems with hysteresis/inelastic behavior. Mechanical Systems and Signal Processing, 117:813–842, 2019.
- [24] Shanwu Li, Eurika Kaiser, Shujin Laima, Hui Li, Steven L Brunton, and J Nathan Kutz. Discovering time-varying aerodynamics of a prototype bridge by sparse identification of nonlinear dynamical systems. Physical Review E, 100(2):022220, 2019.
- [25] Hayden Schaeffer and Scott G McCalla. Sparse model selection via integral terms. Physical Review E, 96(2):023302, 2017.
- [26] Niall M Mangan, J Nathan Kutz, Steven L Brunton, and Joshua L Proctor. Model selection for dynamical systems via sparse regression and information criteria. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 473(2204):20170009, 2017.
- [27] Eurika Kaiser, J Nathan Kutz, and Steven L Brunton. Sparse identification of nonlinear dynamics for model predictive control in the low-data limit. Proceedings of the Royal Society A, 474(2219):20180335, 2018.
- [28] Hayden Schaeffer, Giang Tran, Rachel Ward, and Linan Zhang. Extracting structured dynamical systems using sparse optimization with very few samples. Multiscale Modeling & Simulation, 18(4):1435–1461, 2020.
- [29] Merten Stender, Sebastian Oberst, and Norbert Hoffmann. Recovery of differential equations from impulse response time series data for model identification and feature extraction. Vibration, 2(1):25–46, 2019.
- [30] Lorenzo Boninsegna, Feliks Nüske, and Cecilia Clementi. Sparse learning of stochastic dynamical equations. The Journal of Chemical Physics, 148(24):241723, 2018.
- [31] Samuel H Rudy, Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Data-driven discovery of partial differential equations. Science Advances, 3(4):e1602614, 2017.
- [32] Sheng Zhang and Guang Lin. Robust data-driven discovery of governing physical laws with error bars. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 474(2217):20180305, 2018.
- [33] Samuel H Rudy, J Nathan Kutz, and Steven L Brunton. Deep learning of dynamics and signal-noise decomposition with time-stepping constraints. Journal of Computational Physics, 396:483–506, 2019.
- [34] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Multistep neural networks for data-driven discovery of nonlinear dynamical systems. arXiv preprint arXiv:1801.01236, 2018.
- [35] R Fuentes, R Nayek, P Gardner, N Dervilis, T Rogers, K Worden, and EJ Cross. Equation discovery for nonlinear dynamical systems: A bayesian viewpoint. Mechanical Systems and Signal Processing, 154:107528, 2021.
- [36] Rajdip Nayek, Ramon Fuentes, Keith Worden, and Elizabeth J Cross. On spike-and-slab priors for bayesian equation discovery of nonlinear dynamical systems via sparse linear regression. Mechanical Systems and Signal Processing, 161:107986, 2021.
- [37] Kushagra Gupta, Dootika Vats, and Snigdhansu Chatterjee. Bayesian equation selection on sparse data for discovery of stochastic dynamical systems. arXiv preprint arXiv:2101.04437, 2021.
- [38] Toby J Mitchell and John J Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
- [39] Edward I George and Robert E McCulloch. Approaches for bayesian variable selection. Statistica Sinica, pages 339–373, 1997.
- [40] Robert B O’Hara and Mikko J Sillanpää. A review of bayesian variable selection methods: what, how and which. Bayesian Analysis, 4(1):85–117, 2009.
- [41] Ovidiu Calin. An informal introduction to stochastic calculus with applications. World Scientific, 2015.
- [42] Fima C Klebaner. Introduction to stochastic calculus with applications. World Scientific Publishing Company, 2005.
- [43] Tapas Tripura, Ankush Gogoi, and Budhaditya Hazra. An Ito-Taylor weak 3.0 method for stochastic dynamics of nonlinear systems. Applied Mathematical Modelling, 2020.
- [44] Peter E Kloeden and Eckhard Platen. Higher-order implicit strong numerical schemes for stochastic differential equations. Journal of statistical physics, 66(1-2):283–314, 1992.
- [45] Tapas Tripura, Mohammad Imran, Budhaditya Hazra, and Souvik Chakraborty. A change of measure enhanced near exact euler maruyama scheme for the solution to nonlinear stochastic dynamical systems. arXiv preprint arXiv:2108.10655, 2021.
- [46] Tapas Tripura, Budhaditya Hazra, and Souvik Chakraborty. Generalized weakly corrected milstein solutions to stochastic differential equations. arXiv preprint arXiv:2108.10681, 2021.
- [47] George Casella and Edward I George. Explaining the gibbs sampler. The American Statistician, 46(3):167–174, 1992.
- [48] Bernt Oksendal. Stochastic differential equations: an introduction with applications. Springer Science & Business Media, 2013.
- [49] Tapas Tripura, Basuraj Bhowmik, Vikram Pakrashi, and Budhaditya Hazra. Real-time damage detection of degrading systems. Structural Health Monitoring, 19(3):810–837, 2020.
- [50] Hannes Risken. Fokker-planck equation. In The Fokker-Planck Equation, pages 63–95. Springer, 1996.