Rate of convergence in the Smoluchowski-Kramers approximation for mean-field stochastic differential equations
Abstract.
In this paper we study second-order mean-field stochastic differential systems describing the movement of a particle under the influence of a time-dependent force, a friction, a mean-field interaction and a space- and time-dependent stochastic noise. Using techniques from Malliavin calculus, we establish explicit rates of convergence in the zero-mass limit (Smoluchowski-Kramers approximation), in the -distances and in the total variation distance, of the position process, the velocity process and a re-scaled velocity process to their corresponding limiting processes.
Key words and phrases:
Smoluchowski-Kramers approximation, Stochastic differential equation by mean-field, Total variation distance, Malliavin calculus
2010 Mathematics Subject Classification:
60G22, 60H07, 91G30
1. Introduction
In this paper, we are interested in the following second-order mean-field stochastic differential equations
(1.1) |
Here and are positive constants, is a given function, are given points on the real line, and is the standard one-dimensional Wiener process. The notation denotes the expectation with respect to the probability measure of the underlying probability space on which the Wiener process is defined.
System (1.1) describes the movement of a particle at position (displacement) and with velocity , at time , under the influence of four different forces: an external, possibly time-dependent and non-potential, force ; a friction ; a (McKean-Vlasov type) mean-field interaction force (noting that here the mean-field term is acting on the velocity rather than the position) and a stochastic noise . Physically, is the inverse of the mass, is the friction coefficient and is the strength of the interaction. We use the superscript in (1.1) to emphasize the dependence on since in the subsequent analysis we are concerned with the asymptotic behaviour of (1.1) as tends to .
Under Assumptions 1.1 (see below), system (1.1) can also be obtained as the mean-field (hydrodynamic) limit of the following interacting particle system as tends to
(1.2) |
where are independent one-dimensional Wiener processes. In fact, under Assumptions 1.1 the above interacting system satisfies the propagation of chaos property, that is, as tends to infinity, it behaves more and more like a system of independent particles, in which each particle evolves according to (1.1), where the interaction term in (1.2) is replaced by the corresponding expectation. For a detailed account of the propagation of chaos phenomenon, we refer the reader to the classical papers [Kac56, Szn91], to the more recent papers [BGM10, Duo15, JW17] and to the references therein for degenerate diffusion systems like (1.1). The interacting particle system (1.2), its mean-field limit (1.1) and, more broadly, systems of these types have been used extensively in biology, chemistry and statistical physics for the modelling of molecular dynamics, chemical reactions, flocking and social interactions, to name just a few; see for instance the monographs [RF96, Pav14].
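As a purely illustrative companion to (1.2), the following Python sketch runs an Euler–Maruyama discretization of an interacting particle system of this second-order type. The coefficient functions, the friction and noise constants, and the mean-field interaction (taken here to be an attraction of each velocity towards the empirical mean velocity) are placeholders chosen for the example, not the coefficients of the paper.

```python
import numpy as np

def simulate_particles(N=200, T=1.0, dt=1e-4, eps=0.05, kappa=1.0, seed=0):
    """Euler-Maruyama sketch of a second-order interacting particle system.

    Illustrative stand-ins (not the paper's coefficients):
      external force  F(t, x) = cos(t) - x
      friction        gamma = 1.0
      noise intensity sigma(t, x) = 0.5 + 0.1*sin(x)
      interaction     (kappa / N) * sum_j (v_j - v_i), acting on the velocities
    The parameter eps plays the role of the (small) mass; dividing drift and
    noise by eps follows one common Smoluchowski-Kramers convention and may
    differ from the exact scaling used in (1.1)-(1.2).
    """
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = rng.normal(0.0, 1.0, size=N)   # initial positions
    v = np.zeros(N)                    # initial velocities
    gamma = 1.0
    for k in range(n_steps):
        t = k * dt
        force = np.cos(t) - x
        sigma = 0.5 + 0.1 * np.sin(x)
        interaction = kappa * (v.mean() - v)        # (kappa/N) * sum_j (v_j - v_i)
        dW = rng.normal(0.0, np.sqrt(dt), size=N)   # independent Brownian increments
        x = x + v * dt
        v = v + (force - gamma * v + interaction) * dt / eps + sigma * dW / eps
    return x, v
```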
In this paper, we are interested in the zero-mass limit (also known as the Smoluchowski-Kramers approximation) of (1.1), that is, its asymptotic behaviour as tends to . By employing techniques from Malliavin calculus, we obtain explicit rates of convergence, in -distances and in the total variation distance, for both the position and velocity processes.
1.1. Main results
Before stating our main results, we make the following assumptions.
Assumption 1.1.
-
The coefficients have linear growth, i.e. there exists such that
-
The coefficients are Lipschitz, i.e. there exists such that
Assumption 1.2.
are twice differentiable in and the derivatives are bounded by some constant .
Let be random variables. We denote by the total variation distance between the laws of and , that is,
Consider the following first-order stochastic differential equation, which will be the limiting system for the displacement process
(1.3) |
Our first main result provides explicit rates of convergence for the displacement process.
Theorem 1.1 (Quantitative rates of convergence of the displacement process).
Under Assumptions 1.1 and 1.2, systems (1.1) and (1.3) have unique solutions and the following statements hold.
-
(1)
(rate of convergence in -distances) For all , and ,
where for and is a positive constant depending on but not on and .
-
(2)
(rate of convergence in the total variation distance). We further assume that for all . Then, for each and
where is a constant depending only on but not on and . As a corollary, if for all then we have.
Theorem 1.1 combines Theorem 3.1 (for the -distances) and Theorem 3.2 (for the total variation distance) in Section 3.1.
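To illustrate part (1) of Theorem 1.1 numerically, one can couple the second-order system and a candidate limiting first-order equation through the same Brownian increments and track the mean-square error as the mass parameter shrinks. The sketch below does this for a toy, non-interacting model with unit friction, for which the classical Smoluchowski-Kramers limit is dx = F(t, x) dt + sigma dW; the coefficients are placeholders and the limit equation (1.3) of the paper is more general.

```python
import numpy as np

def l2_error_at_T(eps, T=1.0, dt=1e-4, n_paths=2000, seed=1):
    """Monte Carlo estimate of E|X^eps(T) - x(T)|^2 for a toy model.

    Toy coefficients (placeholders): F(t, x) = cos(t) - x, sigma = 0.5,
    friction = 1, no mean-field term.  Both equations are driven by the
    same Brownian increments so that the pathwise error is meaningful."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x_eps = np.zeros(n_paths)   # position of the second-order system
    v_eps = np.zeros(n_paths)   # velocity of the second-order system
    x_lim = np.zeros(n_paths)   # solution of the limiting first-order SDE
    sigma = 0.5
    for k in range(n_steps):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x_eps, v_eps = (x_eps + v_eps * dt,
                        v_eps + ((np.cos(t) - x_eps) - v_eps) * dt / eps + sigma * dW / eps)
        x_lim = x_lim + (np.cos(t) - x_lim) * dt + sigma * dW
    return float(np.mean((x_eps - x_lim) ** 2))

# The error should shrink as eps decreases; the precise rate is what Theorem 1.1 quantifies.
for eps in (0.2, 0.1, 0.05):
    print(eps, l2_error_at_T(eps))
```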
We are also interested in the asymptotic behavior, when , of the velocity process of (1.1) and of a re-scaled velocity process, , which is defined by
The re-scaled process satisfies the following stochastic differential equation
(1.4) |
where is a rescaled Brownian motion.
Now we consider the following stochastic differential equation, which will be the limiting process of the rescaled velocity process
(1.5) |
We describe our result for the rescaled velocity process first since, for this process, we also work in a general setting where both and can depend on both the spatial and temporal variables. We additionally assume only the following condition.
Assumption 1.3.
In the next theorem, we provide explicit rates of convergence, both in -distances and in the total variation distance, for the rescaled velocity process.
Theorem 1.2 (Quantitative rates of convergence for the rescaled velocity processes).
Theorem 1.2 summarizes Theorem 3.3 (for the -distances) and Theorem 3.4 (for the total variation distance) in Section 3.2.
When and , [Nar94, Theorem 2.3] shows that the velocity process converges to the normal distribution as . The third aim of this paper is to extend this result to a more general setting where depends on both and while depends only on , i.e. , and to obtain rates of convergence in the total variation distance. The following theorem is the content of Theorem 3.5 in Section 3.2.
Theorem 1.3 (Quantitative rates of convergence for the velocity processes).
Under Assumptions 1.1 the following hold. Assume additionally that is continuously differentiable on and that for each . Let be a normal random variable with mean and variance . Then, for each and
where is a constant not depending on and .
Theorem 1.3 is Theorem 3.5 in Section 3.2. We emphasize that in the main theorems, to obtain the existence and uniqueness as well as the rate of convergence in -distances we only use Assumptions 1.1. Assumptions 1.2 and 1.3 are needed to employ techniques from Malliavin calculus, in particular to derive estimates for the Malliavin derivatives.
Corollary 1.1 (Rate of convergence in Wasserstein distance for the laws of the displacement and velocity processes).
Let and be two probability measures with finite second moments. Then the -Wasserstein distance, , between them can be defined by
Using this formulation, as a direct consequence of our main results, we also obtain explicit rates of convergence, in -Wasserstein distances, of the laws of the displacement and the rescaled velocity processes to the corresponding limiting ones.
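As a small numerical aside (not part of the paper's argument), the 2-Wasserstein distance between two one-dimensional empirical laws, for example between samples of a displacement process and of its limit at a fixed time, can be estimated by matching order statistics, since in one dimension the optimal coupling sorts both samples.

```python
import numpy as np

def w2_empirical(xs, ys):
    """Empirical 2-Wasserstein distance between two equal-size 1-d samples.

    In one dimension the optimal coupling pairs the sorted samples
    (quantile coupling), so W_2^2 is approximated by the mean squared
    difference of the order statistics."""
    xs = np.sort(np.asarray(xs, dtype=float))
    ys = np.sort(np.asarray(ys, dtype=float))
    if xs.shape != ys.shape:
        raise ValueError("use equal sample sizes for this simple estimator")
    return float(np.sqrt(np.mean((xs - ys) ** 2)))
```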
1.2. Comparison with existing literature and future work
The zero-mass limit of second-order differential equations has been studied intensively in the literature. In the seminal work [Kra40], Kramers formally discusses this problem, in the context of applications to chemical reactions, for the classical underdamped Langevin dynamics, which corresponds to (1.1) with (a gradient potential force), (no interaction force) and a constant diffusion coefficient. Because of this work, the limit has become known in the literature as the Smoluchowski-Kramers approximation. Nelson rigorously shows that, under suitable rescaling, the solution to the Langevin equation converges almost surely to the solution of (3.1) with [Nel67]. Since then, various generalizations and related results have been proved using different approaches, such as stochastic methods, asymptotic expansions and variational techniques; see for instance [Nar91b, Nar91a, Nar94, Fre04, CF06, HVW12, DLPS17, DLP+18, NN20]. The most relevant papers to the present one include [Nar91b, Nar91a, Nar94, DLPS17, NN20]. The main novelty of the present paper lies in the fact that we consider interacting (mean-field) systems, allow time-dependent external forces and diffusion coefficients, and provide explicit rates of convergence in both -distances and total variation distances for both the displacement and velocity processes. Existing papers lack at least one of these features. More specifically,
Papers that consider mean-field (interaction) systems. The papers [Nar91b, Nar91a, Nar94, DLPS17] consider second-order mean-field stochastic differential equations and establish the zero-mass limit, but they require the much more stringent conditions that (a time-independent force) and (a constant diffusivity). On top of that, they do not provide a rate of convergence. Furthermore, our approach using Malliavin calculus is also different: Narita’s papers use direct arguments, while [DLPS17] employs variational methods based on Gamma-convergence and large deviation principles.
Papers that provide a rate of convergence. The papers [NN20, DLP+18] provide a rate of convergence but only consider non-interacting systems (and use different metrics). Like the present paper, [NN20] utilizes techniques from Malliavin calculus, whereas [DLP+18] uses a completely different variational method. The recent paper [CT22], which studies the kinetic Vlasov-Fokker-Planck equation, is particularly interesting since it both considers interacting systems and provides a rate of convergence, but it differs from ours in several aspects. First, the interaction force acts on the position instead of the velocity; second, it works at the level of the Fokker-Planck equation and obtains a rate of convergence in the Wasserstein distance, while we work with the stochastic differential equations and obtain error quantifications in both -distances and total variation distances; third, as mentioned, we use Malliavin calculus while [CT22] applies variational techniques as in [DLPS17, DLP+18]. We also mention the paper [Ta20], which provides rates of convergence similar to ours but considers non-mean-field stochastic differential equations driven by fractional Brownian motions.
Future work. The Lipschitz continuity, boundedness and differentiability Assumptions 1.1-1.2-1.3 are standard, but rather restrictive since they do not cover some physically interesting singular interaction forces, such as Coulomb or Newtonian forces. It would be interesting and challenging to extend our work to non-Lipschitzian and singular coefficients. Initial attempts in this direction for related models exist in the literature; see [Bre09] for non-Lipschitzian coefficients and the recent papers [XY22, CT22] for singular forces. Another interesting problem for future work is to study the Smoluchowski-Kramers approximation for the -particle system (1.2) and to obtain a rate of convergence that is independent of .
1.3. Overview of the proofs
To prove the main theorems in the general setting, with time-dependent coefficients, and to obtain rates of convergence in -distances and in total variation distances for the position and velocity processes, several technical improvements have been carried out.
On existence and uniqueness. Under Assumptions 1.1, the existence and uniqueness, as well as the boundedness of the moments, of the second-order system (1.1) and of the limiting first-order one (1.3) are standard results following from Hölder’s and the Burkholder-Davis-Gundy inequalities.
On the rate of convergence in -distances. Combining the aforementioned inequalities with known estimates from [Nar91b], we can directly estimate and and obtain the rates of convergence in -distances, proving part (1) of both theorems.
On the rate of convergence in total variation distances. The Malliavin differentiability of the processes follows from arguments similar to those in [Nua06]. Obtaining the rates of convergence in total variation distances is the most technically challenging part. Lemma 2.1, which provides an upper bound for the total variation distance between two random variables in terms of their Malliavin derivatives, is the key to our analysis. It enables us to obtain the desired rates of convergence by estimating the corresponding quantities appearing on its right-hand side.
1.4. Organization of the paper
2. Preliminaries
In this section, we provide some basic background on Malliavin calculus and on mean-field stochastic differential equations that is directly relevant to our analysis.
2.1. Malliavin calculus
Let us recall some elements of the stochastic calculus of variations (for more details see [Nua06]). We suppose that is defined on a complete probability space , where is the natural filtration generated by the Brownian motion . For we denote by the Wiener integral
Let denote the dense subset of consisting of smooth random variables of the form
(2.1) |
where If has the form (2.1), we define its Malliavin derivative as the process given by
More generally, for each we can define the iterated derivative operator on a cylindrical random variable by setting
For any we shall denote by the closure of with respect to the norm
A random variable is said to be Malliavin differentiable if it belongs to
An important operator in Malliavin calculus is the divergence operator , which is the adjoint of the derivative operator . The domain of is the set of all functions such that
where is some positive constant depending on . In particular, if , then is characterized by the following duality relationship
The following lemma provides an upper bound on the total variation distance between two random variables in terms of their Malliavin derivatives. This lemma will play an important role in the analysis of the present paper.
Lemma 2.1.
Let be such that . Then, for any random variable we have
(2.2) |
provided that the expectations exist.
Proof.
From [Ta20, Lemma 2.1] we have
(2.3) |
Now using [Nua06, Proposition 1.3.1], we get
(2.4) |
Moreover, we observe that
which implies that
(2.5) |
Substituting the inequality (2.5) into (2.4) and using Hölder’s inequality, one can derive that
Finally, substituting the above estimate back into (2.3) and using the fundamental inequality for all , we obtain (2.2), which completes the proof of this lemma. ∎
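Lemma 2.1 is used in this paper to bound total variation distances analytically. As a purely numerical counterpart, useful only for rough sanity checks on simulated data and not part of the proofs, the total variation distance between the laws of two one-dimensional samples can be approximated by discretizing both laws on a common grid of bins.

```python
import numpy as np

def tv_histogram(xs, ys, bins=100):
    """Crude histogram estimate of d_TV between the laws of two 1-d samples,
    i.e. one half of the L^1 distance between the binned probability mass
    functions.  The result is sensitive to the choice of bin width."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    lo = min(xs.min(), ys.min())
    hi = max(xs.max(), ys.max())
    edges = np.linspace(lo, hi, bins + 1)
    px, _ = np.histogram(xs, bins=edges)
    py, _ = np.histogram(ys, bins=edges)
    px = px / px.sum()
    py = py / py.sum()
    return 0.5 * float(np.abs(px - py).sum())
```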
2.2. Mean-field stochastic differential equations
Let be a probability space with an increasing family of sub--algebras of and let be a one-dimensional Brownian motion process adapted to .
The following lemma provides equivalent formulations of (1.1) and (1.3) as stochastic integral equations.
Lemma 2.2.
Proof.
Firstly, we can rewrite the second equation of (1.1) as follows
Using Itô’s formula, we have the following expression
which implies
(2.8) |
Secondly, substituting this equation into the first equation of (1.1) we get
Now, we use integration by parts for the non-stochastic integral and Itô’s product rule for the stochastic integral to get
(2.9) |
where the terms () are defined in the statement of the lemma. On the other hand, from the second equation of (1.1) we have
This implies that
(2.10) |
Integrating this equation over the interval and changing the order of integration in the double integral, we get
(2.11) |
The proof for Equation (2.7) is similar. ∎
3. Proof of the main results
In this section, we present the proofs of the main results, Theorems 1.1, 1.2 and 1.3. We start with the displacement process (Theorems 3.1 and 3.2 give Theorem 1.1) in Section 3.1. Then, in Section 3.2, we deal with the rescaled velocity process and the velocity process (Theorems 3.3 and 3.4 give Theorem 1.2, and Theorem 1.3 is Theorem 3.5).
3.1. Approximation of the displacement process
In this section, we give explicit bounds on -distances and the total variation distance between the solution of (1.1) and the solution of (1.3). We will repeatedly use the following fundamental inequalities.
-
(i)
Minkowski’s inequality: for and real numbers , we have
(3.1) -
(ii)
Hölder’s inequality: for , and measurable functions we have
(3.2) -
(iii)
The Burkholder-Davis-Gundy (BDG) inequality for Brownian stochastic integrals, see for instance [SP12, Section 17.7]: for and we have
(3.3) where is a positive constant depending only on .
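For the reader's convenience, commonly used forms of these inequalities read as follows; the exponent conventions and constants may differ slightly from the displays (3.1)-(3.3) of the paper. For $p\ge 1$ and real numbers $a_1,\dots,a_n$,
\[
|a_1+\cdots+a_n|^p \le n^{p-1}\big(|a_1|^p+\cdots+|a_n|^p\big);
\]
for $p,q>1$ with $1/p+1/q=1$ and measurable functions $f,g$,
\[
\int |f(s)g(s)|\,ds \le \Big(\int |f(s)|^p\,ds\Big)^{1/p}\Big(\int |g(s)|^q\,ds\Big)^{1/q};
\]
and, for $p\ge 1$ and a suitable integrand $f$,
\[
\mathbb{E}\Big[\sup_{0\le s\le t}\Big|\int_0^s f(u)\,dW(u)\Big|^p\Big] \le C_p\,\mathbb{E}\Big[\Big(\int_0^t f(u)^2\,du\Big)^{p/2}\Big].
\]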
Applying the BDG inequality (3.3) to solutions of (2.6) and (2.7) we obtain
(3.4) | ||||
(3.5) |
The next lemma provides important estimates on the moments of the displacement process , which will be helpful to prove the main results of this section. Hereafter, we denote by a generic constant which may vary at each appearance.
Lemma 3.1.
Proof.
We first prove (3.6). We shall divide the proof into two steps.
Step 1: We evaluate the upper bound of the moments of each .
From the definition of and Assumptions 1.1 we have
Now Minkowski’s inequality (3.1) with and Hölder’s inequality (3.2) yield
(3.8) |
For by substituting (2.10) into and changing the order of integration in the double integral, one can derive that
(3.9) |
Using Assumptions 1.1, Minkowski’s inequality (3.1) with and Hölder’s inequality (3.2) with noting that for all , we get
(3.10) |
For , using the BDG inequality (3.4), Hölder’s inequality (3.2) and Assumptions 1.1, we get
(3.11) |
Next, from Assumptions 1.1, Minkowski’s inequality (3.1) with and Hölder’s inequality (3.2) with noting that for all , we get
(3.12) |
Step 2: We estimate the integrand in the integrals in the right hand side of the above expressions. From (2.6), applying Minkowski’s inequality with , Hölder’s inequality (3.2), the BDG inequality (3.4) as well as Assumptions 1.1, we obtain
From this, together with (3.8), (3.10), (3.11) and (3.12), we deduce that
(3.13) |
where is a positive constant depending on . From (3.13), by applying Gronwall’s lemma, we have
which completes the proof of (3.6).
In the following theorem, we obtain a rate of convergence in -distances in the Smoluchowski-Kramers approximation for the displacement process.
Theorem 3.1.
Proof.
Similar to the proof of the previous lemma, by applying Minkowski’s inequality (3.1) with , Hölder’s inequality (3.2), the BDG inequality (3.5) and Assumptions 1.1, we get
Next we estimate the terms , . We start with . From the definition of , Assumptions 1.1 and Minkowski’s inequality (3.1) with , we obtain that
This, together with the fact that the function is increasing and Lemma 3.1, implies
(3.14) |
where is a positive constant depending on . Next, using (3.1) and Lemma 3.1, we get
where is a constant depending on . Thus,
(3.15) |
Applying the BDG inequality (3.3), Hölder’s inequality (3.2) and Lemma 3.1, one can derive that
where is a constant depending on . On the other hand, for all and , we have
Thus the function is decreasing. Hence we get
Now we consider . Using Lemma 3.1 we can derive that
Thus,
where is a constant depending on . From the above estimates, together with the fact that , one sees that
where is a constant depending only on . Using Gronwall’s inequality, we obtain the claimed estimate and complete the proof. ∎
In the following lemma, we show Malliavin differentiability of and .
Lemma 3.2.
Under Assumptions 1.1, the solutions and of (2.6) and (2.7) respectively are Malliavin differentiable random variables. Moreover, the derivatives satisfy for and for
(3.16) | ||||
(3.17) |
where are adapted stochastic processes and are bounded by .
Proof.
From the second equation of (1.1) we have
Using Minkowski’s inequality (3.1) with , Hölder’s inequality (3.2) with , the BDG inequality (3.3), Assumptions 1.1 and Lemma 3.1 we get
Applying Gronwall’s lemma we obtain
This, together with Lemma 3.1, implies that and are bounded. Then, by Assumptions 1.1 and the dominated convergence theorem, the integrals and are continuous functions.
Let us define
Then is a continuous function in , and Equation (2.6) becomes
(3.18) |
Now, we consider the Picard approximation sequence given by
From this, using the same method as in the proof of [Nua06, Theorem 2.2.1], we conclude that the solution of (3.18) (thus, of (2.6)) is Malliavin differentiable. Obviously, the solution is -adapted. Hence, we always have for . For , from [Nua06, Proposition 1.2.4] and the Lipschitz property of and , there exist adapted processes , uniformly bounded by , such that and . Then we obtain (3.16) by applying the operator to equation (3.18).
The proof for the solution of (2.7) is similar. ∎
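The Picard scheme used in the proof can also be illustrated numerically. The sketch below iterates the integral map for a scalar SDE dX = b(t, X) dt + sig(t, X) dW on a fixed time grid, reusing one Brownian path; the coefficients in the example are Lipschitz placeholders, not the transformed coefficients of (3.18).

```python
import numpy as np

def picard_iteration(b, sig, x0=0.0, T=1.0, dt=1e-3, n_iter=8, seed=3):
    """Picard iteration for dX = b(t, X) dt + sig(t, X) dW on a fixed grid.

    Each sweep maps the current path X^(n) to
      X^(n+1)(t_k) = x0 + sum_{j<k} [ b(t_j, X^(n)(t_j)) dt + sig(t_j, X^(n)(t_j)) dW_j ],
    reusing the same Brownian increments dW_j throughout."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.linspace(0.0, T, n + 1)
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    X = np.full(n + 1, x0)                       # X^(0): the constant path
    for _ in range(n_iter):
        increments = b(t[:-1], X[:-1]) * dt + sig(t[:-1], X[:-1]) * dW
        X = np.concatenate(([x0], x0 + np.cumsum(increments)))
    return t, X

# Example with Lipschitz toy coefficients (placeholders, not the paper's):
t, X = picard_iteration(lambda t, x: np.cos(t) - x, lambda t, x: 0.5 + 0.1 * np.sin(x))
```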
Remark 3.1.
If and are continuously differentiable, then , and . Here, for a function we use the convention
In the next lemma, we show that the moments of the Malliavin derivative of the solution of (2.7) are bounded.
Proof.
Using Minkowski’s inequality (3.1) with , Hölder’s inequality (3.2), the BDG inequality (3.3), Assumptions 1.1 and Lemma 3.1, and noting that , it follows from (3.17) that
where is a positive constant depending only on .
Taking the expectation and using Gronwall’s inequality, we obtain the claimed estimate. ∎
The following lemma provides an upper bound for the difference between the derivatives of the solutions of (2.6) and (2.7).
Lemma 3.4.
Proof.
Under Assumptions 1.2, and are twice differentiable, thus (see Remark 3.1) , and . Furthermore,
(3.19) |
(3.20) |
Now, we shall estimate each term in the right hand side of (3.1). First, using Assumptions 1.1, Lemma 3.1 and Theorem 3.1 for , we can derive that
where is a constant depending only on . From Hölder’s inequality, Assumptions 1.1, Lemma 3.1, Lemma 3.3 and Theorem 3.1 for , together with (3.19), we get
By Itô’s isometry formula, Hölder’s inequality, Assumptions 1.1, Lemma 3.1, Lemma 3.3 and Theorem 3.1 for , together with (3.19), we have
Next, using Hölder’s inequality, Assumptions 1.1 and Lemma 3.3 one sees that
By the same estimate for the last term in the right hand side of (3.1), we can obtain
From the above estimates, together with the fact that the function is decreasing, one can derive that
where is a constant depending only on .
Let , then we have
Thus, applying Gronwall’s inequality, we get
where is a constant depending only on . This completes the proof of the lemma. ∎
Now, we give explicit bounds on the total variation distance between the solution of (2.6) and the solution of (2.7).
Theorem 3.2.
Proof.
Lemma 2.1 gives us
Thanks to Theorem 3.1 and Lemma 3.4, we obtain
(3.21) |
where is a constant depending only on . Now, from (3.17), one sees that
which implies that
Define the stochastic process one derives that
for . We observe that is a martingale with bounded quadratic variation. Indeed, So, by Dubins and Schwarz’s theorem (see, e.g. Theorem 3.4.6 in [KSSS91]) there exists a Wiener process such that Then, we arrive at the following
This implies that, for each and
On the other hand, by Fernique’s theorem, we always have
Hence
(3.22) |
We observe that, from (3.16), for under Assumptions 1.2,
Now, using Minkowski’s inequality (3.1) with , Hölder’s inequality (3.2), the BDG inequality (3.3), Assumptions 1.1 and 1.2, we can deduce
This, together with Lemma 3.3, gives us
where is a positive constant. By Gronwall’s inequality, we can verify that
Therefore,
(3.23) |
where is a constant depending only on .
From Theorem 3.2, together with the fact that for all and , we get the following corollary.
3.2. Approximation of the velocity and rescaled velocity processes
In this section, we establish rates of convergence in -distances and in the total variation distance for the velocity and rescaled velocity processes. We will discuss the re-scaled velocity process first since, in this case, our results are applicable to more general settings where both the external force and the diffusion coefficient can depend on both and , i.e. and .
3.2.1. The re-scaled velocity process
From the second equation of (1.1) we can see that the process satisfies
(3.24) |
We recall the definition of the re-scaled velocity process introduced in the Introduction
Then satisfies (1.4), that is
(3.25) |
Now, we put , then is a Brownian motion process and (3.25) can be rewritten in the form
(3.26) |
Our goal in this section is to study the rate of convergence in -distance and in the total variation distance between and . Here, is the solution of the Ornstein-Uhlenbeck equation (1.5), which is
(3.27) |
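For intuition about the limiting dynamics, the following sketch samples a generic Ornstein–Uhlenbeck process dZ = -beta Z dt + sigma dW exactly through its Gaussian transition; the constants beta and sigma are placeholders, and the actual limiting equation (1.5) may involve time-dependent coefficients.

```python
import numpy as np

def simulate_ou(z0=0.0, beta=1.0, sigma=0.5, T=1.0, dt=1e-2, seed=2):
    """Exact sampling of the Ornstein-Uhlenbeck process dZ = -beta*Z dt + sigma dW.

    Over a step of length dt the transition is Gaussian with mean exp(-beta*dt)*Z
    and variance sigma^2 * (1 - exp(-2*beta*dt)) / (2*beta)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    z = np.empty(n + 1)
    z[0] = z0
    decay = np.exp(-beta * dt)
    step_std = sigma * np.sqrt((1.0 - decay ** 2) / (2.0 * beta))
    for k in range(n):
        z[k + 1] = decay * z[k] + step_std * rng.normal()
    return z
```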
First, we obtain the rate of convergence in -distances between and in the following theorem.
Theorem 3.3.
Proof.
From (3.26) and (3.27), together with the fact that for all , we get
Using Minkowski’s inequality (3.1) with , Hölder’s inequality, the BDG inequality and Assumptions 1.1 and 1.3, one can derive that
By Lemma 3.1, noting that , we have
where is a constant depending only on . Using Gronwall’s inequality, we obtain the claimed inequality and complete the proof. ∎
From (3.25) and (3.27), under Assumptions 1.1 the Malliavin differentiability of the solutions and can be proved by using the same method as in the proof of Lemma 3.2. Moreover, the Malliavin derivatives and satisfy for and
(3.28) |
where are adapted stochastic processes bounded by . Furthermore, if and are continuously differentiable, then , and
Lemma 3.5.
Proof.
It follows from (3.28) that, for
Using the Cauchy-Schwarz inequality, Itô’s isometry, Assumptions 1.1 and 1.3, and Lemma 3.1, noting that , we get
where is a constant depending only on . From this, we have
Denote . Using Gronwall’s inequality, one easily sees that
where is a constant depending only on . This completes our proof. ∎
Bringing the above lemmas together, we can get the following result.
Theorem 3.4.
3.2.2. The velocity process
As mentioned in the introduction, when and , [Nar94, Theorem 2.3] shows that the velocity process converges to the normal distribution as . In the rest of this section, we extend this result to a much more general setting where depends on both and while depends only on , i.e. .
From (2.2) we get
(3.33) |
Since is Malliavin differentiable, is also Malliavin differentiable satisfying for , and for
(3.34) |
Define
Then is also Malliavin differentiable and for and for
(3.35) |
Lemma 3.6.
Proof.
Lemma 3.7.
Proof.
Now we are ready to prove the rate of convergence in total variation distance for the velocity process as .
Theorem 3.5.
Proof.
Using Lemma 2.1, we have:
By Lemmas 3.6 and 3.7, one can derive that
(3.37) |
where is a constant depending only on .
Now we calculate the derivatives of . One can easily show that
(3.38) |
Therefore,
Thus, for all ,
(3.39) |
From (3.2.2), (3.38), (3.39), we obtain
where is a constant depending only on . Thus,
where is a constant depending only on .
Note that by Itô’s isometry and using integration by parts for the non-stochastic integral, we have
Thus, we can deduce that is a normally distributed random variable with mean and variance
Now, applying [Kla07, Lemma 4.9], we derive that
where , and is a universal constant. Thus,
where is a constant depending on . This completes the proof. ∎
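The final step of the proof compares the law of the velocity with a normal law. As a purely numerical companion (an illustration, not the bound of [Kla07, Lemma 4.9]), the total variation distance between two one-dimensional normal laws can be evaluated directly by integrating the absolute difference of their densities.

```python
import numpy as np

def tv_between_normals(m1, s1, m2, s2, n_grid=200001):
    """Total variation distance between N(m1, s1^2) and N(m2, s2^2),
    computed as (1/2) * integral |phi1 - phi2| on a wide grid (trapezoidal rule)."""
    lo = min(m1 - 8.0 * s1, m2 - 8.0 * s2)
    hi = max(m1 + 8.0 * s1, m2 + 8.0 * s2)
    x = np.linspace(lo, hi, n_grid)
    phi1 = np.exp(-0.5 * ((x - m1) / s1) ** 2) / (s1 * np.sqrt(2.0 * np.pi))
    phi2 = np.exp(-0.5 * ((x - m2) / s2) ** 2) / (s2 * np.sqrt(2.0 * np.pi))
    return 0.5 * float(np.trapz(np.abs(phi1 - phi2), x))

# Example: nearby variances give a small total variation distance.
print(tv_between_normals(0.0, 1.0, 0.0, 1.1))
```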
Acknowledgment
Research of MHD was supported by EPSRC Grants EP/W008041/1 and EP/V038516/1.
References
- [BGM10] Bolley, F., Guillin, A., and Malrieu, F. Trend to equilibrium and particle approximation for a weakly selfconsistent Vlasov-Fokker-Planck equation. ESAIM: M2AN, 44(5):867–884, 2010.
- [Bre09] Norbert Breimhorst. Smoluchowski-Kramers Approximation for Stochastic Differential Equations with non-Lipschitzian coefficients. PhD thesis, 2009.
- [CF06] S. Cerrai and M. Freidlin. On the Smoluchowski-Kramers approximation for a system with an infinite number of degrees of freedom. Probab. Theory Related Fields, 135(3):363–394, 2006.
- [CT22] Y.-P. Choi and O. Tse. Quantified overdamped limit for kinetic Vlasov–Fokker–Planck equations with singular interaction forces. Journal of Differential Equations, 330:150–207, 2022.
- [DLP+18] M. H. Duong, A. Lamacz, M. A. Peletier, A. Schlichting, and U. Sharma. Quantification of coarse-graining error in Langevin and overdamped Langevin dynamics. Nonlinearity, 31(10):4517–4566, 2018.
- [DLPS17] M. H. Duong, A. Lamacz, M. A. Peletier, and U. Sharma. Variational approach to coarse-graining of generalized gradient flows. Calculus of Variations and Partial Differential Equations, 56(4):100, Jun 2017.
- [Duo15] Manh Hong Duong. Long time behaviour and particle approximation of a generalised Vlasov dynamic. Nonlinear Analysis: Theory, Methods & Applications, 127:1–16, 2015.
- [Fre04] M. Freidlin. Some remarks on the Smoluchowski-Kramers approximation. Journal of Statistical Physics, 117(3-4):617–634, 2004.
- [HVW12] S. Hottovy, G. Volpe, and J. Wehr. Noise-induced drift in stochastic differential equations with arbitrary friction and diffusion in the Smoluchowski-Kramers limit. J. Stat. Phys., 146(4):762–773, 2012.
- [JW17] P.-E. Jabin and Z. Wang. Mean Field Limit for Stochastic Particle Systems, pages 379–402. Springer International Publishing, Cham, 2017.
- [Kac56] M. Kac. Foundations of kinetic theory. Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1954–1955, vol. III, University of California Press, Berkeley, Los Angeles, 1956.
- [Kla07] B. Klartag. A central limit theorem for convex sets. Inventiones mathematicae, 168(1):91–131, Apr 2007.
- [Kra40] H.A. Kramers. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica, 7(4):284 – 304, 1940.
- [KSSS91] I. Karatzas and S. E. Shreve. Brownian Motion and Stochastic Calculus, volume 113 of Graduate Texts in Mathematics. Springer New York, 1991.
- [McK67] H. P. McKean. Propagation of chaos for a class of non-linear parabolic equations. In Stochastic Differential Equations (Lecture Series in Differential Equations, Session 7, Catholic Univ., 1967), pages 41–57. 1967.
- [Nar91a] Kiyomasa Narita. Asymptotic behavior of velocity process in the Smoluchowski–Kramers approximation for stochastic differential equations. Advances in Applied Probability, 23(2):317–326, 1991.
- [Nar91b] Kiyomasa Narita. The Smoluchowski–Kramers approximation for the stochastic Liénard equation by mean-field. Advances in Applied Probability, 23(2):303–316, 1991.
- [Nar94] K. Narita. Asymptotic behavior of fluctuation and deviation from limit system in the Smoluchowski–Kramers approximation for SDE. Yokohama Mathematical Journal, 42:41–76, 1994.
- [Nel67] Edward Nelson. Dynamical Theories of Brownian Motion, volume 17. Princeton University Press Princeton, 1967.
- [NN20] V. T. Nguyen and T. D. Nguyen. A Berry–Esseen bound in the Smoluchowski–Kramers approximation. Journal of Statistical Physics, 179(4):871–884, 2020.
- [Nua06] D. Nualart. The Malliavin Calculus and Related Topics. Probability and Its Applications. Springer Berlin Heidelberg, 2006.
- [Pav14] G.A. Pavliotis. Stochastic Processes and Applications: Diffusion Processes, the Fokker-Planck and Langevin Equations. Texts in Applied Mathematics. Springer New York, 2014.
- [RF96] H. Risken and T. Frank. The Fokker-Planck Equation: Methods of Solution and Applications. Springer Series in Synergetics. Springer Berlin Heidelberg, 1996.
- [SP12] R. L. Schilling and L. Partzsch. Brownian Motion: An Introduction to Stochastic Processes. Walter de Gruyter, 2012.
- [Szn91] A.-S. Sznitman. Topics in propagation of chaos. In Paul-Louis Hennequin, editor, Ecole d’Eté de Probabilités de Saint-Flour XIX — 1989, volume 1464 of Lecture Notes in Mathematics, pages 165–251. Springer Berlin Heidelberg, 1991.
- [Ta20] C. S. Ta. The rate of convergence for the Smoluchowski–Kramers approximation for stochastic differential equations with fBm. Journal of Statistical Physics, 181(5):1730–1745, 2020.
- [XY22] Longjie Xie and Li Yang. The Smoluchowski–Kramers limits of stochastic differential equations with irregular coefficients. Stochastic Processes and their Applications, 150:91–115, 2022.