Fisher-KPP equation with small data and the extremal process of branching Brownian motion
Abstract
We consider the limiting extremal process of the particles of the binary branching Brownian motion. We show that after a shift by the logarithm of the derivative martingale , the rescaled “density” of particles, which are at distance from a position close to the tip of , converges in probability to a multiple of the exponential as . We also show that the fluctuations of the density, after another scaling and an additional random but explicit shift, converge to a -stable random variable. Our approach uses analytic techniques and is motivated by the connection between the properties of the branching Brownian motion and the Bramson shift of the solutions to the Fisher-KPP equation with some specific initial conditions initiated in [9, 10] and further developed in the present paper. The proofs of the limit theorems for rely crucially on the fine asymptotics of the behavior of the Bramson shift for the Fisher-KPP equation starting with initial conditions of “size” , up to terms of the order , with some .
1 Introduction
The BBM connection to the Fisher-KPP equation
The standard binary branching Brownian motion (BBM) on is a process that starts at the initial time with one particle at the position . The particle performs a Brownian motion until a random time, exponentially distributed with parameter , when it splits into two offspring particles. Each of the new particles starts an independent Brownian motion at the branching point. They carry independent clocks, again exponentially distributed with parameter , that ring at their respective branching times. At the branching time, the particle that is branching produces two independent offspring, and the process continues, so that at a time we have a random number of Brownian particles. It will be convenient for us to assume that the variance of each individual Brownian motion is not but .
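To make the setup concrete, here is a minimal Monte Carlo sketch of this process. It is our own illustration, not code from the paper: it approximates the rate-one exponential clocks and the variance-2 Brownian motions described above by a small time step, and the function name and parameters are ours.

```python
import numpy as np

def simulate_bbm(T, dt=1e-3, sigma2=2.0, seed=0):
    """Euler-type sketch of binary BBM: rate-1 branching, Brownian motions of variance sigma2."""
    rng = np.random.default_rng(seed)
    pos = np.zeros(1)                  # one particle at the origin at time 0
    t = 0.0
    while t < T:
        # independent Brownian increments of variance sigma2 * dt for every particle
        pos = pos + rng.normal(0.0, np.sqrt(sigma2 * dt), size=pos.size)
        # each exponential clock rings on [t, t + dt) with probability 1 - exp(-dt) ~ dt
        split = rng.random(pos.size) < dt
        pos = np.concatenate([pos, pos[split]])   # offspring start at the branching point
        t += dt
    return pos

X = simulate_bbm(T=6.0)
print(f"{X.size} particles at time 6 (the expected number is e^6 ~ {np.exp(6.0):.0f}), rightmost at {X.max():.2f}")
```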
A remarkable observation by McKean [19] is a connection between the location of the maximal particle for the BBM and the Fisher-KPP equation
(1.1) |
with the initial condition . Let be the positions of the BBM particles at time . McKean has discovered an exact formula
(1.2) |
More generally, given a sufficiently regular function , the solution to the Fisher-KPP equation (1.1) with the initial condition can be written as
(1.3) | ||||
so that (1.1) is a special case of (1.3) with . Here, refers to a BBM that starts at with a single particle at the position and not at . The slightly unusual form of the right side of (1.3), with the process starting at , will be convenient when we look at the limiting measure of the BBM, also with a flipped sign, as in (1.10) below.
McKean’s interpretation has been used by Bramson [7, 8] to establish the following result on the long term behavior of the solutions to the Fisher-KPP equation. It has been known since the pioneering work of Fisher [12] and Kolmogorov, Petrovskii and Piskunov [16] that the Fisher-KPP equation admits traveling wave solutions of the form that satisfy
(1.4) |
for all . We will denote by the traveling wave that corresponds to the minimal speed :
(1.5) |
Solutions to (1.5) are defined up to a translation in space. We fix the particular translate by requiring that the traveling wave has the asymptotics
(1.6) |
with the pre-factor in front of the right side equal to one, and some that is not, to the best of our knowledge, explicit. Let now be the solution to (1.1) with the initial condition such that for all , and is compactly supported on the right – there exists such that for all . It was already shown in [16], for the particular example of , that there exists a reference frame such that as and
(1.7) |
uniformly on semi-infinite intervals of the form , for each fixed. Bramson has refined this result, showing that there exists a constant that is known as the Bramson shift corresponding to the initial condition , such that
(1.8) |
uniformly on semi-infinite intervals of the form , for each fixed, with
(1.9) |
Note that we have chosen the sign of in (1.8) in the way that makes the shift positive for “small” initial conditions that we will consider later. In that sense, at a time , the solution is located at the position . A shorter probabilistic proof of this convergence was given recently in [24], and PDE proofs of various versions of Bramson’s results have been obtained in [14, 18, 22, 25], with further refinements in [13, 15, 23] and especially in the recent fascinating paper [5].
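As a numerical illustration of (1.7)-(1.9), and not as part of the argument of the paper, one can solve the Fisher-KPP equation by finite differences and track a level set of the solution. The sketch below assumes the convention $u_t = u_{xx} + u - u^2$, consistent with variance-2 Brownian particles branching at rate one, and compares the front location with the Bramson prediction $2t - \frac{3}{2}\log t + \mathrm{const}$; the exact form of (1.1) and (1.9) is set in the paper, and the discretization parameters are ours.

```python
import numpy as np

# Illustration only: explicit finite differences for u_t = u_xx + u - u^2 (an assumed
# convention consistent with the variance-2 BBM above), tracking the front location
# and comparing it with the Bramson prediction 2t - (3/2) log t + const.
dx = 0.1
x = np.arange(-20.0, 400.0, dx)
u = (x <= 0.0).astype(float)        # step-like initial data: one on the left, zero on the right
dt = 0.4 * dx**2                    # explicit-scheme stability for the diffusion term
t, T, next_report = 0.0, 100.0, 10.0
while t < T:
    lap = (np.roll(u, -1) - 2.0*u + np.roll(u, 1)) / dx**2
    u = u + dt * (lap + u - u*u)
    u[0], u[-1] = 1.0, 0.0          # boundaries stay far from the front on this time range
    t += dt
    if t >= next_report:
        front = x[np.argmax(u < 0.5)]        # leftmost point where u drops below 1/2
        print(f"t={t:6.1f}  front={front:8.3f}  front-(2t-1.5 log t)={front - 2*t + 1.5*np.log(t):7.3f}")
        next_report += 10.0
```

The last column is expected to stabilize for large times, which is the content of (1.8)-(1.9).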
The limiting extremal process of BBM and its connection to the Bramson shift
Motivated by the above discussion, one may consider not just the maximal particle but the statistics of the BBM process re-centered at the location , and ask if it can also be connected to the solutions to the Fisher-KPP equation. Let be the positions of the BBM particles at time , and consider the BBM measure seen from :
(1.10) |
Recall that is the number of particles alive at time . It was shown in [1, 2, 3, 10] that there exists a point process so that we have
(1.11) |
with , so that corresponds to the maximal particle in the BBM, to the second largest, and so on. In what follows, we will call the limiting extremal process or simply the extremal process. In the literature, it is sometimes also called the decorated Poisson point process, see e.g. [1]. The properties of the limit measure are closely related to the long time limit of the derivative martingale introduced in [17]:
(1.12) |
It turns out that there exists a direct connection between , and the Bramson shift via the Laplace transform of . As explained in Appendix C of [10], see also [3], the results of [17] imply that for any test function that is compactly supported on the right, we have the identity
(1.13) |
Here, for a measure and a function we use the notation
Note that , and is also compactly supported on the right, so that its Bramson shift is well-defined and finite. In addition to a special case of (1.13) with , it was shown in [17] that for each we have
(1.14) |
This together with (1.13), characterizes the Laplace transform of : for any test function compactly supported on the right, we have the duality identity
(1.15) |
Let us note that the normalization of the traveling wave in (1.6) implies, in particular, that no extra shifts appear in either (1.14) or (1.15). A helpful discussion of the normalization constants in these identities can be found in Chapter 2 of [6]. We also explain this in Section 2.
It is important to note that, in fact, the results in [1], [3] and in Appendix C of [10] imply the conditional version of (1.13), see Lemma 2.1 below:
(1.16) |
In principle, (1.16) completely characterizes the conditional distribution of the measure in terms of its conditional Laplace transform. However, the Bramson shift is a very implicit function of the initial condition, and making direct use of (1.16) is by no means straightforward. One of the goals of the present paper is precisely to make use of this connection to obtain new results about the extremal process .
Let us first illustrate what kind of results on the Bramson shift we may need, using the example of the asymptotic growth of , Theorem 1.1 in [11], originally conjectured in [10]. This result says that there exists so that
(1.17) |
The proof of (1.17) in [11] uses purely probabilistic tools. In order to relate this result to the Bramson shift and the realm of PDE, we can do the following. Consider the shifted and rescaled version of the measure , as in (1.17):
(1.18) |
where
(1.19) |
so that
(1.20) |
We may analyze the conditional on Laplace transform of using (1.16): given a non-negative function compactly supported on the right, we have
(1.21) |
with
(1.22) |
Note that is also compactly supported on the right, so that its Bramson shift is well-defined. Furthermore, for the function is small: it is of the size , as is . Thus, (1.21) relates the understanding of the conditional on weak limit of to the asymptotics of the Bramson shift for small initial conditions for the Fisher-KPP equation (1.1), and this is the strategy we will exploit in this paper to obtain limit theorems for the process . Let us stress that a connection between the limiting statistics of BBM and the Bramson shift for small initial conditions was already made in [10], though with a slightly different objective in mind, and in a different way.
The Bramson shift for small initial conditions
We now state the results for the Bramson shift of the solutions to the Fisher-KPP equation
(1.23) |
with a small initial condition
(1.24) |
that we will need for studying the limiting behavior of . Here, is a small parameter, and the function is non-negative, bounded and compactly supported on the right: there exists such that for . We will use the notation for the Bramson shift of :
(1.25) |
We chose the sign of in (1.25) so that for sufficiently small. The following proposition gives the asymptotic behavior for for small that is sufficiently precise to recover (1.17).
Proposition 1.1
Under the above assumptions on , we have
(1.26) |
with
(1.27) |
To formulate the convergence result, let (resp. ) be the space of continuous (resp. non-negative continuous) compactly supported functions on , and (resp. ) be the space of bounded continuous (resp. non-negative bounded continuous) functions on compactly supported on the right. Let (resp. ) be the space of signed (resp. non-negative) Radon measures on such that , equipped with the topology generated by
An immediate corollary of Proposition 1.1 is the following version of (1.17). We set
(1.28) |
so that
(1.29) |
Theorem 1.2
Conditionally on , we have
(1.30) |
In other words, looks like an exponential shifted by to the left. This theorem also explicitly identifies the constant in (1.17). Accordingly, we can reformulate Theorem 1.2 as follows: consider the measures shifted by :
(1.31) |
and
(1.32) |
This gives the following version of Theorem 1.2:
Corollary 1.3
We have
(1.33) |
As we have mentioned, Theorem 1.2 and Corollary 1.3 are not really new, except for identifying the constant , even though the approach via the asymptotics of the Bramson shift produces an analytic rather than a probabilistic proof. In order to obtain genuinely new results on the fluctuations of around , we will need a finer asymptotics for the shift than in Proposition 1.1. Let us define the constant
(1.34) |
that depends on the initial condition , as does in (1.27), and universal constants
(1.35) |
and
(1.36) |
that do not depend on . Here, is the constant that appears in the asymptotics (1.6) for . The following theorem allows us to obtain convergence in law of the fluctuations of .
Theorem 1.4
Under the above assumptions on , we have the asymptotics
(1.37) |
as , with some .
The first two terms in (1.26) and (1.37) have been predicted in [10] in addressing a different BBM question, using an informal Tauberian type argument that we were not able to make rigorous. The rest of the terms have not been predicted, to the best of our knowledge. The proof in the current paper does not seem to be directly related to the arguments of [10], but the general approach to the statistics of BBM via the Bramson shift asymptotics for small initial conditions comes from [10].
Weak convergence of the fluctuations of the extremal process
Theorem 1.2 indicates that to obtain the limiting behavior of the fluctuations of one should consider a rescaling of the signed measure
(1.38) |
It turns out, however, that there is an extra small deterministic shift of the exponential profile that needs to be performed before the rescaling: a better object to rescale is not as in (1.38) but
(1.39) |
with an extra deterministic correction
(1.40) |
The properly rescaled measures are
(1.41) | ||||
Let us set
(1.42) |
and let be a spectrally positive -stable stochastic process with the Laplace transform
such that is independent of . The main probabilistic result of this paper is the following theorem describing the limiting behavior of fluctuations of the extremal process. We use the notation for the convergence in distribution.
Theorem 1.5
(i) Conditionally on , we have
(1.43) |
Here, is a random measure such that
(1.44) |
and is the constant that appears in (1.37).
(ii) We also have
(1.45) |
Here, is a random measure such that
(1.46) |
One can immediately deduce from the above theorem that, conditionally on , for any test function , is a spectrally positive -stable random variable with the Laplace transform:
(1.47) |
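For the reader's orientation, we recall the following standard fact, which is not specific to the present paper: a spectrally positive 1-stable random variable $X$ has a Laplace transform of the form
\[
\mathbb{E}\,e^{-\lambda X} \;=\; \exp\big(c\,\lambda\log\lambda + b\,\lambda\big), \qquad \lambda>0,
\]
with some $c>0$ and $b\in\mathbb{R}$. The specific constants entering (1.44), (1.46) and (1.47) are those given in the theorem above, and we do not attempt to reproduce them here.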
We note that, in the study of BBM, both the -stable process behavior and a small deterministic correction have appeared in a related but different context for the convergence (1.12) of to as in [21]. There, the correction is of the form , and is close in spirit to due to the diffusive nature of the space-time scaling, though the exact correspondence between the corrections appearing here and in [21] is not quite clear. The correction to the front location has also been observed in [5] and proved in [13]. The -stable-like tails for itself have been observed already in [4] for the BBM, and in [20] for the branching random walk.
The paper is organized as follows. Section 2 explains how the probabilistic statements, Theorem 1.2 and Theorem 1.5, follow from the corresponding asymptotics for the Bramson shift in Proposition 1.1 and Theorem 1.4, as well as the duality identities (1.14) and (1.15). The rest of the paper is devoted to the proof of Proposition 1.1 and Theorem 1.4. Section 3 introduces the self-similar variables that are used throughout the proofs, and reformulates the required results as Propositions 3.2 and 3.3. We also explain in that section how the values of the specific constants that appear in the asymptotics for come about. Section 4 contains the analysis of the linear Dirichlet problem that appears throughout the PDE approach to the Bramson shift [14, 22, 23]. More specifically, we describe an approximate solution to the adjoint linear problem that is one reason for the logarithmic correction in in (1.40). We should stress that this is not the only contribution to the term – the second one comes from the nonlinear term in the Fisher-KPP equation. Section 5 contains the proof of Proposition 3.2. Of course, this proposition is just a weaker version of Proposition 3.3, in the same vein as Proposition 1.1 is an immediate consequence of Theorem 1.4. However, the proof of Proposition 3.2 is much shorter than that of Proposition 3.3, so we present it separately for the convenience of the reader. Section 6 contains the proof of Proposition 3.3, with some intermediate steps proved in Section 7. Finally, Appendix A contains an auxiliary lemma on the continuity of the Bramson shift.
We use the notation , , , , etc. for various constants that do not depend on , and can change from line to line.
Acknowledgment. We are deeply indebted to Lisa Hartung for illuminating discussions during the early stage of this work, and are extremely grateful to Éric Brunet and Julien Berestycki for generously sharing their deep understanding of various aspects of the BBM and the Bramson shift, without which this work would not have been possible. The work of JMR was supported by the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) ERC Grant Agreement n. 321186 - ReaDi, and by the ANR NONLOCAL project (ANR-14-CE25-0013). The work of LM and LR was supported by a US-Israel BSF grant; LR was partially supported by NSF grants DMS-1613603 and DMS-1910023, and ONR grant N00014-17-1-2145.
2 The proof of the probabilistic statements
In this section, we explain how the probabilistic results, Theorem 1.2 and Theorem 1.5, follow from Proposition 1.1 and Theorem 1.4, respectively.
The duality identity
We first briefly explain the duality identities (1.14) and (1.15), and, in particular, the fact that no extra shift of the wave is needed in these relations, as soon as the wave normalization (1.6) is fixed. To see this, note that in the formal limit , we have , and (1.15) reduces to
(2.1) |
This is simply the definition of the Bramson shift, combined with the probabilistic interpretation (1.2) of the solution to (1.1) with the initial condition . Thus, no extra shift is needed in either (2.1) or (1.15). The generalization of (2.1) to (1.15) is explained in Appendix C of [10]. Some normalization constants appear in the discussion there, but as we see, they are not needed once the wave is normalized by (1.6) rather than by
(2.2) |
as in [10].
Proof of Theorem 1.2
We now prove Theorem 1.2. For an arbitrary , we set, as in (1.22),
(2.3) |
and
(2.4) |
with
(2.5) |
Note that both and look like “small step” initial conditions:
We need the following result that, in fact, follows easily from [1], [3] and Appendix C in [10]. We provide the proof for the sake of completeness.
Lemma 2.1
For any , we have
(2.6) |
and hence
(2.7) |
Proof. Fix an arbitrary and let be the filtration generated by . Then, by the definition (1.10) of , the Markov property and (1.3), we get (with a slight abuse of notation, we set : the expectation for BBMs starting with one particle at )
(2.8) | ||||
Here, is the solution to (1.1) with the initial condition . Next, we use (1.8) to get
(2.9) | ||||
Following the derivations in (2.4.9)-(2.4.12) in [6] (see also (23)-(25) in [17]), and using (1.6), which gives in the above references, we obtain
(2.10) |
Fix an arbitrary bounded continuous function . Putting together (2.8), (2.9), and (2.10), we get
(2.11) | ||||
We used the bounded convergence theorem in the last equality.
Recall that, by Theorem 2.1 in [1], the pair converges in distribution to as , where (resp. ) is just the measure (resp. ) shifted by . This, together with the a.s. convergence of to , also implies convergence in distribution of to as , and thus
From this we get
Therefore, we have
where convergence to zero of the first term follows from a.s. convergence of to and the bounded convergence theorem. The second term converges to zero by (2.11). Hence, we have
for any and any bounded continuous function , which implies (2.6) for any . The extension to follows via approximation and a kind of continuity of in functions in and bounded by , made precise in Lemma A.1 in Appendix A.
We continue the proof of Theorem 1.2. Recall that we fixed an arbitrary . It follows from Proposition 1.1, with that the Bramson shift appearing in the right side of (2.7) has the asymptotics
(2.12) |
To get the weak convergence of the measures , it is sufficient to get the convergence of for any . Thus, we will check convergence of the Laplace transforms of (conditionally on ). As a consequence of Lemma 2.1 and (2.12), we obtain
(2.13) |
Since was arbitrary, this implies that, conditionally on ,
(2.14) |
Therefore, conditionally on , we have
(2.15) |
finishing the proof of Theorem 1.2.
Proof of Theorem 1.5
We will prove only part (i) since the proof of (ii) goes along the same lines. Once again, to get the weak convergence of measures , it is sufficient to get the convergence of for any , and we will check the convergence of the Laplace transforms of (conditionally on ). Fix an arbitrary . We have
(2.16) | ||||
We used expression (1.29) for and identity (2.7) above, with replaced by , and
Let us also introduce
so that (2.16) can be written as
(2.17) |
Note that by Theorem 1.4 with we get
(2.18) | ||||
Using this in (2.17) gives
(2.19) | ||||
By the definition of , and we get
and since was arbitrary we are done.
3 The solution asymptotics in self-similar variables
Proposition 1.1 is a consequence of the following two steps. The first result connects the Bramson shift of a solution to the Fisher-KPP equation with a small initial condition to the asymptotics of the solution to a problem in the self-similar variables with an initial condition shifted far to the right.
Proposition 3.1
Let be the solution to
(3.1) |
with the initial condition , where . Then, for each , the function has the asymptotics
(3.2) |
Furthermore, the Bramson shift that appears in Proposition 1.1 is given by
(3.3) |
The second result, at the core of the proof of Proposition 1.1, describes the asymptotics of for large .
Proposition 3.2
Proposition 3.3
Using (3.3), we obtain from Proposition 3.3 that
(3.6) | ||||
which proves Theorem 1.4. Thus, our goal is to prove Proposition 3.3.
Of course, Proposition 3.2 is an immediate consequence of Proposition 3.3. However, as its proof is both much shorter and helpful in the proof of the latter, we present its proof separately in Section 5.
Common sense scaling arguments
In order to verify that the constants in Proposition 3.3 are plausible, let us assume that we have the asymptotics
(3.7) |
and see what simple arguments say about the possible values of the coefficients , , and . First, consider a shifted initial condition . Then, the shift given by (3.6) should also change by , so that
(3.8) |
Note that
Using (3.7) gives
This means that for (3.8) to hold we must have
(3.9) |
The second invariance is to consider an initial condition . This is equivalent to keeping intact and replacing by . If we replace by in (3.6) and keep unchanged, this corresponds to replacing by and by , which gives
(3.10) | ||||
If, instead, we replace by and keep intact, we get from (3.6)
(3.11) | ||||
Comparing (3.10) and (3.11) we see that we should have
(3.12) |
that, in view of (3.9), implies that .
Some preliminary transformations and the self-similar variables
The conclusion of Proposition 3.1 follows from a series of changes of variables that we now describe. We first go into the moving frame, writing the solution to (1.23)-(1.24) as
(3.13) |
The function satisfies
(3.14) |
Next, we take out the exponential decay factor, writing
(3.15) |
which gives
(3.16) |
As (3.16) is a perturbation of the standard heat equation, it is helpful to pass to the self-similar variables:
(3.17) |
The function is the solution of
(3.18) |
with the initial condition
(3.19) |
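For the reader's convenience, we recall the standard self-similar change of variables for the pure heat equation, which is the leading-order part of (3.16); this is only a model computation under a normalization we fix here, and the actual variables in (3.17) may differ by the lower-order terms carried over from (3.16). If $z_t = z_{xx}$ and we set
\[
\tau = \log(t+1), \qquad \eta = \frac{x}{\sqrt{t+1}}, \qquad w(\tau,\eta) = (t+1)\,z(t,x),
\]
then a direct computation gives
\[
w_\tau = w_{\eta\eta} + \frac{\eta}{2}\,w_\eta + w,
\]
which is the type of equation, with a linear drift and a zero-order term, that appears in (3.18).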
In order to get rid of the pre-factor in the initial condition (3.19), and also to adjust the zero-order term in (3.18), it is convenient to represent as
(3.20) |
Here, is the solution of
(3.21) |
with the initial condition
(3.22) |
The next, and last, step in this chain of preliminary transformations is to eliminate the pre-factor in the last term in (3.21). We choose
(3.23) |
so that
(3.24) |
and make a change of the spatial variable:
(3.25) |
The function satisfies:
(3.26) |
with the initial condition
(3.27) |
with as in (3.3), and
(3.28) |
This, with a slight abuse of notation, is exactly (3.1). Note that depends on only through as it appears in the initial condition. We will, with some abuse of notation, use and interchangeably for the same object.
As far as the asymptotics of and its connection to the Bramson shift are concerned, it was shown in [22] that there exists a constant so that the solution of (3.18) has the asymptotics
(3.29) |
and the Bramson shift is given by
(3.30) |
The corresponding long-time asymptotics for the function , the solution to (3.21), is
(3.31) |
with
(3.32) |
and the asymptotics for is
(3.33) |
so that
(3.34) |
and the Bramson shift is
(3.35) |
This finishes the proof of Proposition 3.1.
4 Connection to the linear Dirichlet problem
Before giving the proof of Proposition 3.2, let us recall the intuition that leads to the long-time asymptotics (3.2) for the solution of (3.1), and also explain how the asymptotics (3.4) comes about. The key point is that we may think of (3.1) as a linear equation with the factor
(4.1) |
in the last term in its right side playing the role of an absorption coefficient. Disregarding our lack of information about that enters (4.1), we expect that when this term is “extremely large” for and “extremely small” for . Thinking again of (3.1) as a linear equation for , the former means that is very small for , while the latter indicates that essentially solves a linear problem for . The drift term in (3.1) with the pre-factor is also very small at large times. Thus, if we take some , then for , a good approximation to (3.1) is the linear Dirichlet problem
(4.2) | ||||
In other words, one would solve the full nonlinear problem on the whole line only until a large time , and for simply solve the linear Dirichlet problem (4.2). It is easy to see that
(4.3) |
is a steady solution to (4.2). In addition, the operator
(4.4) |
with the Dirichlet boundary condition at has a discrete spectrum. It follows that has the long time asymptotics
(4.5) |
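To make this concrete, assume, consistently with [22], though we do not claim that this is the exact normalization of (4.3)-(4.4), that the operator in (4.4) is $\mathcal{L}w = w_{\eta\eta} + \frac{\eta}{2}w_\eta + w$ on the half-line with a Dirichlet condition at $\eta = 0$. Then the function
\[
\phi_0(\eta) = \eta\, e^{-\eta^2/4}, \qquad \phi_0(0) = 0,
\]
is a steady state: since $\phi_0' = e^{-\eta^2/4}\big(1 - \tfrac{\eta^2}{2}\big)$ and $\phi_0'' = e^{-\eta^2/4}\big(\tfrac{\eta^3}{4} - \tfrac{3\eta}{2}\big)$, one checks directly that
\[
\phi_0'' + \frac{\eta}{2}\,\phi_0' + \phi_0 = e^{-\eta^2/4}\Big(\frac{\eta^3}{4} - \frac{3\eta}{2} + \frac{\eta}{2} - \frac{\eta^3}{4} + \eta\Big) = 0.
\]
This is the kind of profile that plays the role of the steady solution in (4.3).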
As the integral
(4.6) |
is conserved, the coefficient is determined by the relation
(4.7) |
so that
(4.8) |
As we expect and to be close if is sufficiently large, we should have an approximation
(4.9) |
if . This, in turn, implies that
(4.10) |
This informal argument is made rigorous in [22].
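The conservation of the integral in (4.6) used above is an elementary integration by parts, again under the assumption that the Dirichlet problem (4.2) has the model form $w_\tau = w_{\eta\eta} + \frac{\eta}{2}w_\eta + w$ for $\eta>0$, with $w(\tau,0)=0$ and rapid decay as $\eta\to+\infty$:
\[
\frac{d}{d\tau}\int_0^\infty \eta\, w\, d\eta
= \int_0^\infty \eta\, w_{\eta\eta}\, d\eta + \int_0^\infty \frac{\eta^2}{2}\, w_\eta\, d\eta + \int_0^\infty \eta\, w\, d\eta
= w(\tau,0) - \int_0^\infty \eta\, w\, d\eta + \int_0^\infty \eta\, w\, d\eta = 0,
\]
since $\big[\eta w_\eta - w\big]_0^\infty = w(\tau,0)$, $\big[\tfrac{\eta^2}{2}w\big]_0^\infty = 0$, and $w(\tau,0)=0$ by the Dirichlet condition. This is the mechanism behind (4.6)-(4.7) and the approximate conservation (4.14) below.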
The limit in the right side of (4.10) is an implicit functional of the initial conditions for the nonlinear problem (3.1), and the evolution of the solution in the initial time layer, before the linear approximation kicks in, is difficult to control, so that there is no explicit expression for . In the present setting, however, the initial condition in (3.1) is shifted to the right by . Therefore, at small times the solution is concentrated at , a region where the factor in front of the nonlinear term in (3.1)
(4.11) |
is very small even for . Hence, solutions to the nonlinear equation (3.1) with the initial conditions (3.27) should be well approximated, to the leading order, by the linear problem
(4.12) |
even for small times. However, the solution “does not yet know” for “small” that there is a large dissipative term in the nonlinear equation, or the Dirichlet boundary condition in the linear version, and evolves “as if (4.12) is posed for ”. This leads to exponential growth in until the solution spreads sufficiently far to the left, close to and “discovers” the Dirichlet boundary condition (or the nonlinearity in the full nonlinear version). During this “short time” evolution we have
(4.13) |
Unlike the first moment, the total mass in the right side does not grow as gets larger – the shift of the initial condition to the right increases the first moment but not the mass. Thus, the first moment of will only change by a factor that is during the “short time” evolution, so that it is conserved to the leading order in . The “long time” evolution following this initial time layer is well approximated by the linear Dirichlet problem (4.2) that preserves the first moment. Thus, altogether, the first moment will not change to the leading order if is large, so that
(4.14) |
which leads to the explicit expression for in terms of the initial first moment:
(4.15) | ||||
which is (3.4). This very informal argument explains why we can describe the Bramson shift so explicitly for , which corresponds to . The rest of the proof of Proposition 3.2 formalizes this argument by providing matching upper and lower bounds on the limit in the right side of (4.10).
An approximate solution to the adjoint linear problem
In order to improve on the approximate conservation law (4.13) let us make the following observation. Let us set
(4.16) |
with
(4.17) |
and consider a solution to the linear Dirichlet problem
(4.18) |
Lemma 4.1
We have
(4.19) |
Proof. Note that
(4.20) | ||||
It is easy to check that the function is a solution to
(4.21) |
With this, we can compute that the function satisfies
(4.22) | ||||
5 The proof of Proposition 3.2
5.1 An upper bound for the first moment
In this section, we prove an upper bound for .
Lemma 5.1
Reduction to a Dirichlet problem
We first bound the solution to (3.1) by a solution to the linear Dirichlet problem, up to a relatively small error. We start with two observations. First, the solution of the original KPP problem (1.23) satisfies , hence the function defined in (3.17) satisfies
Retracing our changes of variables, we deduce that defined in (3.20) satisfies
(5.2) |
and introduced in (3.25) obeys
(5.3) |
It follows that at the boundary we have
(5.4) |
so that for the function does satisfy an approximate Dirichlet boundary condition at . However, the bound (5.4) is very poor for – recall that the initial condition is located at distance away from the origin, so the solution remains small near for some time . In particular, as a first step, we can bound from above by the solution to the linear problem on the whole line:
(5.5) |
A change of variables
leads to the standard heat equation in the self-similar variables
(5.6) |
Thus, the function can be written explicitly as
(5.7) | ||||
Here, is the standard heat kernel:
(5.8) |
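Here and below, by the standard heat kernel we mean the fundamental solution of $z_t = z_{xx}$, the normalization consistent with the variance-2 convention of the introduction, that is,
\[
G(t,x) = \frac{1}{\sqrt{4\pi t}}\, e^{-x^2/(4t)}, \qquad t>0,\ x\in\mathbb{R};
\]
we do not claim that this reproduces the exact form of (5.8), only the normalization we have in mind.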
As , and satisfies for and for , we have
(5.9) | ||||
Note that for any we have
(5.10) | ||||
We take sufficiently small, and consider two cases. First, if
then we have, from (5.9) and (5.10), for for :
(5.11) |
On the other hand, if , then, taking
(5.12) |
in (5.10), we see that for sufficiently large, the upper limit of integration
(5.13) | ||||
is very negative. In particular, the integral in the right side of (5.10) can be estimated as
(5.14) | ||||
Then, we have from (5.9), (5.10), (5.13) and (5.14)
(5.15) | ||||
provided that and
(5.16) |
In particular, we can take
(5.17) |
as long as is sufficiently large, because then we have
(5.18) |
so that (5.16) holds. It follows that
(5.19) |
Taking into account (5.4), we deduce that we also have
(5.20) |
It follows that the function , the solution to (3.1), is bounded from above by the solution to the linear half-line problem
(5.21) |
with the initial condition
(5.22) |
In order to deal with the small but non-zero boundary condition in (5.1), we make yet another change of variables:
(5.23) |
with a smooth function such that , and for . This leads to
(5.24) |
with a uniformly bounded function that is supported in and is independent of . We recall that the operator is defined in (4.4). This change of variable does not affect the asymptotics of the first moment:
(5.25) |
The initial condition for is
(5.26) |
It is easy to see that
(5.27) |
Here, is a constant that depends neither on nor on , and is the solution to the homogeneous problem
(5.28) | |||
Let us now use Lemma 4.1, with : multiply (5.1) by and integrate:
(5.29) | ||||
Note, however, that integrating (5.1) gives a simple upper bound
(5.30) |
implying a trivial and very poor upper bound
(5.31) |
We recall that we use the notation for various constants that do not depend on . Using this estimate in the right side of (5.29) gives
(5.32) |
Recalling the definition (4.16) of and passing to the limit gives
(5.33) | ||||
with as in (1.27), and a constant that may depend on . We used (4.25) in the last inequality above. The conclusion of Lemma 5.1 now follows.
5.2 A lower bound for the first moment
We now prove a lower bound for the first moment matching the upper bound in Lemma 5.1, to the leading order in , which will finish the proof of the asymptotics (3.4) in Proposition 3.2.
Lemma 5.2
There exists a constant so that
(5.34) |
We would like first to get rid of the nonlinearity in the last term in the left side of (5.35). This is done as follows: recall that we have
(5.37) |
where is the solution to the linear whole line problem (5.1). Note that (5.7) implies that
(5.38) |
Thus, a lower bound for is the solution to
(5.39) |
with the boundary condition
(5.40) |
and the initial condition
(5.41) |
and with . Next, we shift, setting
so that satisfies
(5.42) |
with the boundary condition
(5.43) |
and the initial condition
(5.44) |
We multiply (5.42) by
(5.45) |
and integrate, using Lemma 4.1:
(5.46) | ||||
We used the fact that in the second inequality above, and that is non-negative and increasing.
Integrating (5.42) using positivity of for gives
As , this implies a trivial bound
(5.47) |
As in (5.46), we then obtain
(5.48) |
so that
(5.49) | ||||
with as in (1.27), and an appropriate . This finishes the proof of the lower bound in Lemma 5.2, and also completes the proof of estimate (3.4) in Proposition 3.2.
6 The proof of Proposition 3.3
The proof of Proposition 3.3 is much more involved than that of Proposition 3.2. In this section, we outline the main steps and state the auxiliary results needed in the proof.
We start with (3.26):
(6.1) |
with the initial condition . Recall that we are interested in
(6.2) |
Here, is the approximate solution to the adjoint linear problem, defined in (4.16) with . Multiplying (6.1) by and integrating in gives, according to Lemma 4.1:
(6.3) |
so that
(6.4) |
with
(6.5) | ||||
The error term
(6.6) |
has two contributions. The first
(6.7) |
comes from the boundary at since we do not have but only that is small. The second comes from the error term in Lemma 4.1, because is only an approximate solution to the adjoint problem and not an exact one:
(6.8) |
The linear term and the error term in (6.4) are quite straightforward to evaluate and estimate, respectively. The main difficulty will be in finding the precise asymptotics of the nonlinear term in the right side of (6.4).
The error term bound
The error term in (6.4) is bounded by the following lemma.
Lemma 6.1
There exists so that
(6.9) |
Proof. As we have seen in the proof of Lemma 5.1 – see (5.19) and (5.20) – we have an upper bound
(6.10) |
hence can be bounded as
(6.11) |
We now estimate . As in the proof of the upper bound in Lemma 5.1, see (5.23)-(5.1), we deduce from (6.10) that
(6.12) |
where is the solution to (5.1):
(6.13) | ||||
The simple-minded bound (5.31) implies that if we fix any , then
(6.14) |
For short times we can take and use (5.31) to write
(6.15) | ||||
Here, is the solution to the whole line problem (5.1):
(6.16) |
given explicitly by (5.7):
(6.17) |
In the proof of Lemma 5.1 we only looked for a bound on at , but now we need to consider . Note that for and not too large, the function is increasing in for and . Hence, we have
(6.18) |
As in (5.9) and (5.11), we get, as for :
(6.19) | ||||
Let us take and , which gives
(6.20) |
Using this in (6.15), we obtain for :
(6.21) |
while (6.14) becomes
(6.22) |
It follows that
(6.23) |
and the conclusion of Lemma 6.1 follows.
The linear contribution
The nonlinear contribution
It remains to find the asymptotics of the term in (6.5), and that computation is quite long. The first step is a series of simplifications. We start by approximating in (6.5) by : set
(6.26) |
Lemma 6.2
There exists so that
(6.27) |
This lemma is proved in Section 7.5. The next approximation involves going back to the original space-time variables and restricting the spatial integration to “relatively short” distances with small.
Lemma 6.3
There exists and so that
(6.28) |
where
(6.29) |
and is the solution to
(6.30) |
with .
Note that (6.30) is simply (3.16) with a slightly different notation. The functions and are related by
(6.31) |
This lemma is proved in Section 7.1 below. It is easy to see why (6.28) should be true: we expect that the function behaves roughly as
(6.32) |
With this input, a back-of-the-envelope computation shows that the integral in (6.29) over is, indeed, small, due to the exponential decay factor.
The next approximation is to discard the “short times” from the time integration, looking only at times with some sufficiently small. This is, again, quite expected: as the integration is now over the region , and is initially supported at , it takes a time of roughly to populate the domain of integration, so shorter times can be discarded.
Lemma 6.4
Let be sufficiently small. Then, there exists so that
(6.33) |
where
(6.34) |
This lemma is proved in Section 7.2 below.
Lemmas 6.3 and 6.4 allow us to focus on the integration in the region and times , and consider as the solution to (6.30) for , with the initial condition and the boundary condition at coming from the “outer” solution in the self-similar variables:
(6.35) |
The analysis of the behavior of for and that we have done so far would only give information for , not with , as this point corresponds to
To bridge this gap, we need the following crucial lemma, which is proved in Section 7.6. It refines the informal asymptotics in (6.32) by an extremely important Gaussian factor that interpolates between the short time and the long time behavior.
Lemma 6.5
There exists sufficiently small and a constant so that for all we have
(6.36) |
and for all we have
(6.37) |
Lemma 6.5, proved in Section 7.6, allows us to look at upper and lower solutions for in the region as the solutions to
(6.38) |
with the boundary condition
(6.39) |
The initial condition for at is for all , and for it is for all , with some , which comes from (7.14) below. We then have the inequality
(6.40) |
with
(6.41) |
The last step in the proof of Proposition 3.3 is to prove the following.
Lemma 6.6
7 Proofs of auxiliary lemmas
This section contains the proofs of the auxiliary lemmas used in Section 6 to prove Proposition 3.3, except for Lemma 6.5, which is proved in Section 7.6.
7.1 The proof of Lemma 6.3
First, we go back to the original, non-self-similar variables: write
so that the function satisfies
(7.1) |
with the initial condition . To revert back even more, we write , with the function that satisfies (6.30) with . Then, we can re-write as
(7.2) | ||||
Next, we split into two terms, corresponding to small and large :
(7.3) |
with to be chosen, and split the second term again:
(7.4) |
with a large to be chosen. We estimate the term using the simple upper bound
(7.5) |
so that
(7.6) |
To estimate we slightly refine (7.5) to
(7.7) |
Here, is the solution to the heat equation on the whole line
(7.8) |
It follows that
(7.9) |
so that for we have
(7.10) |
Thus, if we take , then
(7.11) |
Using also in (7.6), we obtain
(7.12) |
finishing the proof of Lemma 6.3.
7.2 The proof of Lemma 6.4
7.3 The proof of Lemma 6.6: an informal argument
Let us explain informally why Lemma 6.6 is true, before giving the proof in Section 7.4 below. We expect that the functions , solutions to (6.38)-(6.39), converge as to , the solutions to
(7.16) |
with the boundary condition
(7.17) |
with the constant as in Lemma 6.5. Note that has an explicit form
(7.18) |
with that solves
(7.19) |
with the slope at infinity
(7.20) |
More precisely, we expect that is well approximated by taking
(7.21) |
leading to the asymptotics
(7.22) |
that holds for all , in the sense that
(7.23) | ||||
with some , and
(7.24) |
Note that
(7.25) |
It follows that
(7.26) |
with sufficiently small. Therefore, we have
(7.27) |
To analyze the asymptotic behavior of for , note that the solution to (7.19) with the normalization (7.20) is given by
(7.28) |
where is the Fisher-KPP minimal speed wave, solution to (1.5), with the normalization (1.6). Hence, the function has the asymptotics
(7.29) |
with as in (1.6), and some , and
(7.30) |
In addition, we have from (7.19)-(7.20) that
(7.31) |
and
(7.32) | ||||
Therefore, we can write as
(7.33) |
For the last integral in the right side, we deduce from (7.30) that
(7.34) |
It follows that for we have
(7.35) |
Hence, we have
(7.36) | ||||
We used above the fact that and .
7.4 The proof of Lemma 6.6
Let us now proceed with the actual proof of Lemma 6.6. The computation in the previous section relied crucially on approximation (7.22), and the main step is to justify it. The function satisfies (6.38)-(6.39) (we drop the subscript in this section):
(7.37) |
for , with the boundary condition
(7.38) |
with , and the initial condition
(7.39) |
as in the upper bound (7.14), with some .
Our goal is to find a super-solution to (7.37)-(7.39) that is more explicit than the solution itself. Motivated by (7.18), let us define
(7.40) |
with
(7.41) |
so that
(7.42) | ||||
with , as in (7.18).
Lemma 7.1
There exists sufficiently large, and sufficiently small, so that
(7.43) |
The proof of Lemma 7.1
Our goal will be to show that we can choose , and so that
(7.44) |
Let us first show that with the choice of the shift as in (7.41), at the boundary we have
(7.45) |
so that
(7.46) |
At this point we have
(7.47) |
It follows from (7.29) that there exists so that
(7.48) |
Therefore, if
then, as is increasing, we automatically have from (7.47) that
(7.49) | ||||
On the other hand, if
(7.50) |
and is sufficiently large, then we have
(7.51) |
with a sufficiently small , and dominates the other terms in the argument inside in the right side of (7.47). In particular, if (7.50) holds, since is increasing, we can use (7.48) to obtain
(7.52) | ||||
because
(7.53) |
We have used (7.51) above. Therefore, if we choose as in (7.41), then the comparison at the boundary (7.45), indeed, holds.
To look at the other boundary point, , note that the function is a super-solution to (7.37) for all sufficiently large, and at we have, due to Lemma 6.5:
(7.54) |
if is sufficiently large. It follows that
and, in particular, we deduce that
(7.55) |
and thus
(7.56) |
At the initial time we have the comparison
(7.57) |
Having established the comparison at the boundary and the initial time, we now show that is a super-solution for the equation for . The function satisfies
(7.58) | ||||
We used the fact that the function is increasing in and is increasing in in the last step above. Note that
(7.59) |
hence
(7.60) | ||||
The second term in the right side of (7.58) for is bounded by
(7.61) | ||||
Therefore, the function satisfies
(7.62) |
On the other hand, recall from [22] that for , the function satisfies
(7.63) |
with such that
(7.64) |
In addition, we have
(7.65) |
and
(7.66) |
We conclude that
(7.67) | ||||
provided that and are sufficiently small. Now, putting together the boundary comparisons (7.46) and (7.56), the initial time comparison (7.57), as well as (7.62) and (7.67), we deduce from the comparison principle that if , and are all sufficiently small, then
(7.68) |
This implies (7.44), and (7.43) follows, finishing the proof of Lemma 7.1.
The end of the proof of Lemma 6.6
(7.69) | ||||
with the three terms coming from the expansion
We have, clearly,
(7.70) |
For the second term, we recall that for we have
(7.71) | ||||
and use this to write
(7.72) | ||||
For the main term, we need to compute more precisely, and we use expression (7.42) for :
(7.73) | ||||
This is simply expression (7.23) that we have computed in Section 7.3, leading to (7.36):
(7.74) |
This finishes the proof of the upper bound in Lemma 6.6.
Proceeding as in the proof of the upper bound, with some minor modifications, we can obtain a matching lower bound
(7.75) |
which finishes the proof of Lemma 6.6.
7.5 The proof of Lemma 6.2
We need to show that
(7.76) |
As in (7.2), we re-write this in terms of as
(7.77) | ||||
We sketch the argument, which can be made precise using Lemma 7.1 in a straightforward way. Let us use approximation (7.22) again:
(7.78) |
and insert this into (7.77). This would give
(7.79) | ||||
Recall that , so we can roughly estimate
(7.80) | ||||
This finishes the proof of Lemma 6.2.
7.6 The proof of Lemma 6.5
We now turn to the proof of Lemma 6.5. Let us go back to (6.30):
(7.81) |
and undo yet another change of variables:
(7.82) |
The function satisfies simply
(7.83) | ||||
Note that
(7.84) |
so that our point of interest for is
(7.85) |
We perform the standard parabolic scaling
(7.86) |
so that the function solves
(7.87) | ||||
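Recall that the standard parabolic scaling refers to the invariance of the heat part of the equation: for any scaling parameter $\lambda>0$, if $p_t = p_{xx}$ and
\[
q(s,y) = p(\lambda^2 s,\ \lambda y), \qquad \text{then} \qquad q_s = \lambda^2 p_t = \lambda^2 p_{xx} = q_{yy}.
\]
The lower-order and nonlinear terms in (7.83) pick up explicit powers of the scaling parameter under this change of variables, which is how (7.87) is obtained.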
In the new variables, the point of interest is
(7.88) |
and the important time scales are , corresponding to . In particular, we have
(7.89) |
Let us start with the proof of the lower bound (6.37). The maximum principle implies that
Therefore, if we take and impose the Dirichlet boundary condition at , then we have
(7.90) |
where is the solution to
(7.91) | ||||
We see from (7.88) that, as , we have
(7.92) |
provided that is sufficiently small, so that we have not lost our point of interest in this approximation. An explicit formula for the solution to (7.91) gives
(7.93) |
with
(7.94) |
The restriction on in (7.93) comes from the requirement that (7.92) holds, so that is in the region where is defined.
The integrand in (7.93) is very small for because of the exponential decay of the initial condition . With this in mind, we write the difference of the exponentials in (7.93) as
(7.95) |
As , we have
(7.96) |
Using the inequality
(7.97) |
for sufficiently small we have then
(7.98) |
The correction in the pre-factor comes from that violate (7.96), so that we cannot use (7.97). Their contribution is extremely small since is decaying exponentially as and has compact support for . Using the straightforward approximations for the Gaussian and the factor inside the parentheses in the integral in (7.98), as well as for the exponential factor in front of the integral, we obtain
(7.99) | ||||
Going back to (7.89) and (7.94), we note that
(7.100) |
Using this in (7.99) gives
(7.101) |
Unrolling the changes of variables (7.84), (7.86), and (7.88), and using the bound (7.90), as well as the approximation (7.89), we can re-write (7.101) as
(7.102) | ||||
which is the lower bound (6.37).
For the upper bound (6.36), we again look at the solution of (7.87). A simple upper bound for is given by the solution to the heat equation on the whole real line:
(7.103) | ||||
Accordingly, we set
(7.104) |
with a sufficiently small to be chosen, depending on . We also have the upper bound
(7.105) |
that follows immediately from the maximum principle. In particular, we have
(7.106) |
whence
(7.107) |
Here, is the solution to and solves
(7.108) | ||||
Let us note the following properties of the initial condition . First, (7.104) implies that it is localized near , in the sense that
(7.109) |
Hence, as soon as departs from a very small neighborhood of 1, is exponentially small in . Furthermore, the mass of is
(7.110) |
and its first moment is
(7.111) |
because of (7.109).
It follows that the function can then be estimated along the same lines as in the proof of the lower bound. This eventually leads to (6.36).
Appendix A Proof of an auxiliary lemma
Here we prove an elementary result used in the proof of Lemma 2.1.
Lemma A.1
Let , , and be a continuous function such that for , for and for . Define , then as .
Proof. As , the comparison principle implies immediately that
(A.1) |
and we only need to verify an opposite bound. Let , and be the solutions to (1.1) with the initial conditions
(A.2) |
The function satisfies
(A.3) |
with the initial condition . It follows from the maximum principle that
and, in particular,
(A.4) |
Passing to the limit , we obtain
(A.5) |
for each fixed and all . Dividing by and passing to the limit , keeping fixed, gives
(A.6) |
As for all , we know that as . Using this and passing to the limit in (A.6) leads to
(A.7) |
This, together with (A.1), finishes the proof.
References
- [1] E. Aidékon, J. Berestycki, É. Brunet, Z. Shi, Branching Brownian motion seen from its tip, Probab. Theory Relat. Fields 157, 2013, 405–451.
- [2] L.-P. Arguin, A. Bovier, and N. Kistler, Poissonian statistics in the extremal process of branching Brownian motion. Ann. Appl. Probab. 22, 2012, 1693–1711.
- [3] L.-P. Arguin, A. Bovier, and N. Kistler, The extremal process of branching Brownian motion. Probab. Theory Relat. Fields 157, 2013, 535–574.
- [4] J. Berestycki, N. Berestycki and J. Schweinsberg, The genealogy of branching Brownian motion with absorption, Ann. Probab. 41, 2013, 527–618.
- [5] J. Berestycki, E. Brunet, and B. Derrida, Exact solution and precise asymptotics of a Fisher-KPP type front, J. Phys. A 51, 2018, 035204, 21 pp.
- [6] A. Bovier, From spin glasses to branching Brownian motion – and back?, in ”Random Walks, Random Fields, and Disordered Systems” (Proceedings of the 2013 Prague Summer School on Mathematical Statistical Physics), M. Biskup, J. Cerny, R. Kotecky, Eds., Lecture Notes in Mathematics 2144, Springer, 2015.
- [7] M.D. Bramson, Maximal displacement of branching Brownian motion, Comm. Pure Appl. Math. 31, 1978, 531–581.
- [8] M.D. Bramson, Convergence of solutions of the Kolmogorov equation to travelling waves, Mem. Amer. Math. Soc. 44, 1983.
- [9] E. Brunet and B. Derrida. Statistics at the tip of a branching random walk and the delay of traveling waves. Eur. Phys. Lett. 87, 2009, 60010.
- [10] E. Brunet and B. Derrida. A branching random walk seen from the tip, Jour. Stat. Phys. 143, 2011, 420–446.
- [11] A. Cortines, L. Hartung and O. Louidor, The Structure of Extreme Level Sets in Branching Brownian Motion, Ann. Probab. 47, 2019, 2257–2302.
- [12] R.A. Fisher, The wave of advance of advantageous genes, Ann. Eugen., 7, 1937, 355–369.
- [13] C. Graham, Precise asymptotics for Fisher KPP fronts, Nonlinearity, 32, 2019, 1967–1998.
- [14] F. Hamel, J. Nolen, J.-M. Roquejoffre and L. Ryzhik, A short proof of the logarithmic Bramson correction in Fisher-KPP equation, Netw. Heterog. Media, 8, 2013, 275–289.
- [15] C. Henderson, Population stabilization in branching Brownian motion with absorption and drift, Comm. Math. Sci. 14, 2016, 973–985.
- [16] A.N. Kolmogorov, I.G. Petrovskii and N.S. Piskunov, Étude de l’équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique, Bull. Univ. État Moscou, Sér. Inter. A 1, 1937, 1–26.
- [17] S. P. Lalley and T. Sellke, A conditional limit theorem for the frontier of a branching Brownian motion, Ann. Probab. 15, 1987, 1052–1061.
- [18] K.-S. Lau, On the nonlinear diffusion equation of Kolmogorov, Petrovskii and Piskunov, J. Diff. Eqs. 59, 1985, 44-70.
- [19] H.P. McKean, Application of Brownian motion to the equation of Kolmogorov-Petrovskii-Piskunov, Comm. Pure Appl. Math. 28 1975, 323–331.
- [20] T. Madaule, The tail distribution of the Derivative martingale and the global minimum of the branching random walk, arXiv:1606.03211, 2016.
- [21] P. Maillard and M. Pain, 1-stable fluctuations in branching Brownian motion at critical temperature I: the derivative martingale, Ann. Prob., 47, 2019, 2953–3002.
- [22] J. Nolen, J.-M. Roquejoffre and L. Ryzhik, Convergence to a single wave in the Fisher-KPP equation, Chin. Ann. Math. Ser. B, 38, 2017, 629–646.
- [23] J. Nolen, J.-M. Roquejoffre and L. Ryzhik, Refined long-time asymptotics for Fisher-KPP fronts, Comm. Contemp. Math., 2018, 1850072.
- [24] M. Roberts, A simple path to asymptotics for the frontier of a branching Brownian motion, Ann. Prob. 41, 2013, 3518–3541.
- [25] K. Uchiyama, The behavior of solutions of some nonlinear diffusion equations for large time, J. Math. Kyoto Univ. 18, 1978, 453–508.