Fractal Geometry of the Valleys of the Parabolic Anderson Equation
Abstract.
We study the macroscopic fractal properties of the deep valleys of the solution of the $(1+1)$-dimensional parabolic Anderson equation
$$\partial_t u = \tfrac{1}{2}\,\partial_x^2 u + u\,\xi,$$
where $\xi$ is the time-space white noise. Unlike the macroscopic multifractality of the tall peaks, we show that the valleys of the parabolic Anderson equation are macroscopically monofractal. In fact, the macroscopic Hausdorff dimension (introduced by Barlow and Taylor [BT89, BT92]) of the valleys undergoes a phase transition at a point which does not depend on the initial data. The key tool of our proof is a lower bound on the lower tail probability of the parabolic Anderson equation. Such a lower bound is obtained for the first time in this paper and is derived by utilizing the connection between the parabolic Anderson equation and the Kardar-Parisi-Zhang equation. Our techniques for proving this lower bound can be extended to other models in the KPZ universality class, including the KPZ fixed point.
Keywords: Parabolic Anderson models, KPZ equation, macroscopic Hausdorff dimension.
AMS 2010 subject classification: Primary. 60H15; Secondary. 35R60, 60K37.
1. Introduction
We consider the parabolic Anderson equation
(1.1) $\partial_t u(t,x) = \tfrac{1}{2}\,\partial_x^2 u(t,x) + u(t,x)\,\xi(t,x), \qquad (t,x)\in(0,\infty)\times\mathbb{R}, \quad u(0,\cdot)=u_0(\cdot),$
where $\xi$ is the time-space white noise and the nonnegative initial datum $u_0$ is a bounded positive function, i.e.,
(1.2) $0 < \inf_{x\in\mathbb{R}} u_0(x) \le \sup_{x\in\mathbb{R}} u_0(x) < \infty.$
The solution theory of (1.1) is standard and is accomplished via Itô calculus or the martingale problem. The existence and uniqueness of the solution of (1.1) under those initial conditions follow from [BC95, Theorem 2.2] (see also [Qua12, Section 3.3]). Thanks to [Mue91], the solution of (1.1) is strictly positive for all $t>0$ when $u_0$ is a positive initial datum. The logarithm of the solution of (1.1) formally solves the Kardar-Parisi-Zhang (KPZ) equation, which is written as follows:
(1.3) $\partial_t h = \tfrac{1}{2}\,\partial_x^2 h + \tfrac{1}{2}\,(\partial_x h)^2 + \xi.$
The KPZ equation is the canonical stochastic PDE in the KPZ universality class. The solution theory of the KPZ equation has been approached in the recent past via different techniques, namely regularity structures [Hai13], paracontrolled stochastic PDEs [GIP15, GP17], the energy solution method [GJ14], and renormalization group techniques [Kup16]. The solutions constructed in those works are consistent with the logarithm of the solution of (1.1). The latter is the physically relevant solution of the KPZ equation and is often called the Cole-Hopf solution.
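For the reader's convenience, the formal computation behind the Cole-Hopf transformation can be sketched as follows; this is only a heuristic, since making it rigorous requires the renormalization addressed in the works cited above.

```latex
% Formal Cole-Hopf computation: if u solves the parabolic Anderson
% equation and h := \log u, then (ignoring Ito corrections and
% renormalization) the chain rule gives
\partial_t h = \frac{\partial_t u}{u}
             = \frac{\tfrac{1}{2}\,\partial_x^2 u}{u} + \xi,
\qquad
\frac{\partial_x^2 u}{u} = \partial_x^2 h + (\partial_x h)^2,
% so that h formally satisfies the KPZ equation
\partial_t h = \tfrac{1}{2}\,\partial_x^2 h
             + \tfrac{1}{2}\,(\partial_x h)^2 + \xi.
```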
The main objective of this paper is to study the 'gaps' between the tall peaks of the parabolic Anderson equation. The tall peaks of the solution of the parabolic Anderson equation trigger exponential growth of the moments of the one-point distributions. When the initial data is non-random and satisfies (1.2), [Che15] showed that
(1.4) $\lim_{t\to\infty} \frac{1}{t}\,\log \mathbb{E}\big[u(t,x)^k\big] = \frac{k(k^2-1)}{24} \qquad \text{for every integer } k \ge 2.$
The above result showcases the 'intermittency' (cf. [CM94]) of the parabolic Anderson equation. Motivated by this result and its analogues in other stochastic PDEs with multiplicative noise, [KKX17, KKX18] studied the macroscopic fractality of the spatio-temporal tall peaks of the solution for a large collection of parabolic stochastic PDEs, including the parabolic Anderson equation. They showed that the values of the macroscopic Hausdorff dimension of the tall peaks are distinct and nontrivial as the length scale and stretch factor vary, a property which signifies multifractality. This is in stark contrast with the case of Brownian motion, whose tall peaks demonstrate a constant Hausdorff dimension across different length scales (see [KKX17, Theorem 1.4]).
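As a purely illustrative aside (not part of the paper's argument), the peaked profiles behind the intermittency in (1.4) can be probed numerically with a crude explicit Euler scheme for (1.1); the grid sizes and the positivity fix below are ad hoc choices of ours, not taken from the literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Crude explicit Euler scheme for du = 1/2 u_xx dt + u dW on a periodic grid.
L, nx = 10.0, 100          # spatial domain [0, L) with nx grid points
dx = L / nx
dt = 0.2 * dx**2           # explicit scheme: dt must be << dx^2 for stability
nt = 2000                  # number of time steps (total time nt * dt)

u = np.ones(nx)            # flat initial data u_0 = 1
for _ in range(nt):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    noise = rng.standard_normal(nx) * np.sqrt(dt / dx)  # discretized white noise
    u = u + 0.5 * lap * dt + u * noise
    u = np.maximum(u, 1e-12)   # crude positivity fix for the naive scheme

# Intermittency heuristic: tall, sparse peaks dominate the profile,
# so the peak-to-mean ratio is large.
print(u.max() / u.mean())
```

The ratio printed at the end is typically far above one, reflecting that a few tall peaks carry most of the mass while wide valleys separate them.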
The study of the peaks of the parabolic Anderson model on the lattice has seen many new innovations in the recent past. As we have hinted above, those works were built on connections with the geometry of intermittency, which reveals that the total mass of the lattice parabolic Anderson model is concentrated on small islands of peaks which are well separated from each other. However, the geometry of the solution filling the space between those islands remains mysterious. Despite many inspiring works on the tall peaks, there has hardly been any study which focuses on the so-called valleys, or gaps between the tall peaks. Our main goal is to show that the spatio-temporal valleys of the parabolic Anderson equation display a feature different from the peaks: the (macroscopic) Hausdorff dimensions of the associated level sets exhibit a phase transition.
The main object of our study is the spatio-temporal level sets of the valleys shown below:
(1.5) |
for every . Here, is called the length scale. From [CG20b], it is known that decays linearly with as grows large. In light of this fact, we may say that captures the average rate of decay of the Cole-Hopf solution of the KPZ equation. For every , we define by
The application on a square box produces a non-linear stretching in the time direction, and the extent of stretching is determined by a parameter which we call the stretch factor.
We seek to study the fractal nature of the level sets as the scale gets large. The fractal nature of the peaks of the parabolic Anderson equation was quantified in [KKX17, KKX18] by Barlow and Taylor's macroscopic Hausdorff dimension. Motivated by those works, we aim to determine the macroscopic Hausdorff dimension of these level sets. A precise mathematical definition of the macroscopic Hausdorff dimension of a set, together with the notation we use for it, is given in Section 2.2.
We are now ready to state the main result, which shows that the macroscopic Hausdorff dimension of the valleys of the parabolic Anderson equation stays the same when we vary the stretch factor. However, it undergoes a sharp phase transition when we vary the length scale.
Remark 1.2.
Even though Theorem 1.1 will be proved only for bounded positive initial data, we believe that the same result should hold for a large class of initial data, including the case when the initial datum is a Dirac delta measure. The proof techniques would remain almost the same except in a few places, which are pointed out in Section 1.1. Furthermore, it may be possible to extend Theorem 1.1 to other parabolic stochastic PDEs if a few key estimates, such as the ones shown in Theorem 1.4, Propositions 2.2 and 2.3, can be obtained in those cases.
Theorem 1.1 shows that for any length scale below the critical point and any stretch factor, the macroscopic Hausdorff dimension of the valleys of the solution of (1.1) remains constant. The macroscopic dimension of the valleys changes only when the length scale exceeds the critical point. This is in stark contrast with the fractal geometry of the peaks as revealed in [KKX18]. Contrary to the valleys, the macroscopic Hausdorff dimension of the peaks of (1.1) varies non-trivially with the stretch and length scales (see [KKX18, Theorem 1.1]). This last property marks the multifractality of the peaks, whereas Theorem 1.1 is a sign of the monofractality of the valleys.
Theorem 1.1, in conjunction with [KKX18, Theorem 1.1], signals an apparent dichotomy between the peaks and the valleys which resonates with the geometry of intermittency of the parabolic Anderson model (PAM) on the lattice. A large number of past works, including [CM94, GKM07, KLMS09], point to the fact that most of the mass in the PAM is concentrated on very high peaks, and that those peaks are well separated by long stretches of trapped valleys. These distinctive features of the peaks and valleys in the PAM are expected to be present in the case of the parabolic Anderson equation. In fact, [CCKK17] showed that if the initial data decays at infinity faster than the Gaussian kernel, then the total mass of the solution of the parabolic Anderson equation dissipates, that is, it vanishes sub-exponentially in time. When (1.1) is considered on the torus instead of the real line, [KKMS20] proved, among other things, that the supremum of the solution is localized in space. In combination with these works, Theorem 1.1 hints at the fact that the macroscopically tall peaks of (1.1) are in fact highly concentrated on small islands, which helps to create many large gaps or valleys between them.
We would also like to point out that the monofractality of the valleys of (1.1) does not hold when the multiplicative noise of (1.1) is replaced with additive noise. In the latter case, the solution is a mean-zero Gaussian process, and [KKX18, Theorem 4.1] showed that the tall peaks of that Gaussian process are still multifractal. Due to the symmetry between the peaks and valleys of a Gaussian process, the valleys in the additive noise case are also multifractal. This attests to the fact that the monofractality of the valleys is intrinsic to systems with multiplicative noise (see [GD05, GT05, ZTPSM00]).
As one may see, Theorem 1.1 does not cover the critical case. Since the transition of the macroscopic dimension occurs at the critical point, we believe that one would be able to see a crossover of dimension by perturbing the length scale by a function vanishing at infinity. More precisely, we expect the following conjecture to be true.
Conjecture 1.3.
Consider the solution of (1.1) started from a bounded initial datum. Then,
(1.6) |
for all where the constant will depend on and the initial data .
Conjecture 1.3 speculates that the macroscopic dimension of the valleys is maximal at the critical point and smoothly falls off before the perturbation reaches any nonzero fixed value. Furthermore, the nontrivial dependence of the dimension on the stretch factor and the length scale shown in (1.6) points to the multifractality of the valleys of (1.1) at the crossover. The scaling of the spatial coordinate in (1.6) stems from the KPZ scaling, i.e., the ratio between the scaling exponents of the fluctuation, space and time. Conjecture 1.3 is motivated by a recent work [DG21] where the authors studied the macroscopic Hausdorff dimension of the peaks of the KPZ equation at the onset of its (macroscopic) convergence towards the KPZ fixed point under the KPZ scaling. In [DG21] (see the discussion after Theorem 1.3), one may find a similar claim to (1.6) for the valleys. However, until now, those results could only be proved for the narrow wedge initial data of the KPZ equation, which corresponds to the initial datum being the Dirac delta measure at the origin. Proving Conjecture 1.3 requires a substantial understanding of the geometry of the KPZ equation under general initial data, which we hope to explore in a future work.
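For orientation, the KPZ scaling referred to above is the characteristic 1:2:3 ratio of the fluctuation, space and time exponents. Schematically, in the normalization common for the Cole-Hopf solution (the centering constant 1/24 is specific to equation (1.3)):

```latex
% Fluctuations of order t^{1/3} on spatial scale t^{2/3}:
x \;\longmapsto\; \frac{h\big(t,\, t^{2/3} x\big) + \frac{t}{24}}{t^{1/3}},
% which, as t -> \infty, is expected (and for narrow wedge initial
% data known) to converge to a limit governed by the KPZ fixed point.
```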
One of the necessary components for studying the valleys of any stochastic process is to determine precise estimates on the probability of the process taking small values, namely, the lower tail probability. While some of the recent works indeed revealed detailed information on such estimates for the parabolic Anderson equation, the degree of precision varies with the initial data. See below for a review of those tail probability estimates. Even though those tail estimates have instigated new interest in different lines of research, one of the prominent questions, which plays a key role in proving our results, remained unanswered. This concerns coherent lower bounds on the probability of the solution of (1.1) taking small values. While such bounds are available when the initial datum is a Dirac delta measure (see Theorem 1.1 of [CG20b]), not much is known for bounded initial data. Our second result, which we state below, will partially fill this gap.
Theorem 1.4.
Suppose that is a bounded measurable function. For any and , there exist and such that for all ,
(1.7) |
Remark 1.5.
We believe that the exponent in the right hand side of (1.7) is not optimal. The lower tail large deviation principle of the KPZ equation suggests what the expected exponent should be. However, such a result has only been proved for the case when the initial datum is a Dirac delta measure (see [Tsa18, CC19]). [CG20b] (see also Proposition 4.1) identified the exponent in the upper bound of the lower tail probability. Proving a matching exponent in the lower bound seems to be out of reach at the moment. We would also like to stress that Theorem 1.4 can possibly be extended to a large class of initial data. See Remark 4.5 for further discussion.
Tail probabilities are ubiquitous in unlocking geometric patterns of any stochastic process. The parabolic Anderson model and the KPZ equation are no exceptions. Since the KPZ equation is the canonical stochastic PDE of the KPZ universality class, the techniques for handling the tail events of the KPZ equation can in many circumstances be replicated for other random growth models. To this end, the proof of Theorem 1.4, which provides a lower bound on the lower tail probability of the KPZ equation, may point to useful pathways for solving similar problems in related situations. Intrinsically, Theorem 1.4 ensures that the solution of (1.1) cannot abruptly become zero. In that regard, Theorem 1.4 complements the result of [CHN16, Theorem 1.4], which proves that the law of the solution admits a strictly positive density.
Early breakthroughs in obtaining the lower tail estimates of the solution of (1.1) were achieved in [Mue91, MN08] using tools like large deviation bounds, comparison principles, Malliavin calculus, etc. However, those results were proved only for bounded initial data. While [Flo14] provided the first useful bound on the lower tail probability under the Dirac delta initial measure, [CG20b] obtained estimates which are tight and uniform in time. The tail estimates of [CG20b] were instrumental in [CG20a] for further improving the tail bounds for a large class of initial data, including bounded initial data (see Proposition 2.3). The upper tail probability of (1.1) has been studied before in many places, including [CJK13, CD15, KKX17], in regard to its connection with the moments and the intermittency property. Those bounds were recently improved in [CG20a]. See Proposition 2.2 for general initial data and Proposition 4.2 for the Dirac delta initial measure.
Finally, we finish off this section with a few words on the novelty of the present work. One of the prime objectives of this work lies in bridging two parallel lines of research: (a) the study of the fractal geometry of the PAM and related models and (b) recently discovered tools and techniques for studying the long time behavior of the KPZ equation. For instance, our proof techniques, which will be further elaborated in Section 1.1, hinge on two different approaches to studying the solution of (1.1). The first one is the construction of a local proxy of the mild solution of (1.1), which is originally based on Walsh's solution theory [Wal86] of stochastic PDEs. The second is based on very modern tools for tackling tail probabilities of the KPZ equation. These tools were developed in the last couple of years by incorporating ideas from random matrix theory, the geometry of random curves such as the KPZ line ensemble (introduced by [CH16]), interacting particle systems, etc. We hope that our result will induce further interest in combining these two approaches to unravel deeper mysteries of the parabolic Anderson equation.
1.1. Proof Ideas
The proofs of Theorems 1.1 and 1.4 are mainly based on a combination of probabilistic arguments and tail estimates obtained from many previous works. The core idea in the proof of Theorem 1.1 lies in the construction of local 'proxies' of the solution of (1.1). Those local objects, obtained at different space-time locations, are independent of each other when the locations are far apart, and they are indeed very close to the solution of (1.1). See Proposition 2.8 for more details. The construction of those local proxies is carefully carried out in [KKX17, KKX18] for bounded initial data. While it could be possible to extend that construction to a large class of initial data, including the Dirac delta measure, with some extra work, we will not pursue this direction in the present paper. Using techniques motivated by the works [KKX17, KKX18], our proof will summarize the fractal information of the valleys through those local proxies. The mutual independence and the tail probabilities (obtained via Theorem 1.4, Propositions 2.3 and 2.2) of those proxies are the main tools of our analysis.
The proof of Theorem 1.4 combines several recently developed tools from [CG20b, CG20a, CGH21, DG21]. The backbone of the argument is a convolution formula which provides a useful integral representation of the one-point distribution of (1.1) started from any reasonable initial data (see Proposition 2.1). This reduces the result in Theorem 1.4 to a much simpler problem related to the solution of (1.1) started from the Dirac delta measure (see Theorem 4.4). We will use a fresh blend of tail estimates of the latter and monotonicity properties (like an FKG type inequality, see Proposition 4.8) to prove Theorem 4.4 and, finally, to complete the proof of Theorem 1.4. The above mentioned reduction can be carried out for a large class of initial data, including those which grow linearly in the spatial variable. See Remark 4.5 for more details. Our overall technique for proving Theorem 1.4 relies heavily on inputs from the Gibbs property of the KPZ line ensemble (via Propositions 4.6 and 4.7) and the monotonicity property of the KPZ equation (via Proposition 4.8). Such properties are also present in many models in the KPZ universality class, including the KPZ fixed point (recently constructed in [MQR16, DOV18]), the asymmetric simple exclusion process (ASEP), last passage percolation, the stochastic six vertex model, etc. We hope that it could be possible to prove lower bounds on the lower tail probabilities of those models using the techniques of the present paper.
Outline
The rest of the paper is organized as follows. Section 2 reviews some preliminary facts and estimates on the tail probabilities of (1.1) and furthermore, recalls the definition and some useful results on the macroscopic Hausdorff dimension. Theorem 1.1 will be proved in Section 3. Finally, Section 4 will contain the proof of Theorem 1.4.
2. Preliminaries
In this section, we introduce a few notations and recall some known facts which are important for our analysis.
2.1. Convolution Formula & Tail Estimates
Recall that the logarithm of the solution of (1.1) is the Cole-Hopf solution of the KPZ equation. When the initial datum is a Dirac delta measure at the origin, we denote the Cole-Hopf solution with a special superscript which stands for the narrow wedge initial data of the KPZ equation.
The solution of (1.1) started from the Dirac delta measure is called the fundamental solution. The one-dimensional distributions of the solution started from any initial data can be described in terms of the fundamental solution via the convolution formula. The precise statement is as follows.
Proposition 2.1 (Convolution Formula, Lemma 1.18 of [CH16]).
For any measurable function and for a fixed , the unique solution of (1.1) started from satisfies
(2.1) $u(t,x) = \int_{\mathbb{R}} \mathcal{Z}(t,x;y)\, u_0(y)\, \mathrm{d}y,$ where $\mathcal{Z}(t,x;y)$ denotes the fundamental solution of (1.1) started from the Dirac delta measure at $y$.
In some of the recent works [CG20a, GL20], the convolution formula has been utilized to derive useful bounds on the probability of the solution getting too large or too small. In the next two propositions, we summarize those findings. For the sake of completeness, we will provide short proofs of these propositions in Sections 4.2 and 4.3, respectively.
Proposition 2.2 (Upper Tail Estimates).
Fix . There exist and which depend on and initial data such that for all and ,
(2.2) |
Proposition 2.3 (Lower Tail Estimates).
Fix any small and . There exist and such that for any and ,
(2.3) |
2.2. Macroscopic dimension & Localization
In this section, we recall the definition of Barlow-Taylor's macroscopic Hausdorff dimension and some of its properties. The localization technique in the parabolic Anderson model, pioneered by [KKX17], is an important tool for studying macroscopic fractality. We review this tool in Proposition 2.8.
Definition 2.5 (Barlow-Taylor’s macroscopic Hausdorff dimension [BT89, BT92]).
Let $\mathcal{S}$ be the collection of all sets of the form
(2.4) $Q(x,r) := [x_1, x_1+r) \times [x_2, x_2+r)$
for $r \ge 1$ and $x = (x_1, x_2)$ in $\mathbb{R}^2$; we write $\mathrm{side}(Q(x,r)) := r$. Let $\mathcal{V}_0 := [-1,1)^2$ and $\mathcal{V}_n := [-e^n, e^n)^2 \setminus [-e^{n-1}, e^{n-1})^2$ for $n \ge 1$. For any $\rho > 0$ and $n \ge 1$, the $\rho$-dimensional macroscopic Hausdorff content of a set $E \subseteq \mathbb{R}^2$ will be denoted by $\nu_{\rho}^{n}(E)$ and defined as
(2.5) $\nu_{\rho}^{n}(E) := \inf \sum_{i=1}^{m} \Big( \frac{\mathrm{side}(Q_i)}{e^{n}} \Big)^{\rho},$
where the infimum is taken over all $Q_1, \ldots, Q_m \in \mathcal{S}$ such that $\bigcup_{i=1}^{m} Q_i$ covers the set $E \cap \mathcal{V}_n$. The Barlow-Taylor macroscopic Hausdorff dimension of any set $E$, which we denote by $\mathrm{Dim}_{\mathbb{H}}(E)$, is defined as
(2.6) $\mathrm{Dim}_{\mathbb{H}}(E) := \inf\Big\{ \rho > 0 : \sum_{n=1}^{\infty} \nu_{\rho}^{n}(E) < \infty \Big\}.$
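To make the definition concrete, here is a small numerical sketch of our own (the helper name and the specific example are illustrative, not from the paper): covering a planar point set with unit squares shell by shell, a set of macroscopic dimension $\rho$ needs roughly $e^{n\rho}$ unit squares in the $n$-th shell, so the per-shell log-count grows linearly in $n$ with slope $\rho$.

```python
import numpy as np

def shell_cover_count(points, n):
    """Count the unit squares needed to cover the points of the set lying
    in the shell V_n = [-e^n, e^n)^2 minus [-e^(n-1), e^(n-1))^2; a
    unit-square covering gives an upper bound on the Hausdorff content."""
    r = np.max(np.abs(points), axis=1)
    in_shell = (r >= np.e ** (n - 1)) & (r < np.e ** n)
    cubes = {tuple(q) for q in np.floor(points[in_shell]).astype(int)}
    return len(cubes)

# Example: the diagonal {(k, k) : k >= 1} should have macroscopic dimension 1.
k = np.arange(1, int(np.e ** 11))
diag = np.stack([k, k], axis=1)

# log(count) grows linearly in n with slope ~ dimension; compare two shells.
n1, n2 = 6, 10
slope = (np.log(shell_cover_count(diag, n2))
         - np.log(shell_cover_count(diag, n1))) / (n2 - n1)
print(round(slope, 2))   # close to 1
```

Replacing the diagonal with all lattice points would drive the slope towards 2, matching the intuition that the macroscopic dimension of the plane is 2.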
The following proposition is useful for obtaining a lower bound on the macroscopic Hausdorff dimension.
Proposition 2.6 (Theorem 4 of [BT92]).
Fix . For any set and , let us define
(2.7) |
Then, there exists a constant such that .
The next result, which is taken from Corollary 1.4 of [KKX18], constructs a two dimensional set with a prescribed macroscopic Hausdorff dimension. We use this set to show the phase transition of the macroscopic Hausdorff dimension in Theorem 1.1.
Proposition 2.7.
Fix an arbitrary constant and for any , define
(2.8) |
Corollary 1.4 of [KKX18] also showed that the dimension of is for any .
The next result describes an important tool (originating from [CJK13]) for studying fractal geometry, pioneered by [KKX17, KKX18]. In brief, it describes how to construct 'local proxies' for the solution of (1.1) at far-away spots so that the proxies are independent of each other. In [KKX17, KKX18], these proxies were made to summarize the fractal information hidden in the peaks of the parabolic Anderson equation. As we will show in our proof, these proxies are also useful for decoding the fractality of the valleys.
Proposition 2.8 (‘Local Proxy’ in parabolic Anderson equation, Theorem 3.9 of [KKX18]).
Fix . There exists a finite constant independent of and such that for any finite set of nonrandom points and that satisfy
(2.9) |
there exists a set of independent random variables with all positive moments finite, satisfying
(2.10) |
where is a positive constant independent of , and .
3. Fractality of Valleys
3.1. Proof of part (a) of Theorem 1.1
Since is a subset of , . It suffices to show that . The definition of the Barlow-Taylor Hausdorff dimension implies that if holds with probability for all , then is almost surely greater than or equal to . In what follows, we will show that this indeed holds, i.e.,
(3.1) |
Our proof will require Propositions 2.6 and 2.2. The first proposition, which is taken from [BT92], bounds the Hausdorff content from below by a discrete measure (see (2.7)) for any set. The main goal of this section will be to obtain a suitable lower bound on that discrete measure. One of the key tools that we use is the upper tail probability of the parabolic Anderson equation. Tight bounds on this tail probability were derived in [CG20a] for a large class of initial data, and we have compiled their result in Proposition 2.2. We now proceed to complete the proof of part (a) of Theorem 1.1.
Proof of Theorem 1.1(a): Consider a set of points satisfying (2.9). We seek to show that is bounded below by almost surely for all large . This will show (3.1) and hence will complete the proof. We divide the rest of the proof into two steps. In Step I, we will show an upper bound on the tail probability of . Step II will use this upper bound to show that is almost surely close to for all large .
Step I: Fix and . Here, we claim and prove that there exist , and large such that for all ,
(3.2) |
By Proposition 2.2, for any and , there exists such that for all and ,
(3.3) |
Recall the local proxy defined in Proposition 2.8. By the union bound, we get
In what follows, we bound the two terms on the right side of the above display. By Proposition 2.8, for any , is a set of independent random variables, which implies
(3.4) |
The last inequality holds for large since for large , uniformly in . Moreover, the union bound and Proposition 2.8 yield
(3.5) |
Step II: Define the random set
Notice that is the same as . Fix . For any and , define
Our goal is now to obtain an upper bound for the supremum of the probabilities of the events as varies over the set . For this, we seek to apply (3.2). Pick an integer from the set . Fix a set of integers such that for all , . As a consequence, the following inequality holds
(3.6) |
for when is sufficiently large. Since is less than the infimum value for , we may write
(3.7) |
for all with all sufficiently large . The last inequality follows by applying (3.2) and recalling that . Note that the right hand side of the above inequality does not depend on . Therefore, we can deduce the following inequality
The last inequality follows from the direct application of (3.7). For large , the right hand side of the last inequality is summable in . Thus, by the Borel-Cantelli lemma, the following holds
(3.8) |
almost surely for all sufficiently large . As a result, eventually exceeds as increases, implying that holds almost surely. Combining this with Proposition 2.6 yields almost surely. Letting to completes the proof.
∎
3.2. Proof of part (b) of Theorem 1.1
We complete the proof in two steps. The first step is to show that is bounded above by and the second step is to show that is bounded below by . These two steps are contained in the following two propositions, which together complete the proof of part (b).
Proposition 3.1.
For all and , we have
(3.9) |
Proposition 3.2.
For all and , there exists such that for all ,
(3.10) |
One of the key inputs for the proofs of these propositions is an upper bound on the probability of being small in a bounded domain. This result will be obtained in Proposition 3.3 using Proposition 2.3.
Proposition 3.3.
Let and . There exists a positive constant such that for all large , , and integers ,
(3.11) |
Remark 3.4.
We first prove Propositions 3.1 and 3.2, assuming Proposition 3.3, in the two ensuing subsections; Proposition 3.3 will then be proved in Section 3.2.3.
3.2.1. Proof of Proposition 3.1
The equality in (3.9) between the macroscopic Hausdorff dimension of and that of the set is straightforward.
We now prove that the dimension of the latter set is at most the claimed value. Observe that for all , . Fix an arbitrary constant . Proposition 3.3 implies that for any , , , there exists such that uniformly for all with all large ,
(3.12) |
Below we introduce a few notations which will be used throughout the rest of the proof. Define , , and
(3.13) | ||||
We now claim and prove that
(3.14) |
Before proceeding to the proof of (3.14), let us explain how (3.14) completes the proof of the inequality in (3.9). As one will see below, a similar argument to the proof of (3.14) implies
(3.15) |
Moreover, we have (see [BT92, Section 4.1]). Combining these with (3.14) and recalling that for any two sets shows the upper bound on the macroscopic Hausdorff dimension from (3.9).
Now we proceed to prove (3.14). Recall from Definition 2.5. From (3.13), it follows
(3.16) |
Using Proposition 2.7, we can deduce that . From this fact and (3.16), we have
(3.17) |
In view of (3.17), we now bound the dimension of . Note that we can cover with squares of the form for all integers . Let us denote the set of all these squares needed to cover by . Out of the squares in , can be covered by those satisfying
(3.18) |
Thus, for all large integers and for all real ,
where the last inequality follows from (3.12). Here, the positive constant is independent of . Note that the right hand side of the above display decays geometrically in . Summing both sides of the above display over , we deduce that a.s., for every . From the definition of the macroscopic Hausdorff dimension, this implies that a.s. Together with (3.17), the above result completes the proof of (3.14).
3.2.2. Proof of Proposition 3.2
Note that
for all . From the above display, the first inequality of (3.10) follows via the monotonicity of the Hausdorff dimension. Now we proceed to prove the second inequality. As in Section 3.1, we define the following random sets for any ,
Let us choose and fix arbitrary reals and satisfying . For any and , recall the definitions of and . In what follows, we claim and prove that for all sufficiently large , there exist and such that for all ,
(3.19) | ||||
(3.20) |
Let us fix such that for all , the following conditions are satisfied:
(3.21) |
We first bound the probability of exceeding the value . We will use this to prove (3.19).
Recall with from Proposition 2.8. By the union bound, we get
(3.22) | ||||
In what follows, we bound the two terms on the right hand side of the above display. Recall that the proxies are independent, which implies
(3.23) |
By Theorem 1.4, for any and , there exist and such that for all
(3.24) |
We fix and use Chebyshev’s inequality in conjunction with Proposition 2.8 to conclude
(3.25) |
where the last inequality follows for all large . Combining (3.24), (3.25) and substituting those into the last line of (3.2.2) shows
where the last inequality holds since for all . This provides an upper bound on the first term in the right hand side of (3.22). The second term can be bounded above by using the first inequality of (3.25) and the union bound. As a result, we obtain the following
(3.26) |
Hence,
Note that the summation of the right hand side of the above inequality over is finite. By the Borel-Cantelli lemma, the following holds
(3.27) |
with probability for all sufficiently large . As in the proof of Theorem 1.1(a), (3.27) shows that eventually exceeds with probability . From Proposition 2.6, it now follows that with probability , implying that
Letting to in the above display completes the proof.
3.2.3. Proof of Proposition 3.3
We need to bound the probability of getting smaller than as varies in the set . The first step towards such an upper bound is to localize in smaller boxes of bounded size. In each local box, one can control the probability of taking large jumps using the available tail probability of the one-point distribution and the local modulus of continuity of the parabolic Anderson equation. All of these upper bounds on the probabilities of the events restricted to the smaller boxes are finally combined to complete the proof. Although our proof is motivated by [KKX18, Proposition 3.14], there is a striking difference between our techniques for the valleys and those used in [KKX18] for the peaks. In the latter case, there was a priori knowledge of the tail bounds of the supremum of the solution on an interval in the spatial direction (see [Che16, Theorem 5.1], [KKX18, Proposition 3.14]). However, deriving such a tail bound for the infimum is far more challenging. We bypass this difficulty by a suitable use of the one-point lower tail together with the spatio-temporal modulus of continuity.
Fix and . We fix some large and which will be specified later. Let and be the least integers satisfying and . Define
for any integers , where and Note that and . By the union bound,
(3.28) | ||||
The proof will be completed by showing that and are both bounded above by for some constant . The first term will be bounded above using Proposition 2.3 and the second term will be bounded using the tail bound on the spatio-temporal modulus of continuity of . We provide the details below. Applying the union bound for , we have
where we have used and Proposition 2.3 to obtain the second inequality. The last inequality follows for all sufficiently large since . This shows the upper bound on .
We now proceed to bound . By [KKX18, Lemma 3.13], there exists such that for all , and ,
(3.29) |
where depends only on . Since the lengths of and are less than , we have
(3.30) |
Applying the union bound and the above inequality yields
where the last inequality follows since . By choosing and such that , the last line of the above display can be bounded above by for all large with some constant . This provides the upper bound on . Substituting the upper bounds on and into the right hand side of (3.28) completes the proof.
4. Tail Estimates
The main goal of this section is to prove Theorem 1.4. Apart from this, we also provide the proofs of Propositions 2.2 and 2.3 towards the end of this section. Before proceeding to the main technical body of this section, we will recall some known facts and introduce a few notations. Recall that the Cole-Hopf solution of the KPZ equation is none other than the logarithm of the solution of (1.1). When started from the Dirac delta initial measure, the logarithm of the solution of (1.1) corresponds to the solution of the KPZ equation started from the narrow wedge initial data. We will denote the latter accordingly and define
(4.1)
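The Cole-Hopf relation invoked here can be recorded as a formal computation (formal only: the product of the solution with white noise requires renormalization, and the Itô correction is suppressed). Writing the logarithm of the solution of (1.1) as $h = \log u$ and applying the chain rule,

```latex
% Formal Cole--Hopf computation: h = log u maps (1.1) to the KPZ equation (1.3).
\begin{aligned}
\partial_t u &= \tfrac{1}{2}\,\partial_x^2 u + u\,\xi, \qquad h := \log u,\\[2pt]
\partial_t h &= \frac{\partial_t u}{u}
  = \frac{1}{2}\,\frac{\partial_x^2 u}{u} + \xi,\\[2pt]
\partial_x^2 h &= \frac{\partial_x^2 u}{u} - \Big(\frac{\partial_x u}{u}\Big)^{2}
  = \frac{\partial_x^2 u}{u} - (\partial_x h)^{2},\\[2pt]
\partial_t h &= \tfrac{1}{2}\,\partial_x^2 h + \tfrac{1}{2}\,(\partial_x h)^{2} + \xi .
\end{aligned}
```

This is the sense in which the lower tail of the parabolic Anderson equation is governed by the lower tail of the KPZ height function studied below.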
Proposition 4.1 (Theorem 1.1 of [CG20a]).
Fix and . Then, there exists and such that for all and ,
(4.2)
Proposition 4.2 (Proposition 1.10 of [CG20b]).
For any , there exist and such that, for , and ,
(4.3)
Proposition 4.3 (Proposition 4.2 of [CGH21]).
For any and , there exist and such that, for and ,
Now we proceed to prove Theorem 1.4.
4.1. Proof of Theorem 1.4
We use Proposition 2.1 to prove this theorem. By the convolution formula (see (2.1)), it suffices to show
(4.4)
for all large . Recall that there exists such that for all . Fix . It is straightforward to check that there exists such that
(4.5)
for all . In the last equality, we used the change of variable . Thanks to the above display, proving (4.4) boils down to showing the following result, which is the main technical contribution of this section.
Theorem 4.4.
For , and , there exists and such that for ,
(4.6)
Remark 4.5.
In (4.5), Theorem 1.4 is reduced to the relatively simpler problem of finding a lower bound on the lower tail probability of . A similar reduction is possible whenever the function defined by satisfies conditions similar to those in Definition 1.1 of [CG20b]. This implies that Theorem 1.4 can be extended to a large class of initial data, since the remaining steps in our proof do not need any specification of .
To prove Theorem 4.4, we need the following three propositions. The first shows that the supremum of in the -variable is attained with high probability in a compact interval. The second provides a tail bound on the fluctuations over a compact interval, and the last is a generalization of the FKG-type inequality for the KPZ equation found in [CQ13]. We first state these propositions. This will be followed by the proof of Theorem 4.4. Thereafter, we will complete the proofs of the propositions in the three ensuing subsections.
Proposition 4.6.
For any and , there exists , such that for and ,
(4.7)
Proposition 4.7.
Fix and . For any , there exists , such that for and
(4.8)
Proposition 4.8.
Suppose and are disjoint intervals in . Then for all and , we have
(4.9)
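The FKG-type inequality asserted here, in the form $\mathbb{P}(A \cap B) \ge \mathbb{P}(A)\,\mathbb{P}(B)$ for two increasing (or two decreasing) events under positive association, can be checked exactly in the simplest Gaussian instance. The sketch below is purely illustrative (a two-dimensional Gaussian toy, not the KPZ field); it uses the classical closed-form orthant probability $\mathbb{P}(X>0,\,Y>0) = \tfrac14 + \tfrac{\arcsin\rho}{2\pi}$ for a standard bivariate normal with correlation $\rho$.

```python
import math

def orthant_prob(rho: float) -> float:
    """Exact P(X > 0, Y > 0) for a standard bivariate normal with correlation rho."""
    return 0.25 + math.asin(rho) / (2.0 * math.pi)

# For rho >= 0 the joint probability dominates the product of the marginals,
# P(X > 0) * P(Y > 0) = 1/4, as predicted by an FKG-type inequality for the
# two increasing events {X > 0} and {Y > 0}.
for rho in (0.0, 0.3, 0.5, 0.9):
    assert orthant_prob(rho) >= 0.25
```

In the proof of Theorem 4.4 below, the analogous inequality lets one lower bound the probability of a simultaneous event over a union of intervals by the product of the per-interval probabilities.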
4.1.1. Proof of Theorem 4.4
Choose and fix an arbitrary . Let us define , where is the same as in Theorem 4.4. Note that
(4.10)
where
By Proposition 4.6, we may bound from above by whenever and , where and depend only on some fixed . When is sufficiently large that exceeds , we can write
Now we proceed to find a lower bound on . To this end, we decompose the interval into smaller intervals , where
with . Applying the FKG inequality of Proposition 4.8 shows that
(4.11)
Note that for each . On each interval , by the union bound, we obtain
(4.12)
We may use Proposition 4.1 to bound from below the first term on the right-hand side of the above inequality. To this end, we get, for large and ,
Since , the right-hand side is bounded below by for all large . To bound the second term, we use Proposition 4.7. Letting and applying the stationarity of the spatial process shows
where the last inequality follows by applying Proposition 4.7. Combining the bounds on both terms on the right side of (4.12), we may write
(4.13)
for all large . This provides a lower bound to each of the terms in the product of (4.11). Substituting those lower bounds into (4.11) and recalling that yields
for all large where is a positive constant which does not depend on or . Putting the lower bound on and the upper bound on together into (4.10) shows
(4.14)
for all large . Notice that . This completes the proof since is an arbitrary constant.
4.1.2. Proof of Proposition 4.6
Note that (4.7) will be proved once we show the following bound
Fix and choose a constant such that . By the union bound, we may write
(4.15)
By Proposition 4.1, we may bound by for all , where is an absolute constant which depends only on . It remains to bound the first term on the right-hand side of the above display. To this end, denoting , we write
where the last inequality follows from Proposition 4.3 for all and . Combining the above bounds and substituting them into (4.15) completes the proof.
4.1.3. Proof of Proposition 4.7
We will prove this result using Propositions 4.1 and 4.2 of [DG21]. Properly identifying the notation used in this section with that used in [DG21, Propositions 4.1, 4.2], we see that there exist and such that for any ,
Notice that the above inequalities hold (with possibly a different constant ) if we replace . The effect of these substitutions can be summarized in the following inequality: there exists depending on such that for all and ,
Now (4.8) follows from the above display by the stationarity of the spatial process of .
4.1.4. Proof of Proposition 4.8
By the FKG inequality for the KPZ equation (see Proposition 1 in [CQ13] or Proposition 2.7 in [CGH21]), we may write that for any , , , and ,
Suppose are the first terms of an enumeration of the rational numbers in and, similarly, are the first terms of the rational enumeration in . Letting and shows
Since is almost surely continuous in for any , (4.9) follows immediately from the above display.
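The limiting step of this proof, passing from the first finitely many rationals to the whole interval via almost-sure continuity, can be mimicked by a deterministic toy computation (illustrative only; the function below is an arbitrary continuous stand-in, not the KPZ field): for a continuous function, the infimum over nested rational grids decreases to the infimum over the interval.

```python
from fractions import Fraction

def inf_over_grid(f, n: int) -> float:
    """Infimum of f over the rational grid k/n, k = 0..n, inside [0, 1]."""
    return min(f(float(Fraction(k, n))) for k in range(n + 1))

# Continuous stand-in with infimum 0 at the irrational-free point 1/3,
# which lies on none of the nested grids n = 10, 100, 1000.
f = lambda x: (x - 1.0 / 3.0) ** 2
vals = [inf_over_grid(f, n) for n in (10, 100, 1000)]

# Grid infima decrease along the nested refinements and approach the true
# infimum 0, just as the events over finitely many rationals decrease to
# the event over the full interval.
assert vals[0] > vals[1] > vals[2] > 0.0
```

Without continuity this passage to the limit would fail, which is why the almost-sure continuity of the KPZ height function is invoked at the last step.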
4.2. Proof of Proposition 2.2
To prove this result, we use Theorem 1.4 of [CG20a], which provides upper and lower bounds on the upper tail probabilities of the Cole-Hopf solution of the KPZ equation started from a large class of initial data, including any bounded initial data. Since the Cole-Hopf solution is the logarithm of the solution of (1.1), those tail probability bounds also apply to the parabolic Anderson equation. To apply [CG20a, Theorem 1.4], the initial data needs to satisfy the two conditions of Definition 1.1 of [CG20a]. The first condition requires to be bounded above by a parabola in the -variable for all , and the second requires to be bounded below by a finite constant in an interval around . If is a bounded positive initial data, then the above two conditions are satisfied for . Hence, by Theorem 1.4 of [CG20a], for any , there exist and such that for all and ,
(4.16)
From the above inequalities, (2.2) for follows by substituting . Since is a bounded positive initial data, the inequalities in (4.16) also hold for , where the corresponding constants do not depend on . The reason behind this fact lies in the convolution formula (2.1) for the one-point distribution of , and the proof can be completed using a similar argument as in [CG20a, Theorem 1.4]. Once (4.16) holds for , we obtain (2.2) for by the appropriate substitution of by as prescribed above. This completes the proof.
4.3. Proof of Proposition 2.3
Acknowledgments
The first author thanks Sayan Das for many useful conversations and numerous suggestions on the first draft of this paper. The second author thanks Professor Kunwoo Kim for valuable discussions with his continuous support and encouragement. The second author was supported by the NRF (National Research Foundation of Korea) grants 2019R1A5A1028324 and 2020R1A2C4002077.
References
- [BC95] L. Bertini and N. Cancrini. The stochastic heat equation: Feynman-Kac formula and intermittence. Journal of Statistical Physics, 78(5):1377–1401, 1995.
- [BT89] M. T. Barlow and S. J. Taylor. Fractional dimension of sets in discrete spaces. Journal of Physics A: Mathematical and General, 22(13):2621, 1989.
- [BT92] M. T. Barlow and S. J. Taylor. Defining fractal subsets of . Proceedings of the London Mathematical Society, 3(1):125–152, 1992.
- [CC19] M. Cafasso and T. Claeys. A Riemann-Hilbert approach to the lower tail of the KPZ equation. arXiv preprint arXiv:1910.02493, 2019.
- [CCKK17] L. Chen, M. Cranston, D. Khoshnevisan, and K. Kim. Dissipation and high disorder. The Annals of Probability, 45(1):82–99, 2017.
- [CD15] L. Chen and R. C. Dalang. Moments and growth indices for the nonlinear stochastic heat equation with rough initial conditions. The Annals of Probability, 43(6):3006–3051, 2015.
- [CG20a] I. Corwin and P. Ghosal. KPZ equation tails for general initial data. Electronic Journal of Probability, 25:1–38, 2020.
- [CG20b] I. Corwin and P. Ghosal. Lower tail of the KPZ equation. Duke Mathematical Journal, 169(7):1329–1395, 2020.
- [CGH21] I. Corwin, P. Ghosal, and A. Hammond. KPZ equation correlations in time. The Annals of Probability, 49(2):832–876, 2021.
- [CH16] I. Corwin and A. Hammond. KPZ line ensemble. Probability Theory and Related Fields, 166(1):67–185, 2016.
- [Che15] X. Chen. Precise intermittency for the parabolic Anderson equation with an -dimensional time–space white noise. In Annales de l’IHP Probabilités et statistiques, volume 51, pages 1486–1499, 2015.
- [Che16] X. Chen. Spatial asymptotics for the parabolic Anderson models with generalized time-space Gaussian noise. Ann. Probab., 44(2):1535–1598, 2016.
- [CHN16] L. Chen, Y. Hu, and D. Nualart. Regularity and strict positivity of densities for the nonlinear stochastic heat equation. arXiv preprint arXiv:1611.03909, 2016.
- [CJK13] D. Conus, M. Joseph, and D. Khoshnevisan. On the chaotic character of the stochastic heat equation, before the onset of intermittency. The Annals of Probability, 41(3B):2225–2260, 2013.
- [CK17] L. Chen and K. Kim. On comparison principle and strict positivity of solutions to the nonlinear stochastic fractional heat equations. In Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, volume 53, pages 358–388. Institut Henri Poincaré, 2017.
- [CM94] R. A. Carmona and S. A. Molchanov. Parabolic Anderson problem and intermittency. Mem. Amer. Math. Soc., 108(518):viii+125, 1994.
- [CQ13] I. Corwin and J. Quastel. Crossover distributions at the edge of the rarefaction fan. The Annals of Probability, 41(3A):1243–1314, 2013.
- [DG21] S. Das and P. Ghosal. Law of iterated logarithms and fractal properties of the KPZ equation. arXiv preprint arXiv:2101.00730, 2021.
- [DOV18] D. Dauvergne, J. Ortmann, and B. Virag. The directed landscape. arXiv e-prints, page arXiv:1812.00309, December 2018.
- [Flo14] G. R. M. Flores. On the (strict) positivity of solutions of the stochastic heat equation. The Annals of Probability, 42(4):1635–1643, 2014.
- [GD05] J. D. Gibbon and C. R. Doering. Intermittency and regularity issues in 3D Navier-Stokes turbulence. Arch. Ration. Mech. Anal., 177(1):115–150, 2005.
- [GIP15] M. Gubinelli, P. Imkeller, and N. Perkowski. Paracontrolled distributions and singular PDEs. Forum Math. Pi, 3:e6, 75, 2015.
- [GJ14] P. Gonçalves and M. Jara. Nonlinear fluctuations of weakly asymmetric interacting particle systems. Arch. Ration. Mech. Anal., 212(2):597–644, 2014.
- [GKM07] J. Gärtner, W. König, and S. Molchanov. Geometric characterization of intermittency in the parabolic Anderson model. Ann. Probab., 35(2):439–499, 2007.
- [GL20] P. Ghosal and Y. Lin. Lyapunov exponents of the SHE for general initial data. arXiv e-prints, page arXiv:2007.06505, July 2020.
- [GP17] M. Gubinelli and N. Perkowski. KPZ reloaded. Comm. Math. Phys., 349(1):165–269, 2017.
- [GT05] J. D. Gibbon and E. S. Titi. Cluster formation in complex multi-scale systems. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 461(2062):3089–3097, 2005.
- [Hai13] M. Hairer. Solving the KPZ equation. Ann. of Math. (2), 178(2):559–664, 2013.
- [KKMS20] D. Khoshnevisan, K. Kim, C. Mueller, and S.-Y. Shiu. Dissipation in parabolic SPDEs. Journal of Statistical Physics, 179(2):502–534, 2020.
- [KKX17] D. Khoshnevisan, K. Kim, and Y. Xiao. Intermittency and multifractality: A case study via parabolic stochastic PDEs. The Annals of Probability, 45(6A):3697–3751, 2017.
- [KKX18] D. Khoshnevisan, K. Kim, and Y. Xiao. A macroscopic multifractal analysis of parabolic stochastic PDEs. Communications in Mathematical Physics, 360(1):307–346, 2018.
- [KLMS09] W. König, H. Lacoin, P. Mörters, and N. Sidorova. A two cities theorem for the parabolic Anderson model. Ann. Probab., 37(1):347–392, 2009.
- [Kup16] A. Kupiainen. Renormalization group and stochastic PDEs. Ann. Henri Poincaré, 17(3):497–535, 2016.
- [MN08] C. Mueller and D. Nualart. Regularity of the density for the stochastic heat equation. Electronic Journal of Probability, 13:2248–2258, 2008.
- [MQR16] K. Matetski, J. Quastel, and D. Remenik. The KPZ fixed point. arXiv e-prints, page arXiv:1701.00018, December 2016.
- [Mue91] C. Mueller. On the support of solutions to the heat equation with noise. Stochastics: An International Journal of Probability and Stochastic Processes, 37(4):225–245, 1991.
- [Qua12] J. Quastel. Introduction to KPZ. In Current developments in mathematics, 2011, pages 125–194. Int. Press, Somerville, MA, 2012.
- [Tsa18] L.-C. Tsai. Exact lower tail large deviations of the KPZ equation. arXiv preprint arXiv:1809.03410, 2018.
- [Wal86] J. B. Walsh. An introduction to stochastic partial differential equations. In École d’Été de Probabilités de Saint Flour XIV-1984, pages 265–439. Springer, 1986.
- [ZTPSM00] M. G. Zimmermann, R. Toral, O. Piro, and M. San Miguel. Stochastic spatiotemporal intermittency and noise-induced transition to an absorbing phase. Phys. Rev. Lett., 85:3612–3615, Oct 2000.