
Fractal Geometry of the Valleys of the Parabolic Anderson Equation

Promit Ghosal, Department of Mathematics, Massachusetts Institute of Technology (MIT), 77 Massachusetts Avenue, Cambridge, MA 02139, USA
[email protected]
and Jaeyun Yi, Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk 37673, South Korea
[email protected]
Abstract.

We study the macroscopic fractal properties of the deep valleys of the solution of the $(1+1)$-dimensional parabolic Anderson equation

$\begin{cases}\frac{\partial}{\partial t}u(t,x)=\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}u(t,x)+u(t,x)\dot{W}(t,x), & t>0,\ x\in\mathds{R},\\ u(0,x)\equiv u_{0}(x), & x\in\mathds{R},\end{cases}$

where $\dot{W}$ is the space-time white noise and $0<\inf_{x\in\mathds{R}}u_{0}(x)\leq\sup_{x\in\mathds{R}}u_{0}(x)<\infty$. Unlike the macroscopically multifractal tall peaks, we show that the valleys of the parabolic Anderson equation are macroscopically monofractal. In fact, the macroscopic Hausdorff dimension (introduced by Barlow and Taylor [BT89, BT92]) of the valleys undergoes a phase transition at a point which does not depend on the initial data. The key tool of our proof is a lower bound on the lower tail probability of the parabolic Anderson equation. Such a lower bound is obtained for the first time in this paper and is derived by utilizing the connection between the parabolic Anderson equation and the Kardar-Parisi-Zhang (KPZ) equation. Our techniques for proving this lower bound can be extended to other models in the KPZ universality class, including the KPZ fixed point.

Keywords: Parabolic Anderson models, KPZ equation, macroscopic Hausdorff dimension.

AMS 2010 subject classification: Primary. 60H15; Secondary. 35R60, 60K37.

1. Introduction

We consider the parabolic Anderson equation

$\begin{cases}\frac{\partial}{\partial t}u(t,x)=\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}u(t,x)+u(t,x)\dot{W}(t,x), & t>0,\ x\in\mathds{R},\\ u(0,x)\equiv u_{0}(x), & x\in\mathds{R},\end{cases}$  (1.1)

where $\dot{W}$ is the space-time white noise and the initial datum $u_{0}\in C^{0}(\mathds{R})$ is bounded and positive, i.e.,

$0<\inf_{x\in\mathds{R}}u_{0}(x)\leq\sup_{x\in\mathds{R}}u_{0}(x)<\infty.$  (1.2)

The solution theory of (1.1) is standard and was accomplished via Itô calculus or the martingale problem. The existence and uniqueness of the solution of (1.1) under such initial conditions follow from [BC95, Theorem 2.2] (see also [Qua12, Section 3.3]). Thanks to [Mue91], the solution of (1.1) is strictly positive for all $t>0$ when $u_{0}$ is a positive initial datum. The logarithm of the solution of (1.1) formally solves the Kardar-Parisi-Zhang (KPZ) equation, which is written as follows:

$\partial_{t}\mathcal{H}(t,x)=\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\mathcal{H}(t,x)+\frac{1}{2}\Big(\frac{\partial}{\partial x}\mathcal{H}(t,x)\Big)^{2}+\dot{W}(t,x).$  (1.3)

The KPZ equation is the canonical stochastic PDE in the KPZ universality class. The solution theory of the KPZ equation has been approached in the recent past via several techniques, namely regularity structures [Hai13], paracontrolled stochastic PDEs [GIP15, GP17], the energy solution method [GJ14], and renormalization group techniques [Kup16]. The solutions constructed in those works are consistent with the logarithm of the solution of (1.1). The latter is the physically relevant solution of the KPZ equation and is often called the Cole-Hopf solution.
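For intuition only, (1.1) can be explored numerically. The sketch below discretizes (1.1) with an explicit Euler-Maruyama scheme on a periodic grid started from flat data; all grid parameters (`dx`, `dt`, the time horizon) are our own illustrative choices, and this is not the rigorous solution theory discussed above.

```python
import numpy as np

def pam_step(u, dt, dx, rng):
    """One explicit Euler-Maruyama step for (1.1) on a periodic grid:
    du = (1/2) u_xx dt + u dW, with dW ~ N(0, dt/dx) per cell
    (a lattice discretization of space-time white noise)."""
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
    noise = rng.normal(0.0, np.sqrt(dt / dx), size=u.shape)
    return u + 0.5 * lap * dt + u * noise

rng = np.random.default_rng(0)
dx, dt = 0.1, 1e-4           # dt << dx^2, so the explicit scheme is stable
u = np.ones(200)             # flat initial data u_0 ≡ 1, as in (1.2)
for _ in range(2000):        # evolve up to t = 0.2
    u = pam_step(u, dt, dx, rng)
```

Even at this short horizon the field develops an uneven profile; the large-$t$ geometry of such peaks and valleys is the subject of this paper.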

The main objective of this paper is to study the ‘gaps’ between the tall peaks of the parabolic Anderson equation. The tall peaks of the solution of the parabolic Anderson equation trigger exponential growth of the moments of the one-point distributions. When the initial data $u_{0}$ is non-random and satisfies (1.2), [Che15] showed that

$\lim_{t\to\infty}\frac{1}{t}\log\mathds{E}[u(t,x)^{k}]=\frac{k(k^{2}-1)}{24},\quad\forall k\in\mathds{Z}_{>0}.$  (1.4)

The above result showcases the ‘intermittency’ (cf. [CM94]) of the parabolic Anderson equation. Motivated by this result and its analogues for other stochastic PDEs with multiplicative noise, [KKX17, KKX18] studied the macroscopic fractality of the spatio-temporal tall peaks of the solution for a large collection of parabolic stochastic PDEs including the parabolic Anderson equation. They showed that the values of the macroscopic Hausdorff dimension of the tall peaks are distinct and nontrivial as the length scale and stretch factor vary, a property which characterizes multifractality. This is in stark contrast with the case of Brownian motion, where the tall peaks have constant macroscopic Hausdorff dimension across different length scales (see [KKX17, Theorem 1.4]).
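The exponents in (1.4) can be tabulated directly. The sketch below (the helper name `moment_exponent` is ours, purely for illustration) computes $\lambda(k)=k(k^{2}-1)/24$ exactly and checks the intermittency criterion that $k\mapsto\lambda(k)/k$ is strictly increasing, which is the moment signature of the tall peaks.

```python
from fractions import Fraction

def moment_exponent(k):
    """Lyapunov exponent lambda(k) = k(k^2 - 1)/24 from (1.4)."""
    k = Fraction(k)
    return k * (k * k - 1) / 24

# Intermittency (cf. [CM94]): lambda(k)/k is strictly increasing in k,
# so higher moments grow disproportionately fast.
ratios = [moment_exponent(k) / k for k in range(1, 7)]
assert all(a < b for a, b in zip(ratios, ratios[1:]))
```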

The study of the peaks of the parabolic Anderson model on $\mathds{Z}^{d}$ has seen many innovations in the recent past. As we have hinted above, those works were built on connections with the geometry of intermittency, which reveals that the total mass of the parabolic Anderson model on $\mathds{Z}^{d}$ is concentrated on small islands of peaks which are well separated from each other. However, the geometry of the solution in the space between those islands remains a mystery. Despite many inspiring works on the tall peaks, there has hardly been any study focusing on the so-called valleys, or gaps between the tall peaks. Our main goal is to show that the spatio-temporal valleys of the parabolic Anderson equation exhibit a feature quite different from the peaks: the (macroscopic) Hausdorff dimensions of the associated level sets undergo a phase transition.

The main object of our study is the spatio-temporal level sets of the valleys,

$\mathcal{V}(\gamma):=\left\{(t,x)\in(e,\infty)\times\mathds{R}\,:\,u(t,x)<e^{-\gamma t}\right\},$  (1.5)

for every $\gamma>0$. Here, $\gamma$ is called the length scale. From [CG20b], it is known that $\log u(t,0)$ decays linearly in $t$ as $t$ grows large. In light of this fact, we may say that $\gamma$ captures the average rate of decay of the Cole-Hopf solution of the KPZ equation. For every $\beta>0$, we define $\mathcal{S}_{\beta}:\mathds{R}_{+}\times\mathds{R}\rightarrow(1,\infty)\times\mathds{R}$ by

$\mathcal{S}_{\beta}(t,x):=\big(e^{t/\beta},x\big)\quad\text{for all }(t,x)\in\mathds{R}_{+}\times\mathds{R}.$

Applying $\mathcal{S}_{\beta}$ to a square box produces a non-linear stretching in the time direction; the extent of the stretching is determined by $\beta$, which we call the stretch factor.
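For concreteness, here is a minimal sketch of the map $\mathcal{S}_{\beta}$ (the function name `stretch` is our own, for illustration):

```python
import math

def stretch(beta, t, x):
    """S_beta(t, x) = (e^{t/beta}, x): exponential stretch in time,
    identity in space."""
    return (math.exp(t / beta), x)

# Time extent of the image of the box [0, 1]^2 under S_beta:
def time_extent(beta):
    return stretch(beta, 1.0, 0.0)[0] - stretch(beta, 0.0, 0.0)[0]
```

For example, the box $[0,1]^{2}$ is stretched in time onto $[1,e^{1/\beta}]\times[0,1]$, so a smaller stretch factor $\beta$ means a more pronounced stretch.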

We seek to study the fractal nature of the level sets $\mathcal{S}_{\beta}(\mathcal{V}(\gamma))$ as $t,x$ get large. The fractal nature of the peaks of the parabolic Anderson equation was quantified in [KKX17, KKX18] via Barlow-Taylor's macroscopic Hausdorff dimension. Motivated by those works, we aim to determine the macroscopic Hausdorff dimension of $\mathcal{S}_{\beta}(\mathcal{V}(\gamma))$. A precise definition of the macroscopic Hausdorff dimension of a set is given in Section 2.2. For any set $A\subset\mathds{R}^{2}$, we denote its macroscopic Hausdorff dimension by $\mathrm{Dim}_{\mathds{H}}[A]$.

We are now ready to state the main result which will show that the macroscopic Hausdorff dimension of the valleys of the parabolic Anderson equation stays the same when we vary the stretch factor. However, it will undergo a sharp phase transition when we vary the length scale.

Theorem 1.1.

Consider the solution of (1.1) where $u_{0}$ satisfies (1.2). Then, for every $\beta>0$,

  • (a) $\mathrm{Dim}_{\mathds{H}}[\mathcal{S}_{\beta}(\mathcal{V}(\gamma))]\stackrel{a.s.}{=}2\quad\text{for }0<\gamma<\frac{1}{24},$
  • (b) $\mathrm{Dim}_{\mathds{H}}[\mathcal{S}_{\beta}(\mathcal{V}(\gamma))]\stackrel{a.s.}{=}1\quad\text{for }\gamma>\frac{1}{24}.$
Remark 1.2.

Even though Theorem 1.1 will be proved only for bounded positive initial data, we believe that the same result should hold for a large class of initial data, including the case when $u_{0}$ is the Dirac delta measure. The proof techniques will remain almost the same except in a few places, which are pointed out in Section 1.1. Furthermore, it may be possible to extend Theorem 1.1 to other parabolic stochastic PDEs if a few key estimates, such as the ones shown in Theorem 1.4 and Propositions 2.2 and 2.3, can be obtained in those cases.

Theorem 1.1 shows that for any length scale $\gamma<\frac{1}{24}$ and any stretch factor $\beta>0$, the macroscopic Hausdorff dimension of the valleys of the solution of (1.1) remains constant at the value $2$. The macroscopic dimension of the valleys changes to $1$ only when the length scale $\gamma$ exceeds $\frac{1}{24}$. This is in stark contrast with the fractal geometry of the peaks as revealed in [KKX18]. Contrary to the valleys, the macroscopic Hausdorff dimension of the peaks of (1.1) varies non-trivially with the stretch and length scales (see [KKX18, Theorem 1.1]). This last property marks the multifractality of the peaks, whereas Theorem 1.1 is a sign of the monofractality of the valleys.

Theorem 1.1 in conjunction with [KKX18, Theorem 1.1] signals an apparent dichotomy between the peaks and the valleys, which resonates with the geometry of intermittency of the parabolic Anderson model (PAM) on $\mathds{Z}^{d}$. A large number of past works, including [CM94, GKM07, KLMS09], point to the fact that most of the mass in PAM is concentrated on very high peaks and that those peaks are well separated by long stretches of trapped valleys. These distinctive features of the peaks and valleys in PAM are expected to be present in the case of the parabolic Anderson equation. In fact, [CCKK17] showed that if the initial data decays at infinity faster than the Gaussian kernel, then the total mass of the solution of the parabolic Anderson equation dissipates, that is, it vanishes sub-exponentially as $t\rightarrow\infty$. When (1.1) is considered on the torus instead of $\mathds{R}$, [KKMS20] proved, among other things, that the supremum of the solution is localized in space. In combination with these works, Theorem 1.1 hints at the fact that the macroscopically tall peaks of (1.1) are highly concentrated on small islands, which helps to create many large gaps, or valleys, between them.

We would also like to point out that the monofractality of the valleys of (1.1) does not hold when the multiplicative noise $u\dot{W}$ of (1.1) is replaced with the additive noise $\dot{W}$. In the latter case, the solution is a mean zero Gaussian process, and [KKX18, Theorem 4.1] showed that the tall peaks of that Gaussian process are still multifractal. Due to the symmetry between the peaks and valleys of a Gaussian process, the valleys in the additive noise case are also multifractal. This attests to the fact that the monofractality of the valleys is intrinsic to systems with multiplicative noise (see [GD05, GT05, ZTPSM00]).

As one may see, Theorem 1.1 does not cover the case $\gamma=\frac{1}{24}$. Since the transition of the macroscopic dimension occurs at $\gamma=\frac{1}{24}$, we believe that one would be able to see a crossover of the dimension by taking $\gamma=\frac{1}{24}+f(t)$ for some function $f:\mathds{R}_{>0}\to\mathds{R}$ such that $f(t)/t\to 0$ as $t\to\infty$. More precisely, we expect the following conjecture to be true.

Conjecture 1.3.

Consider the solution of (1.1) started from a bounded initial data $u_{0}$. Then,

$\mathrm{Dim}_{\mathds{H}}\Big(\mathcal{S}_{\beta}\Big(\big\{(t,x)\in(e,\infty)\times\mathds{R}:u(t,t^{2/3}x)\leq e^{-\frac{t}{24}-\alpha(t\log\log t)^{1/3}}\big\}\Big)\Big)\stackrel{a.s.}{=}2-C\alpha^{3}$  (1.6)

for all $\alpha\in\big[0,C^{-\frac{1}{3}}\big]$, where the constant $C>0$ depends on $\beta$ and the initial data $u_{0}$.

Conjecture 1.3 speculates that the macroscopic dimension of $\mathcal{S}_{\beta}(\mathcal{V}(\frac{1}{24}))$ is indeed $2$ and that it falls off smoothly to $1$ before $\gamma$ reaches any fixed value greater than $\frac{1}{24}$. Furthermore, the nontrivial dependence of the dimension on the stretch factor and the length scale shown in (1.6) points to the multifractality of the valleys of (1.1) at the crossover. The scaling of the spatial coordinate $x$ of $u$ by $t^{2/3}$ stems from the KPZ scaling, i.e., the $1:2:3$ ratio between the scaling exponents of fluctuation, space and time. Conjecture 1.3 is motivated by a recent work [DG21] where the authors studied the macroscopic Hausdorff dimension of the peaks of the KPZ equation at the onset of its (macroscopic) convergence towards the KPZ fixed point under the KPZ scaling. In [DG21] (see the discussion after Theorem 1.3), one may find a claim similar to (1.6) for the valleys. However, so far those results can only be proved for the narrow wedge initial data of the KPZ equation, which corresponds to $u_{0}$ being the Dirac delta measure at $0$. Proving Conjecture 1.3 requires a substantial understanding of the geometry of the KPZ equation under general initial data, which we hope to explore in a future work.

One of the necessary components for studying the valleys of any stochastic process is a precise estimate on the probability of the process taking small values, namely, the lower tail probability. While some recent works indeed revealed detailed information on such estimates for the parabolic Anderson equation, the degree of precision varies with the initial data; see below for a review of those tail probability estimates. Even though those tail estimates instigated new interest in different lines of research, one of the prominent questions, which will indeed play a key role in proving our results, remained unanswered: namely, giving coherent lower bounds on the probability of the solution of (1.1) taking small values. While such bounds are available when the initial data is the Dirac delta measure (see Theorem 1.1 of [CG20b]), not much is known for bounded initial data. Our second result, stated below, partially fills this gap.

Theorem 1.4.

Suppose that $\log u_{0}(\cdot)$ is a bounded measurable function. For any $\gamma>\frac{1}{24}$ and $\epsilon>0$, there exist $c=c(\gamma,\epsilon)>0$ and $t_{0}=t_{0}(\gamma,\epsilon)>0$ such that for all $t\geq t_{0}$,

$\mathds{P}\big(u(t,x)\leq e^{-\gamma t}\big)\geq e^{-ct^{4+\epsilon}}.$  (1.7)
Remark 1.5.

We believe that the exponent of $t$ on the right hand side of (1.7) is not optimal. The lower tail large deviation principle of the KPZ equation suggests that the expected exponent of $t$ is $2$. However, such a result has only been proved when $u_{0}$ is the Dirac delta measure at $0$ (see [Tsa18, CC19]). [CG20b] (see also Proposition 4.1) showed that the exponent of $t$ in the upper bound of the lower tail probability is $2$. Proving a matching exponent in the lower bound seems to be out of reach at the moment. We would also like to stress that Theorem 1.4 can possibly be extended to a large class of initial data. See Remark 4.5 for further discussion.

Tail probabilities are ubiquitous in unlocking the geometric patterns of any stochastic process, and the parabolic Anderson model and the KPZ equation are no exceptions. Since the KPZ equation is the canonical stochastic PDE of the KPZ universality class, the techniques for handling its tail events can in many circumstances be replicated for other random growth models. To this end, the proof of Theorem 1.4, which provides a lower bound on the lower tail probability of the KPZ equation, may well point to useful pathways for solving similar problems in related situations. Intrinsically, Theorem 1.4 ensures that the solution of (1.1) cannot abruptly become zero. In that regard, Theorem 1.4 complements [CHN16, Theorem 1.4], which proves that the law of $u(t,x)$ admits a strictly positive density on $\mathds{R}$.

Early breakthroughs in obtaining lower tail estimates for the solution of (1.1) were achieved in [Mue91, MN08] using tools such as large deviation bounds, comparison principles and Malliavin calculus. However, those results were proven only for bounded initial data. While [Flo14] provided the first useful bound on the lower tail probability under the Dirac delta initial measure, [CG20b] obtained estimates which are tight and uniform in time. The tail estimates of [CG20b] were instrumental in [CG20a] for further improving the tail bounds for a large class of initial data, including bounded initial data (see Proposition 2.3). The upper tail probability of (1.1) has been studied in many places, including [CJK13, CD15, KKX17], in regard to its connection with the moments and the intermittency property. Those bounds were recently improved in [CG20a]. See Proposition 2.2 for general initial data and Proposition 4.2 for the Dirac delta initial measure.

Finally, we finish off this section with a few words on the novelty of the present work. One of the prime objectives of this work lies in bridging two parallel lines of research: (a) the study of the fractal geometry of the PAM and related models, and (b) recently discovered tools and techniques for studying the long time behavior of the KPZ equation. For instance, our proof techniques, which are further elaborated in Section 1.1, hinge on two different approaches to studying the solution of (1.1). The first is the construction of a local proxy of the mild solution of (1.1), which is originally based on Walsh's solution theory [Wal86] of stochastic PDEs. The second is based on very modern tools for tackling tail probabilities of the KPZ equation. These tools were developed in the last couple of years by incorporating ideas from random matrix theory, the geometry of random curves such as the KPZ line ensemble (introduced by [CH16]), interacting particle systems, etc. We hope that our results will spur further interest in combining these two approaches to unravel deeper mysteries of the parabolic Anderson equation.

1.1. Proof Ideas

The proofs of Theorems 1.1 and 1.4 are mainly based on a combination of probabilistic arguments and tail estimates obtained in many previous works. The core idea in the proof of Theorem 1.1 lies in the construction of local ‘proxies’ of the solution of (1.1). Those local objects, obtained at different space-time locations, are independent of each other when the locations are far apart, and they are very close to the solution of (1.1); see Proposition 2.8 for more details. The construction of those local proxies was carefully carried out in [KKX17, KKX18] for bounded initial data. While it may be possible to extend that construction to a large class of initial data, including the Dirac delta measure, with some extra work, we do not pursue this direction in the present paper. Using techniques motivated by the works [KKX17, KKX18], our proof summarizes the fractal information of the valleys through those local proxies. The mutual independence and the tail probabilities (obtained via Theorem 1.4 and Propositions 2.2 and 2.3) of those proxies are the main tools of our analysis.

The proof of Theorem 1.4 combines several recently developed tools from [CG20b, CG20a, CGH21, DG21]. The backbone of the argument is a convolution formula which provides a useful integral representation of the one-point distribution of (1.1) started from any reasonable initial data (see Proposition 2.1). This reduces Theorem 1.4 to a much simpler problem for the solution of (1.1) started from the Dirac delta measure (see Theorem 4.4). We use a fresh blend of tail estimates of the latter and monotonicity properties (such as an FKG type inequality, see Proposition 4.8) to prove Theorem 4.4 and, finally, to complete the proof of Theorem 1.4. The above mentioned reduction can be carried out for a large class of initial data, including those which grow linearly in the spatial variable $x$; see Remark 4.5 for more details. Our overall techniques for proving Theorem 1.4 rely heavily on inputs from the Gibbs property of the KPZ line ensemble (via Propositions 4.6 and 4.7) and the monotonicity property of the KPZ equation (via Proposition 4.8). Such properties are also present in many models in the KPZ universality class, including the KPZ fixed point (recently constructed in [MQR16, DOV18]), the asymmetric simple exclusion process (ASEP), last passage percolation, the stochastic six vertex model, etc. We hope that it will be possible to prove lower bounds on the lower tail probabilities of those models using the techniques of the present paper.

Outline

The rest of the paper is organized as follows. Section 2 reviews some preliminary facts and estimates on the tail probabilities of (1.1), and furthermore recalls the definition of, and some useful results on, the macroscopic Hausdorff dimension. Theorem 1.1 is proved in Section 3. Finally, Section 4 contains the proof of Theorem 1.4.

2. Preliminaries

In this section, we introduce a few notations and recall some known facts which are important for our analysis.

2.1. Convolution Formula & Tail Estimates

Recall that the logarithm of the solution of (1.1) is the Cole-Hopf solution of the KPZ equation. When the initial data $u_{0}$ is the Dirac delta measure at $0$, we denote the Cole-Hopf solution by $\mathcal{H}^{\mathbf{nw}}$, where ‘$\mathbf{nw}$’ stands for the narrow wedge initial data of the KPZ equation.

The solution of (1.1) started from the Dirac delta measure is called the fundamental solution. The one-dimensional distributions of $u(t,x)$ started from any initial data can be described in terms of the fundamental solution via the convolution formula. The precise statement is as follows.

Proposition 2.1 (Convolution Formula, Lemma 1.18 of [CH16]).

For any measurable function $u_{0}:\mathds{R}\to\mathds{R}_{>0}$ and any fixed $(t,x)\in\mathds{R}_{\geq 0}\times\mathds{R}$, the unique solution of (1.1) started from $u_{0}$ satisfies

$\log u(t,x)\overset{\mathrm{d}}{=}\log\left(\int_{-\infty}^{\infty}u_{0}(x-y)e^{\mathcal{H}^{\mathbf{nw}}(t,y)}\,dy\right).$  (2.1)

In some recent works [CG20a, GL20], the convolution formula has been utilized to derive useful bounds on the probability of $u(t,x)$ being too large or too small. In the next two propositions, we summarize those findings. For the sake of completeness, we provide short proofs of these propositions in Sections 4.2 and 4.3, respectively.

Proposition 2.2 (Upper Tail Estimates).

Fix $\gamma<\frac{1}{24}$. There exist $c_{1}>c_{2}>0$ and $t_{0}>0$, depending on $\gamma$ and the initial data $u_{0}$, such that for all $t>t_{0}$ and $x\in\mathds{R}$,

$e^{-c_{1}(\frac{1}{24}-\gamma)^{3/2}t}\leq\mathds{P}(u(t,x)>e^{-\gamma t})\leq e^{-c_{2}(\frac{1}{24}-\gamma)^{3/2}t}.$  (2.2)
Proposition 2.3 (Lower Tail Estimates).

Fix any small $\delta>0$ and $\gamma>\frac{1}{24}$. There exist $c=c(\gamma,\delta,u_{0})>0$ and $t_{0}=t_{0}(\gamma,\delta,u_{0})>0$ such that for any $t\geq t_{0}$ and $x\in\mathds{R}$,

$\mathds{P}\big(u(t,x)<e^{-\gamma t}\big)\leq e^{-c(\gamma-\frac{1}{24})^{3-3\delta/2}t^{2-\delta}}.$  (2.3)
Remark 2.4.

Notice that (2.3) gives an upper bound on the lower tail probability. As we have mentioned before, the exponents of $t$ in the upper bound of (2.3) and in the lower bound of (1.7) are not close.

2.2. Macroscopic dimension & Localization

In this section, we recall the definition of Barlow-Taylor's macroscopic Hausdorff dimension in $\mathds{R}_{+}\times\mathds{R}$ and some of its properties. The localization technique for the parabolic Anderson model, pioneered by [KKX17], is an important tool for studying macroscopic fractality; we review this tool in Proposition 2.8.

Definition 2.5 (Barlow-Taylor’s macroscopic Hausdorff dimension [BT89, BT92]).

Let $\mathcal{B}_{r}$ be the collection of all sets of the form

$Q(x,r):=[x_{1},x_{1}+r)\times[x_{2},x_{2}+r)$  (2.4)

for $x:=(x_{1},x_{2})\in\mathds{R}_{+}\times\mathds{R}$ and $r\in(0,\infty)$. Define $\mathcal{B}:=\bigcup_{r\geq 1}\mathcal{B}_{r}$. Let $\mathds{V}_{n}:=[0,e^{n})\times[-e^{n},e^{n})$ for $n\in\mathds{Z}_{\geq 0}$ and $\mathds{S}_{n}:=\mathds{V}_{n}\backslash\mathds{V}_{n-1}$ for $n\in\mathds{Z}_{>0}$. For any $E\subset\mathds{R}_{+}\times\mathds{R}$ and $\rho>0$, the $\rho$-dimensional macroscopic Hausdorff content of the set $E$ is denoted $\nu_{n,\rho}(E)$ and defined as

$\nu_{n,\rho}(E):=\inf\sum_{i=1}^{m}\Big(\frac{\mathrm{side}(Q_{i})}{e^{n}}\Big)^{\rho},$  (2.5)

where the infimum is taken over all $Q_{1},\ldots,Q_{m}\in\mathcal{B}$ such that $\bigcup_{i=1}^{m}Q_{i}$ covers $E\cap\mathds{S}_{n}$. Barlow-Taylor's macroscopic Hausdorff dimension of a set $E$, denoted $\mathrm{Dim}_{\mathds{H}}(E)$, is defined as

$\mathrm{Dim}_{\mathds{H}}(E):=\inf\Big\{\rho>0:\sum_{n=1}^{\infty}\nu_{n,\rho}(E)<\infty\Big\}.$  (2.6)
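The content (2.5) involves an infimum over all covers, which is hard to compute exactly. The following sketch, our own illustration rather than anything from the paper, evaluates the upper bound on $\nu_{n,\rho}$ obtained from the particular cover of $E\cap\mathds{S}_{n}$ by unit boxes; it already shows how points are discounted by the shell size $e^{n}$.

```python
import math

def nu_unit_cover(points, n, rho):
    """Upper bound on nu_{n,rho}(E): cover E ∩ S_n by side-1 boxes from B_1,
    where V_n = [0, e^n) x [-e^n, e^n) and S_n = V_n \\ V_{n-1}."""
    en, em = math.e ** n, math.e ** (n - 1)
    in_V = lambda t, x, r: 0 <= t < r and -r <= x < r
    shell = [(t, x) for (t, x) in points
             if in_V(t, x, en) and not in_V(t, x, em)]
    # Each point is covered by the unit grid box containing it.
    boxes = {(math.floor(t), math.floor(x)) for (t, x) in shell}
    return len(boxes) * (1.0 / en) ** rho
```

Taking the infimum over covers with larger boxes can only decrease this value, so the function above is an upper bound on (2.5), not the content itself.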

The following proposition is useful for obtaining lower bounds on the macroscopic Hausdorff dimension.

Proposition 2.6 (Theorem 4 of [BT92]).

Fix $\gamma\in(0,2)$. For any set $E$ and $n\in\mathds{Z}$, define

$\mu_{n}(E)=\sum_{\substack{s\in\mathds{Z}\\ e^{n}<s\leq e^{n+1}}}\ \sum_{0\leq j<e^{n(1-\gamma)}}\mathbf{1}\big((s,j)\in E\big).$  (2.7)

Then, there exists a constant $C>0$ such that $\nu_{n,2-\gamma}(E)\geq Ce^{-(2-\gamma)n}\mu_{n}(E)$.
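The counting measure (2.7) is directly computable; a minimal sketch (the function name is ours):

```python
import math

def mu_n(E, n, gamma):
    """mu_n(E) from Proposition 2.6: count lattice points (s, j) of E with
    e^n < s <= e^{n+1} and 0 <= j < e^{n(1-gamma)}."""
    lo, hi = math.e ** n, math.e ** (n + 1)
    jmax = math.e ** (n * (1 - gamma))
    return sum(1 for (s, j) in E if lo < s <= hi and 0 <= j < jmax)
```

Combined with Proposition 2.6, a set containing a positive fraction of all admissible lattice points in every shell has $\mu_{n}(E)$ of order $e^{(2-\gamma)n}$, hence $\nu_{n,2-\gamma}(E)$ bounded below by a constant; this is the mechanism behind lower bounds such as (3.1).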

The next result, taken from Corollary 1.4 of [KKX18], exhibits a two-dimensional set whose macroscopic Hausdorff dimension is $1$. We use this set to show the phase transition of the macroscopic Hausdorff dimension in Theorem 1.1.

Proposition 2.7.

Fix an arbitrary constant $q>1$ and define

$\Xi_{q}:=\Big\{(x,y)\in(0,\infty)^{2}:y\geq x^{q}\Big\}.$  (2.8)

Then, $\mathrm{Dim}_{\mathds{H}}(\Xi_{q})=1$.

Corollary 1.4 of [KKX18] also showed that the dimension of $\Xi_{q}$ is $2$ for any $q\in(0,1]$.

The next result describes an important tool (originating from [CJK13]) for studying the fractal geometry, pioneered by [KKX17, KKX18]. In brief, it describes how to construct ‘local proxies’ for the solution of (1.1) at far-apart locations so that the proxies are independent of each other. In [KKX17, KKX18], these proxies were used to summarize the fractal information hidden in the peaks of the parabolic Anderson equation. As we will show in our proof, these proxies are also useful for decoding the fractality of the valleys.

Proposition 2.8 (‘Local Proxy’ in parabolic Anderson equation, Theorem 3.9 of [KKX18]).

Fix $k\geq 2$. There exists a finite constant $c_{0}>0$, independent of $k$ and $t$, such that for any finite set of nonrandom points $x_{1},\ldots,x_{m}\in\mathds{R}$ and $t\geq 1$ satisfying

$\min_{1\leq i\neq j\leq m}|x_{i}-x_{j}|>c_{0}t^{2}k^{3},$  (2.9)

there exists a set of independent random variables $Y^{(k)}_{1},Y^{(k)}_{2},\ldots,Y^{(k)}_{m}$ with all positive moments finite, satisfying

$\sup_{1\leq j\leq m}\mathds{E}\Big[\big|u(t,x_{j})-Y_{j}^{(k)}\big|^{k}\Big]\leq C^{k}e^{-k^{3}t},$  (2.10)

where $C$ is a positive constant independent of $k$, $t$ and $\{x_{i}:1\leq i\leq m\}$.

3. Fractality of Valleys

3.1. Proof of part (a) of Theorem 1.1

Since $\mathcal{S}_{\beta}(\mathcal{V}(\gamma))$ is a subset of $\mathds{R}_{+}\times\mathds{R}$, we have $\mathrm{Dim}_{\mathds{H}}(\mathcal{S}_{\beta}(\mathcal{V}(\gamma)))\leq 2$. It therefore suffices to show that $\mathrm{Dim}_{\mathds{H}}(\mathcal{S}_{\beta}(\mathcal{V}(\gamma)))\geq 2$. By the definition of Barlow-Taylor's Hausdorff dimension, if $\sum_{n=1}^{\infty}\nu_{n,2-\gamma}(\mathcal{S}_{\beta}(\mathcal{V}(\gamma)))=\infty$ holds with probability $1$ for all $\gamma\in(0,2)$, then $\mathrm{Dim}_{\mathds{H}}(\mathcal{S}_{\beta}(\mathcal{V}(\gamma)))$ is almost surely greater than or equal to $2$. In what follows, we show that this indeed holds, i.e.,

$\sum_{n=1}^{\infty}\nu_{n,2-\gamma}(\mathcal{S}_{\beta}(\mathcal{V}(\gamma)))\stackrel{a.s.}{=}\infty,\quad\forall\gamma\in(0,2).$  (3.1)

Our proof requires Propositions 2.6 and 2.2. The first, taken from [BT92], bounds $\nu_{n,2-\gamma}$ from below by a discrete measure (see (2.7)) on the set $(e^{n},e^{n+1}]\times[0,e^{n(1-\gamma)})$ for any $\gamma\in(0,2)$. The main goal of this section is to obtain a suitable lower bound on that discrete measure. One of the key tools that we use is the upper tail probability of the parabolic Anderson equation. Tight bounds on this tail probability were derived in [CG20a] for a large class of initial data, and we have compiled their result in Proposition 2.2. We now proceed to complete the proof of part (a) of Theorem 1.1.

Proof of Theorem 1.1(a): Consider a set of points $x_{1},\ldots,x_{m}\in\mathds{R}$ satisfying (2.9). We seek to show that $\mu_{n}(\mathcal{S}_{\beta}(\mathcal{V}(\gamma)))$ is bounded below by $e^{(2-\gamma)n}$ almost surely for all large $n$. This shows (3.1) and hence completes the proof. We divide the rest of the proof into two steps. In Step I, we show an upper bound on the tail probability of $\min_{1\leq j\leq m}u(t,x_{j})$. Step II uses this upper bound to show that $\mu_{n}(\mathcal{S}_{\beta}(\mathcal{V}(\gamma)))$ is almost surely close to $e^{(2-\gamma)n}$ for all large $n$.

Step I: Fix $t_{0}>0$ and $\gamma\in(0,\frac{1}{24})$. We claim that there exist $\mu=\mu(t_{0},\gamma)>0$, $C=C(t_{0},\gamma)>0$ and $k$ large such that for all $t>t_{0}$,

$\mathds{P}\Big(\min_{1\leq j\leq m}u(t,x_{j})>e^{-\gamma t}\Big)\leq e^{-m\mu t+m\log 2}+C^{k}me^{(-k^{3}+\gamma k)t}.$  (3.2)

By Proposition 2.2, for any $\gamma<\frac{1}{24}$ and $t_{0}>0$, there exists $\mu:=\mu(t_{0},\gamma)>0$ such that for all $t>t_{0}$ and $x\in\mathds{R}$,

$\mathds{P}\big(u(t,x)>e^{-\gamma t}\big)\leq e^{-\mu t}.$  (3.3)

Recall the local proxy {Yj(k)}j\{Y^{(k)}_{j}\}_{j} defined in Proposition 2.8. By the union bound, we get

(min1jmu(t,xj)>eγt)\displaystyle\mathds{P}\big{(}\min_{1\leq j\leq m}u(t,x_{j})>e^{-\gamma t}\big{)}\leq (min1jmYj(k)>eγt2)+(max1jm|u(t,xj)Yj(k)|>eγt2).\displaystyle\mathds{P}\big{(}\min_{1\leq j\leq m}Y^{(k)}_{j}>\frac{e^{-\gamma t}}{2}\big{)}+\mathds{P}\big{(}\max_{1\leq j\leq m}|u(t,x_{j})-Y^{(k)}_{j}|>\frac{e^{-\gamma t}}{2}\big{)}.

In what follows, we bound the two terms on the right side of the above display. By Lemma 2.9, for any k>2k>2, {Yj(k)}j\{Y^{(k)}_{j}\}_{j} is a set of independent random variables which imply

\mathds{P}\big{(}\min_{1\leq j\leq m}Y^{(k)}_{j}>\frac{1}{2}e^{-\gamma t}\big{)} =\mathds{P}\big{(}Y^{(k)}_{j}>\frac{1}{2}e^{-\gamma t}\big{)}^{m}
\leq\Big{(}\mathds{P}\big{(}u(t,x_{j})>\frac{1}{4}e^{-\gamma t}\big{)}+\mathds{P}\big{(}|u(t,x_{j})-Y^{(k)}_{j}|>\frac{1}{4}e^{-\gamma t}\big{)}\Big{)}^{m}
\leq\Big{(}e^{-\mu t}+C^{k}e^{(-k^{3}+k\gamma)t}\Big{)}^{m}\leq 2^{m}e^{-m\mu t}. (3.4)

The last inequality holds for k large since e^{-\mu t}>C^{k}e^{-(k^{3}-k\gamma)t} for all large k, uniformly in t. Moreover, the union bound and Lemma 2.9 yield

(max1jm|u(t,xj)Yj(k)|>eγt)\displaystyle\mathds{P}\big{(}\max_{1\leq j\leq m}|u(t,x_{j})-Y^{(k)}_{j}|>e^{-\gamma t}\big{)} j=1m(|u(t,xj)Yj(k)|>eγt)\displaystyle\leq\sum_{j=1}^{m}\mathds{P}\big{(}|u(t,x_{j})-Y^{(k)}_{j}|>e^{-\gamma t}\big{)}
mCke(k3+γk)t.\displaystyle\leq mC^{k}e^{(-k^{3}+\gamma k)t}. (3.5)

Combining (3.4) and (3.5) yields (3.2).

Step II: Define the random set

𝔖β,γ={(t,x)(1,)×:u(βlogt,x)<tβγ},𝔖^β,γ:=𝔖β,γn=0(en,en+1]2.\displaystyle\mathfrak{S}_{\beta,\gamma}=\left\{(t,x)\in(1,\infty)\times\mathds{R}\,:\,u(\beta\log t,x)<t^{-\beta\gamma}\right\},\quad\hat{\mathfrak{S}}_{\beta,\gamma}:=\mathfrak{S}_{\beta,\gamma}\cap\bigcup_{n=0}^{\infty}\left(e^{n},e^{n+1}\right]^{2}.

Notice that \mathfrak{S}_{\beta,\gamma} coincides with \mathcal{S}_{\beta}(\mathcal{V}(\gamma))\cap\big([1,\infty)\times\mathds{R}\big). Fix 0<\epsilon<\delta<1. For any n\in\mathds{N} and j\in[0,e^{n(1-\delta)})\cap\mathds{Z}, define

aj,n:=en+jenδ,j,n(δ):=(aj,n,aj+1,n].\displaystyle a_{j,n}:=e^{n}+je^{n\delta},\qquad\mathcal{I}_{j,n}(\delta):=(a_{j,n},a_{j+1,n}].

Our goal is now to obtain an upper bound on the supremum of the probabilities of the events \{\inf_{x\in\mathcal{I}_{j,n}(\delta)}u(\beta\log t,x)>t^{-\beta\gamma}\} as j varies over the set [0,e^{n(1-\delta)})\cap\mathds{Z}. For this, we seek to apply (3.2). Pick an integer m from the set \big[\frac{1}{2}e^{n(\delta-\epsilon)},2e^{n(\delta-\epsilon)}\big]\cap\mathds{Z}. Fix a set of points x_{1},\ldots,x_{m}\in\mathcal{I}_{j,n}(\delta) such that for all n>1, \min_{1\leq i\neq j\leq m}|x_{i}-x_{j}|\geq e^{n\epsilon}. As a consequence, the following inequality holds

min1ijm|xixj|enϵc0k3(βlogt)2\min_{1\leq i\neq j\leq m}|x_{i}-x_{j}|\geq e^{n\epsilon}\geq c_{0}k^{3}(\beta\log t)^{2} (3.6)

for t\in\left(e^{n},e^{n+1}\right] when n is sufficiently large. Since \inf_{x\in\mathcal{I}_{j,n}(\delta)}u(\beta\log t,x) is at most \min_{1\leq i\leq m}u(\beta\log t,x_{i}), we may write

\mathds{P}\Big{(}\inf_{x\in\mathcal{I}_{j,n}(\delta)}u(\beta\log t,x)>t^{-\gamma\beta}\Big{)} \leq\mathds{P}\Big{(}\min_{1\leq i\leq m}u(\beta\log t,x_{i})>t^{-\gamma\beta}\Big{)}
exp(12en(δϵ)(μβnlog2))+Ckexp((δϵ(k3γk)β)n),\displaystyle\leq\exp\left(-\frac{1}{2}e^{n(\delta-\epsilon)}\left(\mu\beta n-\log 2\right)\right)+C^{k}\exp\left((\delta-\epsilon-(k^{3}-\gamma k)\beta)n\right), (3.7)

for all t\in(e^{n},e^{n+1}] with all sufficiently large n. The last inequality follows by applying (3.2) and recalling that 2^{-1}e^{n(\delta-\epsilon)}\leq m\leq 2e^{n(\delta-\epsilon)}. Note that the right-hand side of the above inequality does not depend on j. Therefore, we can deduce the following inequality

\displaystyle\mathds{P} (𝔖^β,γ({t}×j,n(δ))= for some j[0,en(1δ)) and t(en,en+1])\displaystyle\Big{(}\hat{\mathfrak{S}}_{\beta,\gamma}\cap(\{t\}\times\mathcal{I}_{j,n}(\delta))=\varnothing\text{ for some }j\in[0,e^{n(1-\delta)})\cap\mathds{Z}\text{ and }t\in(e^{n},e^{n+1}]\cap\mathds{Z}\Big{)}
\leq\mathds{P}\Big{(}\max_{t\in\mathds{Z}\cap(e^{n},e^{n+1}]}\max_{j\in\mathds{Z}\cap[0,e^{n(1-\delta)})}\inf_{x\in\mathcal{I}_{j,n}(\delta)}t^{\gamma\beta}u(\beta\log t,x)>1\Big{)}
\leq\sum_{t\in\mathds{Z}\cap(e^{n},e^{n+1}]}\sum_{j\in\mathds{Z}\cap[0,e^{n(1-\delta)})}\mathds{P}\Big{(}\inf_{x\in\mathcal{I}_{j,n}(\delta)}u(\beta\log t,x)>t^{-\gamma\beta}\Big{)}
en(2δ)+1[exp(12en(δϵ)(μβnlog2))+Ckexp((δϵ(k3γk)β)n)].\displaystyle\leq e^{n(2-\delta)+1}\left[\exp\left(-\frac{1}{2}e^{n(\delta-\epsilon)}\left(\mu\beta n-\log 2\right)\right)+C^{k}\exp\left((\delta-\epsilon-(k^{3}-\gamma k)\beta)n\right)\right].

The last inequality follows from the direct application of (3.7). For large kk, the right hand side of the last inequality is summable in nn. Thus, by the Borel-Cantelli lemma, the following holds

𝔖^β,γ({t}×j,n(δ)) for all j[0,en(1δ)) and t(en,en+1],\hat{\mathfrak{S}}_{\beta,\gamma}\cap(\{t\}\times\mathcal{I}_{j,n}(\delta))\neq\varnothing\text{ for all }j\in[0,e^{n(1-\delta)})\cap\mathds{Z}\text{ and }t\in(e^{n},e^{n+1}]\cap\mathds{Z}, (3.8)

almost surely for all sufficiently large n. As a result, \mu_{n,2-\delta}(\mathcal{S}_{\beta}(\mathcal{V}(\gamma))) eventually exceeds Ce^{n(2-\delta)} as n increases, implying that \sum_{n}\mu_{n,2-\delta}(\mathcal{S}_{\beta}(\mathcal{V}(\gamma)))=\infty holds almost surely. Combining this with Proposition 2.6 yields \mathrm{Dim}_{\mathds{H}}(\mathcal{S}_{\beta}(\mathcal{V}(\gamma)))\geq 2-\delta almost surely. Letting \delta tend to 0 completes the proof.
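For completeness, the summability over n invoked in the Borel–Cantelli step above reduces to two elementary observations; the following is a sketch with the constants from (3.7), assuming only that k is chosen sufficiently large:

```latex
\sum_{n\ge 1} e^{n(2-\delta)+1}\exp\Big(-\frac{1}{2}e^{n(\delta-\epsilon)}\big(\mu\beta n-\log 2\big)\Big)<\infty,
\qquad
\sum_{n\ge 1} e^{n(2-\delta)+1}\,C^{k}e^{(\delta-\epsilon-(k^{3}-\gamma k)\beta)n}<\infty.
```

The first series converges since the doubly exponential factor (with \delta>\epsilon, as in the choice of m) decays faster than any exponential growth in n, and the second converges as soon as (k^{3}-\gamma k)\beta>2-\epsilon, which makes the overall exponent (2-\epsilon-(k^{3}-\gamma k)\beta)n negative.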

3.2. Proof of part (b) of Theorem 1.1

We complete the proof in two steps. The first step shows that \mathrm{Dim}_{\mathds{H}}[\mathcal{S}_{\beta}(\mathcal{V}(\gamma))] is bounded above by 1, and the second shows that it is bounded below by 1. These two steps are carried out in the following two propositions, which together complete the proof of part (b).

Proposition 3.1.

For all γ>1/24\gamma>1/24 and β>0\beta>0, we have

Dim[𝒮β(𝒱(γ))]=Dim({(t,x)(1,)×:u(βlogt,x)<tβγ})1a.s.\mathrm{Dim}_{\mathds{H}}[\mathcal{S}_{\beta}(\mathcal{V}(\gamma))]=\mathrm{Dim}_{\mathds{H}}\Big{(}\Big{\{}(t,x)\in(1,\infty)\times\mathds{R}:u(\beta\log t,x)<t^{-\beta\gamma}\Big{\}}\Big{)}\leq 1\quad\text{a.s.} (3.9)
Proposition 3.2.

For all γ>1/24\gamma>1/24 and β>0\beta>0, there exists t0>0t_{0}>0 such that for all tt0t\geq t_{0},

Dim[𝒮β(𝒱(γ))]Dim({x:u(βlogt,x)<tβγ})1a.s.\mathrm{Dim}_{\mathds{H}}[\mathcal{S}_{\beta}(\mathcal{V}(\gamma))]\geq\mathrm{Dim}_{\mathds{H}}\left(\left\{x\in\mathds{R}:u(\beta\log t,x)<t^{-\beta\gamma}\right\}\right)\geq 1\quad\text{a.s.} (3.10)

One of the key inputs for the proof of these propositions is the upper bound on the probability of u(t,x)u(t,x) being small in a bounded domain. This result will be obtained in Proposition 3.3 using Proposition 2.3.

Proposition 3.3.

Let γ>1/24\gamma>1/24 and ϵ(0,1)\epsilon\in(0,1). There exists a positive constant C=C(γ,ϵ)C=C(\gamma,\epsilon) such that for all large a1a\geq 1, b>0b>0, and integers l1,l21l_{1},l_{2}\geq 1,

(For some (t,x)(a,a+l1]×(b,b+l2],u(t,x)<eγt)Cl1l2eCa2ϵ.\displaystyle\mathds{P}\big{(}\text{For some }(t,x)\in(a,a+l_{1}]\times(b,b+l_{2}],u(t,x)<e^{-\gamma t}\big{)}\leq Cl_{1}l_{2}e^{-Ca^{2-\epsilon}}. (3.11)
Remark 3.4.

[CK17, Theorem 1.4] established an upper bound on the probability that the infimum of u(t,x) stays smaller than a fixed \epsilon>0. Proposition 3.3 shows that the tighter upper bound (3.11) holds when t gets larger.

We first prove Propositions 3.1 and 3.2, assuming Proposition 3.3, in the two ensuing subsections; Proposition 3.3 is then proved in Section 3.2.3.

3.2.1. Proof of Proposition 3.1

The equality in (3.9) between the macroscopic Hausdorff dimensions of \mathcal{S}_{\beta}(\mathcal{V}(\gamma)) and the set \{(t,x)\in(1,\infty)\times\mathds{R}:u(\beta\log t,x)<t^{-\beta\gamma}\} is straightforward.

We now prove that the dimension of the latter set is at most 11. Observe that for all a1a\geq 1, βlog(a+1)βloga=βlog(1+1a)βlog2\beta\log(a+1)-\beta\log a=\beta\log\big{(}1+\frac{1}{a}\big{)}\leq\beta\log 2. Fix an arbitrary constant q>1q>1. Proposition 3.3 implies that for any δ>0\delta>0, γ>1/24\gamma>1/24, β,b>0\beta,b>0, there exists C=C(δ,γ)>0C=C(\delta,\gamma)>0 such that uniformly for all a(en/q,en+1]a\in(e^{n/q},e^{n+1}] with all large nn,

\displaystyle\mathds{P}\Big{(}\text{For some }t\in(a,a+1]:\inf_{x\in(b,b+1]}u(\beta\log t,x)<t^{-\beta\gamma}\Big{)}\leq C\exp\Big{(}-C(\beta n/q)^{2-\delta}\Big{)}. (3.12)

Below we introduce a few notations which will be used throughout the rest of the proof. Define \mathcal{I}_{n}:=(e^{n},e^{n+1}], \mathcal{I}^{(q)}_{n}:=(e^{n/q},e^{n+1}], \mathcal{J}^{(q)}_{n}:=(0,e^{n/q}] and

\mathcal{G}_{\beta,\gamma}:=\Big{\{}(t,x)\in(1,\infty)\times(0,\infty)\,:\,u(\beta\log t,x)<t^{-\beta\gamma}\Big{\}}, (3.13)
n:=n(β,γ):=𝒢β,γ(n(q)×n(q)),\displaystyle\mathcal{L}_{n}:=\mathcal{L}_{n}(\beta,\gamma):=\mathcal{G}_{\beta,\gamma}\cap(\mathcal{I}^{(q)}_{n}\times\mathcal{I}^{(q)}_{n}), n:=(n×𝒥n(q))(𝒥n(q)×n).\displaystyle\quad\mathcal{L}^{\prime}_{n}:=\big{(}\mathcal{I}_{n}\times\mathcal{J}^{(q)}_{n}\big{)}\cup\big{(}\mathcal{J}^{(q)}_{n}\times\mathcal{I}_{n}\big{)}.

We now claim and prove that

Dim(𝒢β,γ)1a.s.\displaystyle\mathrm{Dim}_{\mathds{H}}(\mathcal{G}_{\beta,\gamma})\leq 1\quad\text{a.s.} (3.14)

Before proceeding to the proof of (3.14), let us explain how (3.14) completes the proof of the inequality in (3.9). As one will see below, a similar argument to the one in the proof of (3.14) implies

Dim({(t,x)(1,)×(,0):u(βlogt,x)<tβγ})1.\displaystyle\mathrm{Dim}_{\mathds{H}}\Big{(}\big{\{}(t,x)\in(1,\infty)\times(-\infty,0):u(\beta\log t,x)<t^{-\beta\gamma}\big{\}}\Big{)}\leq 1. (3.15)

Moreover, we have \mathrm{Dim}_{\mathds{H}}((1,\infty)\times\{0\})=1 (see [BT92, Section 4.1]). Combining these facts with (3.14) and recalling that \mathrm{Dim}_{\mathds{H}}(A\cup B)=\max\{\mathrm{Dim}_{\mathds{H}}(A),\mathrm{Dim}_{\mathds{H}}(B)\} for any two sets A,B\subset\mathds{R}^{2} shows the upper bound on the macroscopic Hausdorff dimension in (3.9).

Now we proceed to prove (3.14). Recall \mathds{S}_{n} from Definition 2.5. From (3.13), it follows that

𝒢β,γ𝕊n+1nn.\mathcal{G}_{\beta,\gamma}\cap\mathds{S}_{n+1}\subseteq\mathcal{L}_{n}\cup\mathcal{L}^{\prime}_{n}. (3.16)

Using Proposition 2.7, we can deduce that Dim(n=1n)1\mathrm{Dim}_{\mathds{H}}\left(\cup_{n=1}^{\infty}\mathcal{L}^{\prime}_{n}\right)\leq 1. From this fact and (3.16), we have

Dim(𝒢β,γ)max{1,Dim(n=1n)}a.s.\mathrm{Dim}_{\mathds{H}}(\mathcal{G}_{\beta,\gamma})\leq\max\Big{\{}1,\mathrm{Dim}_{\mathds{H}}\big{(}\bigcup_{n=1}^{\infty}\mathcal{L}_{n}\big{)}\Big{\}}\quad\text{a.s.} (3.17)

In view of (3.17), we now bound the dimension of \bigcup_{n=1}^{\infty}\mathcal{L}_{n}. Note that we can cover \mathcal{I}^{(q)}_{n}\times\mathcal{I}^{(q)}_{n} with O(e^{2n}) squares of the form (a,a+1]\times(b,b+1] for all integers n\geq 1. Let us denote by \Theta_{n,q} the set of all these squares needed to cover \mathcal{I}^{(q)}_{n}\times\mathcal{I}^{(q)}_{n}. Out of those O(e^{2n}) squares in \Theta_{n,q}, \mathcal{L}_{n} can be covered by those satisfying

\inf_{x\in(b,b+1]}u(\beta\log t,x)<t^{-\beta\gamma},\quad\text{for some }t\in(a,a+1]. (3.18)

Thus, for all large integers n1n\geq 1 and for all real ρ>0\rho>0,

\mathds{E}\big{[}\nu_{n,\rho}(\mathcal{L}_{n})\big{]} \leq e^{-n\rho}\sum_{(a,a+1]\times(b,b+1]\in\Theta_{n,q}}\mathds{P}\big{(}\inf_{x\in(b,b+1]}u(\beta\log t,x)<t^{-\beta\gamma},\text{ for some }t\in(a,a+1]\big{)}
Cexp((2ρ)nC(βn/q)2δ).\displaystyle\leq C\exp\big{(}(2-\rho)n-C\big{(}\beta n/q\big{)}^{2-\delta}\big{)}.

The last inequality follows from (3.12); here, the positive constant C is independent of (\rho,n,q). Note that the right-hand side of the above display decays geometrically in n. Summing both sides over n shows \sum_{n=1}^{\infty}\mathds{E}[\nu_{n,\rho}(\mathcal{L}_{n})]<\infty, and hence \sum_{n=1}^{\infty}\nu_{n,\rho}(\mathcal{L}_{n})<\infty a.s., for every \rho>0. From the definition of the macroscopic Hausdorff dimension, this implies that \mathrm{Dim}_{\mathds{H}}\left(\bigcup_{n=1}^{\infty}\mathcal{L}_{n}\right)=0 a.s. Together with (3.17), this completes the proof of (3.14).

3.2.2. Proof of Proposition 3.2

Note that

{(s,x)(1,)×:u(βlogs,x)<sβγ}{t}×{x:u(βlogt,x)<tβγ},\left\{(s,x)\in(1,\infty)\times\mathds{R}:u(\beta\log s,x)<s^{-\beta\gamma}\right\}\supseteq\{t\}\times\left\{x\in\mathds{R}:u(\beta\log t,x)<t^{-\beta\gamma}\right\},

for all t\geq 1. From the above display, the first inequality of (3.10) follows via the monotonicity of the macroscopic Hausdorff dimension. Now we proceed to prove the second inequality. As in Section 3.1, we define the following random sets for any t>1,

\mathfrak{S}^{(t)}_{\beta,\gamma}:=\left\{x\in\mathds{R}\,:\,u(\beta\log t,x)<t^{-\beta\gamma}\right\},\quad\hat{\mathfrak{S}}^{(t)}_{\beta,\gamma}:=\mathfrak{S}^{(t)}_{\beta,\gamma}\cap\bigcup_{n=0}^{\infty}\left(e^{n},e^{n+1}\right].

Let us choose and fix arbitrary reals \delta and \epsilon satisfying 0<\epsilon<\delta<1. For any n\in\mathds{Z}_{\geq 0} and j\in[0,e^{n(1-\delta)})\cap\mathds{Z}_{\geq 0}, recall the definitions a_{j,n}:=e^{n}+je^{n\delta} and \mathcal{I}_{j,n}(\delta):=(a_{j,n},a_{j+1,n}]. In what follows, we claim and prove that for all sufficiently large t_{0}=t_{0}(\gamma)>1, there exist c_{1}=c_{1}(t_{0})>0, c_{2}=c_{2}(t_{0})>0 and C=C(t_{0})>0 such that for all t\geq t_{0},

maxj[0,en(1δ))\displaystyle\max_{j\in[0,e^{n(1-\delta)})\cap\mathds{Z}} (infxj,n(δ)u(βlogt,x)>tγβ)\displaystyle\mathds{P}\Big{(}\inf_{x\in\mathcal{I}_{j,n}(\delta)}u(\beta\log t,x)>t^{-\gamma\beta}\Big{)} (3.19)
exp(c1ec2t4en(δϵ))+Cnexp(n(δϵ)n3t+n(log2+γt)).\displaystyle\leq\exp\left(-c_{1}e^{c_{2}t^{4}}e^{n(\delta-\epsilon)}\right)+C^{n}\exp\left(n(\delta-\epsilon)-n^{3}t+n(\log 2+\gamma t)\right). (3.20)

Let us fix x1,,xmj,n(δ)x_{1},...,x_{m}\in\mathcal{I}_{j,n}(\delta) such that for all n>1n>1, the following conditions are satisfied:

min1ijm|xixj|enϵ,12en(δϵ)m2en(δϵ).\displaystyle\min_{1\leq i\neq j\leq m}|x_{i}-x_{j}|\geq e^{n\epsilon},\qquad\frac{1}{2}e^{n(\delta-\epsilon)}\leq m\leq 2e^{n(\delta-\epsilon)}. (3.21)

We first bound the probability that \min_{1\leq j\leq m}u(t,x_{j}) exceeds the value e^{-\gamma t}. We will then use this bound to prove (3.19).

Recall Yj(k)Y^{(k)}_{j} with k=nk=n from Proposition 2.8. By the union bound, we get

(min1jmu(t,xj)>eγt)\displaystyle\mathds{P}\Big{(}\min_{1\leq j\leq m}u(t,x_{j})>e^{-\gamma t}\Big{)}\leq (min1jmYj(n)>12eγt)\displaystyle\mathds{P}\Big{(}\min_{1\leq j\leq m}Y^{(n)}_{j}>\frac{1}{2}e^{-\gamma t}\Big{)} (3.22)
+(max1jm|u(t,xj)Yj(n)|>12eγt).\displaystyle+\mathds{P}\Big{(}\max_{1\leq j\leq m}|u(t,x_{j})-Y^{(n)}_{j}|>\frac{1}{2}e^{-\gamma t}\Big{)}.

In what follows, we bound the two terms on the right-hand side of the above display. Recall that \{Y^{(n)}_{j}\}_{j=1}^{m} are independent, which implies

(min1jmYj(n)>12eγt)\displaystyle\mathds{P}\Big{(}\min_{1\leq j\leq m}Y^{(n)}_{j}>\frac{1}{2}e^{-\gamma t}\Big{)} =(Yj(n)>12eγt)m\displaystyle=\mathds{P}\Big{(}Y^{(n)}_{j}>\frac{1}{2}e^{-\gamma t}\Big{)}^{m}
\leq\Big{(}\mathds{P}\big{(}u(t,x_{j})>\frac{1}{4}e^{-\gamma t}\big{)}+\mathds{P}\big{(}|u(t,x_{j})-Y^{(n)}_{j}|>\frac{1}{4}e^{-\gamma t}\big{)}\Big{)}^{m}
\leq\Big{(}1-\mathds{P}\big{(}u(t,x_{j})<\frac{1}{4}e^{-\gamma t}\big{)}+\mathds{P}\big{(}|u(t,x_{j})-Y^{(n)}_{j}|>\frac{1}{4}e^{-\gamma t}\big{)}\Big{)}^{m}. (3.23)

By Theorem 1.4, for any t_{0}>\frac{\log 4}{\gamma} and \epsilon^{\prime}>0, there exists c=c(t_{0},\epsilon^{\prime})>0 such that for all t\geq t_{0},

(u(t,x)<14eγt)(u(t,x)<e2γt)exp(ct4+ϵ).\mathds{P}\big{(}u(t,x)<\frac{1}{4}e^{-\gamma t}\big{)}\geq\mathds{P}\big{(}u(t,x)<e^{-2\gamma t}\big{)}\geq\exp(-ct^{4+\epsilon^{\prime}}). (3.24)

We fix tt0t\geq t_{0} and use Chebyshev’s inequality in conjunction with Proposition 2.8 to conclude

\mathds{P}\Big{(}|u(t,x_{j})-Y^{(n)}_{j}|>\frac{1}{4}e^{-\gamma t}\Big{)}\leq C^{n}e^{-n^{3}t+n(\log 4+\gamma t)}\leq\frac{1}{2}\exp(-ct^{4+\epsilon^{\prime}}), (3.25)

where the last inequality holds for all large n. Combining (3.24) and (3.25) and substituting them into the last line of (3.23) shows

\mathds{P}\Big{(}\min_{1\leq j\leq m}Y^{(n)}_{j}>\frac{1}{2}e^{-\gamma t}\Big{)}\leq\big{(}1-\frac{1}{2}\exp(-ct^{4+\epsilon^{\prime}})\big{)}^{m}\leq\exp\left(-\frac{m}{2}e^{-ct^{4+\epsilon^{\prime}}}\right),

where the last inequality holds since 1-x\leq e^{-x} for all x>0. This provides an upper bound on the first term on the right-hand side of (3.22). The second term can be bounded above by using the first inequality of (3.25) and the union bound. As a result, we obtain the following

r.h.s. of (3.22)exp(m2ect4+ϵ)+mCnen3t+n(log2+γt).\text{r.h.s. of \eqref{eq:minesti}}\leq\exp\left(-\frac{m}{2}e^{-ct^{4+\epsilon^{\prime}}}\right)+mC^{n}e^{-n^{3}t+n(\log 2+\gamma t)}. (3.26)

Hence,

\displaystyle\mathds{P} (𝔖β,γ(t)j,n(δ)= for some j[0,en(1δ)))\displaystyle\Big{(}\mathfrak{S}^{(t)}_{\beta,\gamma}\cap\mathcal{I}_{j,n}(\delta)=\varnothing\text{ for some }j\in[0,e^{n(1-\delta)})\cap\mathds{Z}\Big{)}
\displaystyle\leq\mathds{P}\Big{(}\max_{j\in[0,e^{n(1-\delta)})\cap\mathds{Z}}\inf_{x\in\mathcal{I}_{j,n}(\delta)}t^{\gamma\beta}u(\beta\log t,x)>1\Big{)}
en(1δ)[exp(en(δϵ)2ect4+ϵ)+Cnen(δϵ)n3t+n(log2+γt)].\displaystyle\leq e^{n(1-\delta)}\left[\exp\left(-\frac{e^{n(\delta-\epsilon)}}{2}e^{-ct^{4+\epsilon^{\prime}}}\right)+C^{n}e^{n(\delta-\epsilon)-n^{3}t+n(\log 2+\gamma t)}\right].

Note that the summation of the right hand side of the above inequality over nn is finite. By the Borel-Cantelli lemma, the following holds

𝔖β,γ(t)j,n(δ) for all j[0,en(1δ)),\mathfrak{S}^{(t)}_{\beta,\gamma}\cap\mathcal{I}_{j,n}(\delta)\neq\varnothing\text{ for all }j\in[0,e^{n(1-\delta)})\cap\mathds{Z}, (3.27)

with probability 1 for all sufficiently large n\geq 2. As in the proof of Theorem 1.1(a), (3.27) shows that \mu_{n,1-\delta}(\mathfrak{S}^{(t)}_{\beta,\gamma}) eventually exceeds e^{n(1-\delta)} with probability 1. From Proposition 2.6, it now follows that \nu_{n,1-\delta}(\mathfrak{S}^{(t)}_{\beta,\gamma})\geq C with probability 1, implying that

Dim(𝔖β,γ(t))1δ,a.s.\mathrm{Dim}_{\mathds{H}}(\mathfrak{S}^{(t)}_{\beta,\gamma})\geq 1-\delta,\quad\text{a.s.}

Letting \delta tend to 0 in the above display completes the proof.
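For completeness, the summability over n used in the Borel–Cantelli step above can be checked directly; the following is a sketch, with t\geq t_{0} held fixed and the constants as in the preceding display:

```latex
\sum_{n\ge 1} e^{n(1-\delta)}\exp\Big(-\frac{e^{n(\delta-\epsilon)}}{2}e^{-ct^{4+\epsilon'}}\Big)<\infty,
\qquad
\sum_{n\ge 1} e^{n(1-\delta)}\,C^{n}e^{n(\delta-\epsilon)-n^{3}t+n(\log 2+\gamma t)}<\infty.
```

The first series converges because e^{-ct^{4+\epsilon'}} is a fixed positive constant once t is fixed, so the summand is doubly exponentially small in n; in the second series the term -n^{3}t eventually dominates every exponent that is linear in n.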

3.2.3. Proof of Proposition 3.3

We need to bound the probability of u(t,x) becoming smaller than e^{-\gamma t} as (t,x) varies over the set (a,a+l_{1}]\times(b,b+l_{2}]. The first step toward such an upper bound is to localize u(t,x) in smaller boxes of bounded size. In each local box, one can control the probability of u(t,x) taking jumps using the available tail probability of the one-point distribution and the local modulus of continuity of the parabolic Anderson equation. All of these upper bounds on the probabilities of the events restricted to the smaller boxes will finally be combined to complete the proof. Although our proof is motivated by [KKX18, Proposition 3.14], there is a striking difference between our techniques for the valleys and those used in [KKX18] for the peaks. In the latter case, there was a priori knowledge of the tail bounds of the supremum of u(t,x) over an interval in the spatial direction (see [Che16, Theorem 5.1], [KKX18, Proposition 3.14]). However, deriving such a tail bound for the infimum of u(t,x) is far more challenging. We bypass this difficulty by a suitable use of the one-point lower tail of u(t,x) together with its spatio-temporal modulus of continuity.

Fix δ>0\delta>0 and ϵ(δ,1)\epsilon\in(\delta,1). We fix some large aa and bb which will be specified later. Let m1:=m1(a,ϵ)m_{1}:=m_{1}(a,\epsilon) and m2:=m2(a,ϵ)m_{2}:=m_{2}(a,\epsilon) be the least integers satisfying m1ea2ϵ>l1m_{1}e^{-a^{2-\epsilon}}>l_{1} and m2ea2ϵ>l2m_{2}e^{-a^{2-\epsilon}}>l_{2}. Define

ai:=a+iea2ϵ,andbj:=b+jea2ϵ,Iia:=(ai,ai+1],andIjb:=(bj,bj+1]\displaystyle a_{i}:=a+ie^{-a^{2-\epsilon}},\quad\text{and}\quad b_{j}:=b+je^{-a^{2-\epsilon}},\qquad I^{a}_{i}:=(a_{i},a_{i+1}],\quad\text{and}\quad I^{b}_{j}:=(b_{j},b_{j+1}]

for any integers i[0,m11]i\in[0,m_{1}-1], j[0,m21]j\in[0,m_{2}-1] where am1:=a+1a_{m_{1}}:=a+1 and bm2:=b+1.b_{m_{2}}:=b+1. Note that m1+12l1ea2ϵm_{1}+1\leq 2l_{1}e^{a^{2-\epsilon}} and m2+12l2ea2ϵm_{2}+1\leq 2l_{2}e^{a^{2-\epsilon}}. By the union bound,

\displaystyle\mathds{P} (For some (t,x)(a,a+1]×(b,b+1],u(t,x)<eγt)(𝐈)+(𝐈𝐈),\displaystyle\Big{(}\text{For some }(t,x)\in(a,a+1]\times(b,b+1],u(t,x)<e^{-\gamma t}\Big{)}\leq(\mathbf{I})+(\mathbf{II}), (3.28)
(𝐈)\displaystyle(\mathbf{I}) :=((i,j)2([0,m1]×[0,m2]){u(ai,bj)<2eγai})\displaystyle:=\mathds{P}\Big{(}\bigcup_{(i,j)\in\mathds{Z}^{2}\cap([0,m_{1}]\times[0,m_{2}])}\big{\{}u(a_{i},b_{j})<2e^{-\gamma a_{i}}\big{\}}\Big{)}
(𝐈𝐈)\displaystyle(\mathbf{II}) :=((i,j)2([0,m1]×[0,m2]){supt,sIiasupx,yIjb|u(t,x)u(s,y)|>eγa}).\displaystyle:=\mathds{P}\Big{(}\bigcup_{(i,j)\in\mathds{Z}^{2}\cap([0,m_{1}]\times[0,m_{2}])}\big{\{}\sup_{t,s\in I^{a}_{i}}\sup_{x,y\in I^{b}_{j}}|u(t,x)-u(s,y)|>e^{-\gamma a}\big{\}}\Big{)}.

The proof will be completed by showing that (𝐈)(\mathbf{I}) and (𝐈𝐈)(\mathbf{II}) are both bounded above by Cl1l2eCa2ϵCl_{1}l_{2}e^{-Ca^{2-\epsilon}} for some constant C>0C>0. The first term (𝐈)(\mathbf{I}) will be bounded above using Proposition 2.3 and the second term (𝐈𝐈)(\mathbf{II}) will be bounded using the tail bound on the spatio-temporal modulus of continuity of u(t,x)u(t,x). We provide the details below. Applying the union bound for γ~(1/24,γ)\tilde{\gamma}\in(1/24,\gamma), we have

(𝐈)\displaystyle(\mathbf{I}) (m1+1)(m2+1)supt(a,a+1]supx(b,b+1](u(t,x)<eγ~t)\displaystyle\leq(m_{1}+1)(m_{2}+1)\cdot\sup_{t\in(a,a+1]}\sup_{x\in(b,b+1]}\mathds{P}\big{(}u(t,x)<e^{-\tilde{\gamma}t}\big{)}
4l1l2exp(2a2ϵc(γ~124)33δ2a2δ)Cl1l2exp(Ca2δ),\displaystyle\leq 4l_{1}l_{2}\exp\Big{(}2a^{2-\epsilon}-c\left(\tilde{\gamma}-\frac{1}{24}\right)^{3-\frac{3\delta}{2}}a^{2-\delta}\Big{)}\leq Cl_{1}l_{2}\exp({-Ca^{2-\delta}}),

where we have used (m1+1)(m2+1)4l1l2exp(2a2ϵ)(m_{1}+1)(m_{2}+1)\leq 4l_{1}l_{2}\exp(2a^{2-\epsilon}) and Proposition 2.3 to obtain the second inequality. The last inequality follows for all sufficiently large aa since 2δ>2ϵ2-\delta>2-\epsilon. This shows the upper bound on (𝐈)(\mathbf{I}).

We now proceed to bound (𝐈𝐈)(\mathbf{II}). By [KKX18, Lemma 3.13], there exists N>0N>0 such that for all k2k\in\mathds{R}_{\geq 2}, q(0,1(6/k)),q\in(0,1-(6/k)), and η(q,1(6/k))\eta\in(q,1-(6/k)),

\mathds{E}\Big{[}\sup_{t\neq s\in I^{a}_{i}}\sup_{x\neq y\in I^{b}_{j}}\Big{|}\frac{u(t,x)-u(s,y)}{(|t-s|^{1/4}+|x-y|^{1/2})^{q}}\Big{|}^{k}\Big{]}\leq Ce^{Nk^{3}(a+1)}, (3.29)

where CC depends only on (k,η,q)(k,\eta,q). Since the lengths of |Iia||I_{i}^{a}| and |Ijb||I_{j}^{b}| are less than exp(a2ϵ)\exp(-a^{2-\epsilon}), we have

\mathds{E}\Big{[}\sup_{t,s\in I^{a}_{i}}\sup_{x,y\in I^{b}_{j}}|u(t,x)-u(s,y)|^{k}\Big{]}\leq Ce^{Nk^{3}(a+1)}2^{kq-1}\exp\Big{(}-\frac{kqa^{2-\epsilon}}{4}\Big{)}. (3.30)
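To indicate where (3.30) comes from, note that on I^{a}_{i}\times I^{b}_{j} one has |t-s|\leq e^{-a^{2-\epsilon}} and |x-y|\leq e^{-a^{2-\epsilon}}; the following sketch combines this with (3.29):

```latex
\sup_{t,s\in I^{a}_{i}}\sup_{x,y\in I^{b}_{j}}|u(t,x)-u(s,y)|^{k}
\le\Big(\sup_{t\neq s\in I^{a}_{i}}\sup_{x\neq y\in I^{b}_{j}}\Big|\frac{u(t,x)-u(s,y)}{(|t-s|^{1/4}+|x-y|^{1/2})^{q}}\Big|^{k}\Big)
\cdot\Big(e^{-a^{2-\epsilon}/4}+e^{-a^{2-\epsilon}/2}\Big)^{kq},
```

and the last factor is at most \big(2e^{-a^{2-\epsilon}/4}\big)^{kq}=2^{kq}e^{-kqa^{2-\epsilon}/4}, so taking expectations and applying (3.29) yields (3.30).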

Applying the union bound and the above inequality yields

(𝐈𝐈)\displaystyle(\mathbf{II}) (m1+1)(m2+1)CeNk3(a+1)+kγa2kqexp(kqa2ϵ4)\displaystyle\leq(m_{1}+1)(m_{2}+1)\cdot Ce^{Nk^{3}(a+1)+k\gamma a}\cdot 2^{kq}\exp\Big{(}-\frac{kqa^{2-\epsilon}}{4}\Big{)}
Cl1l22kqexp(Nk3(a+1)+kγa+a2ϵ(2kq4)),\displaystyle\leq Cl_{1}l_{2}\cdot 2^{kq}\exp\Big{(}Nk^{3}(a+1)+k\gamma a+a^{2-\epsilon}\big{(}2-\frac{kq}{4}\big{)}\Big{)},

where the last inequality follows since (m1+1)(m2+1)4l1l2exp(2a2ϵ).(m_{1}+1)(m_{2}+1)\leq 4l_{1}l_{2}\exp(2a^{2-\epsilon}). By choosing kk and qq such that 2(kq/4)<02-(kq/4)<0, the last line of the above display can be bounded above by Cl1l2exp(Ca2ϵ)Cl_{1}l_{2}\exp(-Ca^{2-\epsilon}) for all large a>1a>1 with some constant C>0C>0. This provides the upper bound to (𝐈𝐈)(\mathbf{II}). Substituting the upper bound on (𝐈)(\mathbf{I}) and (𝐈𝐈)(\mathbf{II}) into the right hand side of (3.28) completes the proof.

4. Tail Estimates

The main goal of this section is to prove Theorem 1.4. Apart from this, we also provide the proofs of Propositions 2.2 and 2.3 towards the end of this section. Before proceeding to the main technical body of this section, we recall some known facts and introduce a few notations. Recall that the Cole-Hopf solution of the KPZ equation is none other than the logarithm of the solution of (1.1). When started from the delta initial measure, the logarithm of the solution of (1.1) corresponds to the solution of the KPZ equation started from the narrow wedge initial data. We denote the latter by \mathcal{H}^{\mathbf{nw}} and define

Υt(x):=𝐧𝐰(t,t2/3x)+t24t1/3.\displaystyle\Upsilon_{t}(x):=\frac{\mathcal{H}^{\mathbf{nw}}(t,t^{2/3}x)+\frac{t}{24}}{t^{1/3}}. (4.1)

Below we recall a few results on the tail probabilities of \Upsilon_{t} from [CG20b, CG20a, CGH21].

Proposition 4.1 (Theorem 1.1 of [CG20a]).

Fix \delta\in(0,1/3) and t_{0}>0. Then, there exist s_{0}=s_{0}(t_{0})>0 and c_{1}=c_{1}(t_{0}),c_{2}=c_{2}(t_{0})>0 such that for all s\geq s_{0} and t\geq t_{0},

ec1s3+ec1t1/3s5/2(Υt(x)+x22s)ec2t1/3s5/2+ec2s3δ+ec2s3.e^{-c_{1}s^{3}}+e^{-c_{1}t^{1/3}s^{5/2}}\leq\mathds{P}\big{(}\Upsilon_{t}(x)+\frac{x^{2}}{2}\leq-s\big{)}\leq e^{-c_{2}t^{1/3}s^{5/2}}+e^{-c_{2}s^{3-\delta}}+e^{-c_{2}s^{3}}. (4.2)
Proposition 4.2 (Proposition 1.10 of [CG20b]).

For any t_{0}>0, there exist s_{0}=s_{0}(t_{0})>0 and c_{1}(t_{0})>c_{2}(t_{0})>0 such that, for t>t_{0}, s>s_{0} and x\in\mathds{R},

ec1s3/2(Υt(x)+x22s)ec2s3/2.e^{-c_{1}s^{3/2}}\leq\mathds{P}\big{(}\Upsilon_{t}(x)+\frac{x^{2}}{2}\geq s\big{)}\leq e^{-c_{2}s^{3/2}}. (4.3)
Proposition 4.3 (Proposition 4.2 of [CGH21]).

For any t0>0t_{0}>0 and ν[0,1)\nu\in[0,1), there exist s0=s0(t0,ν)s_{0}=s_{0}(t_{0},\nu) and c1=c1(t0,ν)>c2=c2(t0,ν)>0c_{1}=c_{1}(t_{0},\nu)>c_{2}=c_{2}(t_{0},\nu)>0 such that, for tt0t\geq t_{0} and s>s0s>s_{0},

ec1s3/2(supx{Υt(x)+νx22}s)ec2s3/2.e^{-c_{1}s^{3/2}}\leq\mathds{P}\Big{(}\sup_{x\in\mathds{R}}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}\geq s\Big{)}\leq e^{-c_{2}s^{3/2}}.

Now we proceed to prove Theorem 1.4.

4.1. Proof of Theorem 1.4

We use Proposition 2.1 to prove this theorem. By the convolution formula (see (2.1)), it suffices to show

(u0(xy)et1/3Υt(t2/3y)𝑑ye(γ124)t)ect4+ϵ\displaystyle\mathds{P}\Big{(}\int^{\infty}_{-\infty}u_{0}(x-y)e^{t^{1/3}\Upsilon_{t}(t^{-2/3}y)}dy\leq e^{-(\gamma-\frac{1}{24})t}\Big{)}\geq e^{-ct^{4+\epsilon}} (4.4)

for all large tt. Recall that there exists K>0K>0 such that u0(z)Ku_{0}(z)\leq K for all zz\in\mathds{R}. Fix ν(0,1)\nu\in(0,1). It is straightforward to check that there exists C=C(K,γ,t0)>0C=C(K,\gamma,t_{0})>0 such that

{supx{Υt(x)+νx22}Ct2/3}\displaystyle\Big{\{}\sup_{x\in\mathds{R}}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}\leq-Ct^{2/3}\Big{\}} {t2/3u0(xt2/3y)et1/3Υt(y)𝑑ye(γ124)t}\displaystyle\subseteq\Big{\{}t^{2/3}\int^{\infty}_{-\infty}u_{0}(x-t^{2/3}y)e^{t^{1/3}\Upsilon_{t}(y)}dy\leq e^{-(\gamma-\frac{1}{24})t}\Big{\}} (4.5)
={u0(xy)et1/3Υt(t2/3y)𝑑ye(γ124)t},\displaystyle=\Big{\{}\int^{\infty}_{-\infty}u_{0}(x-y)e^{t^{1/3}\Upsilon_{t}(t^{-2/3}y)}dy\leq e^{-(\gamma-\frac{1}{24})t}\Big{\}},

for all t\geq t_{0}. In the last equality, we used the change of variable y\mapsto t^{-2/3}y. Thanks to the above display, proving (4.4) boils down to showing the following result, which is the main technical contribution of this section.
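For completeness, here is a sketch of why the inclusion in (4.5) holds: on the event \sup_{x\in\mathds{R}}\{\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\}\leq-Ct^{2/3}, the bound u_{0}\leq K and a Gaussian integral give

```latex
t^{2/3}\int_{-\infty}^{\infty}u_{0}(x-t^{2/3}y)\,e^{t^{1/3}\Upsilon_{t}(y)}\,dy
\le K\,t^{2/3}e^{-Ct}\int_{-\infty}^{\infty}e^{-\nu t^{1/3}y^{2}/2}\,dy
= K\,t^{2/3}e^{-Ct}\sqrt{\frac{2\pi}{\nu t^{1/3}}}\,,
```

which is at most e^{-(\gamma-\frac{1}{24})t} for all t\geq t_{0} once C is chosen larger than \gamma-\frac{1}{24}, so that the exponential absorbs the polynomial prefactor; this is how the constant C=C(K,\gamma,t_{0}) arises.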

Theorem 4.4.

For any C>0, \epsilon>0 and \nu\in(0,1), there exist t_{0}=t_{0}(C,\nu,\epsilon)>0 and c=c(t_{0},C,\nu,\epsilon)>0 such that for t\geq t_{0},

\mathds{P}\Big{(}\sup_{x\in\mathds{R}}\left(\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\right)\leq-Ct^{2/3}\Big{)}\geq\exp\big{(}-ct^{4+\epsilon}\big{)}. (4.6)
Remark 4.5.

In (4.5), Theorem 1.4 is reduced to the relatively simpler problem of finding a lower bound on the lower tail probability of \sup_{x\in\mathds{R}}\Upsilon_{t}(x). A similar reduction is possible when the function f:\mathds{R}\to\mathds{R} defined by f(x):=t^{-1/3}\log u_{0}(t^{2/3}x) satisfies conditions similar to those in Definition 1.1 of [CG20b]. This implies that Theorem 1.4 can be extended to a large class of initial data, since the remaining steps of our proof do not require any further specification of u_{0}.

To prove Theorem 4.4, we need the following three propositions. The first proposition shows that the supremum of \Upsilon_{t}(x)+\frac{\nu x^{2}}{2} in the x-variable is attained, with high probability, in a compact interval. The second proposition provides a tail bound on the fluctuations of \Upsilon_{t}(x)+\frac{x^{2}}{2} on a compact interval, and the last proposition is a generalization of the FKG-type inequality for the KPZ equation found in [CQ13]. We first state these propositions; the proof of Theorem 4.4 follows, and the propositions are then proved in three ensuing subsections.

Proposition 4.6.

For any t0>0t_{0}>0 and ν(0,1)\nu\in(0,1), there exists M0=M0(t0,ν)>0M_{0}=M_{0}(t_{0},\nu)>0, c=c(t0,ν)>0c=c(t_{0},\nu)>0 such that for tt0t\geq t_{0} and M>M0M>M_{0},

(argsupx{Υt(x)+νx22}[M,M])1ecM3.\mathds{P}\Big{(}\mathrm{argsup}_{x\in\mathds{R}}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}\in[-M,M]\Big{)}\geq 1-e^{-cM^{3}}. (4.7)
Proposition 4.7.

Fix aa\in\mathds{R} and ϵ(0,1)\epsilon\in(0,1). For any t0>0t_{0}>0, there exists s0=s0(t0)>0s_{0}=s_{0}(t_{0})>0, c=c(t0)>0c=c(t_{0})>0 such that for t>t0t>t_{0} and s>s0s>s_{0}

(supx[a,a+ϵs/16]|Υt(x)+x22Υt(a)a22|ϵs)ecs3/2.\displaystyle\mathds{P}\Big{(}\sup_{x\in[a,a+\epsilon\sqrt{s}/16]}\left|\Upsilon_{t}(x)+\frac{x^{2}}{2}-\Upsilon_{t}(a)-\frac{a^{2}}{2}\right|\geq\sqrt{\epsilon}s\Big{)}\leq e^{-cs^{3/2}}. (4.8)
Proposition 4.8.

Suppose [a,b][a,b] and [c,d][c,d] are disjoint intervals in \mathds{R}. Then for all ss\in\mathds{R} and ν>0\nu>0, we have

\displaystyle\mathds{P} (supx[a,b][c,d](Υt(x)+νx22)s)\displaystyle\Big{(}\sup_{x\in[a,b]\cup[c,d]}\left(\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\right)\leq s\Big{)}
(supx[a,b](Υt(x)+νx22)s)(supx[c,d](Υt(x)+νx22)s).\displaystyle\geq\mathds{P}\Big{(}\sup_{x\in[a,b]}\left(\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\right)\leq s\Big{)}\cdot\mathds{P}\Big{(}\sup_{x\in[c,d]}\left(\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\right)\leq s\Big{)}. (4.9)

4.1.1. Proof of Theorem 4.4

Choose and fix an arbitrary \epsilon>0. Let us define s:=Ct^{2/3} where C is the same constant as in Theorem 4.4. Note that

(supx{Υt(x)+νx22}s)\displaystyle\mathds{P}\Big{(}\sup_{x\in\mathds{R}}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}\leq-s\Big{)}\geq (𝐀)(𝐁),\displaystyle(\mathbf{A})-(\mathbf{B}), (4.10)

where

(𝐀)\displaystyle(\mathbf{A}) :=(supx[s2+ϵ,s2+ϵ]{Υt(x)+νx22}s),\displaystyle:=\mathds{P}\Big{(}\sup_{x\in[-s^{2+\epsilon},s^{2+\epsilon}]}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}\leq-s\Big{)},
(𝐁)\displaystyle(\mathbf{B}) :=(argsupx{Υt(x)+νx22}[s2+ϵ,s2+ϵ]).\displaystyle:=\mathds{P}\Big{(}\mathrm{argsup}_{x\in\mathds{R}}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}\notin[-s^{2+\epsilon},s^{2+\epsilon}]\Big{)}.

By Proposition 4.6, we may bound (\mathbf{B}) from above by \exp(-cs^{6+3\epsilon}) whenever s\geq s_{0} and t\geq t_{0}, where s_{0}>0 and c>0 depend only on some fixed t_{0}>0. When t\geq t_{0} is large enough that s(=Ct^{2/3}) exceeds s_{0}, we can therefore write

(𝐁)exp(cs6+3ϵ).\displaystyle(\mathbf{B})\leq\exp(-cs^{6+3\epsilon}).

Now we proceed to find a lower bound on (\mathbf{A}). To this end, we decompose the interval [-s^{2+\epsilon},s^{2+\epsilon}] into smaller intervals I_{i}:=[a_{i},a_{i+1}] where

a_{i}:=-s^{2+\epsilon}+\frac{(i-1)s^{-1-\epsilon}}{16}\quad\text{for }i=1,\ldots,k-1\quad\text{and}\quad a_{k}:=s^{2+\epsilon},

with k:=\lceil 32s^{3+2\epsilon}\rceil. Applying the FKG inequality of Proposition 4.8 repeatedly shows that

\displaystyle\mathds{P}\Big{(}\sup_{x\in[-s^{2+\epsilon},s^{2+\epsilon}]}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}\leq-s\Big{)}\geq\prod_{i=1}^{k}\mathds{P}\Big{(}\sup_{x\in I_{i}}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}\leq-s\Big{)}. (4.11)

Note that |Ii|s1ϵ/16|I_{i}|\leq s^{-1-\epsilon}/16 for each ii. On each interval IiI_{i}, by the union bound, we obtain

(supxIi{Υt(x)+νx22}s)\displaystyle\mathds{P}\Big{(}\sup_{x\in I_{i}}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}\leq-s\Big{)}\geq (Υt(ai)+ai222s)\displaystyle\mathds{P}\Big{(}\Upsilon_{t}(a_{i})+\frac{a_{i}^{2}}{2}\leq-2s\Big{)}
(supxIi|Υt(x)+x22Υt(ai)ai22|s)\displaystyle-\mathds{P}\Big{(}\sup_{x\in I_{i}}\big{|}\Upsilon_{t}(x)+\frac{x^{2}}{2}-\Upsilon_{t}(a_{i})-\frac{a_{i}^{2}}{2}\big{|}\geq s\Big{)} (4.12)

We may use Proposition 4.1 to bound the first term on the right-hand side of the above inequality from below. In doing so, we get, for all s\geq s_{0} and t\geq t_{0},

(Υt(ai)+ai222s)exp(c1t13s52)+exp(c2s3).\displaystyle\mathds{P}\Big{(}\Upsilon_{t}(a_{i})+\frac{a_{i}^{2}}{2}\leq-2s\Big{)}\geq\exp(-c_{1}t^{\frac{1}{3}}s^{\frac{5}{2}})+\exp(-c_{2}s^{3}).

Since s=Ct^{2/3}, the right-hand side is bounded below by \exp(-cs^{3}) for all large t>0. To bound the second term, we use Proposition 4.7. Setting \varepsilon:=s^{-2-4\epsilon/3} and \tilde{s}:=s^{2+2\epsilon/3}, and applying the stationarity of the spatial process \Upsilon_{t}(x)+x^{2}/2, shows

(supxIi|Υt(x)+x22Υt(ai)ai22|s)=(sup|x|εs~/16|Υt(x)+x22Υt(0)|εs~)ecs3+ϵ\displaystyle\mathds{P}\Big{(}\sup_{x\in I_{i}}\left|\Upsilon_{t}(x)+\frac{x^{2}}{2}-\Upsilon_{t}(a_{i})-\frac{a_{i}^{2}}{2}\right|\geq s\Big{)}=\mathds{P}\Big{(}\sup_{|x|\leq\varepsilon\sqrt{\tilde{s}}/16}\left|\Upsilon_{t}(x)+\frac{x^{2}}{2}-\Upsilon_{t}(0)\right|\geq\sqrt{\varepsilon}\tilde{s}\Big{)}\leq e^{-cs^{3+\epsilon}}

where the last inequality follows from Proposition 4.7. Combining the bounds on the two terms on the right-hand side of (4.12), we may write (with a possibly different constant c>0)

\displaystyle\text{r.h.s. of (4.12)}\geq e^{-cs^{3}} (4.13)

for all large t>0. This provides a lower bound on each term in the product in (4.11). Substituting these lower bounds into (4.11) and recalling that k=\lceil 32s^{3+2\epsilon}\rceil yields

(𝐀)=(supx[s2+ϵ,s2+ϵ]{Υt(x)+νx22}s)e𝔠s6+2ϵ\displaystyle(\mathbf{A})=\mathds{P}\Big{(}\sup_{x\in[-s^{2+\epsilon},s^{2+\epsilon}]}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}\leq-s\Big{)}\geq e^{-\mathfrak{c}s^{6+2\epsilon}}

for all large t>0, where \mathfrak{c} is a positive constant which depends neither on t nor on \epsilon. Putting the lower bound on (\mathbf{A}) and the upper bound on (\mathbf{B}) together in (4.10) shows

(supx{Υt(x)+νx22}s)e𝔠s6+2ϵecs6+3ϵ21e𝔠s6+2ϵ\mathds{P}\Big{(}\sup_{x\in\mathds{R}}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}\leq-s\Big{)}\geq e^{-\mathfrak{c}s^{6+2\epsilon}}-e^{-cs^{6+3\epsilon}}\geq 2^{-1}e^{-\mathfrak{c}s^{6+2\epsilon}} (4.14)

for all large t>0. Notice that s^{6+2\epsilon}=C^{6+2\epsilon}t^{4+4\epsilon/3}. Since \epsilon>0 is arbitrary, this completes the proof.
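For the reader's convenience, we record the elementary exponent computations behind the proof above; each identity follows directly from s=Ct^{2/3} and the choices \varepsilon=s^{-2-4\epsilon/3}, \tilde{s}=s^{2+2\epsilon/3} and k=\lceil 32s^{3+2\epsilon}\rceil:

```latex
\begin{align*}
\sqrt{\varepsilon}\,\tilde{s} &= s^{-1-2\epsilon/3}\cdot s^{2+2\epsilon/3}=s,\\
\varepsilon\sqrt{\tilde{s}}/16 &= s^{-2-4\epsilon/3}\cdot s^{1+\epsilon/3}/16 = s^{-1-\epsilon}/16,\\
\tilde{s}^{3/2} &= s^{(2+2\epsilon/3)\cdot\frac{3}{2}}=s^{3+\epsilon},\\
\textstyle\prod_{i=1}^{k}e^{-cs^{3}} &= e^{-kcs^{3}}\geq e^{-\mathfrak{c}s^{6+2\epsilon}}
  \quad(\text{e.g. with } \mathfrak{c}=33c \text{ for all large } s),\\
s^{6+2\epsilon} &= C^{6+2\epsilon}\,t^{\frac{2}{3}(6+2\epsilon)}=C^{6+2\epsilon}\,t^{4+4\epsilon/3}.
\end{align*}
```

In particular, the first two identities confirm that Proposition 4.7 applied with the pair (\varepsilon,\tilde{s}) produces exactly the threshold s and the interval length s^{-1-\epsilon}/16 used in (4.12).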

4.1.2. Proof of Proposition 4.6

Note that (4.7) will be proved once we show the following bound

\mathds{P}(E)\leq e^{-cM^{3}},\quad\text{where }E:=\Big{\{}\sup_{x\in\mathds{R}\setminus[-M,M]}\big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\big{\}}>\Upsilon_{t}(0)\Big{\}}.

Fix \nu^{\prime}\in(\nu,1) and choose a constant \kappa>0 such that \nu^{\prime}-\nu>2\kappa. By the union bound, we may write

(E)(supx[M,M]{Υt(x)+νx22}κM2)+(Υt(0)κM2).\mathds{P}(E)\leq\mathds{P}(\sup_{x\in\mathds{R}\setminus[-M,M]}\{\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\}\geq-\kappa M^{2})+\mathds{P}(\Upsilon_{t}(0)\leq-\kappa M^{2}). (4.15)

By Proposition 4.1, we may bound \mathds{P}(\Upsilon_{t}(0)\leq-\kappa M^{2}) from above by \exp(-cM^{5}) for all t\geq t_{0}, where c>0 depends only on \nu, \nu^{\prime} and t_{0}. It remains to bound the first term on the right-hand side of the above display. To this end, setting \theta:=(\nu^{\prime}-\nu)/2-\kappa, we write

(supx[M,M]{Υt(x)+νx22}κM2)(supx[M,M]{Υt(x)+νx22}θM2)exp(cM3)\mathds{P}\Big{(}\sup_{x\in\mathds{R}\setminus[-M,M]}\{\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\}\geq-\kappa M^{2}\Big{)}\leq\mathds{P}\Big{(}\sup_{x\in\mathds{R}\setminus[-M,M]}\{\Upsilon_{t}(x)+\frac{\nu^{\prime}x^{2}}{2}\}\geq\theta M^{2}\Big{)}\leq\exp(-cM^{3})

where the last inequality follows from Proposition 4.3 for all t\geq t_{0} and M\geq M_{0}. Combining the above bounds and substituting them into (4.15) completes the proof.
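For completeness, we record the elementary comparison behind the first inequality in the last display: on the event \{\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\geq-\kappa M^{2}\} with |x|\geq M,

```latex
\Upsilon_{t}(x)+\frac{\nu^{\prime}x^{2}}{2}
=\Big(\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\Big)+\frac{(\nu^{\prime}-\nu)x^{2}}{2}
\geq -\kappa M^{2}+\frac{(\nu^{\prime}-\nu)M^{2}}{2}
=\theta M^{2},
```

and \theta>0 precisely because \nu^{\prime}-\nu>2\kappa.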

4.1.3. Proof of Proposition 4.7

We will prove this result using Propositions 4.1 and 4.2 of [DG21]. After properly identifying the notation used in this section with that of [DG21, Propositions 4.1 and 4.2], we see that there exist c=c(t_{0})>0 and s_{0}=s_{0}(t_{0})>0 such that for any \epsilon\in(0,1),

(inf|x|ϵs{Υt(x)Υt(0)}ϵs)\displaystyle\mathds{P}\Big{(}\inf_{|x|\leq\epsilon\sqrt{s}}\{\Upsilon_{t}(x)-\Upsilon_{t}(0)\}\leq-\sqrt{\epsilon}s\Big{)} ecs3/2,\displaystyle\leq e^{-cs^{3/2}},
(sup|x|ϵs/16{Υt(x)Υt(0)}ϵs)\displaystyle\mathds{P}\Big{(}\sup_{|x|\leq\epsilon\sqrt{s}/16}\{\Upsilon_{t}(x)-\Upsilon_{t}(0)\}\geq\sqrt{\epsilon}s\Big{)} ecs3/2.\displaystyle\leq e^{-cs^{3/2}}.

Notice that the above inequalities hold (with possibly a different constant c) if we replace \Upsilon_{t}(x) by \Upsilon_{t}(x)+\frac{x^{2}}{2}. The upshot of these substitutions is the following inequality: there exist c,s_{0} depending on t_{0} such that for all s\geq s_{0} and t\geq t_{0},

(sup|x|ϵs/16|Υt(x)+x22Υt(0)|ϵs)ecs3/2.\displaystyle\mathds{P}\Big{(}\sup_{|x|\leq\epsilon\sqrt{s}/16}\big{|}\Upsilon_{t}(x)+\frac{x^{2}}{2}-\Upsilon_{t}(0)\big{|}\geq\sqrt{\epsilon}s\Big{)}\leq e^{-cs^{3/2}}.

Now (4.8) follows from the above display by the stationarity of the spatial process \Upsilon_{t}(x)+\frac{x^{2}}{2}.

4.1.4. Proof of Proposition 4.8

By the FKG inequality for the KPZ equation (see Proposition 1 in [CQ13] or Proposition 2.7 in [CGH21]), we may write, for any k,k^{\prime}\in\mathds{N} with k<k^{\prime}, t>0, x_{1},\ldots,x_{k}\in[a,b] and x_{k+1},\ldots,x_{k^{\prime}}\in[c,d],

\mathds{P}\Big{(}\bigcap_{l=1}^{k^{\prime}}\Big{\{}\Upsilon_{t}(x_{l})+\frac{\nu x_{l}^{2}}{2}\leq s\Big{\}}\Big{)}\geq\mathds{P}\Big{(}\bigcap_{l=1}^{k}\Big{\{}\Upsilon_{t}(x_{l})+\frac{\nu x_{l}^{2}}{2}\leq s\Big{\}}\Big{)}\cdot\mathds{P}\Big{(}\bigcap_{l=k+1}^{k^{\prime}}\Big{\{}\Upsilon_{t}(x_{l})+\frac{\nu x_{l}^{2}}{2}\leq s\Big{\}}\Big{)}.

Let x_{1},\ldots,x_{k} be the first k terms of an enumeration of the rational numbers in [a,b] and, similarly, let x_{k+1},\ldots,x_{k^{\prime}} be the first k^{\prime}-k terms of an enumeration of the rationals in [c,d]. Letting k\to\infty and k^{\prime}-k\to\infty shows

\displaystyle\mathds{P}\Big{(}\bigcap_{\begin{subarray}{c}x\in[a,b]\cup[c,d]\\ x\in\mathds{Q}\end{subarray}}\Big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\leq s\Big{\}}\Big{)}\geq\mathds{P}\Big{(}\bigcap_{\begin{subarray}{c}x\in[a,b]\\ x\in\mathds{Q}\end{subarray}}\Big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\leq s\Big{\}}\Big{)}\cdot\mathds{P}\Big{(}\bigcap_{\begin{subarray}{c}x\in[c,d]\\ x\in\mathds{Q}\end{subarray}}\Big{\{}\Upsilon_{t}(x)+\frac{\nu x^{2}}{2}\leq s\Big{\}}\Big{)}.

Since \Upsilon_{t}(x) is almost surely continuous in x for every t, (4.9) follows immediately from the above display.

4.2. Proof of Proposition 2.2

To prove this result, we use Theorem 1.4 of [CG20a], which provides upper and lower bounds on the upper tail probabilities of the Cole-Hopf solution of the KPZ equation started from a large class of initial data, including any bounded initial data. Since the Cole-Hopf solution is the logarithm of the solution of (1.1), those tail probability bounds also apply to the parabolic Anderson equation. To apply [CG20a, Theorem 1.4], one needs \tilde{u}_{0,t}(z):=t^{-1/3}\log u_{0}(t^{2/3}z) to satisfy the two conditions of Definition 1.1 of [CG20a]. The first condition requires \tilde{u}_{0,t}(z) to be bounded from above by a parabola in the z-variable for all t\geq t_{0}, and the second requires \tilde{u}_{0,t}(z) to be bounded from below by a finite constant in an interval around 0. If u_{0} is bounded and positive, then both conditions are satisfied by \tilde{u}_{0,t}(\cdot). Hence, by Theorem 1.4 of [CG20a], for any t_{0}>0, there exist c_{1}>c_{2}>0 and s_{0}>0 such that for all t>t_{0} and s>s_{0},

ec1s3/2(u(t,0)>et24+s(t2)1/3+23logt)ec2s3/2.\displaystyle e^{-c_{1}s^{3/2}}\leq\mathds{P}\left(u(t,0)>e^{-\frac{t}{24}+s\left(\frac{t}{2}\right)^{1/3}+\frac{2}{3}\log t}\right)\leq e^{-c_{2}s^{3/2}}. (4.16)

From the above inequalities, (2.2) for x=0 follows by substituting s=2^{1/3}\big((\frac{1}{24}-\gamma)t^{2/3}-\frac{2\log t}{3t^{1/3}}\big). Since u_{0} is bounded and positive, the inequalities in (4.16) also hold for x\neq 0, with constants c_{1},c_{2} that do not depend on x. This fact rests on the convolution formula (2.1) for the one-point distribution of u(t,x), and the proof can be completed by an argument similar to that of [CG20a, Theorem 1.4]. Once (4.16) holds for x\neq 0, we obtain (2.2) for x\neq 0 by the same substitution of s in terms of \gamma as above. This completes the proof.
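For the reader's convenience, one may check that this substitution turns the threshold in (4.16) into e^{-\gamma t}: with the above choice of s,

```latex
s\Big(\frac{t}{2}\Big)^{1/3}
=\Big(\frac{1}{24}-\gamma\Big)t-\frac{2}{3}\log t,
\qquad\text{hence}\qquad
-\frac{t}{24}+s\Big(\frac{t}{2}\Big)^{1/3}+\frac{2}{3}\log t=-\gamma t,
```

so the event in (4.16) reads \{u(t,0)>e^{-\gamma t}\}.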

4.3. Proof of Proposition 2.3

The proof of this proposition is analogous to the proof of Proposition 2.2 above, with Theorem 1.2 of [CG20a] used in place of Theorem 1.4. We skip the details for brevity.

Acknowledgments

The first author thanks Sayan Das for many useful conversations and numerous suggestions on the first draft of this paper. The second author thanks Professor Kunwoo Kim for valuable discussions and for his continuous support and encouragement. The second author was supported by the NRF (National Research Foundation of Korea) grants 2019R1A5A1028324 and 2020R1A2C4002077.

References

  • [BC95] L. Bertini and N. Cancrini. The stochastic heat equation: Feynman-Kac formula and intermittence. Journal of Statistical Physics, 78(5):1377–1401, 1995.
  • [BT89] M. T. Barlow and S. J. Taylor. Fractional dimension of sets in discrete spaces. Journal of Physics A: Mathematical and General, 22(13):2621, 1989.
  • [BT92] M. T. Barlow and S. J. Taylor. Defining fractal subsets of \mathbb{Z}^{d}. Proceedings of the London Mathematical Society, 3(1):125–152, 1992.
  • [CC19] M. Cafasso and T. Claeys. A Riemann-Hilbert approach to the lower tail of the KPZ equation. arXiv preprint arXiv:1910.02493, 2019.
  • [CCKK17] L. Chen, M. Cranston, D. Khoshnevisan, and K. Kim. Dissipation and high disorder. The Annals of Probability, 45(1):82–99, 2017.
  • [CD15] L. Chen and R. C. Dalang. Moments and growth indices for the nonlinear stochastic heat equation with rough initial conditions. The Annals of Probability, 43(6):3006–3051, 2015.
  • [CG20a] I. Corwin and P. Ghosal. KPZ equation tails for general initial data. Electronic Journal of Probability, 25:1–38, 2020.
  • [CG20b] I. Corwin and P. Ghosal. Lower tail of the KPZ equation. Duke Mathematical Journal, 169(7):1329–1395, 2020.
  • [CGH21] I. Corwin, P. Ghosal, and A. Hammond. KPZ equation correlations in time. The Annals of Probability, 49(2):832–876, 2021.
  • [CH16] I. Corwin and A. Hammond. KPZ line ensemble. Probability Theory and Related Fields, 166(1):67–185, 2016.
  • [Che15] X. Chen. Precise intermittency for the parabolic Anderson equation with a (1+1)-dimensional time–space white noise. In Annales de l’IHP Probabilités et statistiques, volume 51, pages 1486–1499, 2015.
  • [Che16] X. Chen. Spatial asymptotics for the parabolic Anderson models with generalized time-space Gaussian noise. Ann. Probab., 44(2):1535–1598, 2016.
  • [CHN16] L. Chen, Y. Hu, and D. Nualart. Regularity and strict positivity of densities for the nonlinear stochastic heat equation. arXiv preprint arXiv:1611.03909, 2016.
  • [CJK13] D. Conus, M. Joseph, and D. Khoshnevisan. On the chaotic character of the stochastic heat equation, before the onset of intermittency. The Annals of Probability, 41(3B):2225–2260, 2013.
  • [CK17] L. Chen and K. Kim. On comparison principle and strict positivity of solutions to the nonlinear stochastic fractional heat equations. In Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, volume 53, pages 358–388. Institut Henri Poincaré, 2017.
  • [CM94] R. A. Carmona and S. A. Molchanov. Parabolic Anderson problem and intermittency. Mem. Amer. Math. Soc., 108(518):viii+125, 1994.
  • [CQ13] I. Corwin and J. Quastel. Crossover distributions at the edge of the rarefaction fan. The Annals of Probability, 41(3A):1243–1314, 2013.
  • [DG21] S. Das and P. Ghosal. Law of iterated logarithms and fractal properties of the KPZ equation. arXiv preprint arXiv:2101.00730, 2021.
  • [DOV18] D. Dauvergne, J. Ortmann, and B. Virag. The directed landscape. arXiv e-prints, page arXiv:1812.00309, December 2018.
  • [Flo14] G. R. M. Flores. On the (strict) positivity of solutions of the stochastic heat equation. The Annals of Probability, 42(4):1635–1643, 2014.
  • [GD05] J. D. Gibbon and C. R. Doering. Intermittency and regularity issues in 3D Navier-Stokes turbulence. Arch. Ration. Mech. Anal., 177(1):115–150, 2005.
  • [GIP15] M. Gubinelli, P. Imkeller, and N. Perkowski. Paracontrolled distributions and singular PDEs. Forum Math. Pi, 3:e6, 75, 2015.
  • [GJ14] P. Gonçalves and M. Jara. Nonlinear fluctuations of weakly asymmetric interacting particle systems. Arch. Ration. Mech. Anal., 212(2):597–644, 2014.
  • [GKM07] J. Gärtner, W. König, and S. Molchanov. Geometric characterization of intermittency in the parabolic Anderson model. Ann. Probab., 35(2):439–499, 2007.
  • [GL20] P. Ghosal and Y. Lin. Lyapunov exponents of the SHE for general initial data. arXiv e-prints, page arXiv:2007.06505, July 2020.
  • [GP17] M. Gubinelli and N. Perkowski. KPZ reloaded. Comm. Math. Phys., 349(1):165–269, 2017.
  • [GT05] J. D. Gibbon and E. S. Titi. Cluster formation in complex multi-scale systems. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 461(2062):3089–3097, 2005.
  • [Hai13] M. Hairer. Solving the KPZ equation. Ann. of Math. (2), 178(2):559–664, 2013.
  • [KKMS20] D. Khoshnevisan, K. Kim, C. Mueller, and S.-Y. Shiu. Dissipation in parabolic SPDEs. Journal of Statistical Physics, 179(2):502–534, 2020.
  • [KKX17] D. Khoshnevisan, K. Kim, and Y. Xiao. Intermittency and multifractality: A case study via parabolic stochastic PDEs. The Annals of Probability, 45(6A):3697–3751, 2017.
  • [KKX18] D. Khoshnevisan, K. Kim, and Y. Xiao. A macroscopic multifractal analysis of parabolic stochastic PDEs. Communications in Mathematical Physics, 360(1):307–346, 2018.
  • [KLMS09] W. König, H. Lacoin, P. Mörters, and N. Sidorova. A two cities theorem for the parabolic Anderson model. Ann. Probab., 37(1):347–392, 2009.
  • [Kup16] A. Kupiainen. Renormalization group and stochastic PDEs. Ann. Henri Poincaré, 17(3):497–535, 2016.
  • [MN08] C. Mueller and D. Nualart. Regularity of the density for the stochastic heat equation. Electronic Journal of Probability, 13:2248–2258, 2008.
  • [MQR16] K. Matetski, J. Quastel, and D. Remenik. The KPZ fixed point. arXiv e-prints, page arXiv:1701.00018, December 2016.
  • [Mue91] C. Mueller. On the support of solutions to the heat equation with noise. Stochastics: An International Journal of Probability and Stochastic Processes, 37(4):225–245, 1991.
  • [Qua12] J. Quastel. Introduction to KPZ. In Current developments in mathematics, 2011, pages 125–194. Int. Press, Somerville, MA, 2012.
  • [Tsa18] L.-C. Tsai. Exact lower tail large deviations of the KPZ equation. arXiv preprint arXiv:1809.03410, 2018.
  • [Wal86] J. B. Walsh. An introduction to stochastic partial differential equations. In École d’Été de Probabilités de Saint Flour XIV-1984, pages 265–439. Springer, 1986.
  • [ZTPSM00] M. G. Zimmermann, R. Toral, O. Piro, and M. San Miguel. Stochastic spatiotemporal intermittency and noise-induced transition to an absorbing phase. Phys. Rev. Lett., 85:3612–3615, Oct 2000.