
Stability and regularization for ill-posed Cauchy problem of a stochastic parabolic differential equation

Fangfang Dou (Corresponding author. Email: [email protected]), School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, China; Peimin Lü, School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, China; Yu Wang, School of Mathematics, Southwest Jiaotong University, Chengdu, China.
Abstract

In this paper, we investigate an ill-posed Cauchy problem involving a stochastic parabolic equation. We first establish a Carleman estimate for this equation. Leveraging this estimate, we derive the conditional stability and the convergence rate of the Tikhonov regularization method for the aforementioned ill-posed Cauchy problem. To complement our theoretical analysis, we employ kernel-based learning theory to implement the proposed Tikhonov regularization method on several numerical examples.


2020 Mathematics Subject Classification. 35R30, 65N21, 60H15.


Key Words. Carleman estimate, Cauchy problem of stochastic parabolic differential equation, conditional stability, regularization, numerical approximation.

1 Introduction

To begin with, we introduce some notation concerning stochastic analysis. Further details can be found in [24].

Let $(\Omega,{\cal F},{\mathbb{F}},{\mathbb{P}})$ with ${\mathbb{F}}=\{{\cal F}_{t}\}_{t\geq 0}$ be a complete filtered probability space on which a one-dimensional standard Brownian motion $\{W(t)\}_{t\geq 0}$ is defined. Let $H$ be a Fréchet space. We denote by $L_{{\mathbb{F}}}^{2}(0,T;H)$ the Fréchet space consisting of all $H$-valued ${\mathbb{F}}$-adapted processes $X(\cdot)$ such that $\mathbb{E}(|X(\cdot)|_{L^{2}(0,T;H)}^{2})<\infty$; by $L_{{\mathbb{F}}}^{\infty}(0,T;H)$ the Fréchet space consisting of all $H$-valued ${\mathbb{F}}$-adapted bounded processes; and by $L_{{\mathbb{F}}}^{2}(\Omega;C([0,T];H))$ the Fréchet space consisting of all $H$-valued ${\mathbb{F}}$-adapted continuous processes $X$ such that $\mathbb{E}(|X|_{C([0,T];H)}^{2})<\infty$. All of these spaces are equipped with the canonical quasi-norms; if $H$ is a Banach space, they are Banach spaces equipped with the canonical norms.

For simplicity, we use the notation $y_{i}\equiv y_{i}(x)=\frac{\partial y(x)}{\partial x_{i}}$, where $x_{i}$ is the $i$-th coordinate of a generic point $x=(x_{1},\cdots,x_{n})$ in ${\mathbb{R}}^{n}$. In a similar manner, we use the notation $z_{i}$, $v_{i}$, etc. for the partial derivatives of $z$ and $v$ with respect to $x_{i}$. Also, we denote the scalar product in ${\mathbb{R}}^{n}$ by $\langle\cdot,\cdot\rangle$, and use $C$ to denote a generic positive constant independent of the solution $y$, which may change from line to line.

Let $T>0$, let $G\subset{\mathbb{R}}^{n}$ ($n\in{\mathbb{N}}$) be a given bounded domain with $C^{4}$ boundary $\partial G$, and let $\Gamma$ be a given nonempty open subset of $\partial G$.

Let $a_{1}\in L^{\infty}_{\mathbb{F}}(0,T;L^{\infty}(G;{\mathbb{R}}^{n}))$, $a_{2}\in L^{\infty}_{\mathbb{F}}(0,T;L^{\infty}(G))$ and $a_{3}\in L^{\infty}_{\mathbb{F}}(0,T;W^{1,\infty}(G))$. We assume $f\in L^{2}_{\mathbb{F}}(0,T;L^{2}(G))$, $g_{1}\in H^{1}(\Gamma)$, $g_{2}\in L^{2}_{\mathbb{F}}(0,T;L^{2}(\Gamma))$, and that $(a^{ij})_{1\leq i,j\leq n}:\Omega\times[0,T]\times\overline{G}\to{\mathbb{R}}^{n\times n}$ satisfies

(H1) $a^{ij}\in L_{{\mathbb{F}}}^{2}(\Omega;C^{1}([0,T];W^{2,\infty}(G)))$ and $a^{ij}=a^{ji}$;

(H2) $\sum_{i,j=1}^{n}a^{ij}(\omega,t,x)\xi^{i}\xi^{j}\geq s_{0}|\xi|^{2}$ for all $(\omega,t,x,\xi)\equiv(\omega,t,x,\xi^{1},\cdots,\xi^{n})\in\Omega\times(0,T)\times G\times{\mathbb{R}}^{n}$ and some $s_{0}>0$.

Now, the Cauchy problem of the forward stochastic parabolic differential equation can be described as follows.

\begin{cases}\displaystyle dy-\sum^{n}_{i,j=1}(a^{ij}y_{i})_{j}dt=[\langle a_{1},\nabla y\rangle+a_{2}y+f]dt+a_{3}y\,dW(t)&\mbox{ in }(0,T)\times G,\\ \displaystyle y=g_{1}&\mbox{ on }(0,T)\times\Gamma,\\ \displaystyle\frac{\partial y}{\partial\nu}=g_{2}&\mbox{ on }(0,T)\times\Gamma.\end{cases} (1.1)
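Although the analysis below is purely theoretical, the forward dynamics in (1.1) can be illustrated numerically. The following is a minimal sketch, not the paper's method: an explicit Euler–Maruyama finite-difference scheme for a one-dimensional special case of (1.1) with $a_{1}=a_{2}=f=0$, $a^{11}=1$, and homogeneous Dirichlet data; the coefficient value, grids, and initial datum are all illustrative assumptions.

```python
import numpy as np

# Sketch of a 1-D special case of (1.1):  dy = y_xx dt + a3 * y dW(t)
# on (0,T) x (0,1), driven by a single Brownian motion as in the paper.
# All numerical parameters below are assumptions for illustration only.

def simulate(a3=0.3, T=0.1, nx=51, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = 0.25 * dx**2                     # CFL-type restriction for the explicit scheme
    nt = int(T / dt)
    y = np.sin(np.pi * x)                 # smooth initial datum (assumed)
    for _ in range(nt):
        lap = np.zeros_like(y)
        lap[1:-1] = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2
        dW = rng.normal(0.0, np.sqrt(dt)) # one Brownian increment per step
        y = y + lap * dt + a3 * y * dW    # Euler-Maruyama update
        y[0] = y[-1] = 0.0                # homogeneous Dirichlet boundary for the demo
    return x, y
```

The multiplicative noise $a_{3}y\,dW$ enters every grid point with the same Brownian increment, matching the one-dimensional $W(t)$ in (1.1).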

Let $\mathbf{G}_{\Gamma}\buildrel\triangle\over{=}\{G^{\prime}\subset G\,|\,\partial G^{\prime}\cap\partial G\subset\Gamma\}$ and

H_{\Gamma}^{2}(G)\buildrel\Delta\over{=}\{\eta\in H^{2}_{loc}(G):\ \eta|_{G^{\prime}}\in H^{2}(G^{\prime})\ \forall G^{\prime}\in\mathbf{G}_{\Gamma},\ \eta|_{\Gamma}=g_{1},\ \partial_{\nu}\eta|_{\Gamma}=g_{2}\}.

The aim of this paper is to study the Cauchy problem with lateral data for the stochastic parabolic differential equation:

Problem (CP). Find a function $y\in L_{{\mathbb{F}}}^{2}(\Omega;C([0,T];L^{2}_{loc}(G)))\cap L_{{\mathbb{F}}}^{2}(0,T;H_{\Gamma}^{2}(G))$ that satisfies system (1.1).

Remark 1.1.

It would be quite interesting to study the general case where $g_{1}\in L^{2}_{\mathbb{F}}(0,T;H^{1}(\Gamma))$. However, this would lead to the term $\mathbb{E}\int_{0}^{T}\int_{\Gamma}\sum_{i,j=1}^{n}\theta\eta a^{ij}v_{i}\nu^{j}\,dy\,d\Gamma$ in (2.28), which we do not know how to handle.

In certain applications involving diffusive, thermal, and heat transfer problems, measuring some boundary data can be challenging. For instance, in nuclear reactors and steel furnaces in the steel industry, the interior boundary can be difficult to measure. Similarly, the use of liquid crystal thermography in visualization can pose the same problem. To address this issue, engineers attempt to reconstruct the status of the boundary using measurements from the accessible boundary. This, in turn, creates the Cauchy problem for parabolic equations. Due to the requirements in real applications, numerous researchers have focused on solving the Cauchy problem of parabolic differential equations, particularly on the inverse problems for deterministic parabolic equations, as seen in [14, 19, 17, 16, 15, 22, 31, 33], and their respective references. Among these works, some have studied the identification of coefficients in parabolic equations through lateral Cauchy observations, such as uniqueness and stability [14, 33, 19, 31], and numerical reconstruction [15], assuming that the initial value is known. Meanwhile, the determination of the initial value has been considered in [17, 16, 22] under the assumption that all coefficients in the governing equation are known.

Stochastic parabolic equations have a wide range of applications for simulating various behaviors of stochastic models utilized in numerous fields, such as the random growth of bacteria populations, the propagation of electric potential in neuron models, and physical systems subject to thermal fluctuations (e.g., [18, 24]). In addition, they can be regarded as simplified models for complex phenomena such as turbulence, intermittence, and large-scale structure (e.g., [8]). Given the significant applications of these models, stochastic parabolic equations have been extensively studied in both physics and mathematics. It is therefore natural to study the Cauchy problem for stochastic parabolic equations in such situations. However, due to the complexity of stochastic parabolic equations, some tools become ineffective for these problems, and research on inverse problems for stochastic parabolic differential equations is relatively scarce. In [1], the author proved backward uniqueness of solutions to stochastic semilinear parabolic equations and tamed Navier-Stokes equations driven by linearly multiplicative Gaussian noises, via the logarithmic convexity property known to hold for solutions to linear evolution equations in Hilbert spaces with self-adjoint principal parts. In [11, 27], the authors studied inverse random source problems for the stochastic time-fractional diffusion equation driven by a spatial Gaussian random field, proving uniqueness and representation results for the inverse problems and proposing a numerical method based on Fourier methods and Tikhonov regularization. Carleman estimates play an important role in the study of inverse problems for stochastic parabolic differential equations, such as inverse source problems [23, 35], determination of the history of stochastic diffusion processes [4, 23, 30, 34], and unique continuation properties [6, 32].
We refer the reader to [25] for a survey on some recent advances in Carleman estimates and their applications to inverse problems for stochastic partial differential equations.

In this paper, our objective is to solve Problem (CP), i.e., we aim to retrieve the solution of equation (1.1) with observed data from the lateral boundary. To this end, we first prove the conditional stability based on a new Carleman estimate for the stochastic parabolic equation (1.1). Then we construct a Tikhonov functional for the Cauchy problem based on the Tikhonov regularization strategy and prove the uniqueness of the minimizer of the Tikhonov functional, as well as the convergence rate for the optimization problem using variational principles, Riesz theorems, and Carleman estimates established previously.

Generally, the optimization problem for the Tikhonov functional is difficult to solve in the study of inverse problems for stochastic partial differential equations (SPDEs), because it involves solving the adjoint problem of the original problem, which is challenging to handle. In fact, one of the primary differences between stochastic parabolic equations and deterministic parabolic equations is that at least one partial derivative of the solution does not exist, making it impossible to express the solution of the equation explicitly. Fortunately, we can express the mild solution of a stochastic parabolic equation using the fundamental solution of the corresponding deterministic equation [7, 9]. This suggests that we can use kernel-based theory to numerically solve the minimization problem for the Tikhonov functional without computing the adjoint problem. Furthermore, we can solve the problem in one step without iteration, which reduces the computational cost to some extent. This technique has gained attention in the study of numerical computation for ordinary and partial differential equations, and the use of fundamental solutions as kernels has proven effective for solving inverse problems in deterministic evolution partial differential equations. To the best of our knowledge, our work is the first attempt to apply a regularization method combined with kernel-based theory to solve inverse problems for stochastic parabolic differential equations.
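The one-step, adjoint-free character of the kernel-based Tikhonov approach can be sketched as follows. This is an illustrative toy, not the paper's implementation: the Gaussian kernel, collocation points, and synthetic data below are assumptions (the paper uses fundamental solutions as kernels). Representing the unknown as a kernel expansion and minimizing $|Kc-b|^{2}+\lambda\,c^{\top}Kc$ leads, for symmetric positive definite $K$, to the single regularized linear system $(K+\lambda I)c=b$.

```python
import numpy as np

# Toy sketch of kernel-based Tikhonov regularization: one linear solve,
# no adjoint problem and no iteration. Kernel, nodes, and data are
# hypothetical stand-ins for the paper's fundamental-solution kernels.

def gaussian_kernel(s, t, eps=50.0):
    # Hypothetical smooth kernel; eps controls the kernel width.
    return np.exp(-eps * (s[:, None] - t[None, :]) ** 2)

def tikhonov_fit(s, b, lam=1e-8):
    K = gaussian_kernel(s, s)
    # Minimizer of |K c - b|^2 + lam * c^T K c for SPD K.
    c = np.linalg.solve(K + lam * np.eye(len(s)), b)
    return c, K

s = np.linspace(0.0, 1.0, 30)   # collocation points (assumed)
b = np.sin(2.0 * np.pi * s)     # synthetic "observed" data (assumed)
c, K = tikhonov_fit(s, b)
recon = K @ c                   # reconstruction at the collocation points
```

The regularization parameter $\lambda$ trades data fit against stability, mirroring the role of the Tikhonov parameter in the convergence analysis of Section 3.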

The strong solution of equation (1.1) is useful in the proof of the convergence rate of the regularization method; thus, we recall it here.

Definition 1.1.

We call $y\in L_{{\mathbb{F}}}^{2}(\Omega;C([0,T];L^{2}_{loc}(G)))\cap L_{{\mathbb{F}}}^{2}(0,T;H_{\Gamma}^{2}(G))$ a solution to equation (1.1) if for any $t\in[0,T]$ and a.e. $x\in G^{\prime}\in\mathbf{G}_{\Gamma}$, it holds that

\begin{array}{ll}\displaystyle\quad y(t,x)-y(0,x)\\ \displaystyle=\int_{0}^{t}\Big\{\sum^{n}_{i,j=1}\big(a^{ij}(s,x)y_{i}(s,x)\big)_{j}+\big[\langle a_{1}(s,x),\nabla y(s,x)\rangle+a_{2}(s,x)y(s,x)+f(s,x)\big]\Big\}ds\\ \displaystyle\quad+\int_{0}^{t}a_{3}(s,x)y(s,x)dW(s),\qquad{\mathbb{P}}\mbox{-a.s.}\end{array} (1.2)

It should be noted that the assumption regarding the solution, namely that $y\in L_{{\mathbb{F}}}^{2}(\Omega;C([0,T];L^{2}_{loc}(G)))\cap L_{{\mathbb{F}}}^{2}(0,T;H_{\Gamma}^{2}(G))$, implies a higher degree of smoothness than is strictly necessary for establishing the Hölder stability of the Cauchy problem. However, this additional smoothness is required to facilitate the regularization process.

The remainder of this paper is organized as follows. Section 2 presents a proof of a Hölder-type conditional stability result, along with the Carleman estimate that serves as the key tool in this proof. The regularization method, based on the Tikhonov regularization strategy, is introduced in Section 3, where we establish both the uniqueness and the convergence rate of the regularized solution. Section 4 provides numerically simulated reconstructions aided by kernel-based learning theory.

2 Conditional Stability

In this section, we prove a stability estimate for the Cauchy problem.

Theorem 2.1.

(Hölder stability estimate). For any given $G^{\prime}\subset\subset G$ and $\varepsilon\in\big(0,\frac{T}{2}\big)$, there exist $\delta_{0}\in(0,1)$, $\beta\in(0,1)$ and a constant $C>0$ such that if, for $\delta\in(0,\delta_{0})$,

\max\{|f|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G))},\ |g_{1}|_{H^{1}(\Gamma)},\ |g_{2}|_{L_{\mathbb{F}}^{2}(0,T;L^{2}(\Gamma))}\}\leq\delta, (2.1)

then

{\mathbb{E}}\int_{\varepsilon}^{T-\varepsilon}\int_{G^{\prime}}\big(y^{2}+|\nabla y|^{2}\big)dxdt\leq C\big(1+|y|^{2}_{L^{2}_{\mathbb{F}}(0,T;H^{1}(G))}\big)\delta^{2\beta},\quad\forall\,\delta\in(0,\delta_{0}). (2.2)

We first recall the following exponentially weighted energy identity, which will play a key role in the sequel.

Lemma 2.1.

[29, Theorem 3.1] Let $n$ be a positive integer,

b^{ij}=b^{ji}\in L_{{\mathbb{F}}}^{2}(\Omega;C^{1}([0,T];W^{2,\infty}({\mathbb{R}}^{n}))),\qquad i,j=1,2,\cdots,n, (2.3)

and $\ell\in C^{1,4}((0,T)\times{\mathbb{R}}^{n})$, $\Psi\in C^{1,2}((0,T)\times{\mathbb{R}}^{n})$. Assume $u$ is an $H^{2}({\mathbb{R}}^{n})$-valued continuous semimartingale. Set $\theta=e^{\ell}$ and $v=\theta u$. Then for a.e. $x\in{\mathbb{R}}^{n}$ and ${\mathbb{P}}$-a.s. $\omega\in\Omega$,

\begin{array}{ll}
\displaystyle 2\int_{0}^{T}\theta\Big[-\sum^{n}_{i,j=1}(b^{ij}v_{i})_{j}+{\cal A}v\Big]\Big[du-\sum^{n}_{i,j=1}(b^{ij}u_{i})_{j}dt\Big]+2\int_{0}^{T}\sum^{n}_{i,j=1}(b^{ij}v_{i}dv)_{j}\\
\displaystyle\quad+2\int_{0}^{T}\sum^{n}_{i,j=1}\Big[\sum_{i^{\prime},j^{\prime}}\Big(2b^{ij}b^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}v_{i}v_{j^{\prime}}-b^{ij}b^{i^{\prime}j^{\prime}}\ell_{i}v_{i^{\prime}}v_{j^{\prime}}\Big)+\Psi b^{ij}v_{i}v-b^{ij}\Big({\cal A}\ell_{i}+\frac{\Psi_{i}}{2}\Big)v^{2}\Big]_{j}dt\\
\displaystyle=2\int_{0}^{T}\sum^{n}_{i,j=1}\Big\{\sum_{i^{\prime},j^{\prime}}\Big[2b^{ij^{\prime}}\Big(b^{i^{\prime}j}\ell_{i^{\prime}}\Big)_{j^{\prime}}-\Big(b^{ij}b^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}\Big)_{j^{\prime}}\Big]-\frac{b_{t}^{ij}}{2}+\Psi b^{ij}\Big\}v_{i}v_{j}dt\\
\displaystyle\quad+\int_{0}^{T}{\cal B}v^{2}dt+2\int_{0}^{T}\Big[-\sum^{n}_{i,j=1}(b^{ij}v_{i})_{j}+{\cal A}v\Big]\Big[-\sum^{n}_{i,j=1}(b^{ij}v_{i})_{j}+({\cal A}-\ell_{t})v\Big]dt\\
\displaystyle\quad+\Big(\sum^{n}_{i,j=1}b^{ij}v_{i}v_{j}+{\cal A}v^{2}\Big)\Big|_{0}^{T}-\int_{0}^{T}\theta^{2}\sum^{n}_{i,j=1}b^{ij}\big[(du_{i}+\ell_{i}du)(du_{j}+\ell_{j}du)\big]-\int_{0}^{T}\theta^{2}{\cal A}(du)^{2},
\end{array} (2.4)

where

\left\{\begin{array}{ll}\displaystyle{\cal A}\buildrel\triangle\over{=}-\sum^{n}_{i,j=1}(b^{ij}\ell_{i}\ell_{j}-b^{ij}_{j}\ell_{i}-b^{ij}\ell_{ij})-\Psi,\\ \displaystyle{\cal B}\buildrel\triangle\over{=}2\Big[{\cal A}\Psi-\sum^{n}_{i,j=1}({\cal A}b^{ij}\ell_{i})_{j}\Big]-{\cal A}_{t}-\sum^{n}_{i,j=1}(b^{ij}\Psi_{j})_{i}.\end{array}\right. (2.5)

In the sequel, for a positive integer $p$, we denote by $O(\mu^{p})$ a function of order $\mu^{p}$ for large $\mu$ (independent of $\lambda$), and by $O_{\mu}(\lambda^{p})$ a function of order $\lambda^{p}$ for fixed $\mu$ and large $\lambda$.

Proof of Theorem 2.1.  We borrow some ideas from [21].

Take a bounded domain $J\subset{\mathbb{R}}^{n}$ such that $\partial J\cap\overline{G}=\Gamma$ and such that $\widetilde{G}=J\cup G\cup\Gamma$ enjoys a $C^{4}$ boundary $\partial\widetilde{G}$. Then we have

G\subset\widetilde{G},\quad\overline{\partial G\cap\widetilde{G}}\subset\Gamma,\quad\partial G\setminus\Gamma\subset\partial\widetilde{G}\quad\mbox{and }\widetilde{G}\setminus G\mbox{ contains some nonempty open subset.} (2.6)

Let $G_{0}\subset\subset\widetilde{G}\setminus G$ be an open subdomain. Then there is a $\psi\in C^{4}(\widetilde{G})$ satisfying (see [29, Lemma 5.1] for example)

\left\{\begin{array}{ll}\displaystyle\psi>0&\mbox{ in }\widetilde{G},\\ \displaystyle\psi=0&\mbox{ on }\partial\widetilde{G},\\ \displaystyle|\nabla\psi|>0&\mbox{ in }\widetilde{G}\setminus G_{0}.\end{array}\right. (2.7)

Since $G^{\prime}\subset\subset G$, we can choose a sufficiently large $N>0$ such that

G^{\prime}\subset\Big\{x\in\widetilde{G}:\ \psi(x)>\frac{4}{N}|\psi|_{L^{\infty}(\widetilde{G})}\Big\}\cap G. (2.8)

Further, let $\rho=\frac{1}{\sqrt{2}}\big(\frac{1}{2}-\kappa\big)T>0$ for a fixed $\kappa\in\big(0,\frac{1}{2}\big)$; then there exists a positive number $c$ such that

c\rho^{2}<|\psi|_{L^{\infty}(\widetilde{G})}<2c\rho^{2}. (2.9)

Then, define

\phi(t,x)=\psi(x)-c(t-t_{0})^{2},\quad\alpha(t,x)=e^{\mu\phi(t,x)} (2.10)

for a fixed $t_{0}\in[\sqrt{2}\rho,\,T-\sqrt{2}\rho]$, and denote $\beta_{k}=\beta_{k}(\mu)=e^{\mu\big[\frac{k}{N}|\psi|_{L^{\infty}(\widetilde{G})}-\frac{c\rho^{2}}{N}\big]}$, $k=1,2,3,4$. Set

Q_{k}=\{(t,x):\,x\in\overline{G},\;\alpha(t,x)>\beta_{k}\},\quad k=1,2,3,4. (2.11)

Clearly, $Q_{k}$ is independent of $\mu$. Moreover, $\psi(x)>\frac{4}{N}|\psi|_{L^{\infty}(\widetilde{G})}$ for any $(t,x)\in\big(t_{0}-\frac{\rho}{\sqrt{N}},t_{0}+\frac{\rho}{\sqrt{N}}\big)\times G^{\prime}$, and thus

\psi(x)-c(t-t_{0})^{2}>\frac{4}{N}|\psi|_{L^{\infty}(\widetilde{G})}-\frac{c\rho^{2}}{N}.

Hence, we see that $\alpha(t,x)>\beta_{4}$ for $(t,x)\in\big(t_{0}-\frac{\rho}{\sqrt{N}},t_{0}+\frac{\rho}{\sqrt{N}}\big)\times G^{\prime}$, and thus $Q_{4}\supset\big(t_{0}-\frac{\rho}{\sqrt{N}},t_{0}+\frac{\rho}{\sqrt{N}}\big)\times G^{\prime}$. On the other hand, for any $(t,x)\in Q_{1}$,

\psi(x)-c(t-t_{0})^{2}>\frac{1}{N}|\psi|_{L^{\infty}(\widetilde{G})}-\frac{c\rho^{2}}{N}.

This yields

|\psi|_{L^{\infty}(\widetilde{G})}-\frac{1}{N}|\psi|_{L^{\infty}(\widetilde{G})}+\frac{c\rho^{2}}{N}>c(t-t_{0})^{2}.

Together with (2.9) we have

2\Big(1-\frac{1}{N}\Big)c\rho^{2}+\frac{c\rho^{2}}{N}>c(t-t_{0})^{2}.

Therefore, we conclude

\Big(t_{0}-\frac{\rho}{\sqrt{N}},t_{0}+\frac{\rho}{\sqrt{N}}\Big)\times G^{\prime}\subset Q_{1}\subset(t_{0}-\sqrt{2}\rho,t_{0}+\sqrt{2}\rho)\times\overline{G}. (2.12)
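The nesting of the level sets $Q_{k}$ underlying (2.12) can be checked numerically, since $\beta_{1}<\beta_{2}<\beta_{3}<\beta_{4}$ forces $Q_{1}\supset Q_{2}\supset Q_{3}\supset Q_{4}$. The following is an illustrative sketch: the weight $\psi$ and all constants below are hypothetical stand-ins, not the paper's choices.

```python
import numpy as np

# Sketch of the Carleman weight construction: alpha = exp(mu*phi) with
# phi = psi - c*(t - t0)^2, and level sets Q_k = {alpha > beta_k}.
# psi and the constants (mu, N, c, rho, t0) are illustrative assumptions.

def carleman_levels(mu=2.0, N=8.0, c=1.0, rho=0.5, t0=1.0):
    x = np.linspace(0.0, 1.0, 201)            # spatial grid on G = (0,1)
    t = np.linspace(t0 - 1.0, t0 + 1.0, 201)  # time window around t0
    X, T = np.meshgrid(x, t)
    psi = X * (1.0 - X) + 0.1                 # hypothetical positive weight
    psi_max = psi.max()
    phi = psi - c * (T - t0) ** 2
    alpha = np.exp(mu * phi)
    betas = [np.exp(mu * (k / N * psi_max - c * rho**2 / N)) for k in (1, 2, 3, 4)]
    Q = [alpha > b for b in betas]            # indicator arrays of Q_1..Q_4
    return betas, Q

betas, Q = carleman_levels()
# beta_k increases with k, so the level sets shrink: Q_1 ⊇ Q_2 ⊇ Q_3 ⊇ Q_4.
nested = all(np.all(Q[k + 1] <= Q[k]) for k in range(3))
```

Because the inequality $\alpha>\beta_{k}$ is equivalent to $\phi>\frac{1}{N}\big(k|\psi|_{L^{\infty}}-c\rho^{2}\big)$, the sets $Q_{k}$ do not depend on $\mu$, exactly as remarked after (2.11).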

Next, for any $(t,x)\in\partial Q_{1}$, we know $x\in\overline{G}$ and $\alpha(t,x)\geq\beta_{1}$. On this set, if $x\in G$, then $\alpha(t,x)=\beta_{1}$, and if $x\in\partial G$, it must hold that $x\in\Gamma$. Indeed, if $x\in\partial G\setminus\Gamma$, from $\partial G\setminus\Gamma\subset\partial\widetilde{G}$ we get $\psi(x)=0$. On the other hand, since $\alpha(t,x)\geq\beta_{1}$, we know

\psi(x)-c(t-t_{0})^{2}=-c(t-t_{0})^{2}\geq\frac{1}{N}|\psi|_{L^{\infty}(\widetilde{G})}-\frac{c\rho^{2}}{N}.

Thus,

0\leq c(t-t_{0})^{2}\leq\frac{1}{N}\big(c\rho^{2}-|\psi|_{L^{\infty}(\widetilde{G})}\big),

which contradicts (2.9). Therefore, we have

\partial Q_{1}=\Sigma_{1}\cup\Sigma_{2}, (2.13)

where

\Sigma_{1}\subset[0,T]\times\Gamma,\qquad\Sigma_{2}=\{(t,x)\in[0,T]\times G:\,\alpha(t,x)=\beta_{1}\}. (2.14)

Let $\eta\in C_{0}^{\infty}(Q_{2})$ be such that $0\leq\eta\leq 1$ and $\eta=1$ in $Q_{3}$. For any $y$ solving (1.1), let $z=\eta y$; then $z$ solves

\left\{\begin{array}{ll}\displaystyle dz-\sum^{n}_{i,j=1}(a^{ij}z_{i})_{j}dt=\big(\langle a_{1},\nabla z\rangle+a_{2}z+\tilde{f}+\eta f\big)dt+a_{3}z\,dW(t)&\mbox{ in }Q_{1},\\ \displaystyle z=\frac{\partial z}{\partial\nu}=0&\mbox{ on }\Sigma_{2}.\end{array}\right. (2.15)

Here $\tilde{f}=-\sum^{n}_{i,j=1}(a^{ij}_{j}\eta_{i}y+a^{ij}\eta_{ij}y+2a^{ij}\eta_{i}y_{j})-\langle a_{1},\nabla\eta\rangle y+\eta_{t}y\in L^{2}_{\mathbb{F}}(0,T;L^{2}(G))$, and $\tilde{f}$ is supported in $Q_{2}\setminus Q_{3}$.

Applying Lemma 2.1 to (2.15) with $u=z$, $b^{ij}=a^{ij}$, $\ell=\lambda\alpha$ and $\Psi=2\sum^{n}_{i,j=1}a^{ij}\ell_{ij}$, integrating over $G$ and taking expectation, and noting that $z$ is supported in $Q_{1}$, we see that

\begin{array}{ll}
\displaystyle 2{\mathbb{E}}\int_{Q_{1}}\theta\Big[-\sum^{n}_{i,j=1}(a^{ij}v_{i})_{j}+{\cal A}v\Big]\Big[dz-\sum^{n}_{i,j=1}(a^{ij}z_{i})_{j}dt\Big]dx+2{\mathbb{E}}\int_{Q_{1}}\sum^{n}_{i,j=1}(a^{ij}v_{i}dv)_{j}dx\\
\displaystyle\quad+2{\mathbb{E}}\int_{Q_{1}}\sum^{n}_{i,j=1}\Big[\sum_{i^{\prime},j^{\prime}}\Big(2a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}v_{i}v_{j^{\prime}}-a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i}v_{i^{\prime}}v_{j^{\prime}}\Big)+\Psi a^{ij}v_{i}v-a^{ij}\Big({\cal A}\ell_{i}+\frac{\Psi_{i}}{2}\Big)v^{2}\Big]_{j}dxdt\\
\displaystyle\geq 2\sum^{n}_{i,j=1}{\mathbb{E}}\int_{Q_{1}}c^{ij}v_{i}v_{j}dxdt+{\mathbb{E}}\int_{Q_{1}}{\cal B}v^{2}dxdt+{\mathbb{E}}\int_{Q_{1}}\Big|-\sum^{n}_{i,j=1}(a^{ij}v_{i})_{j}+{\cal A}v\Big|^{2}dxdt\\
\displaystyle\quad-{\mathbb{E}}\int_{Q_{1}}\theta^{2}\sum^{n}_{i,j=1}a^{ij}(dz_{i}+\ell_{i}dz)(dz_{j}+\ell_{j}dz)dx-{\mathbb{E}}\int_{Q_{1}}\theta^{2}{\cal A}(dz)^{2}dx,
\end{array} (2.16)

where

\left\{\begin{array}{ll}
\displaystyle{\cal A}=-\sum^{n}_{i,j=1}\big(a^{ij}\ell_{i}\ell_{j}-a^{ij}_{j}\ell_{i}-a^{ij}\ell_{ij}\big)-\Psi,\\
\displaystyle{\cal B}=2\Big[{\cal A}\Psi-\sum^{n}_{i,j=1}\big({\cal A}a^{ij}\ell_{i}\big)_{j}\Big]-{\cal A}_{t}-\sum^{n}_{i,j=1}\big(a^{ij}\Psi_{j}\big)_{i}-\ell_{t}^{2},\\
\displaystyle c^{ij}=\sum_{i^{\prime},j^{\prime}}\Big[2a^{ij^{\prime}}\big(a^{i^{\prime}j}\ell_{i^{\prime}}\big)_{j^{\prime}}-\big(a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}\big)_{j^{\prime}}\Big]-\frac{a_{t}^{ij}}{2}+\Psi a^{ij}.
\end{array}\right. (2.17)

Now, we estimate ${\cal A}$, ${\cal B}$ and $c^{ij}$.

\begin{array}{ll}
\displaystyle{\cal A}&\displaystyle=-\sum^{n}_{i,j=1}\big(a^{ij}\ell_{i}\ell_{j}-a^{ij}_{j}\ell_{i}-a^{ij}\ell_{ij}\big)-\Psi\\
&\displaystyle=-\lambda^{2}\mu^{2}\alpha^{2}\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}+\lambda\mu\alpha\sum^{n}_{i,j=1}a^{ij}_{j}\psi_{i}-\lambda\mu^{2}\alpha\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}-\lambda\mu\alpha\sum^{n}_{i,j=1}a^{ij}\psi_{ij}\\
&\displaystyle=-\lambda^{2}\mu^{2}\alpha^{2}\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}+\lambda\alpha O(\mu^{2}).
\end{array} (2.18)

To estimate ${\cal B}$, we first do some computations. For $\Psi$, we have

\Psi=2\sum^{n}_{i,j=1}a^{ij}\big(\lambda\mu^{2}\alpha\psi_{i}\psi_{j}+\lambda\mu\alpha\psi_{ij}\big)=2\lambda\mu^{2}\alpha\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}+\lambda\alpha O(\mu). (2.19)

Next, we have

\ell_{i^{\prime}j^{\prime}j}=\lambda\mu^{3}\alpha\psi_{i^{\prime}}\psi_{j^{\prime}}\psi_{j}+\lambda\alpha O(\mu^{2}),\quad\ell_{i^{\prime}j^{\prime}ij}=\lambda\mu^{4}\alpha\psi_{i^{\prime}}\psi_{j^{\prime}}\psi_{i}\psi_{j}+\lambda\alpha O(\mu^{3}).

Therefore, we get

\Psi_{j}=2\sum_{i^{\prime},j^{\prime}=1}^{n}\big(a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}}\big)_{j}=2\sum_{i^{\prime},j^{\prime}}\big(a^{i^{\prime}j^{\prime}}_{j}\ell_{i^{\prime}j^{\prime}}+a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}j}\big)=2\lambda\mu^{3}\alpha\sum_{i^{\prime},j^{\prime}=1}^{n}a^{i^{\prime}j^{\prime}}\psi_{i^{\prime}}\psi_{j^{\prime}}\psi_{j}+\lambda\alpha O(\mu^{2}),

and

\Psi_{ij}=2\sum_{i^{\prime},j^{\prime}=1}^{n}\big(a^{i^{\prime}j^{\prime}}_{ij}\ell_{i^{\prime}j^{\prime}}+a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}ij}+2a^{i^{\prime}j^{\prime}}_{j}\ell_{i^{\prime}j^{\prime}i}\big)=2\lambda\mu^{4}\alpha\sum_{i^{\prime},j^{\prime}=1}^{n}a^{i^{\prime}j^{\prime}}\psi_{i^{\prime}}\psi_{j^{\prime}}\psi_{i}\psi_{j}+\lambda\alpha O(\mu^{3}).

Hence, we find

-\sum^{n}_{i,j=1}\big(a^{ij}\Psi_{j}\big)_{i}=-\sum^{n}_{i,j=1}\big(a^{ij}_{i}\Psi_{j}+a^{ij}\Psi_{ji}\big)=-2\lambda\mu^{4}\alpha\Big(\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}\Big)^{2}+\lambda\alpha O(\mu^{3}). (2.20)

Further, from (2.18) and (2.19), we have

{\cal A}\Psi=-2\lambda^{3}\mu^{4}\alpha^{3}\Big(\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}\Big)^{2}+\lambda^{3}\alpha^{3}O(\mu^{3})+\lambda^{2}\alpha^{2}O(\mu^{4}). (2.21)

From the definition of ${\cal A}$, we find

\begin{array}{ll}
\displaystyle{\cal A}_{j}&\displaystyle=-\sum_{i^{\prime},j^{\prime}=1}^{n}\big(a_{j}^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}\ell_{j^{\prime}}+2a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}\ell_{j^{\prime}j}-a_{j^{\prime}j}^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}-a_{j^{\prime}}^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j}+a_{j}^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}}+a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}j}\big)\\
&\displaystyle=-\sum_{i^{\prime},j^{\prime}=1}^{n}\big(2a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}\ell_{j^{\prime}j}+a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}j}\big)+\big(\lambda\alpha+\lambda^{2}\alpha^{2}\big)O(\mu^{2})\\
&\displaystyle=-2\lambda^{2}\mu^{3}\alpha^{2}\sum_{i^{\prime},j^{\prime}=1}^{n}a^{i^{\prime}j^{\prime}}\psi_{i^{\prime}}\psi_{j^{\prime}}\psi_{j}+O_{\mu}(\lambda)+\lambda^{2}\alpha^{2}O(\mu^{2}).
\end{array}

Hence, we see

\sum^{n}_{i,j=1}{\cal A}_{j}a^{ij}\ell_{i}=-2\lambda^{3}\mu^{4}\alpha^{3}\Big(\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}\Big)^{2}+O_{\mu}(\lambda^{2})+\lambda^{3}\alpha^{3}O(\mu^{3}),

which leads to

\begin{array}{ll}
\displaystyle\sum^{n}_{i,j=1}\big({\cal A}a^{ij}\ell_{i}\big)_{j}&\displaystyle=\sum^{n}_{i,j=1}{\cal A}_{j}a^{ij}\ell_{i}+{\cal A}\sum^{n}_{i,j=1}\big(a_{j}^{ij}\ell_{i}+a^{ij}\ell_{ij}\big)\\
&\displaystyle=-3\lambda^{3}\mu^{4}\alpha^{3}\Big(\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}\Big)^{2}+O_{\mu}(\lambda^{2})+\lambda^{3}\alpha^{3}O(\mu^{3}).
\end{array} (2.22)

Further, we have

\begin{array}{ll}
\displaystyle{\cal A}_{t}&\displaystyle=-\sum^{n}_{i,j=1}\Big(a^{ij}\ell_{i}\ell_{j}-a^{ij}_{j}\ell_{i}+a^{ij}\ell_{ij}\Big)_{t}\\
&\displaystyle=-\sum^{n}_{i,j=1}\Big[a^{ij}(\ell_{i}\ell_{j})_{t}-a^{ij}_{j}\ell_{it}+a^{ij}\ell_{ijt}\Big]+\lambda^{2}\alpha^{2}O(\mu^{2})+\lambda\alpha O(\mu^{2})+\lambda\alpha T\,O(\mu^{3})\\
&\displaystyle=\lambda^{2}\alpha^{2}T\,O(\mu^{3})+\lambda^{2}\alpha^{2}O(\mu^{2})+\lambda\alpha O(\mu^{2})+\lambda\alpha T\,O(\mu^{3}).
\end{array} (2.23)

Finally, it holds

\ell_{t}^{2}=\lambda^{2}\mu^{2}\alpha^{2}\phi_{t}^{2}=O_{\mu}(\lambda^{2}). (2.24)

From the definition of ${\cal B}$ (see (2.17)), and combining (2.20)–(2.24), we have

\begin{array}{ll}
{\cal B}&=\displaystyle-4\lambda^{3}\mu^{4}\alpha^{3}\Big(\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}\Big)^{2}+6\lambda^{3}\mu^{4}\alpha^{3}\Big(\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}\Big)^{2}-2\lambda\mu^{4}\alpha\Big(\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}\Big)^{2}\\
&\displaystyle\quad+\lambda^{3}\alpha^{3}O(\mu^{3})+O_{\mu}(\lambda^{2})\\
&=\displaystyle 2\lambda^{3}\mu^{4}\alpha^{3}\Big(\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}\Big)^{2}+\lambda^{3}\alpha^{3}O(\mu^{3})+O_{\mu}(\lambda^{2}).
\end{array}

Hence, we know

{\cal B}\geq 2s_{0}^{2}\lambda^{3}\mu^{4}\alpha^{3}|\nabla\psi|^{4}+\lambda^{3}\alpha^{3}O(\mu^{3})+O_{\mu}(\lambda^{2}). (2.25)

Now we estimate $c^{ij}$. By direct computation, we have

\begin{array}{ll}
\displaystyle\sum^{n}_{i,j=1}c^{ij}v_{i}v_{j}&\displaystyle=\sum^{n}_{i,j=1}\Big\{\sum_{i^{\prime},j^{\prime}=1}^{n}\Big[2a^{ij^{\prime}}a^{i^{\prime}j}\ell_{i^{\prime}j^{\prime}}+a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}}+2a^{ij^{\prime}}a_{j^{\prime}}^{i^{\prime}j}\ell_{i^{\prime}}-(a^{ij}a^{i^{\prime}j^{\prime}})_{j^{\prime}}\ell_{i^{\prime}}\Big]-\frac{a_{t}^{ij}}{2}\Big\}v_{i}v_{j}\\
&\displaystyle=\sum^{n}_{i,j=1}\Big\{\sum_{i^{\prime},j^{\prime}=1}^{n}\Big[2\lambda\mu^{2}\alpha a^{ij^{\prime}}a^{i^{\prime}j}\psi_{i^{\prime}}\psi_{j^{\prime}}+\lambda\mu^{2}\alpha a^{ij}a^{i^{\prime}j^{\prime}}\psi_{i^{\prime}}\psi_{j^{\prime}}+\lambda\alpha O(\mu)\Big]+O(1)\Big\}v_{i}v_{j}\\
&\displaystyle=2\lambda\mu^{2}\alpha\Big(\sum^{n}_{i,j=1}a^{ij}\psi_{i}v_{j}\Big)^{2}+\lambda\mu^{2}\alpha\Big(\sum^{n}_{i,j=1}a^{ij}\psi_{i}\psi_{j}\Big)\Big(\sum^{n}_{i,j=1}a^{ij}v_{i}v_{j}\Big)\\
&\displaystyle\quad+\lambda\alpha|\nabla v|^{2}O(\mu)+O(1)|\nabla v|^{2}\\
&\displaystyle\geq\big[s_{0}^{2}\lambda\mu^{2}\alpha|\nabla\psi|^{2}+\lambda\alpha O(\mu)+O(1)\big]|\nabla v|^{2}.
\end{array} (2.26)

Since zz is a solution to equation (2.15), we find

2𝔼Q1θ[i,j=1n(aijvi)j+𝒜v][dzi,j=1n(aijzi)jdt]𝑑x𝔼Q1[i,j=1n(aijvi)j+𝒜v]2𝑑x𝑑t+𝔼Q1θ2(a1,z+a2z+f~+ηf)2𝑑x𝑑t𝔼Q1[i,j=1n(aijvi)j+𝒜v]2𝑑x𝑑t+3|a1|L(0,T;L(G;n))2𝔼Q1θ2|z|2𝑑x𝑑t+3|a2|L(0,T;L(G))2𝔼Q1θ2z2𝑑x𝑑t+6𝔼Q1θ2(f~2+η2f2)𝑑x𝑑t.\begin{array}[]{ll}\displaystyle\quad 2{\mathbb{E}}\int_{Q_{1}}\theta\Big{[}-\sum^{n}_{i,j=1}\big{(}a^{ij}v_{i}\big{)}_{j}+{\cal A}v\Big{]}\Big{[}dz-\sum^{n}_{i,j=1}\big{(}a^{ij}z_{i}\big{)}_{j}dt\Big{]}dx\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq{\mathbb{E}}\int_{Q_{1}}\Big{[}-\sum^{n}_{i,j=1}\big{(}a^{ij}v_{i}\big{)}_{j}+{\cal A}v\Big{]}^{2}dxdt+{\mathbb{E}}\int_{Q_{1}}\theta^{2}\Big{(}\big{\langle}a_{1},\nabla z\big{\rangle}+a_{2}z+\tilde{f}+\eta f\Big{)}^{2}dxdt\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq{\mathbb{E}}\int_{Q_{1}}\Big{[}-\sum^{n}_{i,j=1}\big{(}a^{ij}v_{i}\big{)}_{j}+{\cal A}v\Big{]}^{2}dxdt+3|a_{1}|^{2}_{L^{\infty}(0,T;L^{\infty}(G;{\mathbb{R}}^{n}))}{\mathbb{E}}\int_{Q_{1}}\theta^{2}|\nabla z|^{2}dxdt\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\quad+3|a_{2}|^{2}_{L^{\infty}(0,T;L^{\infty}(G))}{\mathbb{E}}\int_{Q_{1}}\theta^{2}z^{2}dxdt+6{\mathbb{E}}\int_{Q_{1}}\theta^{2}(\tilde{f}^{2}+\eta^{2}f^{2})dxdt.\end{array} (2.27)

It is clear that Q2¯Σ2¯=\overline{Q_{2}}\cap\overline{\Sigma_{2}}=\emptyset. Hence, η=0\eta=0 on Σ2\Sigma_{2}. Recall that y=g1y=g_{1} on Σ1\Sigma_{1} and g1H1(Γ)g_{1}\in H^{1}(\Gamma). Then it holds that

𝔼Q1i,j=1n(aijvidv)jdx\displaystyle{\mathbb{E}}\int_{Q_{1}}\sum^{n}_{i,j=1}\big{(}a^{ij}v_{i}dv\big{)}_{j}dx =𝔼Σ1i,j=1naijviνidvdΓ\displaystyle=\mathbb{E}\int_{\Sigma_{1}}\sum_{i,j=1}^{n}a^{ij}v_{i}\nu^{i}dvd\Gamma (2.28)
Cλμ𝔼0TΓαθ2(|g2|2+|Γg1|2+λ2μ2α2|g1|2)𝑑Γ𝑑t,\displaystyle\leq C\lambda\mu\mathbb{E}\int_{0}^{T}\int_{\Gamma}\alpha\theta^{2}(|g_{2}|^{2}+|\nabla_{\Gamma}g_{1}|^{2}+\lambda^{2}\mu^{2}\alpha^{2}|g_{1}|^{2})d\Gamma dt,

where Γ\nabla_{\Gamma} denotes the tangential gradient on Γ\Gamma.

By means of z=zν=0z=\frac{\partial z}{\partial\nu}=0 on Σ2\Sigma_{2}, we find

|𝔼Q1i,j=1n[i,j=1n(2aijaijivivjaijaijivivj)+Ψaijvivaij(𝒜i+Ψi2)v2]jdxdt|Cλμ𝔼0TΓαθ2(|g2|2+|Γg1|2+λ2μ2α2|g1|2)𝑑Γ𝑑t.\begin{array}[]{ll}\displaystyle\Big{|}{\mathbb{E}}\int_{Q_{1}}\sum^{n}_{i,j=1}\Big{[}\sum_{i^{\prime},j^{\prime}=1}^{n}\Big{(}2a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}v_{i}v_{j^{\prime}}-a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i}v_{i^{\prime}}v_{j^{\prime}}\Big{)}+\Psi a^{ij}v_{i}v-a^{ij}\Big{(}{\cal A}\ell_{i}+\frac{\Psi_{i}}{2}\Big{)}v^{2}\Big{]}_{j}dxdt\Big{|}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq C\lambda\mu\mathbb{E}\int_{0}^{T}\int_{\Gamma}\alpha\theta^{2}(|g_{2}|^{2}+|\nabla_{\Gamma}g_{1}|^{2}+\lambda^{2}\mu^{2}\alpha^{2}|g_{1}|^{2})d\Gamma dt.\end{array} (2.29)

From (2.26) and (2.25), we know that there is a μ0>0\mu_{0}>0 such that for every μμ0\mu\geq\mu_{0}, one can find a λ0(μ)>0\lambda_{0}(\mu)>0 so that for all λλ0(μ)\lambda\geq\lambda_{0}(\mu), it holds that

𝔼i,j=1nQ1cijvivj𝑑x𝑑tCλμ2𝔼Q1α|v|2𝑑x𝑑t,{\mathbb{E}}\sum^{n}_{i,j=1}\int_{Q_{1}}c^{ij}v_{i}v_{j}dxdt\geq C\lambda\mu^{2}{\mathbb{E}}\int_{Q_{1}}\alpha|\nabla v|^{2}dxdt, (2.30)

and

𝔼Q1Bv2𝑑x𝑑tCλ3μ4𝔼Q1α3v2𝑑x𝑑t.{\mathbb{E}}\int_{Q_{1}}Bv^{2}dxdt\geq C\lambda^{3}\mu^{4}{\mathbb{E}}\int_{Q_{1}}\alpha^{3}v^{2}dxdt. (2.31)

Utilizing the fact that zz solves (2.15) again, we get

𝔼Q1θ2i,j=1naij(dzi+idz)(dzj+jdz)dx=𝔼Q1θ2i,j=1naij[(a3z)i(a3z)j+2λμαψi(a3z)ja3z+λ2μ2α2ψiψja32z2]dxdtC|a3|L𝔽(0,T;W1,(G))2(𝔼Q1θ2(z2+|z|2)𝑑x𝑑t+λ2μ2𝔼Q1θ2α2|z|2𝑑x𝑑t).\begin{array}[]{ll}\displaystyle\quad{\mathbb{E}}\int_{Q_{1}}\theta^{2}\sum^{n}_{i,j=1}a^{ij}(dz_{i}+\ell_{i}dz)(dz_{j}+\ell_{j}dz)dx\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle={\mathbb{E}}\int_{Q_{1}}\theta^{2}\sum^{n}_{i,j=1}a^{ij}\Big{[}(a_{3}z)_{i}(a_{3}z)_{j}+2\lambda\mu\alpha\psi_{i}(a_{3}z)_{j}a_{3}z+\lambda^{2}\mu^{2}\alpha^{2}\psi_{i}\psi_{j}a_{3}^{2}z^{2}\Big{]}dxdt\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq C|a_{3}|^{2}_{L^{\infty}_{\mathbb{F}}(0,T;W^{1,\infty}(G))}\Big{(}{\mathbb{E}}\int_{Q_{1}}\theta^{2}\big{(}z^{2}+|\nabla z|^{2}\big{)}dxdt+\lambda^{2}\mu^{2}{\mathbb{E}}\int_{Q_{1}}\theta^{2}\alpha^{2}|z|^{2}dxdt\Big{)}.\end{array} (2.32)

By zi=θ1(viiv)=θ1(viλμαψiv)z_{i}=\theta^{-1}(v_{i}-\ell_{i}v)=\theta^{-1}(v_{i}-\lambda\mu\alpha\psi_{i}v) and vi=θ(zi+iz)=θ(zi+λμαψiz)v_{i}=\theta(z_{i}+\ell_{i}z)=\theta(z_{i}+\lambda\mu\alpha\psi_{i}z), we get

1Cθ2(|z|2+λ2μ2α2z2)|v|2+λ2μ2α2v2Cθ2(|z|2+λ2μ2α2z2).\frac{1}{C}\theta^{2}\big{(}|\nabla z|^{2}+\lambda^{2}\mu^{2}\alpha^{2}z^{2}\big{)}\leq|\nabla v|^{2}+\lambda^{2}\mu^{2}\alpha^{2}v^{2}\leq C\theta^{2}\big{(}|\nabla z|^{2}+\lambda^{2}\mu^{2}\alpha^{2}z^{2}\big{)}. (2.33)

Combining (2.16) and (2.27)–(2.33), we know that there is a μ1μ0\mu_{1}\geq\mu_{0} such that for all μμ1\mu\geq\mu_{1}, we can find a λ1(μ1)λ0(μ0)\lambda_{1}(\mu_{1})\geq\lambda_{0}(\mu_{0}) so that for every λλ1(μ1)\lambda\geq\lambda_{1}(\mu_{1}), it holds that

λ3μ4𝔼Q1α3θ2z2𝑑x𝑑t+λμ2𝔼Q1αθ2|z|2𝑑x𝑑tC𝔼Q1θ2(f~2+f2)𝑑x𝑑t+Cλμ𝔼0TΓαθ2|g2|2𝑑Γ𝑑t+CλμΓαθ2(|Γg1|2+λ2μ2α2|g1|2)𝑑Γ.\begin{array}[]{ll}\displaystyle\quad\lambda^{3}\mu^{4}{\mathbb{E}}\int_{Q_{1}}\alpha^{3}\theta^{2}z^{2}dxdt+\lambda\mu^{2}{\mathbb{E}}\int_{Q_{1}}\alpha\theta^{2}|\nabla z|^{2}dxdt\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq C{\mathbb{E}}\int_{Q_{1}}\theta^{2}(\tilde{f}^{2}+f^{2})dxdt+C\lambda\mu\mathbb{E}\int_{0}^{T}\int_{\Gamma}\alpha\theta^{2}|g_{2}|^{2}d\Gamma dt+C\lambda\mu\int_{\Gamma}\alpha\theta^{2}(|\nabla_{\Gamma}g_{1}|^{2}+\lambda^{2}\mu^{2}\alpha^{2}|g_{1}|^{2})d\Gamma.\end{array} (2.34)

Recalling that f~=i,j=1n(ajijηiy+aijηijy+2aijηiyj)a1,ηy+ηty\displaystyle\tilde{f}=-\sum^{n}_{i,j=1}(a^{ij}_{j}\eta_{i}y+a^{ij}\eta_{ij}y+2a^{ij}\eta_{i}y_{j})-\big{\langle}a_{1},\nabla\eta\big{\rangle}y+\eta_{t}y, from (2.34), we find that

λ3μ4𝔼Q1α3θ2z2𝑑x𝑑t+λμ2𝔼Q1αθ2|z|2𝑑x𝑑tC𝔼Q2Q3θ2(|y|2+|y|2)𝑑x𝑑t+Cλμ𝔼0TΓαθ2|g2|2𝑑Γ𝑑t+CλμΓαθ2(|Γg1|2+λ2μ2α2|g1|2)𝑑Γ+C𝔼Qθ2f2𝑑x𝑑t,\begin{array}[]{ll}\displaystyle\quad\lambda^{3}\mu^{4}{\mathbb{E}}\int_{Q_{1}}\alpha^{3}\theta^{2}z^{2}dxdt+\lambda\mu^{2}{\mathbb{E}}\int_{Q_{1}}\alpha\theta^{2}|\nabla z|^{2}dxdt\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq C{\mathbb{E}}\int_{Q_{2}\setminus Q_{3}}\theta^{2}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt+C\lambda\mu\mathbb{E}\int_{0}^{T}\int_{\Gamma}\alpha\theta^{2}|g_{2}|^{2}d\Gamma dt\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\quad+C\lambda\mu\int_{\Gamma}\alpha\theta^{2}(|\nabla_{\Gamma}g_{1}|^{2}+\lambda^{2}\mu^{2}\alpha^{2}|g_{1}|^{2})d\Gamma+C\mathbb{E}\int_{Q}\theta^{2}f^{2}dxdt,\end{array} (2.35)

which, together with z=yz=y in Q3Q_{3}, implies that

λ3μ4𝔼Q3α3θ2y2𝑑x𝑑t+λμ2𝔼Q3αθ2|y|2𝑑x𝑑tC𝔼Q2Q3θ2(|y|2+|y|2)𝑑x𝑑t+Cλμ𝔼0TΓαθ2|g2|2𝑑Γ𝑑t+CλμΓαθ2(|Γg1|2+λ2μ2α2|g1|2)𝑑Γ+C𝔼Qθ2f2𝑑x𝑑t\begin{array}[]{ll}\displaystyle\quad\lambda^{3}\mu^{4}{\mathbb{E}}\int_{Q_{3}}\alpha^{3}\theta^{2}y^{2}dxdt+\lambda\mu^{2}{\mathbb{E}}\int_{Q_{3}}\alpha\theta^{2}|\nabla y|^{2}dxdt\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq C{\mathbb{E}}\int_{Q_{2}\setminus Q_{3}}\theta^{2}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt+C\lambda\mu\mathbb{E}\int_{0}^{T}\int_{\Gamma}\alpha\theta^{2}|g_{2}|^{2}d\Gamma dt\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\quad+C\lambda\mu\int_{\Gamma}\alpha\theta^{2}(|\nabla_{\Gamma}g_{1}|^{2}+\lambda^{2}\mu^{2}\alpha^{2}|g_{1}|^{2})d\Gamma+C\mathbb{E}\int_{Q}\theta^{2}f^{2}dxdt\end{array} (2.36)

From the choice of Q3Q_{3} and (2.8), we know that

λ3μ4𝔼Q3α3θ2y2𝑑x𝑑t+λμ2𝔼Q3αθ2|y|2𝑑x𝑑te2λβ4𝔼t0ρNt0+ρNG(λ3y2+λ|y|2)𝑑x𝑑t.\begin{array}[]{ll}\displaystyle\quad\lambda^{3}\mu^{4}{\mathbb{E}}\int_{Q_{3}}\alpha^{3}\theta^{2}y^{2}dxdt+\lambda\mu^{2}{\mathbb{E}}\int_{Q_{3}}\alpha\theta^{2}|\nabla y|^{2}dxdt\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\geq e^{2\lambda\beta_{4}}{\mathbb{E}}\int_{t_{0}-\frac{\rho}{\sqrt{N}}}^{t_{0}+\frac{\rho}{\sqrt{N}}}\int_{G\,^{\prime}}\big{(}\lambda^{3}y^{2}+\lambda|\nabla y|^{2}\big{)}dxdt.\end{array} (2.37)

From the definition of Q2Q_{2} and Q3Q_{3}, we find

𝔼Q2Q3θ2(|y|2+|y|2)𝑑x𝑑te2λβ3𝔼Q1(|y|2+|y|2)𝑑x𝑑t.\begin{array}[]{ll}\displaystyle{\mathbb{E}}\int_{Q_{2}\setminus Q_{3}}\theta^{2}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt\leq e^{2\lambda\beta_{3}}{\mathbb{E}}\int_{Q_{1}}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt.\end{array} (2.38)

Now, we fix μ=μ1\mu=\mu_{1} in β3\beta_{3}, β4\beta_{4} and θ\theta. Let γ=max(t,x)G¯×(0,T)α(t,x)\gamma=\mathop{\rm max}_{(t,x)\in\overline{G}\times(0,T)}\alpha(t,x). From (2.36)–(2.38), we obtain that

e2λβ4𝔼t0ρNt0+ρNG(λ3y2+λ|y|2)𝑑x𝑑tCe2λβ3𝔼Q1(|y|2+|y|2)𝑑x𝑑t+Ce2λγ(|f|L𝔽2(0,T;L2(G))2+|g1|H1(Γ)2+|g2|L𝔽2(0,T;L2(Γ))2),\begin{array}[]{ll}\displaystyle\quad e^{2\lambda\beta_{4}}{\mathbb{E}}\int_{t_{0}-\frac{\rho}{\sqrt{N}}}^{t_{0}+\frac{\rho}{\sqrt{N}}}\int_{G^{\prime}}\big{(}\lambda^{3}y^{2}+\lambda|\nabla y|^{2}\big{)}dxdt\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq Ce^{2\lambda\beta_{3}}{\mathbb{E}}\int_{Q_{1}}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt+Ce^{2\lambda\gamma}\big{(}|f|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G))}^{2}+|g_{1}|_{H^{1}(\Gamma)}^{2}+\left|g_{2}\right|_{L_{\mathbb{F}}^{2}\left(0,T;L^{2}(\Gamma)\right)}^{2}\big{)},\end{array}

which implies that for all λλ1\lambda\geq\lambda_{1}, it holds that

𝔼t0ρNt0+ρNG(y2+|y|2)𝑑x𝑑tCe2λ(β4β3)𝔼Q1(|y|2+|y|2)𝑑x𝑑t+Ce2λγ(|f|L𝔽2(0,T;L2(G))2+|g1|H1(Γ)2+|g2|L𝔽2(0,T;L2(Γ))2).\begin{array}[]{ll}\displaystyle\quad{\mathbb{E}}\int_{t_{0}-\frac{\rho}{\sqrt{N}}}^{t_{0}+\frac{\rho}{\sqrt{N}}}\int_{G\,^{\prime}}\big{(}y^{2}+|\nabla y|^{2}\big{)}dxdt\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq Ce^{-2\lambda(\beta_{4}-\beta_{3})}{\mathbb{E}}\int_{Q_{1}}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt+Ce^{2\lambda\gamma}\big{(}|f|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G))}^{2}+|g_{1}|_{H^{1}(\Gamma)}^{2}+\left|g_{2}\right|_{L_{\mathbb{F}}^{2}\left(0,T;L^{2}(\Gamma)\right)}^{2}\big{)}.\end{array} (2.39)

By taking t0=2ρ+2iρNt_{0}=\sqrt{2}\rho+\frac{2i\rho}{\sqrt{N}}, i=0,1,,mi=0,1,\cdots,m such that

2ρ+2mρNT2ρ2ρ+(2m+1)ρN,\sqrt{2}\rho+\frac{2m\rho}{\sqrt{N}}\leq T-\sqrt{2}\rho\leq\sqrt{2}\rho+\frac{(2m+1)\rho}{\sqrt{N}},

and ε=2ρ\varepsilon=\sqrt{2}\rho, we get

𝔼εTεG(y2+|y|2)𝑑x𝑑t\displaystyle\quad{\mathbb{E}}\int_{\varepsilon}^{T-\varepsilon}\int_{G^{\prime}}\big{(}y^{2}+|\nabla y|^{2}\big{)}dxdt
=𝔼2ρT2ρG(y2+|y|2)𝑑x𝑑t\displaystyle={\mathbb{E}}\int_{\sqrt{2}\rho}^{T-\sqrt{2}\rho}\int_{G^{\prime}}\big{(}y^{2}+|\nabla y|^{2}\big{)}dxdt
i=1m𝔼2ρ+(i1)ρN2ρ+(i+1)ρNG(y2+|y|2)𝑑x𝑑t\displaystyle\leq\sum_{i=1}^{m}{\mathbb{E}}\int_{\sqrt{2}\rho+\frac{(i-1)\rho}{\sqrt{N}}}^{\sqrt{2}\rho+\frac{(i+1)\rho}{\sqrt{N}}}\int_{G^{\prime}}\big{(}y^{2}+|\nabla y|^{2}\big{)}dxdt
Ce2λ(β4β3)𝔼Q1(|y|2+|y|2)𝑑x𝑑t+Ce2λγ(|f|L𝔽2(0,T;L2(G))2+|g1|H1(Γ)2+|g2|L𝔽2(0,T;L2(Γ))2).\displaystyle\leq Ce^{-2\lambda(\beta_{4}-\beta_{3})}{\mathbb{E}}\int_{Q_{1}}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt+Ce^{2\lambda\gamma}\big{(}|f|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G))}^{2}+|g_{1}|_{H^{1}(\Gamma)}^{2}+\left|g_{2}\right|_{L_{\mathbb{F}}^{2}\left(0,T;L^{2}(\Gamma)\right)}^{2}\big{)}. (2.40)

We now balance the terms on the right-hand side of (2.40) by choosing λ=λ(δ)\lambda=\lambda(\delta) such that eλγδ=Ceλ(β4β3)e^{\lambda\gamma}\delta=Ce^{-\lambda(\beta_{4}-\beta_{3})}. Up to the constant CC, this gives

λ=lnδγ+β4β3.\lambda=\frac{-\ln\delta}{\gamma+\beta_{4}-\beta_{3}}.

Hence, for δ(0,δ0)\delta\in(0,\delta_{0}), where the number δ0\delta_{0} is so small that lnδγ+β4β3λ2\frac{-\ln\delta}{\gamma+\beta_{4}-\beta_{3}}\geq\lambda_{2}, we have (2.2) with β=β4β3γ+β4β3\beta=\frac{\beta_{4}-\beta_{3}}{\gamma+\beta_{4}-\beta_{3}}.             
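This balance can be checked directly: substituting this λ\lambda into the two exponential factors on the right-hand side of (2.40), with the data norms bounded by δ\delta, gives

```latex
e^{-2\lambda(\beta_4-\beta_3)}
  =\delta^{\frac{2(\beta_4-\beta_3)}{\gamma+\beta_4-\beta_3}}
  =\delta^{2\beta},
\qquad
e^{2\lambda\gamma}\,\delta^{2}
  =\delta^{\,2-\frac{2\gamma}{\gamma+\beta_4-\beta_3}}
  =\delta^{\frac{2(\beta_4-\beta_3)}{\gamma+\beta_4-\beta_3}}
  =\delta^{2\beta},
```

so both terms are of the same order δ2β\delta^{2\beta}, which is exactly the Hölder rate appearing in (2.2).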

Remark 2.1.

The inequality (2.34) is called the Carleman estimate for (1.1).

3 Regularization Method

For uL𝔽2(Ω;C([0,T];L2(G)))L𝔽2(0,T;H2(G))u\in L_{{\mathbb{F}}}^{2}(\Omega;C([0,T];L^{2}(G)))\cap L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G)), set

(Pu)(t,x)=u(t,x)u(0,x)0t{i,j=1n(aij(s,x)ui(s,x))j+[a1(s,x),u(s,x)+a2(s,x)u(s,x)]}𝑑s0ta3(s,x)u(s,x)𝑑W(s),-a.s. \begin{array}[]{ll}\displaystyle(Pu)(t,x)\negthinspace\negthinspace\negthinspace&\displaystyle\buildrel\triangle\over{=}u(t,x)-u(0,x)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr&\displaystyle\quad-\int_{0}^{t}\Big{\{}\sum^{n}_{i,j=1}\big{(}a^{ij}(s,x)u_{i}(s,x)\big{)}_{j}+[\big{\langle}a_{1}(s,x),\nabla u(s,x)\big{\rangle}+a_{2}(s,x)u(s,x)]\Big{\}}ds\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr&\displaystyle\quad-\int_{0}^{t}a_{3}(s,x)u(s,x)dW(s),\qquad{\mathbb{P}}\mbox{-}\hbox{\rm a.s.{ }}\end{array}

Let

=Δ{uL𝔽2(0,T;H2(G))\displaystyle\mathcal{H}\mathop{\buildrel\Delta\over{=}}\{u\in L^{2}_{\mathbb{F}}(0,T;H^{2}(G)) :PuL𝔽2(Ω;H1(0,T;L2(G))),\displaystyle:Pu\in L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G))),
u|(0,T)×Γ=g1,νu|(0,T)×Γ=g2}.\displaystyle\quad u|_{(0,T)\times\Gamma}=g_{1},~{}\partial_{\nu}u|_{(0,T)\times\Gamma}=g_{2}\}.

Denote f^(t)=0tf(s)𝑑s\hat{f}(t)=\int_{0}^{t}f(s)ds. Clearly, we have

|f^|L𝔽2(Ω;H1(0,T;L2(G)))C|f|L𝔽2(0,T;L2(G)).\displaystyle|\hat{f}|_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}\leq C|f|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G))}. (3.1)

Given a function FF\in\mathcal{H}, the Tikhonov functional is now constructed as

Jγ(u)\displaystyle J_{\gamma}\left(u\right)\negthinspace\negthinspace\negthinspace =\displaystyle=\negthinspace\negthinspace\negthinspace |Puf^|L𝔽2(Ω;H1(0,T;L2(G)))2+γ|uF|L𝔽2(0,T;H2(G))2,\displaystyle|Pu-\hat{f}|_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}^{2}+\gamma|u-F|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}^{2}, (3.2)

where uu\in\mathcal{H} and γ(0,1)\gamma\in(0,1).

We have the following result.

Theorem 3.1.

For every γ(0,1)\gamma\in\left(0,1\right), there exists a unique minimizer uγu_{\gamma}\in\mathcal{H} of the functional Jγ(u)J_{\gamma}\left(u\right) in (3.2).

Proof. Let

0={uL𝔽2(0,T;H2(G))\displaystyle\mathcal{H}_{0}\buildrel\triangle\over{=}\{u\in L^{2}_{\mathbb{F}}(0,T;H^{2}(G)) :PuL𝔽2(Ω;H1(0,T;L2(G))),\displaystyle:Pu\in L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G))),
u|(0,T)×Γ=0,νu|(0,T)×Γ=0}.\displaystyle\quad u|_{(0,T)\times\Gamma}=0,~{}\partial_{\nu}u|_{(0,T)\times\Gamma}=0\}.

Fix γ(0,1)\gamma\in(0,1). We define the inner products as follows:

φ,ψ0=Pφ,PψL𝔽2(Ω;H1(0,T;L2(G)))+γφ,ψL𝔽2(0,T;H2(G))\displaystyle\langle\varphi,\psi\rangle_{\mathcal{H}_{0}}=\langle P\varphi,P\psi\rangle_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}+\gamma\langle\varphi,\psi\rangle_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G))} (3.3)

where φ,ψ0\varphi,\psi\in\mathcal{H}_{0}. Let ¯0\overline{\mathcal{H}}_{0} be the completion of 0\mathcal{H}_{0} with respect to the inner product ,0\langle\cdot,\cdot\rangle_{\mathcal{H}_{0}}; for simplicity, we still denote it by 0\mathcal{H}_{0}.

For uu\in\mathcal{H}, let v=uFv=u-F. Then v0v\in\mathcal{H}_{0}. By (3.2), we should minimize the following functional

J¯γ(v)=|Pv+PFf^|L𝔽2(Ω;H1(0,T;L2(G)))2+γ|v|L𝔽2(0,T;H2(G))2,v0.\overline{J}_{\gamma}\left(v\right)=|Pv+PF-\hat{f}|_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}^{2}+\gamma|v|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}^{2},\quad v\in\mathcal{H}_{0}. (3.4)

If vγ0v_{\gamma}\in\mathcal{H}_{0} is a minimizer of the functional (3.4), then uγ=vγ+Fu_{\gamma}=v_{\gamma}+F is a minimizer of the functional (3.2). On the other hand, if uγu_{\gamma} is a minimizer of the functional (3.2), then vγ=uγF0v_{\gamma}=u_{\gamma}-F\in\mathcal{H}_{0} is a minimizer of the functional (3.4).

By the variational principle, any minimizer vγv_{\gamma} of the functional (3.4) should satisfy the following condition

Pvγ,PhL𝔽2(Ω;H1(0,T;L2(G)))+γvγ,hL𝔽2(0,T;H2(G))=Ph,f^PFL𝔽2(Ω;H1(0,T;L2(G))),h0.\begin{array}[]{ll}\displaystyle\big{\langle}Pv_{\gamma},Ph\big{\rangle}_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}+\gamma\big{\langle}v_{\gamma},h\big{\rangle}_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}=\big{\langle}Ph,\hat{f}-PF\big{\rangle}_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))},\quad\forall\,h\in\mathcal{H}_{0}.\end{array} (3.5)

From (3.3), it can be rewritten as

vγ,h0=Ph,f^PFL𝔽2(Ω;H1(0,T;L2(G))),h0.\big{\langle}v_{\gamma},h\big{\rangle}_{\mathcal{H}_{0}}=\big{\langle}Ph,\hat{f}-PF\big{\rangle}_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))},\quad\forall\,h\in\mathcal{H}_{0}. (3.6)

From (3.1), it holds that

|Ph,f^PFL𝔽2(Ω;H1(0,T;L2(G)))|C(|PF|L𝔽2(Ω;H1(0,T;L2(G)))+|f|L𝔽2(0,T;L2(G)))|h|0.\big{|}\big{\langle}Ph,\hat{f}-PF\big{\rangle}_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}\big{|}\leq C(|PF|_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}+|f|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G))})|h|_{\mathcal{H}_{0}}. (3.7)

Hence, the right hand side of (3.6) is a bounded linear functional on 0\mathcal{H}_{0}. By Riesz representation theorem, there exists an element wγ0w_{\gamma}\in\mathcal{H}_{0} such that

Ph,f^PFL𝔽2(Ω;H1(0,T;L2(G)))=wγ,h0,h0.\big{\langle}Ph,\hat{f}-PF\big{\rangle}_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}=\big{\langle}w_{\gamma},h\big{\rangle}_{\mathcal{H}_{0}},\qquad\forall\,h\in\mathcal{H}_{0}.

This and (3.6) imply that

vγ,h0=wγ,h0,h0.\big{\langle}v_{\gamma},h\big{\rangle}_{\mathcal{H}_{0}}=\big{\langle}w_{\gamma},h\big{\rangle}_{\mathcal{H}_{0}},\qquad\forall\,h\in\mathcal{H}_{0}.

Hence, the minimizer vγ=wγ.v_{\gamma}=w_{\gamma}.             

Remark 3.1.

In the proof of Theorem 3.1, we used only the variational principle and the Riesz representation theorem, without invoking the Carleman estimate. However, we make use of this estimate in Theorem 3.2, where we establish the rate of convergence of the minimizers uγu_{\gamma} to the exact solution under suitable conditions.

Assume that there exists an exact solution yy^{\ast} of the problem (1.1) with the exact data

fL𝔽2(0,T;L2(G)),y|Γ=g1H1(Γ),νy|Γ=g2L𝔽2(0,T;L2(Γ)).f^{*}\in L^{2}_{\mathbb{F}}(0,T;L^{2}(G)),\quad y^{\ast}|_{\Gamma}=g_{1}^{\ast}\in H^{1}(\Gamma),\quad\partial_{\nu}y^{\ast}|_{\Gamma}=g_{2}^{\ast}\in L_{\mathbb{F}}^{2}(0,T;L^{2}(\Gamma)).

By Theorem 2.1, the exact solution yy^{\ast} is unique. Because of the existence of y,y^{\ast}, there also exists an exact function FF^{\ast}\in\mathcal{H} such that

|F|L𝔽2(0,T;H2(G))C|y|L𝔽2(0,T;H2(G)).\displaystyle|F^{*}|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G))}\leq C|y^{*}|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G))}. (3.8)

Here is an example of such a function FF^{\ast}. Let ρC2(Q¯)\rho\in C^{2}\left(\overline{Q}\right) be a cutoff function equal to 11 in a small neighborhood of (0,T)×Γ(0,T)\times\Gamma, namely

ρ(t,x)={1,(t,x)Nσ((0,T)×Γ)={(t,x)Q:dist((t,x),(0,T)×Γ)<σ},0,xQN2σ((0,T)×Γ),\rho\left(t,x\right)=\begin{cases}\displaystyle 1,&(t,x)\in N_{\sigma}\left((0,T)\times\Gamma\right)=\left\{(t,x)\in Q:\,\operatorname*{dist}\left((t,x),(0,T)\times\Gamma\right)<\sigma\right\},\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle 0,&x\in Q\setminus N_{2\sigma}\left((0,T)\times\Gamma\right),\end{cases}

where σ>0\sigma>0 is a sufficiently small number. Then FF^{\ast} can be constructed as F(t,x)=ρ(t,x)y(t,x)F^{\ast}\left(t,x\right)=\rho\left(t,x\right)y^{\ast}\left(t,x\right). Let δ>0\delta>0 be a sufficiently small number characterizing the error in the data. We assume that

|ff|L𝔽2(0,T;L2(G))δ,|g1g1|H1(Γ)δ,|g2g2|L𝔽2(0,T;L2(Γ))δ,\begin{array}[]{ll}\displaystyle|f^{*}-f|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G))}\leq\delta,\quad|g_{1}^{\ast}-g_{1}|_{H^{1}(\Gamma)}\leq\delta,\quad|g_{2}^{\ast}-g_{2}|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(\Gamma))}\leq\delta,\end{array} (3.9)

and

|PFPF|L𝔽2(Ω;H1(0,T;L2(G)))+|FF|L𝔽2(0,T;H2(G))δ.\displaystyle|PF^{*}-PF|_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}+|F^{\ast}-F|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}\leq\delta. (3.10)
Theorem 3.2.

(Convergence rate). Assume (3.9) and (3.10), and let the regularization parameter γ=δ2α\gamma=\delta^{2\alpha}, where α(0,1]\alpha\in\left(0,1\right] is a constant. Let G,ε,δ0G^{\prime},\varepsilon,\delta_{0}, and β\beta be the same as in Theorem 2.1. Then there exist a sufficiently small number δ1(0,δ01/α)\delta_{1}\in(0,\delta_{0}^{1/\alpha}) and a constant C>0C>0 such that

|yγy|L𝔽2(ε,Tε;H1(G))C(1+|y|L𝔽2(0,T;H2(G)))δαβ,δ(0,δ1),|y_{\gamma}-y^{\ast}|_{L_{{\mathbb{F}}}^{2}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\leq C\big{(}1+|y^{\ast}|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}\big{)}\delta^{\alpha\beta},\quad\forall\,\delta\in\left(0,\delta_{1}\right), (3.11)

where yγ=yγ(δ)y_{\gamma}=y_{\gamma\left(\delta\right)} is the minimizer of the functional (3.2).

Proof. Let v=yFv^{\ast}=y^{\ast}-F^{\ast}. Then v0v^{\ast}\in\mathcal{H}_{0} and Pv=f^PFPv^{\ast}=\hat{f^{*}}-PF^{\ast}. Hence,

Pv,PhL𝔽2(Ω;H1(0,T;L2(G)))+γv,hL𝔽2(0,T;H2(G))=Ph,f^PFL𝔽2(Ω;H1(0,T;L2(G)))+γv,hL𝔽2(0,T;H2(G)),h0.\begin{array}[]{ll}\displaystyle\big{\langle}Pv^{\ast},Ph\big{\rangle}_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}+\gamma\big{\langle}v^{\ast},h\big{\rangle}_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle=\big{\langle}Ph,\hat{f^{*}}-PF^{\ast}\big{\rangle}_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}+\gamma\big{\langle}v^{\ast},h\big{\rangle}_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))},\quad\forall\,h\in\mathcal{H}_{0}.\end{array} (3.12)

Subtract identity (3.5) from identity (3.12) and denote v~γ=vvγ\widetilde{v}_{\gamma}=v^{\ast}-v_{\gamma}, f~=f^f^\widetilde{f}=\hat{f^{*}}-\hat{f} and F~=FF\widetilde{F}=F^{\ast}-F. Then

Pv~γ,PhL𝔽2(Ω;H1(0,T;L2(G)))+γv~γ,hL𝔽2(0,T;H2(G))=Ph,f~PF~L𝔽2(Ω;H1(0,T;L2(G)))+γv,hL𝔽2(0,T;H2(G)),h0.\begin{array}[]{ll}\displaystyle\big{\langle}P\widetilde{v}_{\gamma},Ph\big{\rangle}_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}+\gamma\big{\langle}\widetilde{v}_{\gamma},h\big{\rangle}_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle=\big{\langle}Ph,\widetilde{f}-P\widetilde{F}\big{\rangle}_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}+\gamma\big{\langle}v^{\ast},h\big{\rangle}_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))},\quad\forall\,h\in\mathcal{H}_{0}.\end{array}

Setting here h=v~γh\buildrel\triangle\over{=}\widetilde{v}_{\gamma}, we obtain

|Pv~γ|L𝔽2(Ω;H1(0,T;L2(G)))2+γ|v~γ|L𝔽2(0,T;H2(G))2=Pv~γ,f~PF~L𝔽2(Ω;H1(0,T;L2(G)))+γv,v~γL𝔽2(0,T;H2(G)).\begin{array}[]{ll}\displaystyle|P\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}^{2}+\gamma|\widetilde{v}_{\gamma}|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}^{2}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle=\big{\langle}P\widetilde{v}_{\gamma},\widetilde{f}-P\widetilde{F}\big{\rangle}_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}+\gamma\big{\langle}v^{\ast},\widetilde{v}_{\gamma}\big{\rangle}_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}.\end{array} (3.13)

Applying the Cauchy-Schwarz inequality to (3.13), we obtain

|Pv~γ|L𝔽2(Ω;H1(0,T;L2(G)))2+γ|v~γ|L𝔽2(0,T;H2(G))212|Pv~γ|L𝔽2(Ω;H1(0,T;L2(G)))2+12|f~PF~|L𝔽2(Ω;H1(0,T;L2(G)))2+γ2|v|L𝔽2(0,T;H2(G))2+γ2|v~γ|L𝔽2(0,T;H2(G))2.\begin{array}[]{ll}\displaystyle|P\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}^{2}+\gamma|\widetilde{v}_{\gamma}|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}^{2}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq\frac{1}{2}|P\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}^{2}+\frac{1}{2}\big{|}\widetilde{f}-P\widetilde{F}\big{|}_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}^{2}+\frac{\gamma}{2}|v^{\ast}|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}^{2}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\quad+\frac{\gamma}{2}|\widetilde{v}_{\gamma}|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}^{2}.\end{array} (3.14)

Hence, by (3.9) and (3.10)

|Pv~γ|L𝔽2(Ω;H1(0,T;L2(G)))2+γ|v~γ|L𝔽2(0,T;H2(G))2Cδ2+γ|v|L𝔽2(0,T;H2(G))2.|P\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}^{2}+\gamma|\widetilde{v}_{\gamma}|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}^{2}\leq C\delta^{2}+\gamma|v^{\ast}|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}^{2}. (3.15)

Since γ=δ2α\gamma=\delta^{2\alpha} with α(0,1]\alpha\in\left(0,1\right] and δ(0,1)\delta\in(0,1), we have δ2γ\delta^{2}\leq\gamma. Hence, (3.15) implies that

|v~γ|L𝔽2(0,T;H2(G))2C(1+|v|L𝔽2(0,T;H2(G))2),|\widetilde{v}_{\gamma}|^{2}_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}\leq C\big{(}1+|v^{\ast}|^{2}_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}\big{)}, (3.16)
|Pv~γ|L𝔽2(Ω;H1(0,T;L2(G)))2C(1+|v|L𝔽2(0,T;H2(G))2)δ2α.|P\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(\Omega;H^{1}(0,T;L^{2}(G)))}^{2}\leq C\big{(}1+|v^{\ast}|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}^{2}\big{)}\delta^{2\alpha}. (3.17)

Let wγ=v~γ(1+|v|L𝔽2(0,T;H2(G)))1w_{\gamma}=\widetilde{v}_{\gamma}\big{(}1+|v^{\ast}|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}\big{)}^{-1}. Then (3.16), (3.17) and Theorem 2.1 imply that

|wγ|L𝔽2(ε,Tε;H1(G))Cδαβ,δ(0,δ1).|w_{\gamma}|_{L_{{\mathbb{F}}}^{2}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\leq C\delta^{\alpha\beta},\quad\forall\,\delta\in\left(0,\delta_{1}\right).

Therefore,

|v~γ|L𝔽2(ε,Tε;H1(G))C(1+|v|L𝔽2(0,T;H2(G)))δαβ,δ(0,δ1).|\widetilde{v}_{\gamma}|_{L_{{\mathbb{F}}}^{2}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\leq C\left(1+|v^{\ast}|_{L_{{\mathbb{F}}}^{2}(0,T;H^{2}(G))}\right)\delta^{\alpha\beta},\quad\forall\,\delta\in\left(0,\delta_{1}\right). (3.18)

Next, since v~γ=(yyγ)+(FF)\widetilde{v}_{\gamma}=\left(y^{\ast}-y_{\gamma}\right)+\left(F-F^{\ast}\right), by (3.9) and (3.10),

|FF|L𝔽2(ε,Tε;H1(G))δ,|F^{\ast}-F|_{L_{{\mathbb{F}}}^{2}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\leq\delta,

then

|v~γ|L𝔽2(ε,Tε;H1(G))|yγy|L𝔽2(ε,Tε;H1(G))|FF|L𝔽2(ε,Tε;H1(G))|yγy|L𝔽2(ε,Tε;H1(G))δ.\begin{array}[]{ll}\displaystyle|\widetilde{v}_{\gamma}|_{L_{{\mathbb{F}}}^{2}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\negthinspace\negthinspace\negthinspace&\displaystyle\geq|y_{\gamma}-y^{\ast}|_{L_{{\mathbb{F}}}^{2}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}-|F^{\ast}-F|_{L_{{\mathbb{F}}}^{2}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr&\displaystyle\geq|y_{\gamma}-y^{\ast}|_{L_{{\mathbb{F}}}^{2}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}-\delta.\end{array} (3.19)

Since β,δ(0,1)\beta,\delta\in\left(0,1\right) and α(0,1]\alpha\in\left(0,1\right], we have δαβ>δ\delta^{\alpha\beta}>\delta. Thus, using (3.8), (3.18) and (3.19), we obtain (3.11).             
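In detail, the chain of inequalities concluding the proof reads

```latex
\begin{aligned}
|y_{\gamma}-y^{\ast}|_{L_{\mathbb{F}}^{2}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}
&\le |\widetilde{v}_{\gamma}|_{L_{\mathbb{F}}^{2}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}+\delta
   && \text{by (3.19)}\\
&\le C\big(1+|v^{\ast}|_{L_{\mathbb{F}}^{2}(0,T;H^{2}(G))}\big)\delta^{\alpha\beta}
     +\delta^{\alpha\beta}
   && \text{by (3.18) and } \delta<\delta^{\alpha\beta}\\
&\le C\big(1+|y^{\ast}|_{L_{\mathbb{F}}^{2}(0,T;H^{2}(G))}\big)\delta^{\alpha\beta},
\end{aligned}
```

where the last step uses |v||y|+|F|C(1+|y|)|v^{\ast}|\leq|y^{\ast}|+|F^{\ast}|\leq C\big(1+|y^{\ast}|\big) in the L𝔽2(0,T;H2(G))L_{\mathbb{F}}^{2}(0,T;H^{2}(G)) norm, by (3.8).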

4 Numerical Approximations

In this section, we aim to numerically solve the ill-posed Cauchy problem of the stochastic parabolic differential equation given by (1.1). For the sake of simplicity, we set a1=0a_{1}=0, a2=0a_{2}=0, a3=1a_{3}=1, f=0f=0 and T=1T=1 in the system for all numerical tests to follow. Since an explicit expression for the exact solution is unavailable, we resort to numerically solving the initial-boundary value problem

{dyi,j=1n(aijyi)jdt=[a1,y+a2y]dt+a3ydW(t)in(0,1)×G,y(0,x)=y0(x)inG,y(t,x)=g1(t,x)on(0,1)×G,\left\{\begin{array}[]{ll}\displaystyle dy-\sum_{i,j=1}^{n}(a^{ij}y_{i})_{j}dt=[\langle a_{1},\nabla y\rangle+a_{2}y]dt+a_{3}ydW(t)\quad\mbox{in}\quad(0,1)\times G,\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle y(0,x)=y_{0}(x)\quad\mbox{in}\quad G,\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle y(t,x)=g_{1}(t,x)\quad\mbox{on}\quad(0,1)\times\partial G,\end{array}\right. (4.1)

by employing the finite difference method with time discretized via the Euler-Maruyama method [10] to obtain the Cauchy data on (0,1)×Γ(0,1)\times\Gamma. We then construct numerical approximations for the Cauchy problem (1.1) via Tikhonov regularization, with the aid of kernel-based learning theory, for which we established a convergence rate in Section 3.

We verify the proposed method on the following three examples.

Example 4.1.

Let G=(0,1)G=(0,1) and Γ={1}\Gamma=\{1\}. Suppose that

  1. (a)

    y0(x)=x(1x),g1(t,x)=0.y_{0}(x)=x(1-x),\quad g_{1}(t,x)=0.

  2. (b)

    y0(x)={4x,x[0,0.25)43x+43,x[0.25,1],g1(t,x)=0.y_{0}(x)=\begin{cases}4x,&x\in[0,0.25)\\ -\frac{4}{3}x+\frac{4}{3},&x\in[0.25,1]\end{cases},\quad g_{1}(t,x)=0.

The simplest time-discrete approximation is the stochastic version of the Euler approximation, also known as the Euler-Maruyama method [10]. We briefly describe the process of solving the initial-boundary value problem (4.1) in Example 4.1 by the Euler-Maruyama method. Let yj,k=y(xj,tk)y_{j,k}=y(x_{j},t_{k}) with xj=jhx_{j}=jh, tk=kτt_{k}=k\tau, j=1,,m+1j=1,\cdots,m+1, k=1,,n+1k=1,\cdots,n+1, where h=1/mh=1/m and τ=1/n\tau=1/n denote the spatial and temporal step sizes, respectively. It is known that not all heuristic time-discrete approximations of stochastic differential equations converge to the solution process in a useful sense as the time step size τ\tau tends to zero [10]. Since a numerical approximation of the observations obtained by direct application of a backward finite difference scheme is not adapted to the filtration 𝔽{\mathbb{F}}, we solve the initial-boundary value problem (4.1) by the explicit finite difference scheme, i.e.,

yj,k+1yj,kτ=yj1,k2yj,k+yj+1,kh2+W(tk+1)W(tk)τyj,k, 2jm,1kn.\frac{y_{j,k+1}-y_{j,k}}{\tau}=\frac{y_{j-1,k}-2y_{j,k}+y_{j+1,k}}{h^{2}}+\frac{W(t_{k+1})-W(t_{k})}{\tau}y_{j,k},\ 2\leq j\leq m,1\leq k\leq n.

where the initial and boundary values are given by

yj,1=y0(xj), 1jm+1,y1,k=g1(0,tk),ym+1,k=g1(1,tk), 2kn+1.y_{j,1}=y_{0}(x_{j}),\ 1\leq j\leq m+1,\quad y_{1,k}=g_{1}(0,t_{k}),\ y_{m+1,k}=g_{1}(1,t_{k}),\ 2\leq k\leq n+1.

To ensure that the numerical scheme is stable, we choose m=15m=15 and n=450n=450 in the computation (so that τ=h2/2\tau=h^{2}/2). By solving the above algebraic system, we obtain the values of yy at the grid points for the initial-boundary value problem (4.1). When solving the Cauchy problem (1.1) numerically, we use ym,ky_{m,k} and ym+1,ky_{m+1,k} as the Cauchy data at x=1x=1.
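The explicit scheme above can be sketched in a few lines. This is a minimal sketch for Example 4.1(a), assuming a11=1a^{11}=1, a1=a2=0a_{1}=a_{2}=0, a3=1a_{3}=1, f=0f=0 as in the numerical tests; the function name euler_maruyama_heat and the random seed are our own choices:

```python
import numpy as np

def euler_maruyama_heat(y0, g1, m=15, n=450, seed=0):
    # Explicit Euler-Maruyama scheme for dy = y_xx dt + y dW on (0,1)x(0,1),
    # i.e. (4.1) with a^{11} = 1, a1 = a2 = 0, a3 = 1 (illustrative names).
    rng = np.random.default_rng(seed)
    h, tau = 1.0 / m, 1.0 / n              # spatial and temporal step sizes
    x = np.linspace(0.0, 1.0, m + 1)
    y = np.empty((m + 1, n + 1))
    y[:, 0] = y0(x)                        # initial condition
    dW = rng.normal(0.0, np.sqrt(tau), n)  # increments W(t_{k+1}) - W(t_k)
    for k in range(n):
        y[1:-1, k + 1] = (y[1:-1, k]
                          + tau / h**2 * (y[:-2, k] - 2 * y[1:-1, k] + y[2:, k])
                          + dW[k] * y[1:-1, k])
        y[0, k + 1] = g1(0.0, (k + 1) * tau)   # Dirichlet boundary data
        y[-1, k + 1] = g1(1.0, (k + 1) * tau)
    return x, y

# Example 4.1(a): y0(x) = x(1 - x) with zero boundary data.
x, y = euler_maruyama_heat(lambda x: x * (1 - x), lambda x, t: 0.0)
```

The last two columns of y supply the discrete Cauchy data ym,ky_{m,k}, ym+1,ky_{m+1,k} at x=1x=1. Note that the defaults m=15, n=450 satisfy the stability restriction τ=h2/2\tau=h^{2}/2.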

In the following, we numerically solve the optimization problem which is given in (3.2) by kernel-based learning theory.

Suppose that φ(x,t)\varphi(x,t) is the fundamental solution of the parabolic equation

yti,j=1n(aijyi)ja1,ya2y=0in(0,T)×n,y_{t}-\sum_{i,j=1}^{n}(a^{ij}y_{i})_{j}-\langle a_{1},\nabla y\rangle-a_{2}y=0\quad\mbox{in}\quad(0,T)\times\mathbb{R}^{n},

then the mild solution y(x,t)y(x,t) of

{dyi,j=1n(aijyi)jdt=[a1,y+a2y]dt+a3ydW(t)in(0,T)×n,y(x,0)=y0(x),xn,\left\{\begin{array}[]{ll}\displaystyle dy-\sum_{i,j=1}^{n}(a^{ij}y_{i})_{j}dt=[\langle a_{1},\nabla y\rangle+a_{2}y]dt+a_{3}ydW(t)\quad\mbox{in}\quad(0,T)\times\mathbb{R}^{n},\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle y(x,0)=y_{0}(x),\quad x\in\mathbb{R}^{n},\end{array}\right. (4.2)

can be written as

y(x,t)=0tnφ(x,t,ξ,s)a3(ξ)y(ξ,s)𝑑ξ𝑑W(s)+nφ(x,t,ξ,0)y0(ξ)𝑑ξ.y(x,t)=\int_{0}^{t}\int_{\mathbb{R}^{n}}\varphi(x,t,\xi,s)a_{3}(\xi)y(\xi,s)d\xi dW(s)+\int_{\mathbb{R}^{n}}\varphi(x,t,\xi,0)y_{0}(\xi)d\xi. (4.3)

Since the fundamental solution φ(x,t)\varphi(x,t) is deterministic, the mild solution y(x,t)y(x,t) given by (4.3) is well defined.

Assume that

y~(x,t)=l=1NλlΦl(x,t).\tilde{y}(x,t)=\sum_{l=1}^{N}\lambda_{l}\Phi_{l}(x,t). (4.4)

Here Φl\Phi_{l} are the basis functions, NN is the number of source points, and λl\lambda_{l} are unknown coefficients to be determined.

From (4.3), we can take Φl(x,t)=φ(xξl,tτl),l=1,,N\Phi_{l}(x,t)=\varphi(x-\xi_{l},t-\tau_{l}),l=1,\cdots,N, where φ\varphi is the fundamental solution of the deterministic parabolic equation and (ξl,τl)(\xi_{l},\tau_{l}) are source points. A suitable choice of the source points ensures that y~\tilde{y} is analytic in the domain (0,T)×G(0,T)\times G and makes the algorithm effective and accurate. However, an optimal rule for the location of the source points is still open. In this work, we take as source points uniformly distributed points on DT×[R,R+1]DT\times[-R,R+1] below t=0t=0, where DTDT and RR are determined a posteriori. This choice is given in [5] and related works, and performs better in comparison with other schemes.
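To make the construction concrete, the following sketch assembles the basis Φl\Phi_{l} from the one-dimensional heat kernel φ(x,t)=(4πt)1/2ex2/(4t)\varphi(x,t)=(4\pi t)^{-1/2}e^{-x^{2}/(4t)} with source points placed below t=0t=0. This is a minimal illustration; the flat layout of the source points and the names heat_kernel, mfs_matrix, N, DT, R are our own choices, not the exact scheme of [5]:

```python
import numpy as np

def heat_kernel(x, t):
    # 1-D fundamental solution of the heat equation; defined as zero for t <= 0.
    x, t = np.asarray(x, float), np.asarray(t, float)
    with np.errstate(divide="ignore", invalid="ignore", over="ignore"):
        val = np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)
    return np.where(t > 0, val, 0.0)

def mfs_matrix(x_col, t_col, xi, tau_src):
    # Entry (p, l) = Phi_l(x_p, t_p) = phi(x_p - xi_l, t_p - tau_l), cf. (4.4).
    return heat_kernel(x_col[:, None] - xi[None, :],
                       t_col[:, None] - tau_src[None, :])

# Illustrative source points below t = 0 (N, DT, R chosen arbitrarily here).
N, DT, R = 20, 0.5, 1.0
xi = np.linspace(-R, R + 1, N)   # spatial source locations
tau_src = np.full(N, -DT)        # source times below t = 0

# Collocation at measurement points near x = 1 (illustrative values).
t_col = np.linspace(0.05, 1.0, 30)
Phi = mfs_matrix(np.full(30, 1.0), t_col, xi, tau_src)
```

Since every source time is negative, each basis function is smooth on the whole computational strip (0,T)×G(0,T)\times G, which is the analyticity property mentioned above.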

Since y~\tilde{y} given in (4.4) already satisfies the equation in system (4.1), the coefficients λl,l=1,,N\lambda_{l},l=1,\cdots,N, are determined by the Cauchy conditions. From the process of solving the initial-boundary value problem, we know that (xm+1,tk)(x_{m+1},t_{k}) and (xm,tk),k=1,,n(x_{m},t_{k}),k=1,\cdots,n, can be taken as the collocation points, and the problem reduces to solving for the unknowns λ=[λ1,,λN]𝖳\lambda=[\lambda_{1},\cdots,\lambda_{N}]^{\mathsf{T}} from the linear system

Aλ=b,A\lambda=b, (4.5)

where

A=[φ(xm+1ξl,tkτl)φ(xmξl,tkτl)]2n×N,A=\Big{[}\begin{array}[]{c}\varphi(x_{m+1}-\xi_{l},t_{k}-\tau_{l})\\ \varphi(x_{m}-\xi_{l},t_{k}-\tau_{l})\end{array}\Big{]}_{2n\times N},

and

b=[y(xm+1,tk),y(xm,tk)]2n𝖳.b=[y(x_{m+1},t_{k}),y(x_{m},t_{k})]_{2n}^{\mathsf{T}}.
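Assembling the system (4.5) for the 1-D case can be sketched as follows, using the heat kernel as basis (an assumption made for concreteness). The names `phi` and `assemble_system` are ours; `g_m1` and `g_m` stand for the Cauchy data measured at x_{m+1} and x_m:

```python
import numpy as np

def phi(x, t):
    """Fundamental solution of the 1-D heat equation u_t = u_xx (valid for t > 0)."""
    return np.exp(-x ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def assemble_system(xm, xm1, t_nodes, xi, tau, g_m, g_m1):
    """Build the 2n x N collocation matrix A of (4.5) and the data vector b.

    Rows: (x_{m+1}, t_k) then (x_m, t_k); columns: Phi_l = phi(x - xi_l, t - tau_l).
    Since tau_l = DT < 0 and t_k > 0, every t_k - tau_l > 0, so phi is well defined.
    """
    n = len(t_nodes)
    X = np.concatenate([np.full(n, xm1), np.full(n, xm)])
    T = np.concatenate([t_nodes, t_nodes])
    A = phi(X[:, None] - xi[None, :], T[:, None] - tau[None, :])   # shape (2n, N)
    b = np.concatenate([g_m1, g_m])
    return A, b
```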

Comparing with the Tikhonov functional given in (3.2), PuPu is computed as AλbA\lambda-b. We choose the regularization parameter γ\gamma by the L-curve method [12], that is, we take the regularized solution near the “corner” of the L-curve

L={(log(λγ2),log(Aλγb2)),γ>0},L=\{(\log{(\|\lambda_{\gamma}\|^{2})},\log{(\|A\lambda_{\gamma}-b\|^{2})}),\gamma>0\},

Denoting by λγ\lambda_{\gamma}^{*} the regularized solution of the linear system (4.5) chosen in this way, we obtain the approximate solution

y~γ(x,t)=l=1Nλγ,lΦl(x,t)\tilde{y}_{\gamma}^{*}(x,t)=\sum_{l=1}^{N}\lambda_{\gamma,l}^{*}\Phi_{l}(x,t) (4.6)

of problem (1.1).
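The Tikhonov step and the L-curve choice of γ can be sketched as follows. This is a minimal implementation via the normal equations; locating the corner by maximizing the discrete curvature of three consecutive L-curve points is one common heuristic, not the only one, and the function names are ours:

```python
import numpy as np

def tikhonov(A, b, gamma):
    """lambda_gamma = argmin ||A lam - b||^2 + gamma ||lam||^2 (normal equations)."""
    N = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(N), A.T @ b)

def l_curve_gamma(A, b, gammas):
    """Pick gamma near the 'corner' of the L-curve
    (log ||lam_gamma||^2, log ||A lam_gamma - b||^2) [12], by maximizing the
    discrete curvature of successive points on the curve."""
    pts = np.array([[np.log(np.dot(l, l)), np.log(np.sum((A @ l - b) ** 2))]
                    for l in (tikhonov(A, b, g) for g in gammas)])
    best, k_best = 0, -np.inf
    for i in range(1, len(gammas) - 1):
        v1, v2 = pts[i] - pts[i - 1], pts[i + 1] - pts[i]
        num = abs(v1[0] * v2[1] - v1[1] * v2[0])          # cross product of chords
        den = np.linalg.norm(v1) * np.linalg.norm(v2) * np.linalg.norm(v1 + v2)
        if den > 0 and num / den > k_best:
            best, k_best = i, num / den
    return gammas[best]
```

For badly conditioned A, solving the regularized normal equations is the simplest option; an SVD-based implementation would be more stable but is omitted from this sketch.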

To compare the exact solution with its approximation y~γ\tilde{y}_{\gamma}^{*}, and in order to avoid the “inverse crime”, we choose M=m+nM=m+n and compute y(x,t)y(x,t) again by the finite difference method, so that y(x,t)y(x,t) and y~γ\tilde{y}_{\gamma}^{*} are defined on the same grid.

Figure 4.1 shows the numerical solution of the initial-boundary value problem and the approximate solutions of the Cauchy problem of Example 4.1(a) with different noise levels. Since the data are given at x=1x=1, the numerical solution deteriorates as xx tends to 0; this is consistent with the convergence estimate in Section 3, because no information is given on the boundary portion {x=0}\{x=0\}. Furthermore, the convergence rate in Section 3 is only guaranteed on temporal intervals [ϵ,Tϵ][\epsilon,T-\epsilon], ϵ>0\epsilon>0. However, the figure shows that the proposed method also works well at t=0t=0.

Denote the relative error by

E(x)=\frac{\|\tilde{y}_{\gamma}^{*}(x,\cdot)-y(x,\cdot)\|_{L^{2}(0,T)}}{\|y(x,\cdot)\|_{L^{2}(0,T)}}. (4.7)
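A discrete analogue of (4.7), with the norms taken along the time axis so that the error depends on the remaining (spatial) variable, can be written as follows (the function name is ours):

```python
import numpy as np

def relative_error(y_tilde, y, axis=1):
    """Discrete version of (4.7): relative l^2 error along the given axis.

    For arrays indexed as y[x_index, t_index], axis=1 reduces over time and
    returns E as a function of x; swapping the axis yields E(t) instead.
    """
    num = np.linalg.norm(y_tilde - y, axis=axis)
    den = np.linalg.norm(y, axis=axis)
    return num / den
```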
[Figure 4.1: Approximated solutions with different noise levels of Example 4.1(a); panels: (a) δ=1%, (b) δ=5%, (c) δ=10%.]

As in the study of stochastic equations, a large number of sample paths should be used in the numerical experiments to simulate the expectation of the solution. Thus, we run the tests with different numbers of sample paths. Interestingly, Figure 4.2 shows that once the number of sample paths exceeds 10, the results do not improve further. Thus, in the following, we perform the experiments with #=10\#=10 sample paths.
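The sample-path averaging can be sketched generically as follows; `solve_one_path` is a hypothetical callback that runs the whole reconstruction for a single Brownian path and returns the recovered solution on a fixed grid:

```python
import numpy as np

def mc_expectation(solve_one_path, n_paths, seed=0):
    """Average the recovered solution over independent Brownian sample paths.

    Figure 4.2 suggests that around 10 paths already suffice in our tests;
    solve_one_path(rng) must return an array of fixed shape.
    """
    rng = np.random.default_rng(seed)
    acc = None
    for _ in range(n_paths):
        y = solve_one_path(rng)
        acc = y if acc is None else acc + y
    return acc / n_paths
```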

[Figure 4.2: Numerical results by expectation of solutions for different numbers of sample paths; panels: (a) relative errors E(x)E(x), (b) numerical approximations of y(0,x)y(0,x).]

Now we consider the problem in Example 4.1(b). In this case, y0y_{0} is a piecewise smooth function. Figure 4.3 shows the approximations of the boundary value y(t,0)y(t,0) in (a) and the approximations of the initial value y(0,x)y(0,x) with different noise levels in (b). Moreover, to illustrate the effectiveness of the proposed method, the relative errors as functions of xx and tt are given in Figure 4.3(c) and (d), respectively.

[Figure 4.3: Numerical results of Example 4.1(b) with different noise levels δ\delta; panels: (a) approximations of y(t,0)y(t,0), (b) approximations of y(0,x)y(0,x), (c) relative errors E(x)E(x), (d) relative errors E(t)E(t).]

We mention that the Cauchy problem for the deterministic parabolic equation was considered in [20], where the proposed numerical algorithm based on a Carleman weight function stably recovers the solution near the boundary on which the data are given; more precisely, for t[0.5,1]t\in[0.5,1] in a 1-D example. In comparison with the results of Example 4.1, our method recovers the solution for t[0.1,1]t\in[0.1,1] for the stochastic parabolic equation. We believe this algorithm also works well for deterministic parabolic equations, and can be seen as an improvement over previous works. Furthermore, the proposed method also works well for 2-D examples in both rectangular and disc domains.

We explore the effectiveness of the proposed numerical method in the following two examples on bounded domains in 2\mathbb{R}^{2}, as well as the influence of the parameters in the kernel-based learning method. We always take δ=3%\delta=3\% unless stated otherwise.

Example 4.2.

Suppose that G=[1,1]×[1,1]G=[-1,1]\times[-1,1], and Γ1={x1=1,x2[1,1]},Γ2={x1[1,1]},Γ3=G\{x1=1,x2[1,1]}\Gamma_{1}=\{x_{1}=1,x_{2}\in[-1,1]\},\Gamma_{2}=\{x_{1}\in[-1,1]\},\Gamma_{3}=\partial G\backslash\{x_{1}=-1,x_{2}\in[-1,1]\}. Let

  1. (a)

    y0(x1,x2)=sin(πx1)sin(πx2)+2,g1(t,x1,x2)=2.y_{0}(x_{1},x_{2})=\sin{(\pi x_{1})}\sin{(\pi x_{2})}+2,\quad g_{1}(t,x_{1},x_{2})=2.

  2. (b)

y_0(x_1,x_2)=\begin{cases}3,&(x_1-0.5)^2+(x_2-0.5)^2\leq 0.15^2\\ 3,&(x_1-0.5)^2+(x_2+0.5)^2\leq 0.15^2\\ 3,&(x_1+0.5)^2+(x_2-0.5)^2\leq 0.15^2\\ 3,&(x_1+0.5)^2+(x_2+0.5)^2\leq 0.15^2\\ 1,&\text{otherwise},\end{cases}\qquad g_1(t,x_1,x_2)=1.

We first consider the optimal choices of the parameters RR and DTDT in the kernel-based approximation process, in the case that the Cauchy data are given on Γ2\Gamma_{2}. We set DT=0.1DT=-0.1 and observe the relative error EE as RR varies. It can be seen from Figure 4.4 that for R[2.5,3.5]R\in[2.5,3.5] the method performs well. The numerical results in Figure 4.5 illustrate that any DT[0.2,0.1]DT\in[-0.2,-0.1] is a reasonable choice. Thus we fix R=3.5,DT=0.1R=3.5,DT=-0.1 in the following computations. The numerical approximations and relative errors for different noise levels are shown in Figure 4.6.
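The parameter scan just described can be sketched generically; `run_experiment` is a hypothetical callback that performs one full reconstruction with the given `(R, DT)` and returns the resulting relative error:

```python
import numpy as np

def tune_R(candidates, run_experiment, DT=-0.1):
    """Fix DT and scan a list of R values, keeping the one with the smallest
    relative error; run_experiment(R, DT) -> scalar error (problem-specific)."""
    errs = [run_experiment(R, DT) for R in candidates]
    i = int(np.argmin(errs))
    return candidates[i], errs[i]
```

The same one-dimensional scan, with R fixed, is used to select DT.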

[Figure 4.4: The relative error E(x1,x2)E(x_{1},x_{2}) for different RR.]
[Figure 4.5: The relative error E(x1,x2)E(x_{1},x_{2}) for different DTDT.]
[Figure 4.6: The approximations of y(0,x1,x2)y(0,x_{1},x_{2}) and relative errors E(x1,x2)E(x_{1},x_{2}) for noise levels δ=1%,5%,10%\delta=1\%,5\%,10\% for Example 4.2(a).]

According to the conditional stability and convergence estimates analyzed in Sections 2 and 3, the length of the portion of the boundary on which the Cauchy data are given also affects the approximation. Thus, we test the proposed method for this example with Cauchy data given on Γj,j=1,2,3\Gamma_{j},j=1,2,3; see Figure 4.7.

[Figure 4.7: Approximations and relative errors for Example 4.2(b) with Cauchy data given on Γ1\Gamma_{1}, Γ2\Gamma_{2}, and Γ3\Gamma_{3}.]

In Example 4.2, GG is a rectangular domain, so the initial-boundary value problem can be computed well by the finite difference method. However, when GG is a general bounded domain, the finite difference method would have to be adapted to heterogeneous grids. To avoid the complicated analysis of the numerical solution of the initial-boundary value problem, we solve it by kernel-based learning theory, treating the fundamental solution as the kernel.
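The mesh-free character shows up already in how collocation points are generated: on a general domain only point locations are needed, no structured grid. For the arc ΓΘ\Gamma_{\Theta} of the unit circle used below, this is simply (the function name is ours):

```python
import numpy as np

def boundary_collocation(Theta, n):
    """Collocation points on Gamma_Theta = {theta in [0, Theta], r = 1} of the
    unit disc; only the point locations are needed (kernel-based, mesh-free)."""
    theta = np.linspace(0.0, Theta, n)
    return np.cos(theta), np.sin(theta)
```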

Example 4.3.

Suppose that G={r21}G=\{r^{2}\leq 1\}, where rx12+x22r\triangleq\sqrt{x_{1}^{2}+x_{2}^{2}}, tanθ=x2/x1\tan\theta=x_{2}/x_{1}, and ΓΘ={θ[0,Θ],r=1}\Gamma_{\Theta}=\{\theta\in[0,\Theta],r=1\}. Let y_0(x_1,x_2)=\begin{cases}3,&0.2\leq x_{1}\leq 0.6,\ 0.2\leq x_{2}\leq 0.6\\ 1,&\text{otherwise},\end{cases}\qquad g_1(t,x_1,x_2)=1.

We show the numerical results for Θ=π3\Theta=\frac{\pi}{3} with various δ\delta, and the results with Cauchy data on different portions of the boundary, Θ=π6,π4,π2\Theta=\frac{\pi}{6},\frac{\pi}{4},\frac{\pi}{2}, with δ=3%\delta=3\%, in Figures 4.8 and 4.9, respectively.

[Figure 4.8: Numerical results with different noise levels (δ=1%,5%,10%\delta=1\%,5\%,10\%) for Example 4.3.]
[Figure 4.9: Numerical results with Cauchy data on Γπ6\Gamma_{\frac{\pi}{6}}, Γπ4\Gamma_{\frac{\pi}{4}}, Γπ2\Gamma_{\frac{\pi}{2}} for Example 4.3.]

In conclusion, we have carried out numerical experiments for the Cauchy problem of the stochastic parabolic equation on both 1-D and 2-D bounded domains, with smooth, piecewise smooth, and discontinuous initial conditions; the numerical results illustrate that the proposed method works well in all these examples. Although we have not presented examples on other irregular 2-D domains or on 3-D domains, the kernel-based learning theory suggests that the method should still apply, since it imposes no limitation on the dimension nn or the shape of the domain; only the distances between source points and collocation points matter. Moreover, since no iterative method is used in the numerical computation, the computational cost is low. However, the convergence rate of the kernel-based learning method is still open.

Acknowledgement

The authors would like to thank Professor Kui Ren for providing valuable suggestions. This work is partially supported by the NSFC (No. 12071061), the Science Fund for Distinguished Young Scholars of Sichuan Province (No. 2022JDJQ0035).

References

  • [1] V. Barbu and M. Röckner. Backward uniqueness of stochastic parabolic like equations driven by gaussian multiplicative noise. Stochastic Processes and their Applications, 126(7):2163–2179, 2016.
  • [2] L. Beilina and M.V. Klibanov. Approximate global convergence and adaptivity for coefficient inverse problems. Springer Science & Business Media, 2012.
  • [3] R.C. Dalang, D. Khoshnevisan, C. Müller, D. Nualart, and Y.M. Xiao. A minicourse on stochastic partial differential equations, volume 1962. Springer, 2009.
  • [4] F.F. Dou and W. Du. Determination of the solution of a stochastic parabolic equation by the terminal value. Inverse Problems, 38(7):075010, 2022.
  • [5] F.F. Dou and Y.C. Hon. Kernel-based approximation for Cauchy problem of the time-fractional diffusion equation. Engineering Analysis with Boundary Elements, 36(9):1344–1352, 2012.
  • [6] X. Fu and X. Liu. A weighted identity for stochastic partial differential operators and its applications. Journal of Differential Equations, 262(6):3551–3582, 2017.
  • [7] J.B. Walsh. An introduction to stochastic partial differential equations. Springer, 1986.
  • [8] J. Glimm. Nonlinear and stochastic phenomena: the grand challenge for partial differential equations. SIAM Review, 33:626–643, 1991.
  • [9] T. Takeshi. Successive approximations to solutions of stochastic differential equations. Journal of Differential Equations, 96:152–199, 1992.
  • [10] P.E. Kloeden and E. Platen. Numerical Solution of Stochastic Differential Equations. Springer, 1992.
  • [11] Y. Gong, P. Li, X. Wang, and X. Xu. Numerical solution of an inverse random source problem for the time fractional diffusion equation via phaselift. Inverse Problems, 37(4):045001, 2021.
  • [12] P.C. Hansen. Numerical tools for analysis and solution of Fredholm integral equations of the first kind. Inverse Problems, 8(6):849–872, 1992.
  • [13] Y.C. Hon and T. Wei. A fundamental solution method for inverse heat conduction problem. Engineering Analysis with Boundary Elements, 28(5):489–495, 2004.
  • [14] V. Isakov and S. Kindermann. Identification of the diffusion coefficient in a one-dimensional parabolic equation. Inverse Problems, 16(3):665–680, 2000.
  • [15] Y.L. Keung and J. Zou. Numerical identifications of parameters in parabolic systems. Inverse Problems, 14(1):83–100, 1998.
  • [16] M.V. Klibanov and A.V. Tikhonravov. Estimates of initial conditions of parabolic equations and inequalities in infinite domains via lateral cauchy data. Journal of Differential Equations, 237(1):198–224, 2007.
  • [17] M.V. Klibanov. Estimates of initial conditions of parabolic equations and inequalities via lateral Cauchy data. Inverse Problems, 22(2):495, 2006.
  • [18] P. Kotelenez. Stochastic ordinary and stochastic partial differential equations. Transition from microscopic to macroscopic equations. Springer, New York, 2008.
  • [19] M.V. Klibanov. Carleman estimates for global uniqueness, stability and numerical methods for coefficient inverse problems. Journal of Inverse and Ill-Posed Problems, 21(4):477–560, 2013.
  • [20] M.V. Klibanov, N.A. Koshev, J. Li, and A.G. Yagola. Numerical solution of an ill-posed Cauchy problem for a quasilinear parabolic equation using a Carleman weight function. Journal of Inverse and Ill-Posed Problems, 24(6):761–776, 2016.
  • [21] H. Li and Q. Lü. A quantitative boundary unique continuation for stochastic parabolic equations. Journal of Mathematical Analysis and Applications, 402(2):518–526, 2013.
  • [22] J. Li, M. Yamamoto, and J. Zou. Conditional stability and numerical reconstruction of initial temperature. Communications on Pure & Applied Analysis, 8(1):361–382, 2009.
  • [23] Q. Lü. Carleman estimate for stochastic parabolic equations and inverse stochastic parabolic problems. Inverse Problems, 28(4):045008, 2012.
  • [24] Q. Lü and X. Zhang. Mathematical control theory for stochastic partial differential equations. Springer, Cham, 2021.
  • [25] Q. Lü and X. Zhang. Inverse problems for stochastic partial differential equations: Some progresses and open problems, Numerical Algebra, Control and Optimization, 2023, DOI: 10.3934/naco.2023014.
  • [26] K.R. Müller, S. Mika, K. Tsuda, G. Rätsch, and B. Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, 2001.
  • [27] P. Niu, T. Helin, and Z. Zhang. An inverse random source problem in a stochastic fractional diffusion equation. Inverse Problems, 36(4):045002, 2020.
  • [28] R. Schaback. Convergence of unsymmetric kernel-based meshless collocation methods. SIAM Journal on Numerical Analysis, 45(1):333–351, 2007.
  • [29] S. Tang and X. Zhang. Null controllability for forward and backward stochastic parabolic equations. SIAM Journal on Control and Optimization, 48(4):2191–2216, 2009.
  • [30] B. Wu, Q. Chen, and Z. Wang. Carleman estimates for a stochastic degenerate parabolic equation and applications to null controllability and an inverse random source problem. Inverse Problems, 36(7):075014, 2020.
  • [31] M. Yamamoto. Carleman estimates for parabolic equations and applications. Inverse Problems, 25(12):123013, 2009.
  • [32] Z. Yin. A quantitative internal unique continuation for stochastic parabolic equations. Mathematical Control and Related Fields, 5(1):165–176, 2015.
  • [33] G. Yuan and M. Yamamoto. Lipschitz stability in the determination of the principal part of a parabolic equation. ESAIM: Control, Optimisation and Calculus of Variations, 15(3):525–554, 2009.
  • [34] G. Yuan. Conditional stability in determination of initial data for stochastic parabolic equations. Inverse Problems, 33(3):035014, 2017.
  • [35] X. Zhang. Unique continuation for stochastic parabolic equations. Differential Integral Equations, 21:81–93, 2008.