
Higher-Order Regularity of the Free Boundary in
the Inverse First-Passage Problem

Xinfu Chen Department of Mathematics, University of Pittsburgh, [email protected]    John Chadam Department of Mathematics, University of Pittsburgh, [email protected]    David Saunders Department of Statistics and Actuarial Science, University of Waterloo, [email protected]
Abstract

Consider the inverse first-passage problem: Given a diffusion process {𝔛t}t0\{{\mathfrak{X}}_{t}\}_{t\geqslant 0} on a probability space (Ω,,)(\Omega,{\mathcal{F}},{\mathbb{P}}) and a survival probability function pp on [0,)[0,\infty), find a boundary, x=b(t)x=b(t), such that pp is the survival probability that 𝔛{\mathfrak{X}} does not fall below bb, i.e., for each t0t\geqslant 0, p(t)=({ωΩ|𝔛s(ω)b(s),s(0,t)})p(t)={\mathbb{P}}(\{\omega\in\Omega\;|\;{\mathfrak{X}}_{s}(\omega)\geqslant b(s),\ \forall\,s\in(0,t)\}). In earlier work, we analyzed viscosity solutions of a related variational inequality, and showed that they provided the only upper semi-continuous (usc) solutions of the inverse problem. We furthermore proved weak regularity (continuity) of the boundary bb under additional assumptions on pp. The purpose of this paper is to study higher-order regularity properties of the solution of the inverse first-passage problem. In particular, we show that when pp is smooth and has negative slope, the viscosity solution, and therefore also the unique usc solution of the inverse problem, is smooth. Consequently, the viscosity solution furnishes a unique classical solution to the free boundary problem associated with the inverse first-passage problem.

1 Introduction

1.1 The First-Passage Problem and its Inverse

Let 𝔛={𝔛t}t0{\mathfrak{X}}=\{{\mathfrak{X}}_{t}\}_{t\geqslant 0} be the solution to the stochastic differential equation

d𝔛t=μ(𝔛t,t)dt+σ(𝔛t,t)d𝔅tt>0,\displaystyle d{\mathfrak{X}}_{t}=\mu({\mathfrak{X}}_{t},t)\,dt+\sigma({\mathfrak{X}}_{t},t)\,d{\mathfrak{B}}_{t}\quad\forall\,t>0,

where {𝔅t}t0\{{\mathfrak{B}}_{t}\}_{t\geqslant 0} is a standard Brownian motion defined on a probability space (Ω,)(\Omega,{\mathbb{P}}) and μ,σ\mu,\sigma are smooth bounded functions with inf×[0,)σ>0\inf_{{\mathbb{R}}\times[0,\infty)}\sigma>0. The boundary crossing, or first-passage problem for the process 𝔛{\mathfrak{X}} concerns the following:

  1. 1.

    The Forward Problem: Given a function b:(0,)[,)b:(0,\infty)\to[-\infty,\infty), compute the survival probability, 𝒫[b]{\mathcal{P}}[b], that 𝔛{\mathfrak{X}} does not fall below bb, i.e. evaluate

    𝒫[b](t):=({ωΩ|𝔛s(ω)b(s)s(0,t)})t0.\displaystyle{\mathcal{P}}[b](t):={\mathbb{P}}(\{\omega\in\Omega\;|\;{\mathfrak{X}}_{s}(\omega)\geqslant b(s)\ \forall\,s\in(0,t)\})\quad\forall\,t\geqslant 0. (1.1)
  2. 2.

    The Inverse Problem: Given a survival probability pp, find a barrier bb such that 𝒫[b]=p{\mathcal{P}}[b]=p.

The forward problem is classical and the subject of a large literature. According to Zucca and Sacerdote (2009), the inverse problem was first suggested by A.N. Shiryaev during a Banach Centre meeting, for the case where 𝔛{\mathfrak{X}} is a Brownian motion, and the first-passage distribution is exponential, i.e. p(t)=eλtp(t)=e^{-\lambda t} for some λ>0\lambda>0. Dudley and Gutmann (1977) proved the existence of a stopping time for 𝔛{\mathfrak{X}} with a given law. Anulova (1980) demonstrated the existence of a stopping time of the form τ=inf{t|(𝔛t,t)B}\tau=\inf\{t\;|\;({\mathfrak{X}}_{t},t)\in B\} for a closed set B{(x,t)|x[,],t[0,]}B\subset\{(x,t)\;|\;x\in[-\infty,\infty],t\in[0,\infty]\} with the properties that if (x,t)B(x,t)\in B then (x,t)B(-x,t)\in B, and if x0x\geq 0 and (x,t)B(x,t)\in B, then [x,]×{t}B[x,\infty]\times\{t\}\subseteq B. Defining b(t)=inf{x0:(x,t)B}:[0,][0,]b(t)=\inf\{x\geqslant 0:(x,t)\in B\}:[0,\infty]\to[0,\infty], then bb satisfies the two-sided version of the inverse problem, p(t)=({ωΩ|b(s)𝔛s(ω)b(s)s(0,t)}),t0p(t)={\mathbb{P}}(\{\omega\in\Omega\;|\;-b(s)\leqslant{\mathfrak{X}}_{s}(\omega)\leqslant b(s)\ \forall\,s\in(0,t)\}),\,\forall\,t\geqslant 0.
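For illustration, the forward map 𝒫[b] in (1.1) can be approximated directly by simulation. The following is a minimal Monte Carlo sketch (our own illustration, not part of the analysis in this paper; the function survival_probability and its parameters are hypothetical names). It discretizes the SDE by Euler–Maruyama and counts the paths that remain above the barrier at the grid times; crossings between grid points are missed, so the estimate is biased upward for coarse time steps. For standard Brownian motion and a constant barrier b ≡ a < 0 it can be compared with the reflection-principle formula p(t) = 1 − 2Φ(a/√t).

```python
import numpy as np
from math import erf, sqrt

def survival_probability(b, T, n_steps=2000, n_paths=200_000, x0=0.0,
                         mu=lambda x, t: 0.0, sigma=lambda x, t: 1.0, seed=0):
    """Monte Carlo estimate of P[b](T) = P(X_s >= b(s) for all s in (0, T]).

    Euler-Maruyama discretization of dX = mu dt + sigma dB; a path is counted
    as killed the first time it is observed below the barrier at a grid time.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    alive = np.ones(n_paths, dtype=bool)
    t = 0.0
    for _ in range(n_steps):
        x = x + mu(x, t) * dt + sigma(x, t) * rng.normal(0.0, sqrt(dt), n_paths)
        t += dt
        alive &= x >= b(t)
    return alive.mean()

# Sanity check: Brownian motion, constant barrier b(t) = a < 0.
# Reflection principle: p(t) = 1 - 2*Phi(a/sqrt(t)).
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
a, T = -1.0, 1.0
print("exact      ", 1.0 - 2.0 * Phi(a / sqrt(T)))
print("Monte Carlo", survival_probability(lambda t: a, T))
```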

In the 2000’s, the inverse problem became the subject of renewed interest due to applications in financial mathematics. In particular, Avellaneda and Zhu (2001) formulated the one-sided inverse problem given above as a free boundary problem for the Kolmogorov forward equation associated with 𝔛{\mathfrak{X}}, and discussed its numerical solution. Other numerical approaches and applications to credit risk were studied by Hull and White (2001), Iscoe and Kreinin (2002), and Huang and Tian (2006). In Cheng et al. (2006), we presented a rigorous mathematical analysis of the free boundary problem (formulated as a variational inequality), first demonstrating the existence of a unique viscosity solution to the problem, and analyzing integral equations satisfied by the boundary bb, and its asymptotic behaviour for small tt. In Chen et al. (2011), we proved that the boundary bb arising from the variational inequality does indeed solve the probabilistic formulation of the inverse problem. We also studied weak regularity properties of the free boundary bb. In particular, we proved the following (see Chen et al. (2011), Proposition 6):

Proposition 1.

Suppose that 𝔛{\mathfrak{X}} is standard Brownian motion, started at 0 (i.e. μ0\mu\equiv 0, σ1\sigma\equiv 1, and 𝔛0=0{\mathfrak{X}}_{0}=0), and that pp is continuous with p(0)=1p(0)=1. Define:

L(p,T1,T2):=infT1s<tT2p(s)p(t)ts,0T1<T2.L(p,T_{1},T_{2}):=\inf_{T_{1}\leqslant s<t\leqslant T_{2}}\frac{p(s)-p(t)}{t-s},\quad\forall 0\leqslant T_{1}<T_{2}.
  1. 1.

    If L(p,T1,T2)>0L(p,T_{1},T_{2})>0 for some positive T1,T2T_{1},T_{2} with T1<T2T_{1}<T_{2}, then bb is continuous on (T1,T2)(T_{1},T_{2}).

  2. 2.

    Assume that L(p,0,T)>0L(p,0,T)>0 for every T>0T>0. Then bC([0,))b\in C([0,\infty)).

In particular, if p(t)=eλtp(t)=e^{-\lambda t}, then L(p,0,T)=λeλT>0L(p,0,T)=\lambda e^{-\lambda T}>0, yielding continuity of the boundary in the problem posed by Shiryaev.
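As a side remark, the quantity L(p,T₁,T₂) is easy to estimate numerically. The short sketch below (our own illustration, with a hypothetical helper name) takes the infimum of the difference quotients over all pairs of grid points; for the exponential survival function it returns a value close to λe^{−λT}.

```python
import numpy as np

def L(p, T1, T2, n=400):
    """Grid approximation of L(p,T1,T2) = inf_{T1<=s<t<=T2} (p(s)-p(t))/(t-s)."""
    ts = np.linspace(T1, T2, n)
    ps = p(ts)
    dt = ts[None, :] - ts[:, None]                       # t - s for every pair
    dq = (ps[:, None] - ps[None, :]) / (dt + np.eye(n))  # eye avoids 0/0 on the diagonal
    return dq[dt > 0].min()                              # keep only pairs with s < t

lam, T = 2.0, 1.0
print(L(lambda t: np.exp(-lam * t), 0.0, T))  # close to lam*exp(-lam*T) = 0.2707...
```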

The purpose of this paper is to study higher-order regularity properties of the boundary bb in the one-sided inverse problem.

Several other papers have also studied the inverse problem. Ekström and Janson (2016) show that the solution of the inverse first-passage problem is the same as the solution of a related optimal stopping problem, and present an analysis of an associated integral equation for the stopping boundary bb. Integral equations related to the problem are discussed in Peskir (2002) and Peskir and Shiryaev (2005). Abundo (2006) studied the small-time behaviour of the boundary bb. A rigorous construction of the boundary based on a discretization procedure was recently presented in Potiron (2021).

Remark 1.1.

Traditionally, the forward problem is studied for upper-semi-continuous (usc) bb and the conventional survival probability, p^\hat{p}, is defined in terms of the first crossing time, τ^\hat{\tau}, by

τ^(ω):=inf{t>0|𝔛t(ω)b(t)},p^(t):=({ωΩ|τ^(ω)>t}).\displaystyle\qquad\hat{\tau}(\omega):=\inf\{t>0\;|\;{\mathfrak{X}}_{t}(\omega)\leqslant b(t)\},\qquad\hat{p}(t):={\mathbb{P}}(\{\omega\in\Omega\;|\;\hat{\tau}(\omega)>t\}).

Here, the survival probability, p=𝒫[b]p={\mathcal{P}}[b] in (1.1), is defined by

τ(ω):=inf{t>0|𝔛t(ω)<b(t)},p(t):=({ωΩ|τ(ω)t}).\displaystyle\tau(\omega):=\inf\{t>0\;|\;{\mathfrak{X}}_{t}(\omega)<b(t)\},\qquad p(t):={\mathbb{P}}(\{\omega\in\Omega\;|\;\tau(\omega)\geqslant t\}).

It is shown in Chen et al. (2011) that 𝒫[b]{\mathcal{P}}[b] is well-defined for each bb. In addition, define bb^{*} and bb_{-}^{*} by

b(t):=max{b(t),lim¯stb(s)},b(t):=lim¯stb(s).\displaystyle b^{*}(t):=\max\Big{\{}b(t),\;\varlimsup_{s\to t}b(s)\Big{\}},\quad b^{*}_{-}(t):=\varlimsup_{s\nearrow t}b(s).

Then (i) 𝒫[b]=𝒫[b]{\mathcal{P}}[b]={\mathcal{P}}[b^{*}], (ii) when b=bb=b^{*}, τ=τ^\tau=\hat{\tau} almost surely, and (iii) when b=bb^{*}=b_{-}^{*}, 𝒫[b]C((0,)){\mathcal{P}}[b]\in C((0,\infty)). Since 𝒫[b]=𝒫[b]{\mathcal{P}}[b]={\mathcal{P}}[b^{*}], it is convenient for the inverse problem to restrict the search to bb in the class of usc functions, i.e., those bb that satisfy b=bb=b^{*}. In Chen et al. (2011) it is shown that for every pP0p\in P_{0}, where

P0:={pC([0,))|p(0)=1p(s)p(t)>0t>s0},\displaystyle P_{0}:=\{p\in C([0,\infty))\;\;|\;\;p(0)=1\geqslant p(s)\geqslant p(t)>0\quad\forall\,t>s\geqslant 0\}, (1.2)

the inverse problem admits a unique usc solution.

1.2 The Free Boundary Problems

We introduce differential operators {\mathcal{L}} and 1{\mathcal{L}}_{1} defined by

ϕ:=tϕ12x(σ2xϕ)+μxϕ,1ϕ:=tϕ12xx2(σ2ϕ)+x(μϕ).\displaystyle{\mathcal{L}}\phi:=\partial_{t}\phi-\tfrac{1}{2}\partial_{x}(\sigma^{2}\partial_{x}\phi)+\mu\partial_{x}\phi,\qquad{\mathcal{L}}_{1}\phi:=\partial_{t}\phi-\tfrac{1}{2}\partial_{xx}^{2}(\sigma^{2}\phi)+\partial_{x}(\mu\phi).

The survival distribution, ww, and survival density, uu, are defined by

w(x,t):=(τt,𝔛t>x),u(x,t):=xw(x,t).\displaystyle w(x,t):={\mathbb{P}}(\tau\geqslant t,{\mathfrak{X}}_{t}>x),\qquad u(x,t):=-\partial_{x}w(x,t).

We denote the distribution of 𝔛t{\mathfrak{X}}_{t} by w0w_{0} and its density by u0u_{0}:

w0(x,t):=(𝔛t>x),u0(x)dx:=(𝔛0(x,x+dx)).\displaystyle w_{0}(x,t):={\mathbb{P}}({\mathfrak{X}}_{t}>x),\qquad u_{0}(x)dx:={\mathbb{P}}({\mathfrak{X}}_{0}\in(x,x+dx)).

When bb is smooth, one can show that (b,w,p)(b,w,p) satisfies

{w=0for x>b(t),t>0,xw(x,t)=0for xb(t),t>0,w(x,0)=w0(x,0)for x,t=0,p(t)=w(b(t),t)for x=b(t),t>0,\displaystyle\left\{\begin{array}[]{ll}{\mathcal{L}}w=0&\hbox{for \ }x>b(t),t>0,\\ \partial_{x}w(x,t)=0&\hbox{for \ }x\leqslant b(t),t>0,\\ w(x,0)=w_{0}(x,0)&\hbox{for \ }x\in{\mathbb{R}},\ \ t=0,\\ p(t)=w(b(t),t)&\hbox{for \ }x=b(t),t>0,\end{array}\right. (1.7)

and (b,u,p)(b,u,p) satisfies

{1u=0for x>b(t),t>0,u(x,t)=0for xb(t),t>0,u(x,0)=u0(x)for x,t=0,2p˙(t)=σ2xu|x=b(t)+for x=b(t),t>0.\displaystyle\left\{\begin{array}[]{ll}{\mathcal{L}}_{1}u=0&\hbox{for \ }x>b(t),t>0,\\ u(x,t)=0&\hbox{for \ }x\leqslant b(t),t>0,\\ u(x,0)=u_{0}(x)&\hbox{for \ }x\in{\mathbb{R}},\ \ t=0,\\ 2\dot{p}(t)=-\sigma^{2}\partial_{x}u|_{x=b(t)+}&\hbox{for \ }x=b(t),t>0.\end{array}\right. (1.12)

Note that (1.7) and (1.12) are equivalent in the class of smooth functions via the transformation

u(x,t)=xw(x,t),w(x,t)=xu(y,t)𝑑yx,t0.\displaystyle u(x,t)=-\partial_{x}w(x,t),\qquad w(x,t)=\int_{x}^{\infty}u(y,t)dy\qquad\forall\,x\in{\mathbb{R}},t\geqslant 0.

When bb is given and regular, say, Lipschitz continuous, the forward problem can be easily handled by first solving the initial–boundary value problem consisting of the first three equations in (1.7) and then evaluating pp from the last equation in (1.7). For the inverse problem, both (1.7) and (1.12) are free boundary problems since the domain Qb:={(x,t)|x>b(t),t>0}Q_{b}:=\{(x,t)\;|\;x>b(t),t>0\}, where the equations, w=0{\mathcal{L}}w=0 and 1u=0{\mathcal{L}}_{1}u=0, are satisfied, is a priori unknown. So far, little is known concerning the existence, uniqueness, and regularity of classical solutions of the free boundary problems. Here by a classical solution, (b,w)(b,w), of (1.7) we mean that ww0C(×[0,)),xwC(×(0,)),tw,xx2wC(Qb)w-w_{0}\in C({\mathbb{R}}\times[0,\infty)),\partial_{x}w\in C({\mathbb{R}}\times(0,\infty)),\partial_{t}w,\partial^{2}_{xx}w\in C(Q_{b}), and each equation in (1.7) is satisfied; similarly, by a classical solution, (b,u)(b,u), of (1.12), we mean that u+w0xC(×[0,)),tu,xx2uC(Qb)u+w_{0x}\in C({\mathbb{R}}\times[0,\infty)),\partial_{t}u,\partial_{xx}^{2}u\in C(Q_{b}), and each equation in (1.12) is satisfied. In this paper, we investigate the well-posedness of the free boundary problem and the smoothness of the free boundary.

1.3 The Weak Formulation

In Cheng et al. (2006), viscosity solutions for the inverse problem, based on the variational inequality

max{w,wp}=0in ×(0,),\displaystyle\max\{{\mathcal{L}}w,w-p\}=0\quad\hbox{in \ }{\mathbb{R}}\times(0,\infty),

are introduced. It is shown that for any given probability distribution pp on [0,)[0,\infty), there exists a unique viscosity solution. This was followed up in Chen et al. (2011) in which it was shown that the viscosity solution of the variational inequality gives the solution of the (probabilistic) inverse problem. For easy reference, we quote the relevant results.

Definition 1.

Let pP0p\in P_{0} be given where P0P_{0} is as in (1.2). A viscosity solution of the inverse problem associated with pp is a function bb defined by

b(t):=inf{x|w(x,t)<p(t)},t>0,\displaystyle b(t):=\inf\;\{x\in{\mathbb{R}}\;|\;w(x,t)<p(t)\},\quad\forall\,t>0, (1.13)

provided that ww has the following properties:

  1. 1.

    wC(×(0,))w\in C({\mathbb{R}}\times(0,\infty)), limt0w(,t)(𝔛t>)L()=0\lim_{t\searrow 0}\|w({\bm{\cdot}},t)-{\mathbb{P}}({\mathfrak{X}}_{t}>\,\bm{\cdot}\,)\|_{L^{\infty}({\mathbb{R}})}=0;

  2. 2.

    0wp0\leqslant w\leqslant p in ×(0,){\mathbb{R}}\times(0,\infty) and w=0{\mathcal{L}}w=0 in the set {(x,t)|t>0,w(x,t)<p(t)}\{(x,t)\;|\;t>0,w(x,t)<p(t)\};

  3. 3.

    If for a smooth φ\varphi, xx\in{\mathbb{R}} and t>δ>0t>\delta>0, the function φw\varphi-w attains its local minimum on [xδ,x+δ]×[tδ,t][x-\delta,x+\delta]\times[t-\delta,t] at (x,t)(x,t), then φ(x,t)0{\mathcal{L}}\varphi(x,t)\leqslant 0.


One can verify that if (b,w)(b,w) (or (b,u)(b,u)) is a classical solution of the free boundary problem (1.7) (or (1.12)), then bb is a viscosity solution of the inverse problem associated with pp.

Proposition 2 (Well-posedness of the Inverse Problem Cheng et al. (2006); Chen et al. (2011)).

Let pP0p\in P_{0} be given.

  1. 1.

    Cheng et al. (2006): There exists a unique viscosity solution, bb, of the inverse problem associated with pp.

  2. 2.

    Chen et al. (2011): The viscosity solution is a usc solution of the inverse problem, i.e. b=bb=b^{*} and 𝒫[b]=p{\mathcal{P}}[b]=p.

  3. 3.

    Chen et al. (2011): There exists a unique usc solution of the inverse problem associated with pp.


It is clear now that the viscosity solution is the right choice for the inverse problem. For convenience, in the sequel, we shall call bb, (b,w)(b,w), (b,u)(b,u), or (b,w,u)(b,w,u), the solution or the viscosity solution of the inverse problem, where bb is the viscosity solution boundary, ww is the viscosity solution for the survival distribution, and u=xwu=-\partial_{x}w is the viscosity solution for the survival density of the inverse problem associated with pp. We shall also call the curve x=b(t)x=b(t) the free boundary.

1.4 The Main Result: Higher Order Regularity

While the work of Cheng et al. (2006) and Chen et al. (2011) solves the inverse problem, and presents a basic study of weak regularity, here we make a detailed study of the regularity of the free boundary. The main result of this paper is the following, where α+12\llbracket\alpha+\frac{1}{2}\rrbracket denotes the integer part of α+12\alpha+\frac{1}{2}.

Theorem 1 (Regularity of the Free Boundary).

Let pP0p\in P_{0} be given and (b,w,u)(b,w,u) be the (viscosity) solution of the inverse problem associated with pp. Assume that pC1([0,)),p˙<0 on [0,)p\in C^{1}([0,\infty)),\dot{p}<0\hbox{\ on\ }[0,\infty), and for some δ>0\delta>0, either

(i) u0=0 on (,0],u0C1([0,δ]),u0(0+)>0,or   (ii) p¨L1((0,δ)).\displaystyle\hbox{\rm(i) \ }u_{0}=0\hbox{ on }(-\infty,0],\ u_{0}\in C^{1}([0,\delta]),\ u_{0}^{\prime}(0+)>0,\qquad\hbox{\rm or \ (ii) \ }\ddot{p}\in L^{1}((0,\delta)). (1.14)
  1. 1.

    Then (b,w)(b,w) and (b,u)(b,u) are classical solutions of (1.7) and (1.12) respectively with 𝒃𝑪𝟏/𝟐((𝟎,))b\in C^{1/2}((0,\infty)).

  2. 2.

    If, in addition, for 𝜶>𝟏𝟐\alpha>\frac{1}{2} not an integer, (𝒕𝟏,𝒕𝟐)(𝟎,)(t_{1},t_{2})\subset(0,\infty) one has 𝒑𝑪𝜶+𝟏𝟐((𝒕𝟏,𝒕𝟐))p\in C^{\alpha+\frac{1}{2}}((t_{1},t_{2})), then 𝒃𝑪𝜶((𝒕𝟏,𝒕𝟐))b\in C^{\alpha}((t_{1},t_{2})).

  3. 3.

    Finally, if for 𝜶𝟏𝟐\alpha\geqslant\frac{1}{2} not an integer one has 𝒑𝑪𝜶+𝟏𝟐([𝟎,))p\in C^{\alpha+\frac{1}{2}}([0,\infty)), 𝒖𝟎=𝟎u_{0}=0 on (,𝟎](-\infty,0], 𝒖𝟎𝑪𝟐𝜶([𝟎,𝜹])u_{0}\in C^{2\alpha}([0,\delta]), and (𝒖𝟎,𝒑)(u_{0},p) satisfy all compatibility conditions up to order 𝜶+𝟏𝟐\llbracket\alpha+\frac{1}{2}\rrbracket, including in particular the compatibility condition 𝟐𝒑˙(𝟎)=𝝈𝟐(𝟎,𝟎)𝒖𝟎(𝟎+)2\dot{p}(0)=-{\sigma^{2}(0,0)\;u_{0}^{\prime}(0+)} when 𝜶[𝟏𝟐,𝟑𝟐)\alpha\in[\frac{1}{2},\frac{3}{2}), then 𝒃(𝟎)=𝟎b(0)=0 and 𝒃𝑪𝜶([𝟎,))b\in C^{\alpha}([0,\infty)).


Remark 1.2.

To derive the compatibility conditions, we consider, for simplicity, the special case σ2\sigma\equiv\sqrt{2} and μ0\mu\equiv 0. Set U(x,t)=u(x+b(t),t)U(x,t)=u(x+b(t),t). Then

{Ut=Uxx+b˙Uxin (0,)×(0,),U(0,)=0,Ux(0,)=p˙()on {0}×(0,),U(,0)=u0()on [0,)×{0}.\displaystyle\left\{\begin{array}[]{ll}U_{t}=U_{xx}+\dot{b}\,U_{x}&\hbox{in \ }(0,\infty)\times(0,\infty),\vskip 6.0pt plus 2.0pt minus 2.0pt\\ U(0,\cdot)=0,U_{x}(0,\cdot)=-\dot{p}(\cdot)&\hbox{on \ }\{0\}\times(0,\infty),\vskip 6.0pt plus 2.0pt minus 2.0pt\\ U(\cdot,0)=u_{0}(\cdot)&\hbox{on \ }[0,\infty)\times\{0\}.\end{array}\right. (1.18)

The kk-th order compatibility condition is the (k1)(k-1)-th order derivative of p˙(t)=Ux(0,t)\dot{p}(t)=-U_{x}(0,t) at t=0t=0 (with differentiation of UU in time being replaced by differentiation in space by Ut=Uxx+b˙UxU_{t}=U_{xx}+\dot{b}U_{x}):

dkp(t)dtk|t=0=k1Ux(0,t)tk1|t=0=d2k1u0(x)dx2k1|x=0+.\displaystyle\frac{d^{k}p(t)}{dt^{k}}\Big{|}_{t=0}=-\frac{\partial^{k-1}U_{x}(0,t)}{\partial t^{k-1}}\Big{|}_{t=0}=-\frac{d^{2k-1}u_{0}(x)}{dx^{2k-1}}\Big{|}_{x=0}+\cdots.

The kk-th order derivative of bb at t=0t=0 is obtained by differentiating the equation b˙(t)p˙(t)=Uxx(0,t)\dot{b}(t)\dot{p}(t)=U_{xx}(0,t):

dkb(t)dtk|t=0=k1Uxx(0,t)p˙(0)tk1|t=0+=1p˙(0)d2ku0(x)dx2k|x=0+.\displaystyle\frac{d^{k}b(t)}{dt^{k}}\Big{|}_{t=0}=\frac{\partial^{k-1}U_{xx}(0,t)}{\dot{p}(0)\;\partial t^{k-1}}\Big{|}_{t=0}+\cdots=\frac{1}{\dot{p}(0)}\frac{d^{2k}u_{0}(x)}{dx^{2k}}\Big{|}_{x=0}+\cdots.

In particular, the first and second order compatibility conditions are

p˙(0)=u0(0),p¨(0)=u0′′′(0)b˙(0)u0′′(0)|b˙(0)=u0′′(0)/p˙(0).\displaystyle\dot{p}(0)=-u_{0}^{\prime}(0),\qquad\ddot{p}(0)=-u_{0}^{\prime\prime\prime}(0)-\dot{b}(0)u_{0}^{\prime\prime}(0)\Big{|}_{\dot{b}(0)={u_{0}^{\prime\prime}(0)/\dot{p}(0)}}.
Remark 1.3.

For bb to be continuous and bounded, it is necessary to assume that pp is strictly decreasing as in Proposition 1. Indeed, if pp is a constant in an open interval, then b=b=-\infty in that interval.

The main tool for the proof of Theorem 1 is the hodograph transformation, defined by the change of variables x=X(z,t)x=X(z,t), the inverse of z=u(x,t)z=u(x,t). Since u(b(t),t)=0u(b(t),t)=0, we have b(t)=X(0,t)b(t)=X(0,t). Taking σ2\sigma\equiv\sqrt{2} to simplify the exposition, and proceeding formally, we can derive that XX satisfies the following quasi-linear PDE (stated in terms of an unknown YY, which is later identified with XX):

Yt=Yz2Yzz+[zμ(Y,t)]z,p˙(t)Yz(0,t)=1,u0(Y(z,0))=z.\displaystyle Y_{t}=Y_{z}^{-2}Y_{zz}+[z\mu(Y,t)]_{z},\qquad\dot{p}(t)\;Y_{z}(0,t)=-1,\quad u_{0}(Y(z,0))=z. (1.19)
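For the reader's convenience, we record the formal computation behind (1.19); this is our own expansion of the phrase "proceeding formally" and assumes that uu is smooth with ux>0u_{x}>0 near the free boundary. Differentiating the identity z=u(X(z,t),t)z=u(X(z,t),t) once in zz and once in tt gives

1=u_{x}X_{z},\qquad 0=u_{x}X_{t}+u_{t},

so that u_{x}=1/X_{z} and, differentiating 1=u_{x}X_{z} once more in zz, u_{xx}=-X_{zz}/X_{z}^{3}. Substituting u_{t}=u_{xx}-\partial_{x}(\mu u) (from {\mathcal{L}}_{1}u=0 with \sigma^{2}=2) into X_{t}=-u_{t}/u_{x}, and using u=z and 1/u_{x}=X_{z}, yields

X_{t}=X_{z}^{-2}X_{zz}+\mu(X,t)+z\,\mu_{x}(X,t)\,X_{z}=X_{z}^{-2}X_{zz}+[z\mu(X,t)]_{z},

while the free boundary condition 2\dot{p}(t)=-\sigma^{2}\partial_{x}u|_{x=b(t)+} in (1.12) becomes \dot{p}(t)X_{z}(0,t)=-1, and the initial condition u(X(z,0),0)=z becomes u_{0}(X(z,0))=z.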

Assuming it can be shown that Xz(ε,)X_{z}({\varepsilon},\cdot) is positive on [0,T][0,T], this system is studied on the set {(z,t)|z[0,ε],t[0,T]}\{(z,t)\,|\,z\in[0,{\varepsilon}],t\in[0,T]\} for any fixed T>0T>0 and a small positive ε{\varepsilon} that depends on TT. To complete the system, we supply the boundary condition for YY on {ε}×(0,T]\{{\varepsilon}\}\times(0,T] by Yz(ε,t)=Xz(ε,t)Y_{z}({\varepsilon},t)=X_{z}({\varepsilon},t).

The classical approach to the hodograph transformation (see, e.g. Friedman (1982) or Kinderlehrer and Stampacchia (1980)) employs a bootstrapping strategy, assuming some initial degree of smoothness on (p,b)(p,b), and then using the regularity theory for (1.19) to strengthen the regularity of bb. In particular, standard results for quasi-linear equations (Ladyzhenskaya et al. (1968); Lieberman (1996)) can be used to derive the existence of a unique classical solution of (1.19), and its regularity (including up to the boundary). The assumptions on (p,b)(p,b) are sufficient to reverse the hodograph transformation, and transfer the boundary regularity of YY to b(t)=Y(0,t)b(t)=Y(0,t) (and to show that indeed Y=XY=X, where XX is defined through x=X(u(x,t),t)x=X(u(x,t),t)). The results achieved through the classical approach are reviewed in Section 4.
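Numerically, the hodograph transformation is simply the inversion of the monotone part of the profile x ↦ u(x,t) just above the boundary. The sketch below (our own illustration; hodograph_slice and the toy profile are hypothetical, not objects from the paper) recovers X(z,t) at a fixed time by linear interpolation of the inverse on the first interval where u is increasing, and reads off b(t) ≈ X(0,t).

```python
import numpy as np

def hodograph_slice(xs, u_t, z_levels):
    """Invert z = u(x, t) at a fixed time t on the stretch where u is increasing in x.

    xs       : increasing spatial grid
    u_t      : values of the survival density u(., t) on xs (zero below the boundary)
    z_levels : levels z at which X(z, t) is wanted; X(0, t) approximates b(t)
    """
    i0 = int(np.argmax(u_t > 0))                  # first grid point above the boundary
    d = np.diff(u_t[i0:])
    i1 = i0 + (int(np.argmax(d <= 0)) if np.any(d <= 0) else len(d))
    zs, Xs = u_t[i0 - 1:i1 + 1], xs[i0 - 1:i1 + 1]  # include one point with u = 0
    return np.interp(z_levels, zs, Xs)            # linear interpolation of the inverse

# toy profile with boundary at x = 0.3:  u(x) = max(x - 0.3, 0) * exp(-x)
xs = np.linspace(-1.0, 2.0, 601)
u_t = np.clip(xs - 0.3, 0.0, None) * np.exp(-xs)
print(hodograph_slice(xs, u_t, z_levels=[0.0, 0.05, 0.10]))  # first value ~ 0.3
```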

In order to prove Theorem 1, we wish to employ the same strategy, but with weaker assumptions on the initial regularity of (p,b)(p,b). In doing so, we encounter two main difficulties, the first technical, and the second fundamental. The technical issue is that above the boundary, uu solves 1u=0{\mathcal{L}}_{1}u=0, and when μ0\mu\neq 0 the operator 1{\mathcal{L}}_{1} has a zeroth order term. The maximum principle arguments employed in analyzing level sets in our proofs require a differential operator with no zeroth order.111In particular, we need that constant functions satisfy the differential equation. We address these related technical hurdles by considering v=u/Kv=u/K for an appropriate scaling function KK, defined as the solution of an auxiliary partial differential equation, such that above the boundary 2v=0{\mathcal{L}}_{2}v=0, where 2=txx+νx{\mathcal{L}}_{2}=\partial_{t}-\partial_{xx}+\nu\partial_{x} for some function ν\nu. The formal hodograph transformation then leads us to consider the partial differential equation:

Yt=Yz2Yzz+ν(Y,t),z(0,ε),t(0,T],Y_{t}=Y_{z}^{-2}\;Y_{zz}+\nu(Y,t),\qquad z\in(0,{\varepsilon}),t\in(0,T], (1.20)

together with the boundary condition Y=0{\mathcal{M}}Y=0, where

Y={Y(z,0)X0(z),z[0,ε],p˙(t)Yz(0,t)+K(Y(0,t),t),t(0,T],Yz(ε,t)Xz(ε,t),t(0,T].{\mathcal{M}}Y=\begin{cases}Y(z,0)-X_{0}(z),&z\in[0,{\varepsilon}],\\ \dot{p}(t)\,Y_{z}(0,t)+K(Y(0,t),t),&t\in(0,T],\\ Y_{z}({\varepsilon},t)-X_{z}({\varepsilon},t),&t\in(0,T].\end{cases} (1.21)

Here, we encounter a fundamental difficulty due to the fact that we do not know a priori that XzX_{z} is regular up to the boundary, i.e., we do not have the equation p˙(t)Xz(0,t)+K(X(0,t),t)=0\dot{p}(t)X_{z}(0,t)+K(X(0,t),t)=0. While we can study the above problem analytically, we have not assumed the requisite regularity to show that Y=XY=X, with X(z,t):=min{xb(t)|v(x,t)=z}X(z,t):=\min\{x\geqslant b(t)\;|\;v(x,t)=z\}.222We can, however, show that XX is well-defined and regular enough inside the domain. It is the regularity for XX up to the boundary (because of the lack of a priori regularity of bb) that is insufficient. This difficulty is surmounted by defining a family of perturbed equations with boundary operators h{\mathcal{M}}^{h}, hh\in{\mathbb{R}}, and making comparisons with their solutions YhY^{h}. In particular, for small h>0h>0, we show that Yh(0,)<b<Yh(0,)Y^{-h}(0,{\bm{\cdot}})<b<Y^{h}(0,{\bm{\cdot}}), Yh(ε,)<X(ε,)<Yh(ε,)Y^{-h}({\varepsilon},{\bm{\cdot}})<X({\varepsilon},{\bm{\cdot}})<Y^{h}({\varepsilon},{\bm{\cdot}}). Letting h0h\searrow 0, we are then able to obtain that XYX\equiv Y, b=Y(0,)b=Y(0,{\bm{\cdot}}), and the required regularity of bb.

Remark 1.4.

For simplicity of exposition, throughout the paper we assume that σ2\sigma\equiv\sqrt{2}. This can be done without loss of generality. Indeed, let

𝐘(x,t)=0x2dzσ(z,t),μ~(y,t)=2μ(x,t)σ(x,t)xσ(x,t)20x2tσ(z,t)σ2(z,t)𝑑z|x=𝐗(y,t)\displaystyle{\mathbf{Y}}(x,t)=\int_{0}^{x}\frac{\sqrt{2}\,dz}{\sigma(z,t)},\quad\tilde{\mu}(y,t)=\frac{\sqrt{2}\mu(x,t)}{\sigma(x,t)}-\frac{\partial_{x}\sigma(x,t)}{\sqrt{2}}-\int_{0}^{x}\frac{\sqrt{2}\partial_{t}\sigma(z,t)}{\sigma^{2}(z,t)}\,dz\Big{|}_{x={\mathbf{X}}(y,t)}

where x=𝐗(y,t)x={\mathbf{X}}(y,t) is the inverse of y=𝐘(x,t)y={\mathbf{Y}}(x,t). Then by Itô’s lemma, the process {𝔜t}\{{\mathfrak{Y}}_{t}\} defined by 𝔜t:=𝐘(𝔛t,t){\mathfrak{Y}}_{t}:={\mathbf{Y}}({\mathfrak{X}}_{t},t) is a diffusion process satisfying d𝔜t=μ~(𝔜t,t)dt+2d𝔅td{\mathfrak{Y}}_{t}=\tilde{\mu}({\mathfrak{Y}}_{t},t)dt+\sqrt{2}\,d{\mathfrak{B}}_{t}. The boundary crossing problem for {𝔛t}\{{\mathfrak{X}}_{t}\} with barrier bb is equivalent to the boundary crossing problem for {𝔜t}\{{\mathfrak{Y}}_{t}\} with barrier b~(t):=𝐘(b(t),t)\tilde{b}(t):={\mathbf{Y}}(b(t),t). In terms of the partial differential equations, this is equivalent to the change of variables

y=𝐘(x,t),w~(y,t)=w(𝐗(y,t),t),u~(y,t)=yw~(y,t).\displaystyle y={\mathbf{Y}}(x,t),\quad\tilde{w}(y,t)=w({\mathbf{X}}(y,t),t),\quad\tilde{u}(y,t)=-\partial_{y}\tilde{w}(y,t)\ .

We shall henceforth always assume that σ2\sigma\equiv\sqrt{2}.
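The reduction in Remark 1.4 is also straightforward to carry out numerically when σ is given. The sketch below (our own illustration; it transforms only the barrier and leaves the drift μ̃ aside) evaluates 𝐘(x,t)=∫₀ˣ √2 dz/σ(z,t) by the trapezoidal rule and maps a barrier b(t) to b̃(t)=𝐘(b(t),t).

```python
import numpy as np

def Y(sigma, x, t, n=2001):
    """Y(x, t) = integral_0^x sqrt(2)/sigma(z, t) dz, by the trapezoidal rule.

    Works for x of either sign (the integral is taken with its sign)."""
    zs = np.linspace(0.0, x, n)
    fs = np.sqrt(2.0) / sigma(zs, t)
    return float(np.sum(0.5 * (fs[1:] + fs[:-1]) * np.diff(zs)))

def transform_barrier(sigma, b, ts):
    """Barrier for the transformed process:  b~(t) = Y(b(t), t)."""
    return np.array([Y(sigma, b(t), t) for t in ts])

# example with a smooth sigma bounded away from zero, and barrier b(t) = -1 - t
sigma = lambda z, t: 1.0 + 0.5 * np.exp(-z**2)
ts = np.linspace(0.1, 1.0, 5)
print(transform_barrier(sigma, lambda t: -1.0 - t, ts))
```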

The remainder of the paper is structured as follows. In the next section, we recall a few properties of the solution of the inverse problem and prove a smoothing property of the diffusion: under condition (ii) of (1.14), condition (i) is satisfied provided that the initial time t=0t=0 is shifted to t=st=s, for any ss outside a set of measure zero. In Section 3 we provide an interpretation of the free boundary condition σ2(b(t),t)ux(b(t),t)=2p˙(t)\sigma^{2}(b(t),t)\,u_{x}(b(t),t)=-2\dot{p}(t) for the viscosity solution. In Section 4 we present results that can be derived using the traditional approach to the hodograph transformation z=u(X(z,t),t)z=u(X(z,t),t). Section 5 presents the proof of Theorem 1, beginning by presenting a required generalization of the Hopf Boundary Point Lemma, then introducing the scaling function KK and the scaled survival density vv, and finally analyzing the family YhY^{h} of solutions to quasi-linear parabolic equations in order to derive our main results.

2 Regularity Properties of the Viscosity Solutions of the Inverse Problem

In this section, we collect a few results concerning the regularity of the (viscosity) solution of the inverse problem. Recall that w0(x,t)=(𝔛t>x)w_{0}(x,t)={\mathbb{P}}({\mathfrak{X}}_{t}>x) and Qb={(x,t)|t>0,x>b(t)}Q_{b}=\{(x,t)\;|\;t>0,x>b(t)\}.

Lemma 2.1.

Let (b,w,u)(b,w,u) be the solution of the inverse problem associated with pP0p\in P_{0}. Then

uC(Qb),1u=0<u in Qb,1u0u in ×(0,),\displaystyle u\in C^{\infty}(Q_{b}),\quad{\mathcal{L}}_{1}u=0<u\hbox{ in \ }Q_{b},\quad{\mathcal{L}}_{1}u\leqslant 0\leqslant u\hbox{\ in }{\mathbb{R}}\times(0,\infty), (2.1)

where the inequalities above hold in the sense of distributions.

In addition, for any T>0T>0, the following holds:

  1. 1.

    If   p˙L((0,T))\dot{p}\in L^{\infty}((0,T)), then u+w0xα(0,1)Cα,α/2(×[0,T])u+w_{0x}\in\bigcap_{\alpha\in(0,1)}C^{\alpha,\alpha/2}({\mathbb{R}}\times[0,T]); consequently, (b,w)(b,w) is a classical solution of the free boundary problem (1.7) on ×[0,T]{\mathbb{R}}\times[0,T].

  2. 2.

    If   inf{x|(𝔛0x)>0}=0\inf\{x\;|\;{\mathbb{P}}({\mathfrak{X}}_{0}\leqslant x)>0\}=0 and supt[0,T]p˙<0\sup_{t\in[0,T]}\dot{p}<0, then b(0)=0b(0)=0 and bC([0,T])b\in C([0,T]).

  3. 3.

    (Smoothing Property) If p˙<0\dot{p}<0 on [0,T][0,T] and p¨L1((0,T))\ddot{p}\in L^{1}((0,T)), then for a.e. t(0,T]t\in(0,T],

    wt(,t)C1/2(),u(,t)C()C3/2([b(t),)),σ2(b(t),t)ux(b(t)+,t)=2p˙(t).\displaystyle{\ }\quad w_{t}(\cdot,t)\in C^{1/2}({\mathbb{R}}),\quad u(\cdot,t)\in C({\mathbb{R}})\cap C^{3/2}([b(t),\infty)),\quad\sigma^{2}(b(t),t)\,u_{x}(b(t)+,t)=-2\dot{p}(t).

Proof. For (2.1), see Cheng et al. (2006), Lemma 2.1, and Chen et al. (2011), Proposition 5 (with Theorems 1 and 3 in this reference summarizing the solution of the inverse boundary crossing problem).

To simplify the presentation, we assume that μ0\mu\equiv 0; the general case is analogous. (Unlike the assumption that σ2\sigma\equiv\sqrt{2}, this assumption is not in force throughout the paper, so we remark on it explicitly whenever we take the drift to be zero.) The viscosity solution for the survival distribution ww of the inverse problem satisfies the variational inequality max{w,wp}=0\max\{{\mathcal{L}}w,w-p\}=0, which, according to Cheng et al. (2006) (see also Friedman (1982)), can be approximated, as ε0{\varepsilon}\searrow 0, by the solution of

wtεwxxε=β(ε1[wεp]) in ×(0,),\displaystyle w_{t}^{\varepsilon}-w^{\varepsilon}_{xx}=-\beta({\varepsilon}^{-1}[w^{\varepsilon}-p])\hbox{\ in \ }{\mathbb{R}}\times(0,\infty), wε(,0)=w0(,0) on ×{0}\displaystyle w^{\varepsilon}(\cdot,0)=w_{0}(\cdot,0)\hbox{ \ on \ }{\mathbb{R}}\times\{0\}

where β(z)=m(max{0,z})3\beta(z)=m\cdot(\max\{0,z\})^{3} with m=p˙L((0,T))m=\|\dot{p}\|_{L^{\infty}((0,T))}. It is worth mentioning that one can further regularize (p,w0,β)(p,w_{0},\beta) to smooth (pε,w0ε,βε)(p^{\varepsilon},w_{0}^{\varepsilon},\beta^{\varepsilon}) so that the solution is not only smooth, but also monotonically decreasing in ε{\varepsilon}; see Cheng et al. (2006) for details. We set uε=xwεu^{\varepsilon}=-\partial_{x}w^{\varepsilon}, and 𝒢φ=φ+β(ε1[φ(x,t)p(t)]){\mathcal{G}}\varphi={\mathcal{L}}\varphi+\beta({\varepsilon}^{-1}[\varphi(x,t)-p(t)]).

1. Since 𝒢(p+ε)=p˙+m0=𝒢wε{\mathcal{G}}(p+{\varepsilon})=\dot{p}+m\geq 0={\mathcal{G}}w^{\varepsilon} and wε(,0)p(0)w^{{\varepsilon}}(\cdot,0)\leq p(0), the Comparison Principle yields that wεp+εw^{{\varepsilon}}\leq p+{\varepsilon}. Similarly, (w0wε)=β(ε1[wεp])0{\mathcal{L}}(w_{0}-w^{{\varepsilon}})=\beta({\varepsilon}^{-1}[w^{{\varepsilon}}-p])\geq 0 gives wεw0w^{{\varepsilon}}\leq w_{0} and 𝒢wε0=𝒢(0){\mathcal{G}}w^{{\varepsilon}}\geq 0={\mathcal{G}}(0) yields that wε0w^{{\varepsilon}}\geq 0. From wεp+εw^{{\varepsilon}}\leq p+{\varepsilon}, we have:

0\leqslant-{\mathcal{L}}(w^{\varepsilon}-w_{0})=\beta([w^{\varepsilon}-p]{\varepsilon}^{-1})\leqslant\beta(1)=m\quad\hbox{in \ }{\mathbb{R}}\times(0,T]. (2.2)

A parabolic estimate (e.g. Krylov (1996), Lemma 8.7.1, page 122) then implies that

wεw0C1+α,(1+α)/2(×[0,T])mC(α),\|w^{\varepsilon}-w_{0}\|_{C^{1+\alpha,(1+\alpha)/2}({\mathbb{R}}\times[0,T])}\leqslant mC(\alpha), (2.3)

for every α(0,1)\alpha\in(0,1) where C(α)C(\alpha) is a constant depending only on α\alpha.

Let εn0{\varepsilon}_{n}\searrow 0, and let β(α,1)\beta\in(\alpha,1). Then there exists a subsequence εnk{\varepsilon}_{n_{k}} such that wεnkw0ww0w^{{\varepsilon}_{n_{k}}}-w_{0}\to w-w_{0} in Cloc1+α,(1+α)/2(×[0,T])C^{1+\alpha,(1+\alpha)/2}_{\mathrm{loc}}({\mathbb{R}}\times[0,T]). As the limit ww is shown in Cheng et al. (2006) to be the unique viscosity solution of the inverse problem associated with pp, the whole sequence in fact converges. In addition, passing to the limit in the estimate (2.3) we see that u+w0x=w0xwxα(0,1)Cα,α/2(×[0,T])u+w_{0x}=w_{0x}-w_{x}\in\cap_{\alpha\in(0,1)}C^{\alpha,\alpha/2}({\mathbb{R}}\times[0,T]).

Note that a classical solution of (1.7) requires that the free boundary condition wx(b(t),t)=0w_{x}(b(t),t)=0 be well-defined. Since wx=w0x[w0x+u]w_{x}=w_{0x}-[w_{0x}+u] is continuous on ×(0,){\mathbb{R}}\times(0,\infty), the equation wx(b(t),t)=0w_{x}(b(t),t)=0 for t>0t>0 is satisfied in the classical sense. Hence, (b,w)(b,w) is a classical solution of (1.7). This proves (1).

2. This is shown in Chen et al. (2011), Proposition 6 (see Proposition 1 above).

3. Using energy estimates, we will show that for each η(0,T)\eta\in(0,T), ηTwtx2(x,t)𝑑x𝑑t<\int_{\eta}^{T}\int_{{\mathbb{R}}}w_{tx}^{2}(x,t)dx\,dt<\infty, and therefore in particular wtx(,t)L2()w_{tx}(\cdot,t)\in L^{2}({\mathbb{R}}) for almost every t(0,T)t\in(0,T), from which the Sobolev embedding theorem yields wt(,t)C1/2()w_{t}(\cdot,t)\in C^{1/2}({\mathbb{R}}). Since wt(x,t)=p˙(t)w_{t}(x,t)=\dot{p}(t) for all x<b(t)x<b(t) and wt(x,t)=wxx=uxw_{t}(x,t)=w_{xx}=-u_{x} for all x>b(t)x>b(t), the last assertion of the lemma thus follows.

Suppose p˙<0\dot{p}<0 on [0,T][0,T] and p¨L1((0,T))\ddot{p}\in L^{1}((0,T)). Then p˙C([0,T])\dot{p}\in C([0,T]), so m:=p˙L((0,T))m:=\|\dot{p}\|_{L^{\infty}((0,T))} is finite. As above, we have 0wε(x,t)w00\leqslant w^{\varepsilon}(x,t)\leqslant w_{0}. Using the Comparison Principle as in Cheng et al. (2006), it can be shown that uε=wxε0u^{{\varepsilon}}=-w^{{\varepsilon}}_{x}\geq 0. We then further have (w0xuε)=β(ε1(wεp))ε1uε0{\mathcal{L}}(-w_{0x}-u^{{\varepsilon}})=\beta^{\prime}({\varepsilon}^{-1}(w^{{\varepsilon}}-p)){\varepsilon}^{-1}u^{{\varepsilon}}\geq 0, and comparison yields uεw0xu^{{\varepsilon}}\leq-w_{0x} on ×(0,){\mathbb{R}}\times(0,\infty).

Let ζ=ζ(x)\zeta=\zeta(x) be a non-negative smooth function satisfying \zeta(x)=0 for x<-M, \zeta(x)=1 for x>4-M, and 0\leqslant\zeta^{\prime}(x)\leqslant 1 and |\zeta^{\prime\prime}(x)|\leqslant 1 for x\in[-M,4-M]. In the sequel, for convenience of notation, we consider β\beta as a function of (x,t)(x,t), being evaluated at ε1(wε(x,t)p(t)){\varepsilon}^{-1}(w^{{\varepsilon}}(x,t)-p(t)). Using the differential equation and integration by parts, one can derive the identity

ddt(12wtε+2p˙β)ζdx+(wxtεζ2+|ε1(wtεp˙)2β|ζ12wtεζxx2)dx=p¨βζdx,\displaystyle\frac{d}{dt}\int_{\mathbb{R}}\Big{(}\frac{1}{2}w_{t}^{\varepsilon}{}^{2}+\dot{p}\beta\Big{)}\zeta\,dx+\int_{\mathbb{R}}\Big{(}w_{xt}^{\varepsilon}{}^{2}\zeta+|{\varepsilon}^{-1}(w^{\varepsilon}_{t}-\dot{p})^{2}\beta^{\prime}|\zeta-\frac{1}{2}w_{t}^{\varepsilon}{}^{2}\zeta_{xx}\Big{)}dx=\ddot{p}\int_{\mathbb{R}}\beta\zeta\,dx,

so that

(wxtε)2ζ𝑑x\displaystyle\int_{{\mathbb{R}}}(w_{xt}^{{\varepsilon}})^{2}\zeta\,dx =p¨βζdxddt(12wtε+2p˙β)ζdx(|ε1(wtεp˙)2β|ζ12wtεζxx2)dx.\displaystyle=\ddot{p}\int_{\mathbb{R}}\beta\zeta\,dx-\frac{d}{dt}\int_{\mathbb{R}}\Big{(}\frac{1}{2}w_{t}^{\varepsilon}{}^{2}+\dot{p}\beta\Big{)}\zeta\,dx-\int_{{\mathbb{R}}}\left(|{\varepsilon}^{-1}(w^{\varepsilon}_{t}-\dot{p})^{2}\beta^{\prime}|\zeta-\frac{1}{2}w_{t}^{\varepsilon}{}^{2}\zeta_{xx}\right)\,dx.

Multiplying both sides of the above equation by t2t^{2}, and then integrating over (0,T)(0,T) gives:

0Tt2(wxtε)2ζ𝑑x𝑑t\displaystyle\int_{0}^{T}\int_{{\mathbb{R}}}t^{2}(w_{xt}^{{\varepsilon}})^{2}\zeta\,dx\,dt =0Tt2p¨(t)βζdx0Tt2ddt(12wtε+2p˙β)ζdxdt\displaystyle=\int_{0}^{T}\int_{\mathbb{R}}t^{2}\ddot{p}(t)\beta\zeta\,dx-\int_{0}^{T}t^{2}\frac{d}{dt}\int_{\mathbb{R}}\Big{(}\frac{1}{2}w_{t}^{\varepsilon}{}^{2}+\dot{p}\beta\Big{)}\zeta\,dx\,dt
0Tt2(|ε1(wtεp˙)2β|ζ12wtεζxx2)𝑑x𝑑t\displaystyle\;\;\;-\int_{0}^{T}\int_{{\mathbb{R}}}t^{2}\left(|{\varepsilon}^{-1}(w^{\varepsilon}_{t}-\dot{p})^{2}\beta^{\prime}|\zeta-\frac{1}{2}w_{t}^{\varepsilon}{}^{2}\zeta_{xx}\right)\,dx\,dt
0Tt2p¨(t)βζdxdt0Tt2ddt(12wtε+2p˙β)ζdxdt+0Tt22wtεζxx2dxdt\displaystyle\leq\int_{0}^{T}\int_{\mathbb{R}}t^{2}\ddot{p}(t)\beta\zeta\,dx\,dt-\int_{0}^{T}t^{2}\frac{d}{dt}\int_{\mathbb{R}}\Big{(}\frac{1}{2}w_{t}^{\varepsilon}{}^{2}+\dot{p}\beta\Big{)}\zeta\,dx\,dt+\int_{0}^{T}\int_{{\mathbb{R}}}\tfrac{t^{2}}{2}w_{t}^{\varepsilon}{}^{2}\zeta_{xx}\,dx\,dt
=I1+I2+I3.\displaystyle=I_{1}+I_{2}+I_{3}.

To control these terms, we make two preliminary estimates. First, since w0(,)=0w_{0}(\infty,\cdot)=0, there exists N>0N>0 such that w0(x,t)<p(t)w_{0}(x,t)<p(t) for every (x,t)[N,)×[0,T](x,t)\in[N,\infty)\times[0,T]. Consequently, β0\beta\equiv 0 on [N,)×[0,T][N,\infty)\times[0,T], so that, for every M>0M>0,

Mβ(wε(x,t)p(t)ε)𝑑x(M+N)mt[0,T].\displaystyle\int_{-M}^{\infty}\beta\Big{(}\frac{w^{\varepsilon}(x,t)-p(t)}{{\varepsilon}}\Big{)}\,dx\leqslant(M+N)m\quad\forall\,t\in[0,T]. (2.4)

Secondly, integrating (wtε+β)2[wxε(wtε+β)]x+wxεwxtε=ε1βwxε20(w_{t}^{\varepsilon}+\beta)^{2}-[w^{\varepsilon}_{x}(w^{{\varepsilon}}_{t}+\beta)]_{x}+w_{x}^{\varepsilon}{}w_{xt}^{\varepsilon}=-{\varepsilon}^{-1}\beta^{\prime}w_{x}^{\varepsilon}{}^{2}\leqslant 0 over {\mathbb{R}} we obtain:

(wtε+β)2dx+12ddtwxε20.\displaystyle\int_{\mathbb{R}}(w_{t}^{\varepsilon}+\beta)^{2}dx+\frac{1}{2}\;\frac{d\;}{dt}\int_{\mathbb{R}}w_{x}^{\varepsilon}{}^{2}\leqslant 0.

Integrating this inequality multiplied by 2t2t over [0,s][0,s], we get, for every s(0,)s\in(0,\infty),

20st(wtε+β)2dxdt+swxε(x,s)2dx0swxε(x,t)2dxdt\displaystyle 2\int_{0}^{s}\!\!\int_{\mathbb{R}}t(w^{\varepsilon}_{t}+\beta)^{2}dxdt+{{s}}\int_{\mathbb{R}}w_{x}^{\varepsilon}{}^{2}(x,s)dx\leqslant\int_{0}^{s}\!\!\int_{\mathbb{R}}w_{x}^{\varepsilon}{}^{2}(x,t)\,dxdt
0sw0x2(x,t)𝑑x𝑑t0smax|w0x(,t)||w0x(x,s)|𝑑x𝑑tsπ,\displaystyle\quad\leqslant\int_{0}^{s}\!\!\int_{\mathbb{R}}w_{0x}^{2}(x,t)\,dxdt\;\leqslant\int_{0}^{s}\!\!\max_{{\mathbb{R}}}|w_{0x}(\cdot,t)|\int_{\mathbb{R}}|w_{0x}(x,s)|dxdt\leqslant\frac{\sqrt{s}}{\sqrt{\pi}}, (2.5)

where we have used the fact that |w0x(x,t)|𝑑x=1\int_{\mathbb{R}}|w_{0x}(x,t)|dx=1 and |w0x(x,t)|=Γu0supzΓ(z,t)=(4πt)1/2|w_{0x}(x,t)|=\Gamma*u_{0}\leqslant\sup_{z\in{\mathbb{R}}}\Gamma(z,t)=(4\pi t)^{-1/2} where Γ(x,t)=(4πt)1/2ex2/4t\Gamma(x,t)=(4\pi t)^{-1/2}e^{-x^{2}/4t}.

Returning to the estimation of IjI_{j}, (2.4) immediately gives:

I1=0Tt2p¨(t)βζ𝑑x𝑑t(M+N)m0Tt2|p¨(t)|𝑑t.I_{1}=\int_{0}^{T}\int_{\mathbb{R}}t^{2}\ddot{p}(t)\beta\zeta\,dx\,dt\leq(M+N)m\int_{0}^{T}t^{2}|\ddot{p}(t)|\,dt. (2.6)

For I2I_{2}, we integrate by parts:

I2\displaystyle I_{2} =0Tt2ddt(12wtε+2p˙β)ζdxdt\displaystyle=-\int_{0}^{T}t^{2}\frac{d\;}{dt}\int_{\mathbb{R}}\Big{(}\tfrac{1}{2}w_{t}^{\varepsilon}{}^{2}+\dot{p}\beta\Big{)}\zeta\,dx\,dt
=0Tt(wtε+22p˙β)ζdxdtt2(12wtε+2p˙β)ζdx|0T\displaystyle=\int_{0}^{T}t\int_{\mathbb{R}}\Big{(}w_{t}^{\varepsilon}{}^{2}+2\dot{p}\beta\Big{)}\zeta\,dx\,dt-t^{2}\int_{{\mathbb{R}}}\Big{(}\tfrac{1}{2}w_{t}^{\varepsilon}{}^{2}+\dot{p}\beta\Big{)}\zeta\,dx\Bigg{|}_{0}^{T}
0Tt(wtε+22p˙β)ζdxdt+T2|p˙(T)|βζdx.\displaystyle\leqslant\int_{0}^{T}t\int_{\mathbb{R}}\Big{(}w_{t}^{\varepsilon}{}^{2}+2\dot{p}\beta\Big{)}\zeta\,dx\,dt+T^{2}|\dot{p}(T)|\int_{{\mathbb{R}}}\beta\zeta\,dx. (2.7)

The second term on the right is bounded by (2.4), and the fact that p¨L1([0,T])\ddot{p}\in L^{1}([0,T]). Now, using that (wtε)22(wtε+β)2+2β2(w^{{\varepsilon}}_{t})^{2}\leqslant 2(w_{t}^{{\varepsilon}}+\beta)^{2}+2\beta^{2}:

0Tt(wtε+22p˙β)ζdxdt\displaystyle\int_{0}^{T}t\int_{\mathbb{R}}\Big{(}w_{t}^{\varepsilon}{}^{2}+2\dot{p}\beta\Big{)}\zeta\,dx\,dt 2(0Tt(wtε+β)2ζ𝑑x𝑑t+0Ttβ(β+p˙)ζ𝑑x𝑑t)\displaystyle\leq 2\left(\int_{0}^{T}\int_{{\mathbb{R}}}t(w_{t}^{{\varepsilon}}+\beta)^{2}\zeta\,dx\,dt+\int_{0}^{T}\int_{{\mathbb{R}}}t\beta(\beta+\dot{p})\zeta\,dx\,dt\right)
Tπ+(M+N)m2T22,\displaystyle\leq\frac{\sqrt{T}}{\pi}+(M+N)m^{2}\frac{T^{2}}{2}, (2.8)

using p˙<0\dot{p}<0, (2.4), and (2.5).

To control I3I_{3}, we again use (wtε)22(wtε+β)2+2β2(w^{{\varepsilon}}_{t})^{2}\leqslant 2(w_{t}^{{\varepsilon}}+\beta)^{2}+2\beta^{2}, so that:

I3\displaystyle I_{3} =0Tt22wtε|2ζxx|dxdt0Tt2((wtε+β)2+β2)|ζxx|dxdt.\displaystyle=\int_{0}^{T}\int_{{\mathbb{R}}}\tfrac{t^{2}}{2}w_{t}^{\varepsilon}{}^{2}|\zeta_{xx}|\,dx\,dt\leq\int_{0}^{T}\int_{{\mathbb{R}}}t^{2}((w_{t}^{{\varepsilon}}+\beta)^{2}+\beta^{2})|\zeta_{xx}|\,dx\,dt. (2.9)

As above:

0Tt2β2|ζxx|𝑑x𝑑tm2T33|ζxx|𝑑x23m2T3.\int_{0}^{T}\int_{{\mathbb{R}}}t^{2}\beta^{2}|\zeta_{xx}|\,dx\,dt\leqslant m^{2}\tfrac{T^{3}}{3}\int_{{\mathbb{R}}}|\zeta_{xx}|\,dx\leqslant\tfrac{2}{3}m^{2}T^{3}. (2.10)

Furthermore, from (2.5), we have:

0Tt2(wtε+β)2𝑑x𝑑tTT2π.\displaystyle\int_{0}^{T}\!\!\int_{\mathbb{R}}t^{2}(w^{\varepsilon}_{t}+\beta)^{2}\,dx\,dt\leq\frac{T\sqrt{T}}{2\sqrt{\pi}}. (2.11)

Putting things together, we have that:

0T4Mt2wxtε(x,t)2dxdtC(M,T),\displaystyle\int_{0}^{T}\!\!\int_{4-M}^{\infty}t^{2}w^{\varepsilon}_{xt}{}^{2}(x,t)\,dx\,dt\leqslant C(M,T),

where C(M,T)C(M,T) is a constant depending on MM and TT. Sending ε0{\varepsilon}\to 0 we see that the above estimate also holds for ww. Finally, since p˙<0\dot{p}<0 on [0,T][0,T], we have bC((0,T])b\in C((0,T]). For each η(0,T)\eta\in(0,T) taking M=4+max[η,T]|b|M=4+\max_{[\eta,T]}|b| we obtain

ηTwtx2(x,t)𝑑x𝑑t<.\displaystyle\int_{\eta}^{T}\!\!\int_{\mathbb{R}}w_{tx}^{2}(x,t)\,dx\,dt<\infty.

Since η\eta can be arbitrarily small, we conclude that

wtx2(x,t)𝑑x<for almost every t(0,T).\displaystyle\int_{{\mathbb{R}}}w_{tx}^{2}(x,t)\,dx<\infty\quad\hbox{for almost every \ }t\in(0,T).

This completes the proof. ∎
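The penalized approximation used in the proof above also suggests a simple way of computing the free boundary. The following sketch is our own illustration, not a scheme analyzed in this paper: it takes 𝔛 to be Brownian motion started at 0 (so σ²=2, μ≡0 and w_t = w_xx above the boundary), solves w_t = w_xx − β(ε⁻¹(w − p)) with β(z)=m·max(z,0)³ by an explicit finite-difference step, and reads off b(t) ≈ inf{x : w(x,t) < p(t)} as in (1.13); the names and parameter choices are ours.

```python
import numpy as np

def penalized_boundary(p, m, T=1.0, eps=1e-2, x_lo=-4.0, x_hi=4.0, nx=801, nt=25000):
    """Explicit scheme for  w_t = w_xx - beta(eps^{-1}(w - p)),  beta(z) = m*max(z,0)^3,
    with w(x,0) = 1_{x<0}  (Brownian motion started at 0, sigma^2 = 2, mu = 0).
    Returns sample times and the boundary estimate b(t) ~ inf{x : w(x,t) < p(t)}."""
    xs = np.linspace(x_lo, x_hi, nx)
    dx, dt = xs[1] - xs[0], T / nt
    assert dt < 0.5 * dx**2, "explicit scheme: decrease dt or coarsen the grid"
    w = (xs < 0.0).astype(float)              # w0(x,0) = P(X_0 > x) for X_0 = 0
    ts, bs = [], []
    for k in range(1, nt + 1):
        t = k * dt
        lap = np.zeros_like(w)
        lap[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
        pen = m * np.clip((w - p(t)) / eps, 0.0, None) ** 3
        w = w + dt * (lap - pen)              # the ends evolve by the penalty only
        if k % 250 == 0:
            below = np.nonzero(w < p(t))[0]
            ts.append(t)
            bs.append(xs[below[0]] if below.size else np.nan)
    return np.array(ts), np.array(bs)

# Shiryaev's example: exponential survival p(t) = exp(-t), so m = sup|p'| = 1 on [0,1]
ts, bs = penalized_boundary(p=lambda t: np.exp(-t), m=1.0)
for t, b in zip(ts[::20], bs[::20]):
    print(f"t = {t:.2f}   b(t) ~ {b:+.3f}")
```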

Remark 2.1.

We remark that for (b,u)(b,u) to be a classical solution of the free boundary problem (1.12), we need the existence of the limit of ux(x,t)u_{x}(x,t), as xb(t)x\searrow b(t), for each t(0,T]t\in(0,T]. Here the conclusion of the third assertion in the previous lemma is not sufficient for (b,u)(b,u) to be a classical solution. Thus, from an analytical viewpoint, finding classical solutions of the free boundary problem (1.12) is much harder than that of the free boundary problem (1.7).

Remark 2.2.

Taking 𝔛{\mathfrak{X}} to be Brownian motion with 𝔛00{\mathfrak{X}}_{0}\equiv 0, we have the following:

  1. 1.

    If b()a<0b(\cdot)\equiv a<0, then:

    \displaystyle p(t)=1-2\int_{|a|/\sqrt{t}}^{\infty}\varphi(x)\,dx,\quad\dot{p}(t)=-\varphi\left(\frac{|a|}{\sqrt{t}}\right)\cdot\frac{|a|}{t^{3/2}},
    \displaystyle\ddot{p}(t)=\frac{|a|}{2}\,t^{-5/2}\,\varphi(|a|t^{-1/2})\left(3-a^{2}t^{-1}\right),

    where φ\varphi is the probability density function of a standard normal random variable. Clearly (1) applies (p˙L((0,T))\dot{p}\in L^{\infty}((0,T)), and (b,w)(b,w) is a classical solution of (1.7), as can be verified by direct computation). However, limt0p˙(t)=0\lim_{t\searrow 0}\dot{p}(t)=0, so that (2) and (3) do not apply (which is not surprising since otherwise we would have b(0)=0b(0)=0, contradicting the definition of bb).

  2. 2.

    For the exponential survival function p(t)=eλtp(t)=e^{-\lambda t} for some λ>0\lambda>0, p˙(t)=λp(t)\dot{p}(t)=-\lambda p(t) and p¨(t)=λ2p(t)\ddot{p}(t)=\lambda^{2}p(t), so that both (2) and (3) apply, and in particular b(0)=0b(0)=0, and the smoothing property in (3) of the lemma holds.


3 The Free Boundary Condition For Viscosity Solutions

We interpret the free-boundary condition x(σ2u)|x=b(t)+=2p˙(t)\partial_{x}(\sigma^{2}u)|_{x=b(t)+}=-2\dot{p}(t) for the viscosity solution as follows.


Lemma 3.1.

Let pP0p\in P_{0} and (b,w,u)(b,w,u) be the unique viscosity solution of the inverse problem associated with pp. Suppose t>0,b(t)t>0,b(t)\in{\mathbb{R}} and p˙\dot{p} is continuous at tt. Then for any function C1([0,t])\ell\in C^{1}([0,t]) that satisfies (t)=b(t)\ell(t)=b(t), we have:

lim¯(x,s)(b(t),t)x>(s),stσ2(x,s)u(x,s)x(s)2p˙(t)lim¯(x,s)(b(t),t)x>(s),stσ2(x,s)u(x,s)x(s).\displaystyle\varliminf_{\underset{x>\ell(s),\ s\leqslant t}{(x,s)\to(b(t),t)}}\frac{\sigma^{2}(x,s)u(x,s)}{x-\ell(s)}\leqslant-2\dot{p}(t)\leqslant\varlimsup_{\underset{x>\ell(s),\ s\leqslant t}{(x,s)\to(b(t),t)}}\frac{\sigma^{2}(x,s)u(x,s)}{x-\ell(s)}. (3.1)

Remark 3.1.

If we know that bC1b\in C^{1} and xuC(Qb¯)\partial_{x}u\in C(\overline{Q_{b}}), then taking =b\ell=b and using L’Hôpital’s rule we see that the two limits in (3.1) are both equal to x(σ2u)|x=b(t)+\partial_{x}(\sigma^{2}u)|_{x=b(t)+}, so (3.1) provides the free boundary condition x(σ2u)|x=b(t)+=2p˙(t)\partial_{x}(\sigma^{2}u)|_{x=b(t)+}=-2\dot{p}(t).

Proof. Suppose the first inequality in (3.1) does not hold. Then the first limit in (3.1) is strictly bigger than 2p˙(t)-2\dot{p}(t) so there exist small constants m>0m>0 and δ(0,t]\delta\in(0,t] such that

u(x,s)(2mp˙(t))(x(s))s[tδ,t],x((s),b(t)+Mδ]\displaystyle u(x,s)\geqslant(2m-\dot{p}(t))(x-\ell(s))\qquad\forall\,s\in[t-\delta,t],x\in(\ell(s),b(t)+M\delta]

where M=˙L((0,t))M=\|\dot{\ell}\|_{L^{\infty}((0,t))}. Consequently, for each s[tδ,t]s\in[t-\delta,t] and x[(s),b(t)+Mδ]x\in[\ell(s),b(t)+M\delta],

w(x,s)=p(s)xu(y,s)𝑑yp(s)(s)xu(y,s)𝑑yp(s)(mp˙(t)2)(x(s))2.\displaystyle\qquad w(x,s)=p(s)-\int_{-\infty}^{x}u(y,s)dy\leqslant p(s)-\int_{\ell(s)}^{x}u(y,s)dy\leqslant p(s)-(m-\tfrac{\dot{p}(t)}{2})(x-\ell(s))^{2}. (3.2)

Now for any sufficiently small positive ε{\varepsilon}, consider the smooth function

ϕε(x,s):=p(s)12(mp˙(t))(xε(s))2,ε(s):=(s)(ε(ts)).\displaystyle\phi_{\varepsilon}(x,s):=p(s)-\tfrac{1}{2}(m-\dot{p}(t))(x-\ell_{\varepsilon}(s))^{2},\qquad\ell_{\varepsilon}(s):=\ell(s)-({\varepsilon}-(t-s)).

We compare ww and ϕε\phi_{\varepsilon} in the set

Qε:={(x,s)|s(tε,t],x(ε(s),(s)+ε)}.\displaystyle Q_{\varepsilon}:=\{(x,s)\;|\;s\in(t-{\varepsilon},t],x\in(\ell_{\varepsilon}(s),\ell(s)+\sqrt{\varepsilon})\}.

We claim that the minimum of ϕεw\phi_{\varepsilon}-w on Qε¯\overline{Q_{\varepsilon}} is negative and is attained at some point (xε,tε)Qε(x_{\varepsilon},t_{\varepsilon})\in Q_{\varepsilon}. First of all, (b(t),t)Qε(b(t),t)\in Q_{\varepsilon} and ϕε(b(t),t)w(b(t),t)=12(mp˙(t))ε2<0.\phi_{\varepsilon}(b(t),t)-w(b(t),t)=-\tfrac{1}{2}(m-\dot{p}(t)){\varepsilon}^{2}<0.

Next we show that ϕεw0\phi_{\varepsilon}-w\geqslant 0 on the parabolic boundary of QεQ_{\varepsilon},

pQε={(x,s)|s=tε,x[(tε),(tε)+ε]}{(x,s)|s(tε,t],x{ε(s),(s)+ε}}.\partial_{p}Q_{{\varepsilon}}=\{(x,s)\;|\;s=t-{\varepsilon},x\in[\ell(t-{\varepsilon}),\ell(t-{\varepsilon})+\sqrt{\varepsilon}]\}\cup\{(x,s)\;|\;s\in(t-{\varepsilon},t],x\in\{\ell_{{\varepsilon}}(s),\ell(s)+\sqrt{\varepsilon}\}\}.

When x=ε(s)x=\ell_{\varepsilon}(s), ϕε(x,s)=p(s)\phi_{\varepsilon}(x,s)=p(s) and ϕεw0\phi_{\varepsilon}-w\geqslant 0. For small enough ε{\varepsilon}, on the remainder of the parabolic boundary of QεQ_{\varepsilon}, we can verify that x[(s),b(t)+Mδ]x\in[\ell(s),b(t)+M\delta] so that we can use (3.2) to derive

ϕεw\displaystyle\phi_{\varepsilon}-w \displaystyle\geqslant [mp˙(t)2][x(s)]212[mp˙(t)][xε(s)]2\displaystyle[m-\tfrac{\dot{p}(t)}{2}]\;[x-\ell(s)]^{2}-\tfrac{1}{2}[m-\dot{p}(t)]\;[x-\ell_{\varepsilon}(s)]^{2}
=\displaystyle= m2[x(s)]2+12[mp˙(t)][2x(s)ε(s)][ε(s)(s)].\displaystyle\tfrac{m}{2}[x-\ell(s)]^{2}+\tfrac{1}{2}[m-\dot{p}(t)]\;[2x-\ell(s)-\ell_{\varepsilon}(s)]\;[\ell_{\varepsilon}(s)-\ell(s)].

On the lower part of the parabolic boundary of QεQ_{\varepsilon}, we have s=tεs=t-{\varepsilon} so ε(s)=(s)\ell_{\varepsilon}(s)=\ell(s) and ϕεw\phi_{\varepsilon}\geqslant w. Finally, on the right lateral boundary of QεQ_{\varepsilon}, we have x=(s)+εx=\ell(s)+\sqrt{\varepsilon} and 0(s)ε(s)ε0\leqslant\ell(s)-\ell_{\varepsilon}(s)\leqslant{\varepsilon} so

ϕεwm2ε12[mp˙(t)][2ε+ε]ε.\displaystyle\phi_{\varepsilon}-w\geqslant\tfrac{m}{2}{\varepsilon}-\tfrac{1}{2}[m-\dot{p}(t)][2\sqrt{\varepsilon}+{\varepsilon}]{\varepsilon}.

Thus, ϕεw0\phi_{\varepsilon}-w\geqslant 0 on the parabolic boundary of QεQ_{\varepsilon} provided that ε{\varepsilon} is sufficiently small.

Hence, for every small positive ε{\varepsilon}, there exists (xε,tε)Qε(x_{\varepsilon},t_{\varepsilon})\in Q_{\varepsilon} such that ϕεw\phi_{\varepsilon}-w attains at (xε,tε)(x_{\varepsilon},t_{\varepsilon}) the minimum of ϕεw\phi_{\varepsilon}-w over Qε¯\overline{Q_{\varepsilon}}. Now by the definition of ww as a viscosity solution (see, e.g. Cheng et al. (2006)), we have ϕε(xε,tε)0{\mathcal{L}}\phi_{\varepsilon}(x_{\varepsilon},t_{\varepsilon})\leqslant 0. However, we can calculate (with =sxx2+μx{\mathcal{L}}=\partial_{s}-\partial_{xx}^{2}+\mu\partial_{x})

ϕε(xε,tε)\displaystyle{\mathcal{L}}\phi_{\varepsilon}(x_{\varepsilon},t_{\varepsilon}) =\displaystyle= p˙(tε)+[mp˙(t)][(xεε(tε))(˙(tε)1μ(xε,tε))+1]\displaystyle\dot{p}(t_{\varepsilon})+[m-\dot{p}(t)]\;[(x_{\varepsilon}-\ell_{\varepsilon}(t_{\varepsilon}))(\dot{\ell}(t_{\varepsilon})-1-\mu(x_{\varepsilon},t_{\varepsilon}))+1]
\displaystyle\geqslant m+[p˙(tε)p˙(t)][mp˙(t)][ε+ε](M+1+|μ(xε,tε)|).\displaystyle m+[\dot{p}(t_{\varepsilon})-\dot{p}(t)]-[m-\dot{p}(t)]\;[{\varepsilon}+\sqrt{\varepsilon}]\;(M+1+|\mu(x_{\varepsilon},t_{\varepsilon})|).

The last quantity is positive if we take ε{\varepsilon} sufficiently small. Thus we obtain a contradiction, and that the first inequality in (3.1) holds.


Now we prove the second inequality in (3.1). Since u0u\geqslant 0, the second inequality in (3.1) is trivially true when p˙(t)=0\dot{p}(t)=0. Hence, we consider the case p˙(t)<0\dot{p}(t)<0. Suppose the second inequality in (3.1) does not hold. Then the second limit in (3.1) is strictly less than 2p˙(t)-2\dot{p}(t) so there exist small constants m(0,p˙(t)/2)m\in(0,-\dot{p}(t)/2) and δ(0,t]\delta\in(0,t] such that

\displaystyle u(x,s)\leqslant[-\dot{p}(t)-2m]\;[x-\ell(s)]\qquad\forall\,s\in[t-\delta,t],\ x\in(\ell(s),\ell(t)+M\delta].

Since uC(Qb)u\in C^{\infty}(Q_{b}) and u>0u>0 in QbQ_{b}, the above inequality implies that ((s),s)Qb(\ell(s),s)\not\in Q_{b}. Hence, we must have (s)b(s)\ell(s)\leqslant b(s), for every s[tδ,t]s\in[t-\delta,t]. Consequently, for each s[tδ,t]s\in[t-\delta,t] and x[(s),b(t)+Mδ]x\in[\ell(s),b(t)+M\delta],

\displaystyle\qquad w(x,s)=p(s)-\int_{-\infty}^{x}u(y,s)dy=p(s)-\int_{\ell(s)}^{x}u(y,s)dy\geqslant p(s)+\tfrac{1}{2}[\dot{p}(t)+2m][x-\ell(s)]^{2}. (3.3)

Now for any sufficiently small positive ε{\varepsilon}, consider the smooth function

ψε(x,s)=p(s)+12[p˙(t)+m][xε(s)]2,ε(s):=(s)+[ε(ts)].\displaystyle\psi_{\varepsilon}(x,s)=p(s)+\tfrac{1}{2}[\dot{p}(t)+m]\;[x-\ell_{\varepsilon}(s)]^{2},\qquad\ell_{\varepsilon}(s):=\ell(s)+[{\varepsilon}-(t-s)].

We compare ww and ψε\psi_{\varepsilon} in the set

Qε:={(x,s)|s(tε,t],x((s),(s)+ε)}.\displaystyle Q_{\varepsilon}:=\{(x,s)\;|\;s\in(t-{\varepsilon},t],x\in(\ell(s),\ell(s)+\sqrt{\varepsilon})\}.

We claim that the maximum of ψεw\psi_{\varepsilon}-w on Qε¯\overline{Q_{\varepsilon}} is positive and is attained at some point (xε,tε)Qε(x_{\varepsilon},t_{\varepsilon})\in Q_{\varepsilon}.

First of all, (ε(t),t)Qε(\ell_{\varepsilon}(t),t)\in Q_{\varepsilon} and ψε(ε(t),t)w(ε(t),t)=p(t)w(b(t)+ε,t)>0.\psi_{\varepsilon}(\ell_{\varepsilon}(t),t)-w(\ell_{\varepsilon}(t),t)=p(t)-w(b(t)+{\varepsilon},t)>0.

Next we show that ψεw0\psi_{\varepsilon}-w\leqslant 0 on the parabolic boundary of QεQ_{\varepsilon}. On the left lateral boundary of QεQ_{\varepsilon}, x=(s)b(s)x=\ell(s)\leqslant b(s), so w(x,s)=p(s)w(x,s)=p(s) and ψεw0\psi_{\varepsilon}-w\leqslant 0. For the remainder of the parabolic boundary we can use (3.3) to derive

ψεw\displaystyle\psi_{\varepsilon}-w \displaystyle\leqslant 12[p˙(t)+m][xε(s)]212[p˙(t)+2m][x(s)]2\displaystyle\tfrac{1}{2}[\dot{p}(t)+m]\;[x-\ell_{\varepsilon}(s)]^{2}-\tfrac{1}{2}[\dot{p}(t)+2m]\;[x-\ell(s)]^{2}
=\displaystyle= m2[x(s)]212[p˙(t)+m][2x(s)ε(s)][ε(s)(s)].\displaystyle-\tfrac{m}{2}[x-\ell(s)]^{2}-\tfrac{1}{2}[\dot{p}(t)+m]\;[2x-\ell(s)-\ell_{\varepsilon}(s)]\;[\ell_{\varepsilon}(s)-\ell(s)].

On the lower side of the parabolic boundary of QεQ_{\varepsilon}, s=tεs=t-{\varepsilon} so (s)=ε(s)\ell(s)=\ell_{\varepsilon}(s) and ψεw0\psi_{\varepsilon}-w\leqslant 0. Finally, on the right lateral boundary of QεQ_{\varepsilon}, we have x=(s)+εx=\ell(s)+\sqrt{\varepsilon} and 0ε(s)(s)ε0\leqslant\ell_{\varepsilon}(s)-\ell(s)\leqslant{\varepsilon} so

ψεwmε+|p˙(t)+m|ε3/2.\displaystyle\psi_{\varepsilon}-w\leqslant-m{\varepsilon}+|\dot{p}(t)+m|{\varepsilon}^{3/2}.

Thus, ψεw0\psi_{\varepsilon}-w\leqslant 0 on the parabolic boundary of QεQ_{\varepsilon} if ε{\varepsilon} is sufficiently small. Consequently, there exists (xε,tε)Qε(x_{\varepsilon},t_{\varepsilon})\in Q_{\varepsilon} such that ψεw\psi_{\varepsilon}-w attains at (xε,tε)(x_{\varepsilon},t_{\varepsilon}) the positive maximum of ψεw\psi_{\varepsilon}-w over Qε¯\overline{Q_{\varepsilon}}.

We claim that xε>b(tε)x_{\varepsilon}>b(t_{\varepsilon}). Indeed, if xεb(tε)x_{\varepsilon}\leqslant b(t_{\varepsilon}), then ψε(xε,tε)w(xε,tε)=ψε(xε,tε)p(tε)0\psi_{\varepsilon}(x_{\varepsilon},t_{\varepsilon})-w(x_{\varepsilon},t_{\varepsilon})=\psi_{\varepsilon}(x_{\varepsilon},t_{\varepsilon})-p(t_{\varepsilon})\leqslant 0 which contradicts the fact that the maximum of ψεw\psi_{\varepsilon}-w on Q¯ε\bar{Q}_{\varepsilon} is positive. Thus xε>b(tε)x_{\varepsilon}>b(t_{\varepsilon}). This implies that ww is smooth in a neighborhood of (xε,tε)(x_{\varepsilon},t_{\varepsilon}). Consequently, ψε(xε,tε)w(xε,tε)=0{\mathcal{L}}\psi_{\varepsilon}(x_{\varepsilon},t_{\varepsilon})\geqslant{\mathcal{L}}w(x_{\varepsilon},t_{\varepsilon})=0. However,

ψε(xε,tε)\displaystyle{\mathcal{L}}\psi_{\varepsilon}(x_{\varepsilon},t_{\varepsilon}) =\displaystyle= p˙(tε)+[p˙(t)+m][(xεε(tε))(μ(xε,tε)˙(tε)1)1]\displaystyle\dot{p}(t_{\varepsilon})+[\dot{p}(t)+m]\;[(x_{\varepsilon}-\ell_{\varepsilon}(t_{\varepsilon}))(\mu(x_{\varepsilon},t_{\varepsilon})-\dot{\ell}(t_{\varepsilon})-1)-1]
\displaystyle\leqslant m+[p˙(tε)p˙(t)]+|p˙(t)+m|ε(|μ(xε,tε)|+M+1).\displaystyle-m+[\dot{p}(t_{\varepsilon})-\dot{p}(t)]+|\dot{p}(t)+m|\sqrt{\varepsilon}\;(|\mu(x_{\varepsilon},t_{\varepsilon})|+M+1).

The last quantity is negative if we take ε{\varepsilon} sufficiently small. Thus we obtain a contradiction, and the second inequality in (3.1) holds. This completes the proof. ∎


4 The Traditional Hodograph Transformation

The hodograph transformation considers the inverse, x=X(z,t)x=X(z,t), of the function z=u(x,t)z=u(x,t), so that for each fixed zz, the curve x=X(z,t)x=X(z,t) is the zz-level set of uu. A bootstrapping procedure is applied. By beginning with a weak regularity assumption on the free boundary b(t)=X(0,t)b(t)=X(0,t), and applying the regularity theory for the partial differential equation satisfied by XX, one can obtain higher-order regularity of bb. In this section, we present two results derived using this traditional approach, one for the viscosity solution and the other for the classical solution of the free boundary problem.


Proposition 3.

Let bb be the solution of the inverse problem associated with pP0p\in P_{0}. Assume that for some interval I=(t1,t2)I=(t_{1},t_{2}), bC1(I)b\in C^{1}(I), and pCα+1/2(I)p\in C^{\alpha+1/2}(I), where α\alpha is not an integer. Then bCα(I)b\in C^{\alpha}(I).

Proof.

As bb is already C1C^{1}, we need only consider the case α>1\alpha>1, so pp is continuously differentiable. Since bC1(I)b\in C^{1}(I), by working on the function U(y,t):=u(b(t)+y,t)U(y,t):=u(b(t)+y,t) on the fixed domain [0,)×I[0,\infty)\times I with the boundary condition U(0,)=0U(0,\cdot)=0 and then translating the regularity of UU back to u(x,t)=U(xb(t),t)u(x,t)=U(x-b(t),t), one can show that for every γ(0,1)\gamma\in(0,1),

uC(Qb)C1+γ,(1+γ)/2({(x,t)|tI,x[b(t),)}).u\in C^{\infty}(Q_{b})\cap C^{1+\gamma,(1+\gamma)/2}(\{(x,t)\;|\;t\in I,x\in[b(t),\infty)\}).

In addition, since bC1(I)b\in C^{1}(I) and u0u\geqslant 0, the Hopf Lemma (Protter and Weinberger (1967), Theorem 3.3, pages 170-171) implies that ux(b(t),t)>0u_{x}(b(t),t)>0. Consequently, as uxCγ,γ/2({(x,t)|tI,x[b(t),)})u_{x}\in C^{\gamma,\gamma/2}(\{(x,t)\;|\;t\in I,x\in[b(t),\infty)\}), bC1(I)b\in C^{1}(I), and pCα+1/2(I)p\in C^{\alpha+1/2}(I) with α>1\alpha>1, we can use Lemma 3.1 and Remark 3.1 to derive that 2p˙(t)=σ2ux|x=b(t)+<02\dot{p}(t)=-\sigma^{2}u_{x}|_{x=b(t)+}<0.

Once we know the continuity of uxu_{x} and the positivity of ux(b(t),t)u_{x}(b(t),t), we can define the inverse x=X(z,t)x=X(z,t) of z=u(x,t)z=u(x,t) for tIt\in I and x[b(t),b1(t))x\in[b(t),b_{1}(t)) where b1(t)=min{x>b(t)|ux(x,t)=0}b_{1}(t)=\min\{x>b(t)\;|\;u_{x}(x,t)=0\}. Then implicit differentiation gives (recall that we take σ2\sigma\equiv\sqrt{2} to simplify the exposition):

XC,Xt=Xz2Xzz+[zμ(X,t)]z, in {(z,t)|tI,z(0,u(b1(t),t))}.X\in C^{\infty},\quad X_{t}=X_{z}^{-2}\,X_{zz}+[z\mu(X,t)]_{z},\quad\hbox{ \ in \ }\{(z,t)\;|\;t\in I,z\in(0,u(b_{1}(t),t))\}.

Also, XzCγ,γ/2(D)X_{z}\in C^{\gamma,\gamma/2}(D) where D={(z,t)|tI,z[0,u(b1(t),t))}D=\{(z,t)\;|\;t\in I,z\in[0,u(b_{1}(t),t))\} and Xz(0,t)=1/p˙(t)X_{z}(0,t)=-1/\dot{p}(t) for tIt\in I. It then follows from the local regularity theory for parabolic equations (see Ladyzhenskaya et al. (1968), Theorem IV.5.3, pages 320-322) that when pCα+1/2(I)p\in C^{\alpha+1/2}(I) where α>1\alpha>1 is not an integer, since Xz(0,)=1/p˙()Cα1/2(I)X_{z}(0,\cdot)=-1/\dot{p}(\cdot)\in C^{\alpha-1/2}(I) we have XC2α,α(D)X\in C^{2\alpha,\alpha}(D). Consequently, b=X(0,)Cα(I)b=X(0,\cdot)\in C^{\alpha}(I). This completes the proof. ∎


Proposition 4.

Suppose p˙<0\dot{p}<0 on (0,)(0,\infty) and pCα+1/2((0,))p\in C^{\alpha+1/2}((0,\infty)) where α1/2\alpha\geqslant 1/2 is not an integer. Assume that (b,u)(b,u) is a classical solution of the free boundary problem (1.12) satisfying

limx>b(s),(x,s)(b(t),t)ux(x,s)=2p˙(t)σ2(b(t),t)t>0.\displaystyle\lim_{x>b(s),(x,s)\to(b(t),t)}u_{x}(x,s)=\frac{-2\dot{p}(t)}{\sigma^{2}(b(t),t)}\quad\forall\,t>0. (4.1)

Then bCα((0,))b\in C^{\alpha}((0,\infty)).


Proof. Let [δ,T](0,)[\delta,T]\subset(0,\infty) be arbitrarily fixed. The conditions (4.1) and p˙<0\dot{p}<0 imply that the set {(b(t),t)|t[δ,T]}\{(b(t),t)|\;t\in[\delta,T]\} is compact, so that there exists δ1>0\delta_{1}>0 such that uxu_{x} is bounded and uniformly positive in D={(x,t)|t[δ,T],x(b(t),b(t)+δ1]}.D=\{(x,t)\;|\;t\in[\delta,T],x\in(b(t),b(t)+\delta_{1}]\}. Consequently, the inverse x=X(z,t)x=X(z,t) of z=u(x,t)z=u(x,t) is well-defined for (x,t)D(x,t)\in D. Setting ε=mint[δ,T]u(b(t)+δ1,t){\varepsilon}=\min_{t\in[\delta,T]}u(b(t)+\delta_{1},t) we have that XC((0,ε]×[δ,T])X\in C^{\infty}((0,{\varepsilon}]\times[\delta,T]), XzX_{z} is uniformly positive and bounded in (0,ε]×[δ,T](0,{\varepsilon}]\times[\delta,T] and

Xz(0,t):=limz0,stXz(z,s)=limx>b(s),(x,s)(b(t),t)1ux(x,s)=1p˙(t).\displaystyle X_{z}(0,t):=\lim_{z\searrow 0,s\to t}X_{z}(z,s)=\lim_{x>b(s),(x,s)\to(b(t),t)}\frac{1}{u_{x}(x,s)}=-\frac{1}{\dot{p}(t)}.

Since XX satisfies Xt=Xz2Xzz+[zμ]zX_{t}=X_{z}^{-2}X_{zz}+[z\mu]_{z} in (0,ε]×[δ,T](0,{\varepsilon}]\times[\delta,T], as above local regularity then implies that XX defined on (0,ε]×[δ,T](0,{\varepsilon}]\times[\delta,T] can be extended onto [0,ε]×[δ,T][0,{\varepsilon}]\times[\delta,T] such that XC2α,α([0,ε]×(δ,T])X\in C^{2\alpha,\alpha}([0,{\varepsilon}]\times(\delta,T]). Hence b=limz0X(z,)=X(0,)Cα((δ,T])b=\lim_{z\to 0}X(z,\cdot)=X(0,\cdot)\in C^{\alpha}((\delta,T]). Sending δ0\delta\searrow 0 and TT\to\infty we conclude that bCα((0,))b\in C^{\alpha}((0,\infty)).∎

5 Proof of The Main Result

In this section, we prove our main result, Theorem 1. This provides regularity of the viscosity solution of the inverse problem without the a priori regularity assumptions used in the previous section. We begin with a technical result - a generalization of Hopf’s Lemma in the one-dimensional case that is needed in our later arguments. We then introduce the scaling function KK, and the new hodograph transformation for the scaled function v=u/Kv=u/K. Finally, we prove Theorem 1 by analyzing a family of perturbed equations related to the PDE satisfied by X(z,t)X(z,t).

5.1 A Generalization of Hopf’s Lemma in the One-Dimensional Case

In order to apply the weak formulation of the free boundary condition, we need the following extension of the classical Hopf Lemma.

Lemma 5.1 (Generalized Hopf’s Lemma).

Let =taxx+cx+d{\mathcal{L}}=\partial_{t}-a\partial_{xx}+c\;\partial_{x}+d where aa, cc and dd are bounded functions and infa>0\inf a>0. Assume that ϕ0{\mathcal{L}}\phi\geqslant 0 in Q:={(x,t)| 0<t<T,l(t)<x<r(t)}Q:=\{(x,t)\;|\;0<t<T,l(t)<x<r(t)\} where ll and rr are Lipschitz continuous functions. Also assume that ϕ>0\phi>0 in QQ. Then for every δ(0,T)\delta\in(0,T), there exists η>0\eta>0 such that

ϕ(x,t)η[xl(t)][r(t)x]x[l(t),r(t)],t[δ,T].\displaystyle\phi(x,t)\geqslant\eta[x-l(t)][r(t)-x]\quad\forall\,x\in[l(t),r(t)],t\in[\delta,T].

Moreover, if in addition ϕ(l(T),T)=0\phi(l(T),T)=0, then ϕx(l(T),T)η[r(T)l(T)]\phi_{x}(l(T),T)\geqslant\eta[r(T)-l(T)]; similarly, if ϕ(r(T),T)=0\phi(r(T),T)=0, then ϕx(r(T),T)η[r(T)l(T)].\phi_{x}(r(T),T)\leqslant-\eta[r(T)-l(T)].


Proof. Without loss of generality, we can assume that \phi(x,0)>0 for all x\in(l(0),r(0)). By modifying r and l in [0,\delta] via \tilde{l}(t)=l(t)+\varepsilon[\delta-t] and \tilde{r}(t)=r(t)-\varepsilon[\delta-t], we can further assume that \phi(\cdot,0) is uniformly positive on [l(0),r(0)]; i.e., there exists \hat{\eta}>0 such that \phi(x,0)>\hat{\eta} for all x\in[l(0),r(0)]. Furthermore, by approximating r by smooth functions from below and l by smooth functions from above, with the same Lipschitz constants as r and l, we can assume that both r and l are smooth functions.

Now for large positive constants MM and LL to be determined, consider the function

ψ(x,t)\displaystyle\psi(x,t) =\displaystyle= η^eMtL(x[r(t)+l(t)]/2)2/2sinπ[xl(t)]r(t)l(t).\displaystyle\hat{\eta}e^{-Mt-L(x-[r(t)+l(t)]/2)^{2}/2}\sin\frac{\pi[x-l(t)]}{r(t)-l(t)}.

Direct calculation gives

ψ(x,t)\displaystyle{\mathcal{L}}\psi(x,t) =\displaystyle= η^eMtL[x(r+l)/2]2/2{Acosπ(xl)rl+Bsinπ(xl)rl}\displaystyle\hat{\eta}e^{-Mt-L[x-(r+l)/2]^{2}/2}\Big{\{}A\cos\frac{\pi(x-l)}{r-l}+B\ \sin\frac{\pi(x-l)}{r-l}\Big{\}}

where

A\displaystyle A =\displaystyle= π(xr)l+π(lx)r(rl)2+2aL(xr+l2)πrl+cπrl,\displaystyle\frac{\pi(x-r)l^{\prime}+\pi(l-x)r^{\prime}}{(r-l)^{2}}+2aL\Big{(}x-\frac{r+l}{2}\Big{)}\frac{\pi}{r-l}+\frac{c\;\pi}{r-l},
B\displaystyle B =\displaystyle= {M+L(xr+l2)r+l2}+a{LL2(xl+r2)2+π2(rl)2}cL(xr+l2)+d.\displaystyle\Big{\{}-M+L\Big{(}x-\frac{r+l}{2}\Big{)}\frac{r^{\prime}+l^{\prime}}{2}\Big{\}}+a\Big{\{}L-L^{2}\Big{(}x-\frac{l+r}{2}\Big{)}^{2}+\frac{\pi^{2}}{(r-l)^{2}}\Big{\}}-cL\Big{(}x-\frac{r+l}{2}\Big{)}+d.

First taking L=2\max_{[0,T]}\{(|r^{\prime}|+|l^{\prime}|+\|c\|_{L^{\infty}})/(r-l)\}/\inf a and then taking a suitably large M, we see that \mathcal{L}\psi\leqslant 0. Since \psi vanishes on the lateral boundaries x=l(t) and x=r(t), and \psi(\cdot,0)\leqslant\hat{\eta}<\phi(\cdot,0), comparison gives \phi(x,t)\geqslant\psi(x,t) in Q, and the assertion of the Lemma follows.∎
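For concreteness, one admissible choice of \eta can be read off from \psi: writing R:=\max_{[0,T]}(r-l) (finite, since r and l are continuous) and using (x-[r(t)+l(t)]/2)^{2}\leqslant(r-l)^{2}/4 together with the elementary inequality \sin(\pi s)\geqslant 2s(1-s) for s\in[0,1], one obtains

\psi(x,t)\;\geqslant\;\hat{\eta}\,e^{-MT-LR^{2}/8}\,\frac{2[x-l(t)][r(t)-x]}{(r(t)-l(t))^{2}}\;\geqslant\;\eta\,[x-l(t)][r(t)-x],\qquad \eta:=\frac{2\hat{\eta}\,e^{-MT-LR^{2}/8}}{R^{2}}.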


5.2 Scaling the Survival Density

In making comparison arguments in the proof of Theorem 1, we would like constant functions to be solutions of the partial differential equation under consideration. Unfortunately, while this is true for the operator \mathcal{L}=\partial_{t}-\partial^{2}_{xx}+\mu\partial_{x}, it may fail for the operator \mathcal{L}_{1}=\partial_{t}-\partial_{xx}+\partial_{x}(\mu\,\cdot) appearing in (5.1) below (recall that \sigma\equiv\sqrt{2}), since \partial_{x}(\mu c)=c\,\partial_{x}\mu need not vanish for a constant c. To resolve this technical difficulty, we introduce a suitable scaling of the function u.

To this end, let K=K(x,t)K=K(x,t) be the bounded solution of the initial value problem:

1K:=tKxxK+x(μK)=0 in ×(0,),K(,0)=1 on ×{0}.\displaystyle{\mathcal{L}}_{1}K:=\partial_{t}K-\partial_{xx}K+\partial_{x}(\mu K)=0\hbox{ \ in \ }{\mathbb{R}}\times(0,\infty),\quad K(\cdot,0)=1\hbox{ \ on \ }{\mathbb{R}}\times\{0\}. (5.1)

Then KK is smooth, uniformly positive, and bounded in ×[0,T]{\mathbb{R}}\times[0,T] for any T>0T>0. Now write

u(x,t)=K(x,t)v(x,t).\displaystyle u(x,t)=K(x,t)\;v(x,t).

It is easy to verify that

1u=K2v,\displaystyle{\mathcal{L}}_{1}u=K\;{\mathcal{L}}_{2}v,

where

2:=txx+νx,ν(x,t):=μ(x,t)xlogK2(x,t).\displaystyle{\mathcal{L}}_{2}:=\partial_{t}-\partial_{xx}+\nu\;\partial_{x},\qquad\nu(x,t):=\mu(x,t)-\partial_{x}\log K^{2}(x,t).
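For completeness, the verification is a direct computation using (5.1): with u=Kv,

\mathcal{L}_{1}u=\partial_{t}(Kv)-\partial_{xx}(Kv)+\partial_{x}(\mu Kv)
=v\big[\partial_{t}K-\partial_{xx}K+\partial_{x}(\mu K)\big]+K\big[v_{t}-v_{xx}\big]-2K_{x}v_{x}+\mu Kv_{x}
=K\Big[v_{t}-v_{xx}+\Big(\mu-\frac{2K_{x}}{K}\Big)v_{x}\Big]=K\,\mathcal{L}_{2}v,

since \mathcal{L}_{1}K=0 and \nu=\mu-2\partial_{x}\log K=\mu-\partial_{x}\log K^{2}.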

In particular, note that constant functions cc satisfy the equation 2c=0{\mathcal{L}}_{2}c=0.
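For example, if \mu is a constant, then K\equiv 1 is the bounded solution of (5.1) (since \partial_{x}(\mu K)=0), and hence \nu\equiv\mu, so that \mathcal{L}_{2} coincides with \mathcal{L}; the scaling only plays a role when \partial_{x}\mu\not\equiv 0.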

5.3 The New Hodograph Transformation

As we have seen, the traditional hodograph transformation is defined as the inverse of z=u(x,t)z=u(x,t). In order to work with the scaled survival density and the operator 2{\mathcal{L}}_{2} introduced above, we instead define x=X(z,t)x=X(z,t) as the inverse of z=v(x,t)z=v(x,t):

z=v(X(z,t),t).\displaystyle z=v(X(z,t),t).

If z0=v(x0,t0)>0z_{0}=v(x_{0},t_{0})>0 and vx(x0,t0)0v_{x}(x_{0},t_{0})\not=0, then by the Implicit Function Theorem, locally the above equation defines a unique smooth XX for (z,t)(z,t) near (z0,t0)(z_{0},t_{0}) that satisfies X(z0,t0)=x0X(z_{0},t_{0})=x_{0}. In addition, by implicit differentiation,

v_{x}=\frac{1}{X_{z}},\qquad v_{t}=-\frac{X_{t}}{X_{z}},\qquad v_{xx}=-\frac{X_{zz}}{X_{z}^{3}}.

Thus, 2v=0{\mathcal{L}}_{2}v=0 implies that X=X(z,t)X=X(z,t) satisfies the following quasi-linear partial differential equation of parabolic type:

Xt=Xz2Xzz+ν(X,t).\displaystyle X_{t}=X_{z}^{-2}\;X_{zz}+\nu(X,t).
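Indeed, substituting the above identities into \mathcal{L}_{2}v=0 gives

0=v_{t}-v_{xx}+\nu v_{x}=-\frac{X_{t}}{X_{z}}+\frac{X_{zz}}{X_{z}^{3}}+\frac{\nu(X,t)}{X_{z}},

and multiplying by -X_{z} yields the equation for X.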

The free boundary is given by b(t)=X(0,t), on which we can derive the boundary condition as follows. Since v_{x}=(u_{x}K-uK_{x})/K^{2}, and u(b(t),t)=0 with u_{x}(b(t),t)=-\dot{p}(t), we see that v_{x}(b(t),t)=u_{x}(b(t),t)/K(b(t),t)=-\dot{p}(t)/K(b(t),t). Hence, as X_{z}(0,t)=1/v_{x}(b(t),t), we have the non-linear boundary condition

p˙(t)Xz(0,t)+K(X(0,t),t)=0.\displaystyle\dot{p}(t)\;X_{z}(0,t)+K(X(0,t),t)=0.

This is a standard boundary condition for the quasi-linear parabolic equation for XX.


5.4 The Basic Assumption

Let pP0p\in P_{0} and (b,w,u)(b,w,u) be the unique viscosity solution of the inverse problem associated with pp. To prove Theorem 1, we need only establish the assertions of the theorem in a finite time interval [0,T][0,T] for any fixed positive TT. Hence, in the sequel, we assume, for some T>0T>0 and x0>0x_{0}>0, that

pC1([0,T]),p˙<0 on [0,T],u0C1([0,x0]),u0>0 on [0,x0],u0=0 on (,0].\displaystyle p\in C^{1}([0,T]),\ \dot{p}<0\hbox{ on }[0,T],\quad\ u_{0}\in C^{1}([0,x_{0}]),\ u^{\prime}_{0}>0\hbox{ on }[0,x_{0}],\ u_{0}=0\hbox{ on }(-\infty,0]. (5.2)

These conditions are satisfied by the general assumptions made in Theorem 1 if (i) in (1.14) is imposed. If instead of (i), condition (ii) in (1.14) is imposed, we can apply the third assertion of Lemma 2.1 to shift the initial time by considering first the solution (b^,u^):=(b(s+)b(s),u(+b(s),+s))(\hat{b},\hat{u}):=(b(s+\cdot)-b(s),u(\cdot+b(s),\cdot+s)) for ss at which wxt(,s)L2()w_{xt}(\cdot,s)\in L^{2}({\mathbb{R}}), and then sending s0s\searrow 0.

Under (5.2), we will show that bC1/2((0,T])b\in C^{1/2}((0,T]) and that (b,u)(b,u) is a classical solution of the free boundary problem (1.12) on ×[0,T]{\mathbb{R}}\times[0,T].

First of all, from Lemma 2.1 (1) and (2), we see that

uC(×(0,T]((,x0)×{0})),bC([0,T]),b(0)=0.\displaystyle u\in C({\mathbb{R}}\times(0,T]\cup((-\infty,x_{0})\times\{0\})),\quad b\in C([0,T]),\qquad b(0)=0.

5.5 The Level Sets.

The hodograph transformation we are going to use is based on the inverse, x=X(z,t)x=X(z,t), of z=v(x,t)z=v(x,t) where v=u/Kv=u/K. That is, for each zz, X(z,)X(z,\cdot) is the zz-level set of vv. For XX to be well-defined, we need to consider vv in the set where vxv_{x} is positive.

We begin by investigating the initial value X0=X(,0)X_{0}=X(\cdot,0) specified by z=u0(X0(z)).z=u_{0}(X_{0}(z)). Assume that u0u_{0} satisfies (5.2). Then the function z=u0(x),x[0,x0]z=u_{0}(x),x\in[0,x_{0}], admits a unique inverse, x=X0(z)x=X_{0}(z), which satisfies:

X0C1([0,u0(x0)]),z=u0(X0(z)),X0(z)=1u0(X0(z)),z[0,u0(x0)].\displaystyle X_{0}\in C^{1}([0,u_{0}(x_{0})]),\qquad z=u_{0}(X_{0}(z)),\qquad X^{\prime}_{0}(z)=\frac{1}{u^{\prime}_{0}(X_{0}(z))},\ \ \forall\,z\in[0,u_{0}(x_{0})].

Next we consider X(\cdot,t) for small t. Fix \varepsilon\in(0,u_{0}(x_{0})). There exists \delta_{1}>0 such that X_{0}(\varepsilon)+2\delta_{1}<x_{0} and b(s)<X_{0}(\varepsilon)-2\delta_{1} for all s\in[0,\delta_{1}]. Also, since v\in C(\mathbb{R}\times(0,\infty)\cup(-\infty,x_{0})\times\{0\}), there exists \delta_{2}\in(0,\delta_{1}] such that v<\varepsilon on (-\infty,X_{0}(\varepsilon)-\delta_{1}]\times[0,\delta_{2}] and v>\varepsilon on \{X_{0}(\varepsilon)+\delta_{1}\}\times[0,\delta_{2}]. Finally, from \mathcal{L}_{2}v=0 in [X_{0}(\varepsilon)-2\delta_{1},X_{0}(\varepsilon)+2\delta_{1}]\times(0,\delta_{2}] we see that v_{x} is continuous and uniformly positive and v_{t}=O(t^{-1/2}) on [X_{0}(\varepsilon)-\delta_{1},X_{0}(\varepsilon)+\delta_{1}]\times[0,\delta_{3}] for some \delta_{3}\in(0,\delta_{2}]. Hence for each t\in[0,\delta_{3}], the equation \varepsilon=v(x,t), for x\in[X_{0}(\varepsilon)-\delta_{1},X_{0}(\varepsilon)+\delta_{1}], admits a unique solution, which we denote by x=X(\varepsilon,t). This solution has the property that X(\varepsilon,t)=\min\{x>b(t)\;|\;v(x,t)=\varepsilon\}. In addition, X_{t}(\varepsilon,t)=-v_{t}/v_{x}=O(t^{-1/2}). Hence, X(\varepsilon,\cdot)\in C^{\infty}((0,\delta_{3}])\cap C^{1/2}([0,\delta_{3}]).

Now we extend the inverse, x=X(z,t)x=X(z,t), of z=v(x,t)z=v(x,t) to (z,t)(0,z0)×[0,T](z,t)\in(0,z_{0})\times[0,T] where TT is an arbitrarily fixed positive constant and z0z_{0} is a small positive constant that depends on TT and on the viscosity solution uu. More precisely, we prove the following.

Lemma 5.2.

Let pP0p\in P_{0} be given and (b,u)(b,u) be the unique viscosity solution of the inverse problem associated with pp. Assume that (5.2) holds. Let KK be defined in (5.1) and v:=u/K.v:=u/K.

Then there exists z0>0z_{0}>0 such that the function

X(z,t):=min{xb(t)|v(x,t)=z}(z,t)[0,z0]×[0,T]\displaystyle X(z,t):=\min\{x\geqslant b(t)\;|\;v(x,t)=z\}\qquad\forall\,(z,t)\in[0,z_{0}]\times[0,T] (5.3)

is well-defined and satisfies

z=v(X(z,t),t),vx(X(z,t),t)>0,(z,t)(0,z0]×[0,T],\displaystyle\displaystyle z=v(X(z,t),t),\qquad v_{x}(X(z,t),t)>0,\quad\forall\,(z,t)\in(0,z_{0}]\times[0,T],
XC((0,z0)×(0,T])C1,1/2((0,z0]×[0,T]).\displaystyle\displaystyle X\in C^{\infty}((0,z_{0})\times(0,T])\cap C^{1,1/2}((0,z_{0}]\times[0,T]).
Proof.

We believe that the idea behind this proof may have appeared previously in the literature, but are not aware of a precise reference. For completeness, and possible other applications, we present a full proof. We consider the level sets of vv in ×(0,){\mathbb{R}}\times(0,\infty). We call zz\in{\mathbb{R}} a critical value of vv if there exists (x,t)×(0,)(x,t)\in{\mathbb{R}}\times(0,\infty) such that v(x,t)=zv(x,t)=z and either vt(x,t)=vx(x,t)=0v_{t}(x,t)=v_{x}(x,t)=0 or vv is not differentiable at (x,t)(x,t). Hence, if z>0z>0 is not a critical value, then by the Implicit Function Theorem, the level set {(x,t)×(0,)|v(x,t)=z}\{(x,t)\in{\mathbb{R}}\times(0,\infty)\;|\;v(x,t)=z\} consists of smooth curves, each of which either does not have boundary (i.e. lies completely inside QbQ_{b}) or has boundary on (0,)×{0}(0,\infty)\times\{0\}. Since vC(×(0,))C(Qb)v\in C({\mathbb{R}}\times(0,\infty))\cap C^{\infty}(Q_{b}) and v(x,t)=0v(x,t)=0 for all xb(t)x\leqslant b(t), by Sard’s Theorem (e.g. Guillemin and Pollack (1974), pages 39-45) the set of all critical values of vv has measure zero.

As before, we denote by x=X0(z)x=X_{0}(z) the inverse of z=u0(x),x[0,x0]z=u_{0}(x),x\in[0,x_{0}].

Since b(0)=0b(0)=0 and bC([0,T])b\in C([0,T]), we can find a smooth function C([0,T])\ell\in C^{\infty}([0,T]) such that (0)=x0\ell(0)=x_{0} and (t)>b(t)\ell(t)>b(t) for all t[0,T]t\in[0,T]. We define

Γ:={((t),t)|t[0,T]},z0:=minΓv=mint[0,T]v((t),t).\displaystyle\Gamma:=\{(\ell(t),t)\;|\;t\in[0,T]\},\qquad z_{0}:=\min_{\Gamma}v=\min_{t\in[0,T]}v(\ell(t),t).

Then XX in (5.3) is well-defined and X(z,t)(b(t),(t)]X(z,t)\in(b(t),\ell(t)] for all t[0,T]t\in[0,T] and z(0,z0]z\in(0,z_{0}]. In addition, X(0,t)=b(t)X(0,t)=b(t) for all t[0,T]t\in[0,T] and X(z,0)=X0(z)X(z,0)=X_{0}(z) for all z(0,z0]z\in(0,z_{0}].

Next, let \varepsilon\in(0,z_{0}) be a non-critical value of v and let \gamma_{\varepsilon} be the smooth curve in \{(x,t)\;|\;t>0,\,v(x,t)=\varepsilon\} that connects to (X_{0}(\varepsilon),0). We first claim that \gamma_{\varepsilon} is a simple curve, i.e., it cannot form a loop. Indeed, if it formed a loop, then the loop would lie in Q_{b}, and the strong maximum principle for the differential equation v_{t}=v_{xx}-\nu v_{x} in Q_{b} would imply that v\equiv\varepsilon inside the loop, which is impossible since \varepsilon is not a critical value. Hence, \gamma_{\varepsilon} is a simple curve. We parameterize \gamma_{\varepsilon} by its arc-length parameter, s, in the form (x,t)=(x(s),t(s)), s\in(0,l), with (x(0+),t(0+))=(X_{0}(\varepsilon),0). It is not difficult to show that \lim_{|x|+t\to\infty}|v(x,t)|=0, so \gamma_{\varepsilon} stays in a bounded region and we must have l<\infty and t(l-)=0, x(l-)>x_{0}. In addition, by the earlier discussion of X(\varepsilon,t) for small positive t, we see that t^{\prime}(s)>0 and x(s)=X(\varepsilon,t(s)) for all small positive s.

When t(s)>0t(s)>0, we can differentiate v(x(s),t(s))=εv(x(s),t(s))={\varepsilon} to obtain

v_{x}(x(s),t(s))\,x^{\prime}(s)+v_{t}(x(s),t(s))\,t^{\prime}(s)=0, (5.4)
\big[v_{xx}\,(x^{\prime})^{2}+2v_{xt}\,x^{\prime}t^{\prime}+v_{tt}\,(t^{\prime})^{2}+v_{x}\,x^{\prime\prime}+v_{t}\,t^{\prime\prime}\big]\Big|^{x=x(s)}_{t=t(s)}=0. (5.5)

Now we define l0=sup{s(0,l)|t>0 in (0,s]}l_{0}=\sup\{s\in(0,l)\;|\;t^{\prime}>0\hbox{ \ in \ }(0,s]\}. Since t(l)=0t(l)=0, we must have l0(0,l)l_{0}\in(0,l) and t(l0)=0t^{\prime}(l_{0})=0. Consequently, x(l0)2=1t(l0)2=1x^{\prime}(l_{0})^{2}=1-t^{\prime}(l_{0})^{2}=1. Evaluating (5.4) at s=l0s=l_{0}, we obtain vx(x(l0),t(l0))=0v_{x}(x(l_{0}),t(l_{0}))=0. Consequently, evaluating (5.5) at s=l0s=l_{0} and using vt=vxxνvxv_{t}=v_{xx}-\nu v_{x} we obtain vt(x(l0),t(l0))[1+t′′(l0)]=0v_{t}(x(l_{0}),t(l_{0}))[1+t^{\prime\prime}(l_{0})]=0. Since ε{\varepsilon} is not a critical value of vv, we must have vt(x(l0),t(l0))0v_{t}(x(l_{0}),t(l_{0}))\not=0, so that t′′(l0)=1t^{\prime\prime}(l_{0})=-1. This implies that t(s)<0t^{\prime}(s)<0 for all ss bigger than and close to l0l_{0}.

Next we define l_{1}=\sup\{s\in(l_{0},l)\;|\;t^{\prime}<0\hbox{ in }(l_{0},s]\}. We claim that l_{1}=l. Indeed, suppose l_{1}<l. Then we must have t(l_{1})>0 and t^{\prime}(l_{1})=0. As above, first evaluating (5.4) at s=l_{1} we obtain v_{x}(x(l_{1}),t(l_{1}))=0, and then evaluating (5.5) at s=l_{1} and using v_{t}=v_{xx}-\nu v_{x} we get that t^{\prime\prime}(l_{1})=-1. However, this is impossible: since t^{\prime}(l_{1})=0 and t^{\prime}(s)<0 for all s\in(l_{0},l_{1}), we must have t^{\prime\prime}(l_{1})\geqslant 0. Thus, we must have l_{1}=l. In summary, we have

t(0+)=0,t(s)>0s(0,l0),t(l0)=0,t′′(l0)=1,t(s)<0s(l0,l),t(l)=0.\displaystyle t(0+)=0,\quad t^{\prime}(s)>0\ \forall\,s\in(0,l_{0}),\quad t^{\prime}(l_{0})=0,\ t^{\prime\prime}(l_{0})=-1,\quad t^{\prime}(s)<0\ \forall\,s\in(l_{0},l),\quad t(l-)=0.

Now we claim that t(l0)>Tt(l_{0})>T. Suppose not. Then t(s)Tt(s)\leqslant T for all s[0,l]s\in[0,l]. Since minΓv=z0>ε\min_{\Gamma}v=z_{0}>{\varepsilon}, we see that γε\gamma_{\varepsilon} cannot touch Γ\Gamma. That is, x(s)<(t(s))x(s)<\ell(t(s)) for all s[0,l]s\in[0,l]. Thus, we must have x(l)=X0(ε)x(l)=X_{0}({\varepsilon}). However, this would imply γε\gamma_{\varepsilon} forms a loop which is impossible. Hence, t(l0)>Tt(l_{0})>T.

Let \hat{l}\in(0,l_{0}) be the number such that t(\hat{l})=T. Denote the inverse of t=t(s), s\in[0,\hat{l}], by s=S(t). We can apply the Maximum Principle (Protter and Weinberger (1967), Theorem 3.2, recalling that \mathcal{L}_{2}v\leqslant 0 on \mathbb{R}\times(0,\infty)) for v on the domain \{(y,t)\;|\;t\in[0,T],\,y\in(-\infty,x(S(t))]\} to conclude that v(y,t)<\varepsilon for every y<x(S(t)) and t\in[0,T]. Hence, we must have x(S(t))=\min\{x>b(t)\;|\;v(x,t)=\varepsilon\}=X(\varepsilon,t). Since \varepsilon is not a critical value and t^{\prime}>0 in (0,\hat{l}], we derive from (5.4) that v_{x}(X(\varepsilon,t),t)\not=0; as v<\varepsilon to the left of the curve, this gives v_{x}(X(\varepsilon,t),t)>0 for every t\in(0,T]. Hence, X(\varepsilon,\cdot)\in C^{\infty}((0,T])\cap C^{1/2}([0,T]).

Finally, let z_{1},z_{2} be any two non-critical values of v that satisfy 0<z_{1}<z_{2}\leqslant z_{0}. Then for i=1,2, X(z_{i},t):=\min\{x>b(t)\;|\;v(x,t)=z_{i}\} is a smooth function and v_{x}(X(z_{i},t),t)>0 for t\in[0,T]. Since v_{x} satisfies the equation (v_{x})_{t}=(v_{x})_{xx}-\nu(v_{x})_{x}-\nu_{x}v_{x} in Q_{b}, the Maximum Principle implies that v_{x}>0 on \{(x,t)\;|\;t\in[0,T],\;x\in[X(z_{1},t),X(z_{2},t)]\}. By the Implicit Function Theorem, we then know that X\in C^{\infty}((z_{1},z_{2})\times(0,T]). Finally, sending z_{1}\to 0 and z_{2}\to z_{0} along non-critical values of v, we conclude that X\in C^{\infty}((0,z_{0})\times(0,T])\cap C^{1,1/2}((0,z_{0})\times[0,T]). This completes the proof of the Lemma.∎

Remark 5.1.

Here we have used the Maximum Principle for vxv_{x}. The Maximum Principle may not hold for uxu_{x} since the equation for uxu_{x} is (ux)t=(ux)xxμuxx2μxuxμxxu(u_{x})_{t}=(u_{x})_{xx}-\mu u_{xx}-2\mu_{x}u_{x}-\mu_{xx}u where the non-homogeneous term μxxu\mu_{xx}u may cause difficulties. Of course, we also need to know that vC(Qb)C(×(0,T]((,x0)×{0}))v\in C^{\infty}(Q_{b})\cap C({\mathbb{R}}\times(0,T]\cup((-\infty,x_{0})\times\{0\})) and 2v0{\mathcal{L}}_{2}v\leqslant 0 on ×(0,){\mathbb{R}}\times(0,\infty) so that the Maximum Principle can be applied to vεv-{\varepsilon}.

Using the same idea as in the proof, one can derive the following, which may be useful in qualitative and/or quantitative studies of the free boundary.

Proposition 5.

Let (b,u)(b,u) be the solution of the inverse problem associated with pP0p\in P_{0}. Assume that pC1([0,))p\in C^{1}([0,\infty)) and p˙<0\dot{p}<0 on [0,)[0,\infty). Let v=u/Kv=u/K where KK is defined in (5.1).

1.

    Denote by N(t)N(t) the number of roots (without counting multiplicity) of vx(,t)=0v_{x}(\cdot,t)=0 in (b(t),)(b(t),\infty). Then N(t)N(t) is a decreasing function. In particular, if N(0)=1N(0)=1, i.e. u0u_{0}^{\prime} changes sign only once in (b(0),)(b(0),\infty), then N(t)1N(t)\equiv 1 for all t>0t>0; that is, vx(,t)v_{x}(\cdot,t) changes sign only once in (b(t),)(b(t),\infty).

2.

Suppose u_{0} is the Delta function (i.e. \mathfrak{X}_{0}=0 a.s.). Then there exists X_{1}\in C([0,\infty))\cap C^{\infty}((0,\infty)) such that X_{1}(0)=0 and v_{x}(\cdot,t)>0 in (b(t),X_{1}(t)) and v_{x}(\cdot,t)<0 in (X_{1}(t),\infty) for every t>0.

The first assertion can be proven by a variation of our proof. The second assertion follows from the fact that the Delta function can be approximated by a sequence of bell-shaped positive functions, each of which has only one local maximum and no local minimum; see Chen et al. (2008). We omit the details.

5.6 The Initial Boundary Value Problem

In the sequel, XX is defined by (5.3) and ε(0,z0){\varepsilon}\in(0,z_{0}) is a fixed small positive constant. We consider the quasi-linear parabolic initial boundary value problem, for the unknown function Y=Y(z,t):Y=Y(z,t):

{Yt=Yz2Yzz+ν(Y,t),z(0,ε),t(0,T],Y(z,0)=X0(z),z[0,ε],p˙(t)Yz(0,t)+K(Y(0,t),t)=0,t(0,T],Yz(ε,t)=Xz(ε,t),t(0,T].\displaystyle\left\{\begin{array}[]{ll}\displaystyle Y_{t}=Y_{z}^{-2}\;Y_{zz}+\nu(Y,t),&z\in(0,{\varepsilon}),t\in(0,T],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ Y(z,0)=X_{0}(z),&z\in[0,{\varepsilon}],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \dot{p}(t)\,Y_{z}(0,t)+K(Y(0,t),t)=0,&t\in(0,T],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ Y_{z}({\varepsilon},t)=X_{z}({\varepsilon},t),&t\in(0,T].\end{array}\right. (5.10)

We know from the theory of quasi-linear equations that, for \varepsilon small enough, this problem admits a unique classical solution, which is also smooth up to the boundary, so long as X_{z}(\varepsilon,t) for t\in[0,T] and X_{0}^{\prime}(z) for z\in[0,\varepsilon] are uniformly positive. The main difficulty is to show that X=Y. (Once we have X=Y, and b=Y(0,\cdot), it follows from classical theory that b is only 1/2 less differentiable than p.) We know that X satisfies all of (5.10) except for the third equation; that is, we do not know a priori that \dot{p}(t)\,X_{z}(0,t)+K(X(0,t),t)=0 for t\in(0,T], because we do not have smoothness of X up to the boundary.
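Although the analysis below is purely PDE-theoretic, it may help to see what problem (5.10) looks like concretely. The following is a minimal, purely illustrative finite-difference sketch (not part of the proof, and not claimed to be the method of this paper), written for the special case \mu\equiv 0 (so \nu\equiv 0 and K\equiv 1) with the hypothetical data p(t)=e^{-t}, X_{0}(z)=z, and the stand-in right boundary datum X_{z}(\varepsilon,t)\equiv 1; none of these data come from the inverse problem itself.

import numpy as np

# Illustrative explicit scheme for (5.10) with mu = 0 (nu = 0, K = 1):
#   Y_t = Y_zz / Y_z^2 on (0,eps)x(0,T],  Y(z,0) = X_0(z),
#   p'(t) Y_z(0,t) + 1 = 0,               Y_z(eps,t) = 1  (stand-in datum).
eps, T, N = 0.1, 1.0, 20
dz = eps / N
z = np.linspace(0.0, eps, N + 1)
dt = 0.25 * dz**2                      # explicit stability: dt < dz^2 * (min Y_z)^2 / 2
p_dot = lambda t: -np.exp(-t)          # hypothetical survival probability p(t) = exp(-t)

Y = z.copy()                           # hypothetical initial datum X_0(z) = z
for n in range(int(T / dt)):
    t = (n + 1) * dt
    Yz = (Y[2:] - Y[:-2]) / (2 * dz)               # centred Y_z at interior nodes
    Yzz = (Y[2:] - 2 * Y[1:-1] + Y[:-2]) / dz**2   # Y_zz at interior nodes
    Y[1:-1] += dt * Yzz / Yz**2                    # interior update: Y_t = Y_zz / Y_z^2
    Y[0] = Y[1] + dz / p_dot(t)                    # p'(t) Y_z(0,t) + 1 = 0 (one-sided)
    Y[-1] = Y[-2] + dz                             # Y_z(eps,t) = 1 (stand-in datum)

print("approximate free boundary value b(T) = Y(0,T):", Y[0])

In this toy run the value Y(0,t) plays the role of b(t); the point of the sketch is only to make the structure of (5.10), and in particular the nonlinear boundary condition at z=0, concrete.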

To do this, we consider for each hh\in{\mathbb{R}}, the following initial boundary value problem, for Yh=Yh(z,t)Y^{h}=Y^{h}(z,t),

{Yth=(Yzh)2Yzzh+ν(Yh,t),z(0,ε),t(0,T],Yh(z,0)=X0(z)+h,z[0,ε],p˙h(t)Yzh(0,t)+K(Yh(0,t),t)=0,t(0,T],Yzh(ε,t)=Xz(ε,t),t(0,T],\displaystyle\left\{\begin{array}[]{ll}\displaystyle Y^{h}_{t}=(Y^{h}_{z})^{-2}\;Y^{h}_{zz}+\nu(Y^{h},t),&z\in(0,{\varepsilon}),t\in(0,T],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ Y^{h}(z,0)=X_{0}(z)+h,&z\in[0,{\varepsilon}],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \dot{p}^{h}(t)\;Y^{h}_{z}(0,t)+K(Y^{h}(0,t),t)=0,&t\in(0,T],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ Y^{h}_{z}({\varepsilon},t)=X_{z}({\varepsilon},t),&t\in(0,T],\end{array}\right. (5.15)

where {ph}h\{p^{h}\}_{h\in{\mathbb{R}}} is a family that has the following properties:

phP0C([0,T]),p˙h<0 on [0,T],(p˙p˙h)h0,limh0phpC1([0,T])=0.\displaystyle p^{h}\in P_{0}\cap C^{\infty}([0,T]),\quad\dot{p}^{h}<0\hbox{ on }[0,T],\quad(\dot{p}-\dot{p}^{h})h\geqslant 0,\quad\lim_{h\to 0}\|p^{h}-p\|_{C^{1}([0,T])}=0.

Note that if μ0\mu\equiv 0 and phpp^{h}\equiv p, then ν0\nu\equiv 0 and K1K\equiv 1 so problem (5.15) relates to (5.10) by the simple translation Yh=Y+hY^{h}=Y+h. Here we shall consider the general case.

5.7 Well-Posedness of the Perturbed Problems

We now show that for each hh\in{\mathbb{R}}, (5.15) admits a unique classical solution. Since (5.10) and (5.15) belong to the same type of initial-boundary value problem, we state our result in terms of (5.10). The conclusion for (5.15) is analogous.

Lemma 5.3.

Let K,νK,\nu be smooth and bounded functions on ×[0,T]{\mathbb{R}}\times[0,T]. Assume that K>0K>0 and

X0C1([0,ε]),X0>0,pC1([0,T]),p˙<0,Xz(ε,)C([0,T]),Xz(ε,)>0.X_{0}\in C^{1}([0,{\varepsilon}]),X_{0}^{\prime}>0,\quad p\in C^{1}([0,T]),\dot{p}<0,\quad X_{z}({\varepsilon},\cdot)\in C([0,T]),X_{z}({\varepsilon},\cdot)>0.

Then problem (5.10) admits a unique classical solution that satisfies

YC([0,ε]×[0,T])C((0,ε)×(0,T]),YzC([0,ε]×(0,T]),\displaystyle Y\in C([0,{\varepsilon}]\times[0,T])\cap C^{\infty}((0,{\varepsilon})\times(0,T]),\quad Y_{z}\in C([0,{\varepsilon}]\times(0,T]),
0<inf[0,ε]×(0,T]Yzsup[0,ε]×(0,T]Yz<.\displaystyle\displaystyle 0<\inf_{[0,{\varepsilon}]\times(0,T]}Y_{z}\leqslant\sup_{[0,{\varepsilon}]\times(0,T]}Y_{z}<\infty.

If in addition pCα+1/2((t1,t2])p\in C^{\alpha+1/2}((t_{1},t_{2}]) where α1/2\alpha\geqslant 1/2 is not an integer and (t1,t2](0,T](t_{1},t_{2}]\subset(0,T], then YC2α,α([0,ε)×(t1,t2])Y\in C^{2\alpha,\alpha}([0,{\varepsilon})\times(t_{1},t_{2}]) so that Y(0,)Cα((t1,t2])Y(0,\cdot)\in C^{\alpha}((t_{1},t_{2}]).

If pCα+1/2([0,T])p\in C^{\alpha+1/2}([0,T]), X0C2α([0,ε))X_{0}\in C^{2\alpha}([0,{\varepsilon})), α1/2\alpha\geqslant 1/2 is not an integer, and all compatibility conditions at (0,0)(0,0) up to order α+1/2\llbracket\alpha+1/2\rrbracket are satisfied, then YC2α,α([0,ε)×[0,T])Y\in C^{2\alpha,\alpha}([0,{\varepsilon})\times[0,T]) so Y(0,)Cα([0,T]).Y(0,\cdot)\in C^{\alpha}([0,T]).

Proof.

According to the general theory of quasi-linear partial differential equations of parabolic type (see Ladyzhenskaya et al. (1968)), to show that (5.10) admits a unique classical solution, it suffices to establish a priori upper and positive lower bounds for Y_{z}. For this purpose, suppose we have a classical solution Y of (5.10). Then a local analysis gives Y\in C^{\infty}((0,\varepsilon)\times(0,T]). Set H=Y_{z}. Differentiating the first two equations in (5.10) with respect to z, and rewriting the boundary conditions in terms of H, we obtain:

{Ht=(H1)zz+νx(Y,t)H,z(0,ε),t(0,T],H(z,0)=X0(z),z[0,ε],p˙(t)H(0,t)+K(Y(0,t),t)=0,t(0,T],H(ε,t)=Xz(ε,t),t(0,T].\displaystyle\left\{\begin{array}[]{ll}H_{t}=-\;(H^{-1})_{zz}+\nu_{x}(Y,t)\;H,&z\in(0,{\varepsilon}),t\in(0,T],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ H(z,0)=X_{0}^{\prime}(z),&z\in[0,{\varepsilon}],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \displaystyle\dot{p}(t)\;H(0,t)+K(Y(0,t),t)=0,&t\in(0,T],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ H({\varepsilon},t)=X_{z}({\varepsilon},t),&t\in(0,T].\end{array}\right.
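The first equation above follows from the identity Y_{z}^{-2}Y_{zz}=-\partial_{z}\big(Y_{z}^{-1}\big): differentiating the equation for Y with respect to z gives

H_{t}=\partial_{z}Y_{t}=\partial_{z}\big(Y_{z}^{-2}Y_{zz}+\nu(Y,t)\big)=-\partial_{zz}\big(Y_{z}^{-1}\big)+\nu_{x}(Y,t)\,Y_{z}=-\big(H^{-1}\big)_{zz}+\nu_{x}(Y,t)\,H.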

Now denote

M1:=KL(×[0,T])min[0,T]|p˙|,\displaystyle M_{1}:=\frac{\|K\|_{L^{\infty}({\mathbb{R}}\times[0,T])}}{\min_{[0,T]}|\dot{p}|}, m1:=inf×[0,T]Kmax[0,T]|p˙|,\displaystyle m_{1}:=\frac{\inf_{{\mathbb{R}}\times[0,T]}K}{\max_{[0,T]}|\dot{p}|},
M2:=max[0,x0]1u0(x),\displaystyle M_{2}:=\max_{[0,x_{0}]}\frac{1}{u_{0}^{\prime}(x)}, m2:=min[0,x0]1u0(x),\displaystyle m_{2}:=\min_{[0,x_{0}]}\frac{1}{u_{0}^{\prime}(x)},
M3=maxt[0,T]Xz(ε,t),\displaystyle M_{3}=\max_{t\in[0,T]}X_{z}({\varepsilon},t), m3=mint[0,T]Xz(ε,t),\displaystyle m_{3}=\min_{t\in[0,T]}X_{z}({\varepsilon},t),
M=max{M1,M2,M3},\displaystyle M=\max\{M_{1},M_{2},M_{3}\}, m=min{m1,m2,m3},\displaystyle m=\min\{m_{1},m_{2},m_{3}\},
k(t)=νx(,t)L().\displaystyle k(t)=\|\nu_{x}(\cdot,t)\|_{L^{\infty}({\mathbb{R}})}.

Then by comparison, we have

me0tk(s)𝑑sH(z,t)=Yz(z,t)Me0tk(s)𝑑st[0,T],z[0,ε].\displaystyle me^{-\int_{0}^{t}k(s)ds}\leqslant H(z,t)=Y_{z}(z,t)\leqslant Me^{\int_{0}^{t}k(s)ds}\quad\forall\,t\in[0,T],z\in[0,{\varepsilon}].
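Indeed, the two bounds are obtained by comparison with z-independent barriers. A minimal check: writing \bar{H}(t):=Me^{\int_{0}^{t}k(s)ds} and \underline{H}(t):=me^{-\int_{0}^{t}k(s)ds},

\bar{H}_{t}+\big(\bar{H}^{-1}\big)_{zz}-\nu_{x}(Y,t)\bar{H}=\big[k(t)-\nu_{x}(Y,t)\big]\bar{H}\geqslant 0,\qquad \underline{H}_{t}+\big(\underline{H}^{-1}\big)_{zz}-\nu_{x}(Y,t)\underline{H}=-\big[k(t)+\nu_{x}(Y,t)\big]\underline{H}\leqslant 0,

while \underline{H}(0)\leqslant m_{2}\leqslant X_{0}^{\prime}\leqslant M_{2}\leqslant\bar{H}(0) on [0,\varepsilon], \underline{H}\leqslant m_{1}\leqslant -K(Y(0,\cdot),\cdot)/\dot{p}=H(0,\cdot)\leqslant M_{1}\leqslant\bar{H} at z=0, and \underline{H}\leqslant m_{3}\leqslant X_{z}(\varepsilon,\cdot)=H(\varepsilon,\cdot)\leqslant M_{3}\leqslant\bar{H} at z=\varepsilon. Hence \bar{H} and \underline{H} are, respectively, a supersolution and a subsolution of the problem for H.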

These a priori upper and lower bounds then imply that (5.10) admits a unique classical solution. The remaining assertions follow from the local and global regularity theory of parabolic equations (Ladyzhenskaya et al. (1968)). This completes the proof. ∎

As an example, we demonstrate the derivation of the first-order compatibility condition:

1p˙(0)=limt0Yz(0,t)=limz0Yz(z,0)=X0(0)=1u0(0).\displaystyle\frac{1}{-\dot{p}(0)}=\lim_{t\searrow 0}Y_{z}(0,t)=\lim_{z\searrow 0}Y_{z}(z,0)=X_{0}^{\prime}(0)=\frac{1}{u_{0}^{\prime}(0)}.

We remark that, from our definition of X, the compatibility of the initial and boundary data at (\varepsilon,0) for (5.10) is automatically satisfied. For (5.15), since K(\cdot,0)\equiv 1, the first-order compatibility condition at (\varepsilon,0) is also satisfied, so Y^{h}\in C^{1,1/2}([0,\varepsilon]\times[0,T]\setminus\{(0,0)\}).

Finally, to demonstrate continuous dependence, we integrate over (0,ε)×{t},t(0,T],(0,{\varepsilon})\times\{t\},t\in(0,T], the difference of the differential equations in (5.10) and (5.15) multiplied by YYhY-Y^{h} and use integration by parts to obtain

12ddt0ε(YYh)2𝑑z+0ε(YzYzh)2YzYzh𝑑z\displaystyle\frac{1}{2}\frac{d}{dt}\int_{0}^{\varepsilon}(Y-Y^{h})^{2}dz+\int_{0}^{\varepsilon}\frac{(Y_{z}-Y_{z}^{h})^{2}}{Y_{z}Y_{z}^{h}}dz
=0ε(YYh)[ν(Y,t)ν(Yh,t)]𝑑z+[p˙h(t)K(Yh(0,t),t)p˙(t)K(Y(0,t),t)][Y(0,t)Yh(0,t)].\displaystyle\quad=\int_{0}^{\varepsilon}(Y-Y^{h})[\nu(Y,t)-\nu(Y^{h},t)]dz+\Big{[}\frac{\dot{p}^{h}(t)}{K(Y^{h}(0,t),t)}-\frac{\dot{p}(t)}{K(Y(0,t),t)}\Big{]}[Y(0,t)-Y^{h}(0,t)].

Upon using the boundedness of YzY_{z} and YzhY_{z}^{h}, Cauchy’s inequality, and the Sobolev embedding

ϕL([0,ε])2(1ε+1δ)0εϕ2(z)𝑑z+δ0εϕz2(z)𝑑zδ>0,\displaystyle\|\phi\|_{L^{\infty}([0,{\varepsilon}])}^{2}\leqslant\Big{(}\frac{1}{{\varepsilon}}+\frac{1}{\delta}\Big{)}\int_{0}^{\varepsilon}\phi^{2}(z)dz+\delta\int_{0}^{\varepsilon}\phi^{2}_{z}(z)dz\quad\forall\,\delta>0,

we find that there exists a positive constant C such that

ddt0ε(YYh)2𝑑zC0ε(YYh)2+C|p˙h(t)p˙(t)|2t(0,T].\displaystyle\frac{d}{dt}\int_{0}^{\varepsilon}(Y-Y^{h})^{2}dz\leqslant C\int_{0}^{\varepsilon}(Y-Y^{h})^{2}+C|\dot{p}^{h}(t)-\dot{p}(t)|^{2}\quad\forall\,t\in(0,T].
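In more detail (a sketch, with \Lambda:=\sup_{[0,\varepsilon]\times(0,T]}Y_{z}Y_{z}^{h}<\infty and C denoting a generic constant): since K is smooth and uniformly positive and \dot{p},\dot{p}^{h} are bounded,

\Big[\frac{\dot{p}^{h}(t)}{K(Y^{h}(0,t),t)}-\frac{\dot{p}(t)}{K(Y(0,t),t)}\Big]\big[Y(0,t)-Y^{h}(0,t)\big]\leqslant C\,|\dot{p}^{h}(t)-\dot{p}(t)|^{2}+C\,\|Y(\cdot,t)-Y^{h}(\cdot,t)\|_{L^{\infty}([0,\varepsilon])}^{2},

and applying the embedding with \delta=1/(2C\Lambda) lets the resulting term C\delta\int_{0}^{\varepsilon}(Y_{z}-Y_{z}^{h})^{2}dz be absorbed by \int_{0}^{\varepsilon}(Y_{z}-Y_{z}^{h})^{2}/(Y_{z}Y_{z}^{h})\,dz\geqslant\Lambda^{-1}\int_{0}^{\varepsilon}(Y_{z}-Y_{z}^{h})^{2}dz.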

Gronwall’s inequality then yields the estimate

maxt[0,T]Y(,t)Yh(,t)L2((0,ε))2CeCT{h2+0T|p˙hp˙|2𝑑t}.\displaystyle\max_{t\in[0,T]}\|Y(\cdot,t)-Y^{h}(\cdot,t)\|^{2}_{L^{2}((0,{\varepsilon}))}\leqslant Ce^{CT}\Big{\{}h^{2}+\int_{0}^{T}|\dot{p}^{h}-\dot{p}|^{2}dt\Big{\}}.

This estimate in turn implies the C([0,T];L^{2}((0,\varepsilon))) convergence of Y^{h} to Y as h\to 0. By Sobolev embedding and the boundedness of (Y-Y^{h})_{z}, this convergence also implies the L^{\infty} convergence:

limh0YhYC([0,ε]×[0,T])=0.\displaystyle\lim_{h\to 0}\|Y^{h}-Y\|_{C([0,{\varepsilon}]\times[0,T])}=0.

To show that b=Y(0,)b=Y(0,\cdot), it suffices to show the following:

Lemma 5.4.

For every h>0h>0, Yh(0,)<b<Yh(0,)Y^{-h}(0,\cdot)<b<Y^{h}(0,\cdot) and Yh(ε,)<X(ε,)<Yh(ε,)Y^{-h}({\varepsilon},\cdot)<X({\varepsilon},\cdot)<Y^{h}({\varepsilon},\cdot) on [0,T][0,T].

The proof will be given in the next three subsections.

5.8 The Inverse Hodograph Transformation

To show that Y=XY=X and b=Y(0,)b=Y(0,\cdot), we define z=vh(x,t)z=v^{h}(x,t) as the inverse function of x=Yh(z,t)x=Y^{h}(z,t):

x=Yh(z,t),z[0,ε],t[0,T]z=vh(x,t),x[Yh(0,t),Yh(ε,t)],t[0,T].\displaystyle x=Y^{h}(z,t),z\in[0,{\varepsilon}],t\in[0,T]\qquad\Longleftrightarrow\qquad z=v^{h}(x,t),\,x\in[Y^{h}(0,t),Y^{h}({\varepsilon},t)],\,t\in[0,T].

Since Yzh>0Y^{h}_{z}>0, the inverse is well-defined. We record the key equation for future reference:

x=Yh(vh(x,t),t)t[0,T],x[Yh(0,t),Yh(ε,t)].\displaystyle x=Y^{h}(v^{h}(x,t),t)\qquad\forall\,t\in[0,T],\;x\in[Y^{h}(0,t),Y^{h}({\varepsilon},t)]. (5.17)

By implicit differentiation, we find that

vxh(x,t)\displaystyle v^{h}_{x}(x,t) =\displaystyle= 1Yzh(z,t)|z=vh(x,t),\displaystyle\frac{1}{Y^{h}_{z}(z,t)}\Big{|}_{z=v^{h}(x,t)},\vskip 6.0pt plus 2.0pt minus 2.0pt
vxxh(x,t)\displaystyle v^{h}_{xx}(x,t) =\displaystyle= Yzzh(z,t)Yzh(z,t)3|z=vh(x,t),\displaystyle-\frac{Y^{h}_{zz}(z,t)}{Y^{h}_{z}(z,t)^{3}}\Big{|}_{z=v^{h}(x,t)},\vskip 6.0pt plus 2.0pt minus 2.0pt
vth(x,t)\displaystyle v^{h}_{t}(x,t) =\displaystyle= Yth(z,t)Yzh(z,t)=Yzzh(z,t)Yzh(z,t)3ν(Yh(z,t),t)Yzh(z,t)|z=vh(x,t)\displaystyle-\frac{Y^{h}_{t}(z,t)}{Y^{h}_{z}(z,t)}=-\frac{Y^{h}_{zz}(z,t)}{Y^{h}_{z}(z,t)^{3}}-\frac{\nu(Y^{h}(z,t),t)}{Y^{h}_{z}(z,t)}\Big{|}_{z=v^{h}(x,t)}
=\displaystyle= vxxh(x,t)ν(x,t)vxh(x,t).\displaystyle v_{xx}^{h}(x,t)-\nu(x,t)v^{h}_{x}(x,t).

Finally, setting uh(x,t)=K(x,t)vh(x,t)u^{h}(x,t)=K(x,t)v^{h}(x,t) we see that

2vh=0,1uh(x,t)=0,t(0,T],x(Yh(0,t),Yh(ε,t)).\displaystyle{\mathcal{L}}_{2}v^{h}=0,\quad{\mathcal{L}}_{1}u^{h}(x,t)=0,\quad\forall\,t\in(0,T],x\in(Y^{h}(0,t),Y^{h}({\varepsilon},t)).

When t=0t=0, using Yh(z,0)=X0(z)+hY^{h}(z,0)=X_{0}(z)+h, we see from (5.17) that x=Yh(vh(x,0),0)=X0(vh(x,0))+h.x=Y^{h}(v^{h}(x,0),0)=X_{0}(v^{h}(x,0))+h. This implies that xh=X0(vh(x,0))x-h=X_{0}(v^{h}(x,0)), so using x^=X0(u0(x^))\hat{x}=X_{0}(u_{0}(\hat{x})) with x^=xh\hat{x}=x-h we obtain vh(x,0)=u0(xh)v^{h}(x,0)=u_{0}(x-h) or

uh(x,0)=vh(x,0)=u0(xh),x[Yh(0,0),Yh(ε,0)]=[h,X0(ε)+h].\displaystyle u^{h}(x,0)=v^{h}(x,0)=u_{0}(x-h),\quad\forall\,x\in[Y^{h}(0,0),Y^{h}({\varepsilon},0)]=[h,X_{0}({\varepsilon})+h].

Next, substituting x=Yh(ε,t)x=Y^{h}({\varepsilon},t) in (5.17) we obtain

v^{h}(Y^{h}(\varepsilon,t),t)=\varepsilon=v(X(\varepsilon,t),t).

The boundary condition Yzh(ε,t)=Xz(ε,t)=1/vx(X(ε,t),t)Y_{z}^{h}({\varepsilon},t)=X_{z}({\varepsilon},t)=1/v_{x}(X({\varepsilon},t),t) then implies that

vxh(Yh(ε,t),t)=vx(X(ε,t),t).\displaystyle v_{x}^{h}(Y^{h}({\varepsilon},t),t)=v_{x}(X({\varepsilon},t),t).

Finally, substituting x=Yh(0,t)x=Y^{h}(0,t) in (5.17) we have

v^{h}(Y^{h}(0,t),t)=0=v(b(t),t).

The boundary condition p˙h(t)Yzh(0,t)+K(Yh(0,t),t)=0\dot{p}^{h}(t)\;Y_{z}^{h}(0,t)+K(Y^{h}(0,t),t)=0 then gives

vxh(Yh(0,t),t)K(Yh(0,t),t)=p˙h(t).\displaystyle v_{x}^{h}(Y^{h}(0,t),t)\;K(Y^{h}(0,t),t)=-\dot{p}^{h}(t).

In summary, we see that vh=vh(x,t)v^{h}=v^{h}(x,t) has the following properties:

{2vh=0t(0,T],x(Yh(0,t),Yh(ε,t)),vh(x,0)=u0(xh),x[Yh(0,0),Yh(ε,0)],vh(Yh(0,t),t)=0=v(b(t),t),t[0,T],vxh(Yh(0,t),t)K(Yh(0,t),t)=p˙h(t),t[0,T],vh(Yh(ε,t),t)=ε=v(X(ε,t),t),t[0,T],vxh(Yh(ε,t),t)=vx(X(ε,t),t),t[0,T].\displaystyle\left\{\begin{array}[]{ll}{\mathcal{L}}_{2}v^{h}=0&\forall\,t\in(0,T],x\in(Y^{h}(0,t),Y^{h}({\varepsilon},t)),\vskip 6.0pt plus 2.0pt minus 2.0pt\\ v^{h}(x,0)=u_{0}(x-h),&\forall\,x\in[Y^{h}(0,0),Y^{h}({\varepsilon},0)],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ v^{h}(Y^{h}(0,t),t)=0=v(b(t),t),&\forall\;t\in[0,T],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ v^{h}_{x}(Y^{h}(0,t),t)\;K(Y^{h}(0,t),t)=-\dot{p}^{h}(t),&\forall\;t\in[0,T],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ v^{h}(Y^{h}({\varepsilon},t),t)={\varepsilon}=v(X({\varepsilon},t),t),&\forall\;t\in[0,T],\vskip 6.0pt plus 2.0pt minus 2.0pt\\ v^{h}_{x}(Y^{h}({\varepsilon},t),t)=v_{x}(X({\varepsilon},t),t),&\forall\;t\in[0,T].\end{array}\right.

Note that [Yh(0,0),Yh(ε,0)]=[h,X0(ε)+h].[Y^{h}(0,0),Y^{h}({\varepsilon},0)]=[h,X_{0}({\varepsilon})+h].


5.9 The Proof that b<Yh(0,)b<Y^{h}(0,\cdot) and X(ε,)<Yh(ε,)X({\varepsilon},\cdot)<Y^{h}({\varepsilon},\cdot) for h>0h>0

Let h>0h>0 be arbitrarily fixed. We define

T:=sup{t[0,T]|b<Yh(0,) and X(ε,)<Yh(ε,) in [0,t]}.\displaystyle T^{*}:=\sup\{t\in[0,T]\;|\;b<Y^{h}(0,\cdot)\hbox{ \ and \ }X({\varepsilon},\cdot)<Y^{h}({\varepsilon},\cdot)\hbox{ \ in \ }[0,t]\}.

We know that X(\varepsilon,\cdot), Y^{h}(0,\cdot) and Y^{h}(\varepsilon,\cdot) are all continuous and that b is upper semi-continuous (Chen et al. (2011)). Also, Y^{h}(0,0)=h, b(0+)\leqslant 0, and Y^{h}(\varepsilon,0)=X_{0}(\varepsilon)+h. Thus, T^{*} is well-defined and T^{*}\in(0,T].

We claim that b<Y^{h}(0,\cdot) and X(\varepsilon,\cdot)<Y^{h}(\varepsilon,\cdot) on [0,T]. Suppose the claim is not true. Then we must have b(t)<Y^{h}(0,t) and X(\varepsilon,t)<Y^{h}(\varepsilon,t) for all t\in[0,T^{*}) and

either (i) X(ε,T)=Yh(ε,T), or (ii) b(T)=Yh(0,T).\displaystyle\hbox{either (i) \ }X({\varepsilon},T^{*})=Y^{h}({\varepsilon},T^{*}),\qquad\hbox{ or (ii) \ }b(T^{*})=Y^{h}(0,T^{*}).

We shall show that neither of the above can happen. For this we compare vv and vhv^{h} in the set

Q\displaystyle Q :=\displaystyle:= {(x,t)|t(t0,T],Yh(0,t)<x<X(ε,t)},\displaystyle\{(x,t)\;|\;t\in(t_{0},T^{*}],Y^{h}(0,t)<x<X({\varepsilon},t)\},

where

t0\displaystyle t_{0} :=\displaystyle:= inf{t[0,T]|Yh(0,t)<X(ε,t) in [t,T]}.\displaystyle\inf\{t\in[0,T^{*}]\;|\;Y^{h}(0,t)<X({\varepsilon},t)\hbox{ \ in \ }[t,T^{*}]\}.

Since b(t)<Yh(0,t)b(t)<Y^{h}(0,t) for all t[0,T)t\in[0,T^{*}), we have 2v=2vh=0{\mathcal{L}}_{2}v={\mathcal{L}}_{2}v^{h}=0 in QQ. Also, on the left lateral parabolic boundary of QQ, x=Yh(0,t)x=Y^{h}(0,t), we have v0=vhv\geqslant 0=v^{h}. On the right lateral boundary of QQ, x=X(ε,t)x=X({\varepsilon},t), we have vhε=vv^{h}\leqslant{\varepsilon}=v. Thus vvhv\geqslant v^{h} on the parabolic boundary of QQ if t0>0t_{0}>0. Finally, if t0=0t_{0}=0, we have v(x,0)=u0(x)>u0(xh)=vh(x,0)v(x,0)=u_{0}(x)>u_{0}(x-h)=v^{h}(x,0) for all x[Yh(0,0),X(ε,0)]=[h,X0(ε)]x\in[Y^{h}(0,0),X({\varepsilon},0)]=[h,X_{0}({\varepsilon})]. Thus, vvhv\geqslant v^{h} on the parabolic boundary of QQ. Since h>0h>0, we cannot have vvhv\equiv v^{h}, so the Strong Maximum Principle implies that v>vhv>v^{h} in QQ.

Now consider case (i): X(\varepsilon,T^{*})=Y^{h}(\varepsilon,T^{*}). Then, as X(\varepsilon,\cdot) is smooth and v(X(\varepsilon,T^{*}),T^{*})=\varepsilon=v^{h}(Y^{h}(\varepsilon,T^{*}),T^{*})=v^{h}(X(\varepsilon,T^{*}),T^{*}), the Hopf Lemma implies that v_{x}(X(\varepsilon,T^{*}),T^{*})<v^{h}_{x}(X(\varepsilon,T^{*}),T^{*}). This is impossible since v^{h}_{x}(X(\varepsilon,T^{*}),T^{*})=v_{x}^{h}(Y^{h}(\varepsilon,T^{*}),T^{*})=v_{x}(X(\varepsilon,T^{*}),T^{*}). Thus case (i) cannot happen.

Next, we consider case (ii): b(T)=Yh(0,T)b(T^{*})=Y^{h}(0,T^{*}). Since Yh(0,)Y^{h}(0,\cdot) is smooth, the generalized Hopf Lemma 5.1 implies that there exist η>0\eta>0 and δ>0\delta>0 such that

v(x,s)vh(x,s)[xYh(0,s)]η,s[Tδ,T],x[Yh(0,s),b(T)+δ].\displaystyle v(x,s)-v^{h}(x,s)\geqslant[x-Y^{h}(0,s)]\;\eta,\quad\forall\,s\in[T^{*}-\delta,T^{*}],x\in[Y^{h}(0,s),b(T^{*})+\delta].

However, since v_{x}^{h}(Y^{h}(0,s),s)\,K(Y^{h}(0,s),s)=-\dot{p}^{h}(s)\geqslant-\dot{p}(s), the above inequality implies

lim¯x>Yh(0,s),sT,(x,s)(b(T),T)u(x,s)xYh(0,s)p˙(T)+ηK(b(T),T).\displaystyle\varliminf_{x>Y^{h}(0,s),s\leqslant T^{*},(x,s)\to(b(T^{*}),T^{*})}\frac{u(x,s)}{x-Y^{h}(0,s)}\geqslant-\dot{p}(T^{*})+\eta K(b(T^{*}),T^{*}).

But this contradicts the first inequality in (3.1) [with :=Yh(0,)\ell:=Y^{h}(0,\cdot)] of Lemma 3.1. Hence, case (ii) also cannot happen.

In conclusion, when h>0h>0, we must have b<Yh(0,)b<Y^{h}(0,\cdot) and X(ε,)<Yh(ε,)X({\varepsilon},\cdot)<Y^{h}({\varepsilon},\cdot) on [0,T][0,T].

5.10 The Proof that Yh(0,)<bY^{h}(0,\cdot)<b and Yh(ε,)<X(ε,)Y^{h}({\varepsilon},\cdot)<X({\varepsilon},\cdot) for h<0h<0

Here, we use the facts that b(0)=0b(0)=0 and bb is continuous on [0,T][0,T], proven in Chen et al. (2011), and repeated here as Proposition 1. Also, we need the fact that 2v0{\mathcal{L}}_{2}v\leqslant 0 in ×(0,){\mathbb{R}}\times(0,\infty).

Let h<0h<0 be arbitrary. We define

T:=sup{t[0,T]|Yh(0,)<b and Yh(ε,)<X(ε,) on [0,t]}.\displaystyle T^{*}:=\sup\{t\in[0,T]\;|\;Y^{h}(0,\cdot)<b\hbox{ \ and \ }\quad Y^{h}({\varepsilon},\cdot)<X({\varepsilon},\cdot)\hbox{ \ on \ }[0,t]\}.

Since b,X(ε,),Yh(0,),Yh(ε,)b,X({\varepsilon},\cdot),Y^{h}(0,\cdot),Y^{h}({\varepsilon},\cdot) are all continuous and Yh(0,0)=h<0=b(0)Y^{h}(0,0)=h<0=b(0) and Yh(ε,0)=X(ε,0)+hY^{h}({\varepsilon},0)=X({\varepsilon},0)+h, we see that TT^{*} is well-defined and T(0,T]T^{*}\in(0,T].

We claim that Y^{h}(0,\cdot)<b and Y^{h}(\varepsilon,\cdot)<X(\varepsilon,\cdot) on [0,T]. Suppose the claim is not true. Then Y^{h}(0,t)<b(t) and Y^{h}(\varepsilon,t)<X(\varepsilon,t) for all t\in[0,T^{*}) and

either (i) X(ε,T)=Yh(ε,T), or (ii) b(T)=Yh(0,T).\displaystyle\hbox{either (i) \ }X({\varepsilon},T^{*})=Y^{h}({\varepsilon},T^{*}),\qquad\hbox{ or (ii) \ }b(T^{*})=Y^{h}(0,T^{*}).

To show that none of the above can happen, we compare vv and vhv^{h} as before in the set

Q:={(x,t)|t(0,T],Yh(0,t)<x<Yh(ε,t)}.\displaystyle Q:=\{(x,t)\;|\;t\in(0,T^{*}],Y^{h}(0,t)<x<Y^{h}({\varepsilon},t)\}.

Then, by the definition of TT^{*}, we have v(x,t)vh(x,t)v(x,t)\leqslant v^{h}(x,t) on the parabolic boundary of QQ and 2vh=02v{\mathcal{L}}_{2}v^{h}=0\geqslant{\mathcal{L}}_{2}v in QQ. The Maximum Principle then implies that v<vhv<v^{h} in QQ.

Now consider case (i): X(\varepsilon,T^{*})=Y^{h}(\varepsilon,T^{*}). Then, as Y^{h}(\varepsilon,\cdot) is smooth and v(X(\varepsilon,T^{*}),T^{*})=\varepsilon=v^{h}(X(\varepsilon,T^{*}),T^{*}), the Hopf Lemma implies that v_{x}(X(\varepsilon,T^{*}),T^{*})>v^{h}_{x}(X(\varepsilon,T^{*}),T^{*}). This is impossible since v^{h}_{x}(X(\varepsilon,T^{*}),T^{*})=v_{x}^{h}(Y^{h}(\varepsilon,T^{*}),T^{*})=v_{x}(X(\varepsilon,T^{*}),T^{*}). Thus case (i) cannot happen.

Next we consider case (ii): b(T)=Yh(0,T)b(T^{*})=Y^{h}(0,T^{*}). Again, since Yh(0,)Y^{h}(0,\cdot) is smooth, the generalized Hopf Lemma 5.1 implies that there exist η>0\eta>0 and δ>0\delta>0 such that

vh(x,s)v(x,s)[xYh(0,s)]ηs[Tδ,T],x[Yh(0,s),b(T)+δ].\displaystyle v^{h}(x,s)-v(x,s)\geqslant[x-Y^{h}(0,s)]\;\eta\quad\forall\,s\in[T^{*}-\delta,T^{*}],x\in[Y^{h}(0,s),b(T^{*})+\delta].

However, since vxh(Yh(0,s),s)K(Yh(0,s),s)=p˙h(s)p˙(s)v_{x}^{h}(Y^{h}(0,s),s)K(Y^{h}(0,s),s)=-\dot{p}^{h}(s)\leqslant-\dot{p}(s), the above inequality implies

\varlimsup_{x>Y^{h}(0,s),\,s\leqslant T^{*},\,(x,s)\to(b(T^{*}),T^{*})}\frac{u(x,s)}{x-Y^{h}(0,s)}\leqslant-\dot{p}(T^{*})-\eta\,K(b(T^{*}),T^{*}).

This contradicts the second inequality in (3.1) [with =Yh(0,)\ell=Y^{h}(0,\cdot)] of Lemma 3.1. Hence, case (ii) also cannot happen. Thus, when h<0h<0, we have Yh(0,)<bY^{h}(0,\cdot)<b and Yh(ε,)<X(ε,)Y^{h}({\varepsilon},\cdot)<X({\varepsilon},\cdot) on [0,T][0,T].


5.11 Proof of Theorem 1

Once we know that Y^{-h}(0,t)<b(t)<Y^{h}(0,t) for h>0 and t\in[0,T], we can send h\searrow 0 and use the uniform convergence of Y^{h} to Y established above to conclude that b(t)=Y(0,t). Consequently, b=Y(0,\cdot)\in C^{1/2}((0,T]).

As we know \dot{p}\in C([0,T]), one can show that Y_{z}\in C([0,\varepsilon]\times(0,T]); hence, by the third equation in (5.10),

\lim_{z\searrow 0,\,s\to t}Y_{z}(z,s)=Y_{z}(0,t)=-\frac{K(b(t),t)}{\dot{p}(t)}\qquad\forall\,t\in(0,T].

Using X=Y and the identity u_{x}(Y(z,t),t)=K_{x}(Y(z,t),t)\,z+K(Y(z,t),t)/Y_{z}(z,t), which follows from u=Kv and v_{x}(Y(z,t),t)=1/Y_{z}(z,t), we then obtain

u_{x}(b(t)+,t):=\lim_{x>b(s),(x,s)\to(b(t),t)}u_{x}(x,s)=\frac{K(b(t),t)}{Y_{z}(0,t)}=-\dot{p}(t)\qquad\forall\,t\in(0,T].

Thus, (b,u)(b,u) is a classical solution of (1.12) on ×[0,T]{\mathbb{R}}\times[0,T]. If, in addition, pCα+1/2((t1,t2))p\in C^{\alpha+1/2}((t_{1},t_{2})) for some α>1/2\alpha>1/2 that is not an integer, then by local regularity, b=Y(0,)Cα((t1,t2))b=Y(0,\cdot)\in C^{\alpha}((t_{1},t_{2})). If we further have u0C2α([0,x0]),pCα+1/2([0,T])u_{0}\in C^{2\alpha}([0,x_{0}]),p\in C^{\alpha+1/2}([0,T]) for some α1/2\alpha\geqslant 1/2 that is not an integer, and all compatibility conditions up to the order α+1/2\llbracket\alpha+1/2\rrbracket are satisfied, then YC2α,α([0,ε]×[0,T])Y\in C^{2\alpha,\alpha}([0,{\varepsilon}]\times[0,T]) so b=Y(0,)Cα([0,T])b=Y(0,\cdot)\in C^{\alpha}([0,T]).

Finally, since T can be taken arbitrarily large, we obtain the assertions of Theorem 1 on (0,\infty). This completes the proof.

6 Conclusion

In earlier work, we studied the inverse first-passage problem for a one-dimensional diffusion process by relating it to a variational inequality. We investigated existence and uniqueness, as well as the asymptotic behaviour of the boundary for small times, and weak regularity of the boundary. In this paper, we studied higher-order regularity of the free boundary in the inverse first-passage problem. The main tool used was the hodograph transformation. The traditional approach to the transformation begins with some a priori regularity assumptions, and then uses a bootstrap argument to obtain higher regularity. We presented the results of this approach, but then went further, studying the regularity of the free boundary under weaker assumptions. In order to do so, we needed to perform the hodograph transformation on a carefully chosen scaling of the survival density, and to analyze the behaviour of a related family of quasi-linear parabolic equations. We expect that the method presented here can be applied to other parabolic obstacle problems.

References

  • Abundo [2006] M. Abundo. Limit at zero of the first-passage time density and the inverse problem for one-dimensional diffusions. Stochastic Analysis and Applications, 24:1119–1145, 2006.
  • Anulova [1980] S.V. Anulova. Markov times with given distribution for a Wiener process. Theory Probab. Appl., 25(2):362–366, 1980.
  • Avellaneda and Zhu [2001] M. Avellaneda and J. Zhu. Modeling the distance-to-default process of a firm. Risk, 14(12):125–129, 2001.
  • Chen et al. [2008] X. Chen, J. Chadam, L. Jiang, and W. Zheng. Convexity of the exercise boundary of the American put option on a zero dividend asset. Mathematical Finance, 18(1):185–197, 2008.
  • Chen et al. [2011] X. Chen, L. Cheng, J. Chadam, and D. Saunders. Existence and uniqueness of solutions to the inverse boundary crossing problem for diffusions. Annals of Applied Probability, 21(5):1663–1693, 2011.
  • Cheng et al. [2006] L. Cheng, X. Chen, J. Chadam, and D. Saunders. Analysis of an inverse first passage problem from risk management. SIAM Journal on Mathematical Analysis, 38(3):845–873, 2006.
  • Dudley and Gutmann [1977] R.M. Dudley and S. Gutmann. Stopping times with given laws. In Sém. de Probab. XI (Strasbourg 1975/76), volume 581 of Lecture Notes in Math., pages 51–58. Springer, 1977.
  • Ekström and Janson [2016] E. Ekström and S. Janson. The inverse first-passage problem and optimal stopping. Annals of Applied Probability, 26(5):3154–3177, 2016.
  • Friedman [1982] A. Friedman. Variational Principles and Free-Boundary Problems. John Wiley & Sons, 1982.
  • Guillemin and Pollack [1974] V. Guillemin and A. Pollack. Differential Topology. Prentice-Hall, 1974.
  • Huang and Tian [2006] H. Huang and W. Tian. Constructing default boundaries. Banques & Marchés, pages 21–28, Jan.-Feb. 2006.
  • Hull and White [2001] J. Hull and A. White. Valuing credit default swaps II: Modeling default correlations. Journal of Derivatives, 8(3):12–21, 2001.
  • Iscoe and Kreinin [2002] I. Iscoe and A. Kreinin. Default boundary problem. Algorithmics Inc., Internal Paper, 2002.
  • Kinderlehrer and Stampacchia [1980] D. Kinderlehrer and G. Stampacchia. An Introduction to Variational Inequalities and their Applications. Academic Press, New York and London, 1980.
  • Krylov [1996] N.V. Krylov. Lectures on Elliptic and Parabolic Equations in Hölder Spaces. American Mathematical Society, Providence, Rhode Island, 1996.
  • Ladyzhenskaya et al. [1968] O.A. Ladyzhenskaya, V.A. Solonnikov, and N.N. Ural’tseva. Linear and Quasi-Linear Equations of Parabolic Type, volume 23 of Translations of Mathematical Monographs. American Mathematical Society, Providence, Rhode Island, 1968.
  • Lieberman [1996] G.M. Lieberman. Second Order Parabolic Differential Equations. World Scientific, Singapore, 1996.
  • Peskir [2002] G. Peskir. On integral equations arising in the first-passage problem for Brownian motion. Integral Equations Appl., 14(4):397–423, 2002.
  • Peskir and Shiryaev [2005] G. Peskir and A.N. Shiryaev. Optimal Stopping and Free Boundary Problems. Birkhäuser, 2005.
  • Potiron [2021] Y. Potiron. Existence in the inverse Shiryaev problem. Available on arxiv.org, 2021.
  • Protter and Weinberger [1967] M.H. Protter and H.F. Weinberger. Maximum Principles in Differential Equations. Springer, 1967.
  • Zucca and Sacerdote [2009] C. Zucca and L. Sacerdote. On the inverse first-passage-time problem for a Wiener process. Annals of Applied Probability, 19(4):1319–1346, 2009.