
Concavity of solutions to degenerate elliptic equations on the sphere

Mat Langford Department of Mathematics, University of Tennessee Knoxville, TN 37996, USA [email protected]  and  Julian Scheuer Department of Mathematics, Columbia University New York, NY 10027, USA [email protected]
Abstract.

We prove the concavity of classical solutions to a wide class of degenerate elliptic differential equations on strictly convex domains of the unit sphere. The proof employs a suitable two-point maximum principle, a technique which originates in works of Korevaar, Kawohl and Kennington for equations on Euclidean domains. We emphasize that no differentiability of the differential operator is needed; only certain monotonicity and concavity properties are required.

Key words and phrases:
Degenerate elliptic equations; Fully nonlinear PDE
2010 Mathematics Subject Classification:
35B45, 35J70
The work of JS is funded by the "Deutsche Forschungsgemeinschaft" (DFG, German Research Foundation); Project "Quermassintegral preserving local curvature flows"; Grant number SCHE 1879/3-1.

1. Introduction

This paper is about classical solutions to fully nonlinear degenerate equations of the form

(1.1) f(|u|,2u)=b(,u,|u|)\displaystyle f(\lvert\nabla u\rvert,-\nabla^{2}u)=b(\cdot,u,\lvert\nabla u\rvert)

on a domain Ω𝕊n\Omega\subset\mathbb{S}^{n}, n2n\geq 2, of the unit sphere whose closure Ω¯\bar{\Omega} is geodesically convex. (We recall that a subset CC of 𝕊n\mathbb{S}^{n} is geodesically convex if no two points of CC are antipodal and the unique minimizing geodesic segment between any two points of CC lies entirely in CC.) Here

(1.2) u=(Du)and2u=(D2u),\displaystyle\nabla u=(Du)^{\sharp}\;\;\text{and}\;\;\nabla^{2}u=(D^{2}u)^{\sharp},

where DD is the Levi-Civita connection of the round metric σ\sigma on 𝕊n\mathbb{S}^{n} and the \sharp-operator is the usual index raising operator with respect to σ\sigma, so that u\nabla u is the gradient vector field and 2u\nabla^{2}u the Hessian endomorphism field on Ω\Omega.

We want to prove concavity of solutions uC2(Ω)C1(Ω¯)u\in C^{2}(\Omega)\cap C^{1}(\bar{\Omega}) to (1.1) satisfying suitable boundary conditions. The question of concavity of solutions to elliptic equations on Euclidean domains is widely studied, in particular in connection with Laplacian eigenvalues. For example, Brascamp and Lieb [4] proved that the first Dirichlet eigenfunction of the Laplacian over a convex domain is log-concave. A recent highlight in this area, preceded by many partial results, was the proof of the fundamental gap conjecture by Andrews and Clutterbuck [2]. Using a refined log-concavity estimate for the first eigenfunction, they obtained a sharp estimate for the difference between the first and second Dirichlet eigenvalues (the “fundamental gap”). An analogous result on the sphere was obtained by Seto, Wang and Wei [12].

Our main tool will be the “two-point maximum principle” approach introduced by Korevaar [9]. The idea is to estimate the function

(1.3) Z~(λ,x,y)=u(λx+(1λ)y)(λu(x)+(1λ)u(y)),0λ1,\displaystyle\tilde{Z}(\lambda,x,y)=u(\lambda x+(1-\lambda)y)-(\lambda u(x)+(1-\lambda)u(y)),\quad 0\leq\lambda\leq 1,

from below by zero using first and second derivative tests, where uu is the solution to a certain quasilinear equation with suitable structure properties. Refinements of this approach for C2C^{2}-solutions are due to Kennington [8] and Kawohl [7]. Using regularization, Sakaguchi [11] transferred this method to solutions of certain pp-Laplace equations, which may be non-smooth. Similar extensions to viscosity solutions are the content of [1] and [6]. The argument was also used to obtain proofs of differential Harnack inequalities for hypersurface flows in [3].

To our knowledge, there is no such concavity maximum principle for non-Euclidean domains. The reason seems to be a major technical obstruction: in order to make the application of the maximum principle to (1.3) work, it is necessary to differentiate a family of geodesics with respect to their end-points. This leads to the consideration of a family of Jacobi fields and their variations. In order to exploit the first order conditions, one needs detailed knowledge of the Jacobi fields along the Z~\tilde{Z}-minimizing geodesic. Even worse, the second order condition contains a further spatial derivative of the Jacobi equation, and in general it seems impossible to extract from these conditions the sign required to apply the maximum principle.

The purpose of this paper is to initiate the study of concavity maximum principles for degenerate elliptic equations on Riemannian manifolds, and we begin this journey on the unit sphere. Here the Jacobi fields are well known, and we are able to control their less well known derivatives. A first key observation is that, for technical reasons, the full three-parameter function Z~\tilde{Z} is no longer suitable. The concavity of a solution is equivalent to the concavity of

(1.4) tu(γ(t))\displaystyle t\mapsto u(\gamma(t))

for every geodesic

(1.5) γ:[1,1]Ω¯,\displaystyle\gamma\colon[-1,1]\rightarrow\bar{\Omega},

cf. [13]. By a well-known result due to Jensen [5], for continuous functions this is equivalent to midpoint-concavity, i.e.

(1.6) u(γ(0))12(u(γ(1))+u(γ(1))).\displaystyle u(\gamma(0))\geq\frac{1}{2}\left(u(\gamma(-1))+u(\gamma(1))\right).

Hence it will suffice to prove that the function

(1.7) Z:Ω¯×Ω¯\displaystyle Z\colon\bar{\Omega}\times\bar{\Omega} \displaystyle\rightarrow\mathbb{R}
(x,y)\displaystyle(x,y) u(γx,y(0))12(u(γx,y(1))+u(γx,y(1)))\displaystyle\mapsto u\left(\gamma_{x,y}(0)\right)-\frac{1}{2}(u(\gamma_{x,y}(-1))+u(\gamma_{x,y}(1)))

is non-negative, where γx,y\gamma_{x,y} is the unique minimizing geodesic from xx to yy. The mid-point approach has also been taken for equations on Euclidean domains in [6, 7].
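The mid-point inequality encoded in (1.7) can be illustrated numerically. The following sketch is our own illustration, not part of the argument: it takes for uu the height function on the open upper hemisphere, which is concave along geodesics there, and checks that ZZ is non-negative on a random sample of point pairs; the helper `sphere_midpoint` is a name we introduce for the mid-point of the minimizing great-circle arc.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere_midpoint(x, y):
    # mid-point gamma_{x,y}(0) of the minimizing great-circle arc from x to y
    m = x + y
    return m / np.linalg.norm(m)

def u(x):
    # height function u(x) = <x, e_3>: along a great circle it equals
    # c*cos(w t - phi), hence it is concave wherever it is positive
    return x[2]

for _ in range(100):
    # two random points in the open upper hemisphere (a geodesically
    # convex set; two such points are never antipodal)
    x = rng.standard_normal(3); x[2] = abs(x[2]) + 1e-3; x /= np.linalg.norm(x)
    y = rng.standard_normal(3); y[2] = abs(y[2]) + 1e-3; y /= np.linalg.norm(y)
    Z = u(sphere_midpoint(x, y)) - 0.5 * (u(x) + u(y))
    assert Z >= -1e-12   # the mid-point value dominates the average
```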

Before we state our main theorem, we need to introduce some more objects. We denote by (𝒱,)(\mathcal{V},\mathcal{L}) the category whose objects are nn-dimensional real vector spaces and whose morphisms are linear transformations. A function ff on this category is called isotropic if it acts on the subclass \mathcal{E} of endomorphisms in a GLn\mathrm{GL}_{n}-invariant way:

(1.8) 𝒲Vinv:f(𝒲)=f(V1𝒲V),\displaystyle\forall\mathcal{W}\in\mathcal{E}~{}\forall V\in\mathcal{L}_{\mathrm{inv}}\colon f(\mathcal{W})=f(V^{-1}\circ\mathcal{W}\circ V),

where inv\mathcal{L}_{\mathrm{inv}} denotes the class of all invertible linear maps, understood to map between the appropriate spaces so that (1.8) makes sense. From this property it is clear that the action of ff on real diagonalizable endomorphisms is determined by its action on the ordered set of eigenvalues (κ1,,κn)(\kappa_{1},\dots,\kappa_{n}),

(1.9) f(𝒲)=:f~(EV(𝒲))=f~(κ1,,κn).\displaystyle f(\mathcal{W})=:\tilde{f}(\operatorname{EV}(\mathcal{W}))=\tilde{f}(\kappa_{1},\dots,\kappa_{n}).

The properties we require of ff will in general only be satisfied if ff is restricted to a subclass of \mathcal{E}, and the domain of definition of ff is best described via f~\tilde{f}. Therefore we suppose that the domain of definition Γn\Gamma\subset\mathbb{R}^{n} of f~\tilde{f} is a symmetric, open and convex cone which contains the positive cone

(1.10) Γ+={κn:κi>01in}.\displaystyle\Gamma_{+}=\{\kappa\in\mathbb{R}^{n}\colon\kappa_{i}>0~{}\forall 1\leq i\leq n\}.

Denote by Symn×n()\mathrm{Sym}^{n\times n}(\mathbb{R}) the vector space of symmetric (n×n)(n\times n)-matrices. Then there exists an open domain of definition 𝒟ΓSymn×n()\mathcal{D}_{\Gamma}\subset\mathrm{Sym}^{n\times n}(\mathbb{R}) for ff consisting of matrices with eigenvalues in Γ\Gamma, such that (1.9) holds for all 𝒲𝒟Γ\mathcal{W}\in\mathcal{D}_{\Gamma}.

Abusing notation, we use the orthogonal invariance of ff to regard 𝒟Γ\mathcal{D}_{\Gamma} as a subset of the space of g¯\bar{g}-self-adjoint endomorphisms of an arbitrary nn-dimensional inner product space (E,g¯)(E,\bar{g}). Here is our main theorem:

1.1 Theorem.

Let n2n\geq 2 and let Ω𝕊n\Omega\subset\mathbb{S}^{n} be a domain whose closure Ω¯\bar{\Omega} is geodesically convex. Let Γn\Gamma\subset\mathbb{R}^{n} be a symmetric, open and convex cone containing Γ+\Gamma_{+} and let uC2(Ω)C1(Ω¯)u\in C^{2}(\Omega)\cap C^{1}(\bar{\Omega}) be a solution to

(1.11) f(|u|,2u)=b(,u,|u|)inΩ.\displaystyle f(\lvert\nabla u\rvert,-\nabla^{2}u)=b(\cdot,u,\lvert\nabla u\rvert)\quad\text{in}~{}\Omega.

Suppose the function f:[0,)×𝒟Γ¯f\colon[0,\infty)\times\mathcal{D}_{\bar{\Gamma}}\rightarrow\mathbb{R} has the following properties:

  1. (i)

    For every p[0,)p\in[0,\infty), f(p,)f(p,\cdot) is an isotropic function on 𝒟Γ¯,{\mathcal{D}}_{\bar{\Gamma}},

  2. (ii)

    ff is increasing in the first variable,

    (1.12) pqf(p,)f(q,),\displaystyle p\leq q\quad\Rightarrow\quad f(p,\cdot)\leq f(q,\cdot),
  3. (iii)

    increasing in the second variable,

    (1.13) κiλi1inf(,diag(κ1,,κn))f(,diag(λ1,,λn)),\displaystyle\kappa_{i}\leq\lambda_{i}\quad\forall 1\leq i\leq n\quad\Rightarrow\quad f(\cdot,\mathrm{diag}(\kappa_{1},\dots,\kappa_{n}))\leq f(\cdot,\mathrm{diag}(\lambda_{1},\dots,\lambda_{n})),
  4. (iv)

    convex in the second variable,

    (1.14) f(,12(A+B))12(f(,A)+f(,B))A,B𝒟Γ¯.\displaystyle f(\cdot,\tfrac{1}{2}(A+B))\leq\tfrac{1}{2}(f(\cdot,A)+f(\cdot,B))\quad\forall A,B\in\mathcal{D}_{\bar{\Gamma}}.

Suppose the function b:Ω××[0,)b\colon\Omega\times\mathbb{R}\times[0,\infty)\rightarrow\mathbb{R} is

  1. (a)

    decreasing in the third variable,

  2. (b)

    jointly concave in the first two variables:

    (1.15) b(γx,y(0),12(u(x)+u(y)),p)12b(x,u(x),p)+12b(y,u(y),p),\displaystyle b\left(\gamma_{x,y}(0),\tfrac{1}{2}(u(x)+u(y)),p\right)\geq\tfrac{1}{2}b(x,u(x),p)+\tfrac{1}{2}b(y,u(y),p),
  3. (c)

    strictly decreasing in the second variable.

If furthermore for all (x,y)Ω×Ω¯(x,y)\in\partial\Omega\times\bar{\Omega} there holds

(1.16) Dux(γ˙x,y(1))Duy(γ˙x,y(1))>0,\displaystyle Du_{x}(\dot{\gamma}_{x,y}(-1))-Du_{y}(\dot{\gamma}_{x,y}(1))>0,

then uu is concave.

1.2 Remark.

The boundary condition is slightly stronger than the ones in [8, 9]. This stems from the technical restriction that, due to the nonlinear ambient geometry, we have to use the notion of mid-point concavity; hence we cannot vary a boundary point without varying another point as well. Geometrically the condition says that at every point xΩx\in\partial\Omega, the totally geodesic hypersurface tangent to graph(u)\operatorname{graph}(u) at (x,u(x))(x,u(x)) must lie above graph(u)\operatorname{graph}(u), that at every point yΩy\in\Omega the totally geodesic hypersurface tangent to graph(u)\operatorname{graph}(u) at (y,u(y))(y,u(y)) must lie above graph(u|Ω)\operatorname{graph}(u_{|\partial\Omega}), and that one of these relations must be strict.

1.3 Example.

Let us give a large class of examples of operators to which this theorem applies. Given a convex and increasing function ψ:\psi\colon\mathbb{R}\rightarrow\mathbb{R}, define

(1.17) f~(κ1,,κn)=i=1nψ(κi).\displaystyle\tilde{f}(\kappa_{1},\dots,\kappa_{n})=\sum_{i=1}^{n}\psi(\kappa_{i}).

Then the operator function

(1.18) f(𝒲)=tr(ψ(𝒲))\displaystyle f(\mathcal{W})=\operatorname{tr}(\psi(\mathcal{W}))

associated to f~\tilde{f} is convex and increasing. Here ψ(𝒲)\psi(\mathcal{W}) is understood in the sense of operator functions, i.e. ψ\psi is applied to the eigenvalues of 𝒲\mathcal{W}. Explicit examples are

(1.19) f(2u)=Δu,f(2u)=tr(exp(2u)),\displaystyle f(-\nabla^{2}u)=-\Delta u,\quad f(-\nabla^{2}u)=\operatorname{tr}(\exp(-\nabla^{2}u)),

where the latter is the matrix exponential.
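For concreteness, the defining properties (1.8), (1.13) and (1.14) of the operator in (1.18) can be checked numerically. The following sketch is an illustration of ours, not taken from the paper; it evaluates f(𝒲)=tr(exp(𝒲))f(\mathcal{W})=\operatorname{tr}(\exp(\mathcal{W})) through the spectrum and tests the three properties on randomly generated symmetric matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def f(W):
    # tr(exp(W)) evaluated through the eigenvalues of the symmetric matrix W,
    # i.e. f~(kappa) = sum_i exp(kappa_i) as in (1.17) with psi = exp
    return np.sum(np.exp(np.linalg.eigvalsh(W)))

A = rng.standard_normal((n, n)); W1 = (A + A.T) / 2
B = rng.standard_normal((n, n)); W2 = (B + B.T) / 2
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a random orthogonal matrix

# isotropy (1.8): invariance under conjugation
assert np.isclose(f(W1), f(Q.T @ W1 @ Q))
# monotonicity (1.13): increasing all eigenvalues increases f
assert f(W1 + np.eye(n)) > f(W1)
# convexity (1.14): midpoint convexity
assert f((W1 + W2) / 2) <= (f(W1) + f(W2)) / 2 + 1e-12
```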

For the proof of Theorem 1.1 we need an easy result from linear algebra and some tedious calculations of Jacobi fields and their derivatives. We address these issues in the next two sections, before we can proceed to the proof of the theorem.

2. A result from linear algebra

The nonlinear ambient space introduces an algebraic problem. In order to deal with it, we need the following lemma from linear algebra. It should be standard, but we did not find a proper reference, so we give the proof for the reader’s convenience.

2.1 Lemma.

Let n1n\geq 1 and 𝒲\mathcal{W} and VV be symmetric (n×n)(n\times n)-matrices, such that 𝒲\mathcal{W} is non-negative definite and VV defines a bijective contraction map, i.e.

(2.1) |V(x)||x|.\displaystyle\lvert V(x)\rvert\leq\lvert x\rvert.

Then the spectra of the matrices 𝒲\mathcal{W} and V𝒲VV\circ\mathcal{W}\circ V are ordered in the sense

(2.2) λiκi,\displaystyle\lambda_{i}\leq\kappa_{i},

where λ1λn\lambda_{1}\leq\dots\leq\lambda_{n} are the eigenvalues of V𝒲VV\circ\mathcal{W}\circ V and κ1κn\kappa_{1}\leq\dots\leq\kappa_{n} are those of 𝒲\mathcal{W}. The reverse inequality holds if VV is an expansion.

Proof.

Let A\mathcal{R}_{A} denote the Rayleigh quotient of a matrix,

(2.3) A(x)=Ax,x|x|2.\displaystyle\mathcal{R}_{A}(x)=\frac{\left\langle Ax,x\right\rangle}{\lvert x\rvert^{2}}.

Then

(2.4) V𝒲V(x)=(𝒲V)x,Vx|x|2(𝒲V)x,Vx|Vx|2=𝒲(Vx),\displaystyle\mathcal{R}_{V\circ\mathcal{W}\circ V}(x)=\frac{\left\langle(\mathcal{W}\circ V)x,Vx\right\rangle}{\lvert x\rvert^{2}}\leq\frac{\left\langle(\mathcal{W}\circ V)x,Vx\right\rangle}{\lvert Vx\rvert^{2}}=\mathcal{R}_{\mathcal{W}}(Vx),

where we used the assumptions on 𝒲\mathcal{W} and VV. From the Courant min-max principle we get

(2.5) λi\displaystyle\lambda_{i} =minU{maxx{V𝒲V(x):xU}:dim(U)=i}\displaystyle=\min_{U}\left\{\max_{x}\{\mathcal{R}_{V\circ\mathcal{W}\circ V}(x)\colon x\in U\}\colon\mathrm{dim}(U)=i\right\}
minU{maxx{𝒲(Vx):xU}:dim(U)=i}\displaystyle\leq\min_{U}\left\{\max_{x}\{\mathcal{R}_{\mathcal{W}}(Vx)\colon x\in U\}\colon\mathrm{dim}(U)=i\right\}
=minU{maxx{𝒲(Vx):xV1(U)}:dim(U)=i}\displaystyle=\min_{U}\left\{\max_{x}\{\mathcal{R}_{\mathcal{W}}(Vx)\colon x\in V^{-1}(U)\}\colon\mathrm{dim}(U)=i\right\}
=minU{maxy{𝒲(y):yU}:dim(U)=i}\displaystyle=\min_{U}\left\{\max_{y}\{\mathcal{R}_{\mathcal{W}}(y)\colon y\in U\}\colon\mathrm{dim}(U)=i\right\}
=κi.\displaystyle=\kappa_{i}.

To prove the expansion case, just reverse all inequalities. ∎
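The contraction case of Lemma 2.1 is easy to test numerically. The sketch below is a sanity check of ours, with randomly generated matrices; it compares the ordered spectra of 𝒲\mathcal{W} and V𝒲VV\circ\mathcal{W}\circ V.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

B = rng.standard_normal((n, n))
W = B @ B.T                                   # symmetric, non-negative definite

S = rng.standard_normal((n, n)); S = (S + S.T) / 2
V = S / (np.linalg.norm(S, 2) + 0.1)          # symmetric, |Vx| <= |x|;
                                              # invertible for generic S

lam = np.sort(np.linalg.eigvalsh(V @ W @ V))  # lambda_1 <= ... <= lambda_n
kap = np.sort(np.linalg.eigvalsh(W))          # kappa_1  <= ... <= kappa_n
assert np.all(lam <= kap + 1e-10)             # the ordering (2.2)
```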

3. Jacobi fields

We have to differentiate (1.7) twice, and in this section we collect the geometric ingredients of this process. The derivatives of γx,y\gamma_{x,y} are Jacobi fields and variations thereof, and we will make extensive use of the Jacobi equation. Define

(3.1) Γ:Ω×Ω×[1,1]\displaystyle\Gamma\colon\Omega\times\Omega\times[-1,1] Ω\displaystyle\rightarrow\Omega
(x,y,t)\displaystyle(x,y,t) γx,y(t).\displaystyle\mapsto\gamma_{x,y}(t).

For the moment choose arbitrary coordinates (xi,yi)(x^{i},y^{i}) in Ω×Ω\Omega\times\Omega. The vector fields

(3.2) Jxi(x,y,)=Γxi(x,y,),Jyi(x,y,)=Γyi(x,y,)\displaystyle J_{x^{i}}(x,y,\cdot)=\frac{\partial\Gamma}{\partial x^{i}}(x,y,\cdot),\quad J_{y^{i}}(x,y,\cdot)=\frac{\partial\Gamma}{\partial y^{i}}(x,y,\cdot)

are, for given x,yx,y, Jacobi fields along γx,y\gamma_{x,y} with

(3.3) Jxi(x,y,1)=xi,Jxi(x,y,1)=0,Jyi(x,y,1)=0,Jyi(x,y,1)=yi,\displaystyle J_{x^{i}}(x,y,-1)=\partial_{x^{i}},\quad J_{x^{i}}(x,y,1)=0,\quad J_{y^{i}}(x,y,-1)=0,\quad J_{y^{i}}(x,y,1)=\partial_{y^{i}},

where we identify xi\partial_{x^{i}} with its pushforward under the identity map Γ(,y,1)\Gamma(\cdot,y,-1) and likewise for yi\partial_{y^{i}}. We will have to deal with the second derivatives of Γ\Gamma which are given by (neglecting the distinction between DxjD_{\partial_{x^{j}}} and DΓxjD_{\frac{\partial\Gamma}{\partial x^{j}}}, since later we will work in Riemannian normal coordinates)

(3.4) Kxixj=DxjΓxi,Kyiyj=DyjΓyi,Kxiyj=DyjΓxi,Kyixj=DxjΓyi,\displaystyle K_{x^{i}x^{j}}=D_{\partial_{x^{j}}}\frac{\partial\Gamma}{\partial x^{i}},\quad K_{y^{i}y^{j}}=D_{\partial_{y^{j}}}\frac{\partial\Gamma}{\partial y^{i}},\quad K_{x^{i}y^{j}}=D_{\partial_{y^{j}}}\frac{\partial\Gamma}{\partial x^{i}},\quad K_{y^{i}x^{j}}=D_{\partial_{x^{j}}}\frac{\partial\Gamma}{\partial y^{i}},

These satisfy a differentiated Jacobi equation, which we now derive. We use the following convention for the Riemannian curvature tensor:

(3.5) Rm(X,Y)Z=DXDYZDYDXZD[X,Y]Z,\displaystyle\operatorname{Rm}(X,Y)Z=D_{X}D_{Y}Z-D_{Y}D_{X}Z-D_{[X,Y]}Z,

i.e. on Ω𝕊n\Omega\subset\mathbb{S}^{n} we have

(3.6) Rm(X,Y)Z=σ(Y,Z)Xσ(X,Z)Y,\displaystyle\operatorname{Rm}(X,Y)Z=\sigma(Y,Z)X-\sigma(X,Z)Y,

where we recall that σ\sigma denotes the round metric. With this convention the Jacobi equation, which for example JxiJ_{x^{i}} satisfies for fixed x,yx,y, is

(3.7) 0=Dt2Jxi+Rm(Jxi,Γ˙)Γ˙=Dt2Jxi+|Γ˙|2Jxiσ(Γ˙,Jxi)Γ˙,\displaystyle 0=D_{t}^{2}J_{x^{i}}+\operatorname{Rm}(J_{x^{i}},\dot{\Gamma})\dot{\Gamma}=D_{t}^{2}J_{x^{i}}+\lvert\dot{\Gamma}\rvert^{2}J_{x^{i}}-\sigma(\dot{\Gamma},J_{x^{i}})\dot{\Gamma},

see e.g. [10, Ch. 10]. The same holds for JyiJ_{y^{i}}, and we now have to differentiate this equation with respect to xjx^{j} and yjy^{j}. The calculations are easier to follow if we do not yet insert the specific form of the Riemann tensor; however, we will already use DRm=0D\operatorname{Rm}=0. For KxixjK_{x^{i}x^{j}} we calculate:

3.1 Lemma.

There holds

(3.8) 0\displaystyle 0 =Dt2Kxixj+Rm(Kxixj,Γ˙)Γ˙+2Rm(Jxi,Γ˙)DtJxj+2Rm(Jxj,Γ˙)DtJxi\displaystyle=D_{t}^{2}K_{x^{i}x^{j}}+\operatorname{Rm}(K_{x^{i}x^{j}},\dot{\Gamma})\dot{\Gamma}+2\operatorname{Rm}(J_{x^{i}},\dot{\Gamma})D_{t}J_{x^{j}}+2\operatorname{Rm}(J_{x^{j}},\dot{\Gamma})D_{t}J_{x^{i}}

and similarly for KyiyjK_{y^{i}y^{j}}, KxiyjK_{x^{i}y^{j}} and KyixjK_{y^{i}x^{j}}.

Proof.

We only prove this for KxixjK_{x^{i}x^{j}} since the other identities are obtained in a similar manner. Differentiating the Jacobi equation and applying the first Bianchi identity (in the final step) yields

(3.9) 0\displaystyle 0 =Dxj(Dt2Jxi+Rm(Jxi,Γ˙)Γ˙)\displaystyle=D_{\partial_{x^{j}}}\left(D_{t}^{2}J_{x^{i}}+\operatorname{Rm}(J_{x^{i}},\dot{\Gamma})\dot{\Gamma}\right)
=DtDxjDtJxi+Rm(Jxj,Γ˙)DtJxi+Rm(Kxixj,Γ˙)Γ˙+Rm(Jxi,DtJxj)Γ˙\displaystyle=D_{t}D_{\partial_{x^{j}}}D_{t}J_{x^{i}}+\operatorname{Rm}(J_{x^{j}},\dot{\Gamma})D_{t}J_{x^{i}}+\operatorname{Rm}(K_{x^{i}x^{j}},\dot{\Gamma})\dot{\Gamma}+\operatorname{Rm}(J_{x^{i}},D_{t}J_{x^{j}})\dot{\Gamma}
+Rm(Jxi,Γ˙)DtJxj\displaystyle\hphantom{=}+\operatorname{Rm}(J_{x^{i}},\dot{\Gamma})D_{t}J_{x^{j}}
=Dt(DtDxjJxi+Rm(Jxj,Γ˙)Jxi)+Rm(Jxj,Γ˙)DtJxi+Rm(Kxixj,Γ˙)Γ˙\displaystyle=D_{t}\left(D_{t}D_{\partial_{x^{j}}}J_{x^{i}}+\operatorname{Rm}(J_{x^{j}},\dot{\Gamma})J_{x^{i}}\right)+\operatorname{Rm}(J_{x^{j}},\dot{\Gamma})D_{t}J_{x^{i}}+\operatorname{Rm}(K_{x^{i}x^{j}},\dot{\Gamma})\dot{\Gamma}
+Rm(Jxi,DtJxj)Γ˙+Rm(Jxi,Γ˙)DtJxj\displaystyle\hphantom{=}+\operatorname{Rm}(J_{x^{i}},D_{t}J_{x^{j}})\dot{\Gamma}+\operatorname{Rm}(J_{x^{i}},\dot{\Gamma})D_{t}J_{x^{j}}
=Dt2Kxixj+Rm(DtJxj,Γ˙)Jxi+2Rm(Jxj,Γ˙)DtJxi+Rm(Kxixj,Γ˙)Γ˙\displaystyle=D_{t}^{2}K_{x^{i}x^{j}}+\operatorname{Rm}(D_{t}J_{x^{j}},\dot{\Gamma})J_{x^{i}}+2\operatorname{Rm}(J_{x^{j}},\dot{\Gamma})D_{t}J_{x^{i}}+\operatorname{Rm}(K_{x^{i}x^{j}},\dot{\Gamma})\dot{\Gamma}
+Rm(Jxi,DtJxj)Γ˙+Rm(Jxi,Γ˙)DtJxj\displaystyle\hphantom{=}+\operatorname{Rm}(J_{x^{i}},D_{t}J_{x^{j}})\dot{\Gamma}+\operatorname{Rm}(J_{x^{i}},\dot{\Gamma})D_{t}J_{x^{j}}
=Dt2Kxixj+Rm(Kxixj,Γ˙)Γ˙+2Rm(Jxi,Γ˙)DtJxj+2Rm(Jxj,Γ˙)DtJxi\displaystyle=D_{t}^{2}K_{x^{i}x^{j}}+\operatorname{Rm}(K_{x^{i}x^{j}},\dot{\Gamma})\dot{\Gamma}+2\operatorname{Rm}(J_{x^{i}},\dot{\Gamma})D_{t}J_{x^{j}}+2\operatorname{Rm}(J_{x^{j}},\dot{\Gamma})D_{t}J_{x^{i}}

as claimed. ∎

In order to get more information about these quantities, we first note that at t=±1t=\pm 1 all KK vanish. This is due to the fact that

(3.10) Γ(,,1)=prx,Γ(,,1)=pry\displaystyle\Gamma(\cdot,\cdot,-1)=\operatorname{pr}_{x},\quad\Gamma(\cdot,\cdot,1)=\operatorname{pr}_{y}

and the second covariant derivative of the projection onto a factor of a Riemannian product vanishes. It is also convenient to pick suitable coordinates. In the following, Latin indices range from 0 to n1n-1 and Greek indices range from 11 to n1n-1. At a given point x0x_{0} we pick Riemannian normal coordinates, rotated so that

(3.11) x0(x0)=γ˙|γ˙|(x0),\displaystyle\partial_{x^{0}}(x_{0})=\frac{\dot{\gamma}}{\lvert\dot{\gamma}\rvert}(x_{0}),

while at y0y_{0} we do the same. Since we have Riemannian normal coordinates at x0x_{0} and y0y_{0}, the 2n2n vectors (xi,yi)(\partial_{x^{i}},\partial_{y^{i}}) form an orthonormal basis at (x0,y0)(x_{0},y_{0}), while y0\partial_{y^{0}} is the parallel transport of x0\partial_{x^{0}}. Furthermore, the Christoffel symbols vanish at (x0,y0)(x_{0},y_{0}) in the product space. After a possible orthogonal transformation at y0y_{0} and reordering of basis vectors, there exists a set of vector fields (Ei(t))t[1,1](E_{i}(t))_{t\in[-1,1]} parallel along γx0,y0\gamma_{x_{0},y_{0}}, such that

(3.12) Ei(1)=xi,Ei(1)=yi.\displaystyle E_{i}(-1)=\partial_{x^{i}},\quad E_{i}(1)=\partial_{y^{i}}.
3.2 Lemma.

In the coordinates constructed above, the following identities hold:

  1. (i)

    Kx0x0=Ky0y0=Kx0y0=0.K_{x^{0}x^{0}}=K_{y^{0}y^{0}}=K_{x^{0}y^{0}}=0.

  2. (ii)

    Kx0xα+Ky0yα+Kx0yα+Ky0xα=0.K_{x^{0}x^{\alpha}}+K_{y^{0}y^{\alpha}}+K_{x^{0}y^{\alpha}}+K_{y^{0}x^{\alpha}}=0.

  3. (iii)

    The quantities

    (3.13) 𝒦2=(Kxαxβ+Kyαyβ+Kxαyβ+Kyαxβ)ξαξβ\displaystyle\mathcal{K}_{2}=\left(K_{x^{\alpha}x^{\beta}}+K_{y^{\alpha}y^{\beta}}+K_{x^{\alpha}y^{\beta}}+K_{y^{\alpha}x^{\beta}}\right)\xi^{\alpha}\xi^{\beta}

    and

    (3.14) 𝒦3=(Kxαxβ+KyαyβKxαyβKyαxβ)ξαξβ\displaystyle\mathcal{K}_{3}=\left(K_{x^{\alpha}x^{\beta}}+K_{y^{\alpha}y^{\beta}}-K_{x^{\alpha}y^{\beta}}-K_{y^{\alpha}x^{\beta}}\right)\xi^{\alpha}\xi^{\beta}

    where ξ\xi is any parallel vector field, satisfy

    (3.15) 0\displaystyle 0 =Dt2𝒦2+Rm(𝒦2,Γ˙)Γ˙+4Rm(Jxα+Jyα,Γ˙)(DtJxβ+DtJyβ)ξαξβ,\displaystyle=D_{t}^{2}\mathcal{K}_{2}+\operatorname{Rm}(\mathcal{K}_{2},\dot{\Gamma})\dot{\Gamma}+4\operatorname{Rm}(J_{x^{\alpha}}+J_{y^{\alpha}},\dot{\Gamma})(D_{t}J_{x^{\beta}}+D_{t}J_{y^{\beta}})\xi^{\alpha}\xi^{\beta},

    respectively

    (3.16) 0\displaystyle 0 =Dt2𝒦3+Rm(𝒦3,Γ˙)Γ˙+4Rm(JxαJyα,Γ˙)(DtJxβDtJyβ)ξαξβ.\displaystyle=D_{t}^{2}\mathcal{K}_{3}+\operatorname{Rm}(\mathcal{K}_{3},\dot{\Gamma})\dot{\Gamma}+4\operatorname{Rm}(J_{x^{\alpha}}-J_{y^{\alpha}},\dot{\Gamma})(D_{t}J_{x^{\beta}}-D_{t}J_{y^{\beta}})\xi^{\alpha}\xi^{\beta}.
Proof.

(i) Since x0x_{0} and y0y_{0} are not conjugate points, the Jacobi fields are uniquely determined by their boundary values. Hence, suppressing (x0,y0)(x_{0},y_{0}),

(3.17) Jx0(t)=12(1t)E0(t),Jy0(t)=12(1+t)E0(t).\displaystyle J_{x^{0}}(t)=\frac{1}{2}(1-t)E_{0}(t),\quad J_{y^{0}}(t)=\frac{1}{2}(1+t)E_{0}(t).

However, E0E_{0} itself is tangent to the normalized geodesic. Plugging this into (3.8) shows that these KK are themselves Jacobi fields, though with vanishing boundary values. Hence they are identically zero.

(ii) For any 1αn11\leq\alpha\leq n-1 define

(3.18) 𝒦1=Kx0xα+Ky0yα+Kx0yα+Ky0xα.\displaystyle\mathcal{K}_{1}=K_{x^{0}x^{\alpha}}+K_{y^{0}y^{\alpha}}+K_{x^{0}y^{\alpha}}+K_{y^{0}x^{\alpha}}.

Then 𝒦1\mathcal{K}_{1} solves

(3.19) 0\displaystyle 0 =Dt2𝒦1+Rm(𝒦1,Γ˙)Γ˙+2(Rm(Jxα,Γ˙)DtJx0+Rm(Jyα,Γ˙)DtJy0\displaystyle=D_{t}^{2}\mathcal{K}_{1}+\operatorname{Rm}(\mathcal{K}_{1},\dot{\Gamma})\dot{\Gamma}+2\left(\operatorname{Rm}(J_{x^{\alpha}},\dot{\Gamma})D_{t}J_{x^{0}}+\operatorname{Rm}(J_{y^{\alpha}},\dot{\Gamma})D_{t}J_{y^{0}}\right.
+Rm(Jyα,Γ˙)DtJx0+Rm(Jxα,Γ˙)DtJy0)\displaystyle\hphantom{=}\left.+\operatorname{Rm}(J_{y^{\alpha}},\dot{\Gamma})D_{t}J_{x^{0}}+\operatorname{Rm}(J_{x^{\alpha}},\dot{\Gamma})D_{t}J_{y^{0}}\right)
=Dt2𝒦1+Rm(𝒦1,Γ˙)Γ˙,\displaystyle=D_{t}^{2}\mathcal{K}_{1}+\operatorname{Rm}(\mathcal{K}_{1},\dot{\Gamma})\dot{\Gamma},

where we have used

(3.20) Jx0+Jy0=E0,DtJx0=12E0=DtJy0.\displaystyle J_{x^{0}}+J_{y^{0}}=E_{0},\quad D_{t}J_{x^{0}}=-\frac{1}{2}E_{0}=-D_{t}J_{y^{0}}.

Hence 𝒦1\mathcal{K}_{1} is, as a Jacobi field with vanishing boundary values, identically zero.

(iii)  We add up all four equations of the form (3.8) with the appropriate sign. We treat 𝒦2\mathcal{K}_{2} and 𝒦3\mathcal{K}_{3} simultaneously by using ±\pm in the appropriate places. We get

(3.21) 0\displaystyle 0 =Dt2Kxαxβ+Rm(Kxαxβ,Γ˙)Γ˙+2Rm(Jxα,Γ˙)DtJxβ+2Rm(Jxβ,Γ˙)DtJxα\displaystyle=D_{t}^{2}K_{x^{\alpha}x^{\beta}}+\operatorname{Rm}(K_{x^{\alpha}x^{\beta}},\dot{\Gamma})\dot{\Gamma}+2\operatorname{Rm}(J_{x^{\alpha}},\dot{\Gamma})D_{t}J_{x^{\beta}}+2\operatorname{Rm}(J_{x^{\beta}},\dot{\Gamma})D_{t}J_{x^{\alpha}}
+Dt2Kyαyβ+Rm(Kyαyβ,Γ˙)Γ˙+2Rm(Jyα,Γ˙)DtJyβ+2Rm(Jyβ,Γ˙)DtJyα\displaystyle\hphantom{=}+D_{t}^{2}K_{y^{\alpha}y^{\beta}}+\operatorname{Rm}(K_{y^{\alpha}y^{\beta}},\dot{\Gamma})\dot{\Gamma}+2\operatorname{Rm}(J_{y^{\alpha}},\dot{\Gamma})D_{t}J_{y^{\beta}}+2\operatorname{Rm}(J_{y^{\beta}},\dot{\Gamma})D_{t}J_{y^{\alpha}}
±Dt2Kxαyβ±Rm(Kxαyβ,Γ˙)Γ˙±2Rm(Jxα,Γ˙)DtJyβ±2Rm(Jyβ,Γ˙)DtJxα\displaystyle\hphantom{=}\pm D_{t}^{2}K_{x^{\alpha}y^{\beta}}\pm\operatorname{Rm}(K_{x^{\alpha}y^{\beta}},\dot{\Gamma})\dot{\Gamma}\pm 2\operatorname{Rm}(J_{x^{\alpha}},\dot{\Gamma})D_{t}J_{y^{\beta}}\pm 2\operatorname{Rm}(J_{y^{\beta}},\dot{\Gamma})D_{t}J_{x^{\alpha}}
±Dt2Kyαxβ±Rm(Kyαxβ,Γ˙)Γ˙±2Rm(Jyα,Γ˙)DtJxβ±2Rm(Jxβ,Γ˙)DtJyα.\displaystyle\hphantom{=}\pm D_{t}^{2}K_{y^{\alpha}x^{\beta}}\pm\operatorname{Rm}(K_{y^{\alpha}x^{\beta}},\dot{\Gamma})\dot{\Gamma}\pm 2\operatorname{Rm}(J_{y^{\alpha}},\dot{\Gamma})D_{t}J_{x^{\beta}}\pm 2\operatorname{Rm}(J_{x^{\beta}},\dot{\Gamma})D_{t}J_{y^{\alpha}}.

Applying this to ξαξβ\xi^{\alpha}\xi^{\beta} gives

(3.22) 0\displaystyle 0 =Dt2𝒦2/3+Rm(𝒦2/3,Γ˙)Γ˙+4Rm(Jxα,Γ˙)DtJxβξαξβ\displaystyle=D_{t}^{2}\mathcal{K}_{2/3}+\operatorname{Rm}(\mathcal{K}_{2/3},\dot{\Gamma})\dot{\Gamma}+4\operatorname{Rm}(J_{x^{\alpha}},\dot{\Gamma})D_{t}J_{x^{\beta}}\xi^{\alpha}\xi^{\beta}
+4Rm(Jyα,Γ˙)DtJyβξαξβ±4Rm(Jxα,Γ˙)DtJyβξαξβ\displaystyle\hphantom{=}+4\operatorname{Rm}(J_{y^{\alpha}},\dot{\Gamma})D_{t}J_{y^{\beta}}\xi^{\alpha}\xi^{\beta}\pm 4\operatorname{Rm}(J_{x^{\alpha}},\dot{\Gamma})D_{t}J_{y^{\beta}}\xi^{\alpha}\xi^{\beta}
±4Rm(Jyα,Γ˙)DtJxβξαξβ\displaystyle\hphantom{=}\pm 4\operatorname{Rm}(J_{y^{\alpha}},\dot{\Gamma})D_{t}J_{x^{\beta}}\xi^{\alpha}\xi^{\beta}
=Dt2𝒦2/3+Rm(𝒦2/3,Γ˙)Γ˙+4Rm(Jxα±Jyα,Γ˙)DtJxβξαξβ\displaystyle=D_{t}^{2}\mathcal{K}_{2/3}+\operatorname{Rm}(\mathcal{K}_{2/3},\dot{\Gamma})\dot{\Gamma}+4\operatorname{Rm}(J_{x^{\alpha}}\pm J_{y^{\alpha}},\dot{\Gamma})D_{t}J_{x^{\beta}}\xi^{\alpha}\xi^{\beta}
+4Rm(Jyα±Jxα,Γ˙)DtJyβξαξβ.\displaystyle\hphantom{=}+4\operatorname{Rm}(J_{y^{\alpha}}\pm J_{x^{\alpha}},\dot{\Gamma})D_{t}J_{y^{\beta}}\xi^{\alpha}\xi^{\beta}.

A quick check of both cases reveals that 𝒦2\mathcal{K}_{2} and 𝒦3\mathcal{K}_{3} satisfy the desired equations. ∎

3.3 Lemma.

The quantities 𝒦2\mathcal{K}_{2} and 𝒦3\mathcal{K}_{3} from Lemma 3.2 both vanish at t=0t=0.

Proof.

From (3.7) we obtain

(3.23) Jxα(t)=vα(1t)Eα(t),Jyα(t)=vα(1+t)Eα(t)\displaystyle J_{x^{\alpha}}(t)=v_{\alpha}(1-t)E_{\alpha}(t),\quad J_{y^{\alpha}}(t)=v_{\alpha}(1+t)E_{\alpha}(t)

with a function vαv_{\alpha} that satisfies

(3.24) v¨α=|Γ˙|2vα,vα(0)=0,vα(2)=1.\displaystyle\ddot{v}_{\alpha}=-\lvert\dot{\Gamma}\rvert^{2}v_{\alpha},\quad v_{\alpha}(0)=0,\quad v_{\alpha}(2)=1.

Hence, from multiplying (3.15) and (3.16) with some EβE_{\beta}, we see that 𝒦2/3,EβEβ\left\langle\mathcal{K}_{2/3},E_{\beta}\right\rangle E_{\beta} is a Jacobi field with vanishing boundary values and hence zero. Thus 𝒦2\mathcal{K}_{2} and 𝒦3\mathcal{K}_{3} are both multiples of Γ˙\dot{\Gamma} and again from (3.15) and (3.16) we get

(3.25) 0\displaystyle 0 =t2𝒦2/3,Γ˙4Rm(Jxα±Jyα,Γ˙,Γ˙,DtJxβ±DtJyβ)ξαξβ\displaystyle=\partial_{t}^{2}\left\langle\mathcal{K}_{2/3},\dot{\Gamma}\right\rangle-4\operatorname{Rm}(J_{x^{\alpha}}\pm J_{y^{\alpha}},\dot{\Gamma},\dot{\Gamma},D_{t}J_{x^{\beta}}\pm D_{t}J_{y^{\beta}})\xi^{\alpha}\xi^{\beta}
=4(vα(1t)±vα(1+t))t(vβ(1t)±vβ(1+t))Rm(Eα,Γ˙,Γ˙,Eβ)ξαξβ\displaystyle=-4(v_{\alpha}(1-t)\pm v_{\alpha}(1+t))\partial_{t}(v_{\beta}(1-t)\pm v_{\beta}(1+t))\operatorname{Rm}(E_{\alpha},\dot{\Gamma},\dot{\Gamma},E_{\beta})\xi^{\alpha}\xi^{\beta}
+t2𝒦2/3,Γ˙.\displaystyle\hphantom{=}+\partial_{t}^{2}\left\langle\mathcal{K}_{2/3},\dot{\Gamma}\right\rangle.

In this sum, the contribution of terms involving αβ\alpha\neq\beta is zero, because in this case the curvature term is zero. So we get

(3.26) 0\displaystyle 0 =2|Γ˙|2t(vα(1t)±vα(1+t))2ξαξα+t2𝒦2/3,Γ˙\displaystyle=-2\lvert\dot{\Gamma}\rvert^{2}\partial_{t}(v_{\alpha}(1-t)\pm v_{\alpha}(1+t))^{2}\xi^{\alpha}\xi^{\alpha}+\partial_{t}^{2}\left\langle\mathcal{K}_{2/3},\dot{\Gamma}\right\rangle
=:g˙(t)+h¨(t).\displaystyle=:-\dot{g}(t)+\ddot{h}(t).

We solve this equation by integration, subject to the boundary values h(1)=h(1)=0h(-1)=h(1)=0, and evaluate hh at t=0t=0. With a constant of integration aa we get

(3.27) h˙(t)=1tg˙+a=g(t)g(1)+a\displaystyle\dot{h}(t)=\int_{-1}^{t}\dot{g}+a=g(t)-g(-1)+a

and

(3.28) h(t)=1tg(1+t)g(1)+(1+t)a.\displaystyle h(t)=\int_{-1}^{t}g-(1+t)g(-1)+(1+t)a.

Since h(1)=0h(1)=0, we find

(3.29) a=g(1)1211g\displaystyle a=g(-1)-\frac{1}{2}\int_{-1}^{1}g

and hence we have

(3.30) h(t)=1tg1+t211g.\displaystyle h(t)=\int_{-1}^{t}g-\frac{1+t}{2}\int_{-1}^{1}g.

Due to the symmetry of the function gg around 0 we get h(0)=0h(0)=0 and the proof is complete. ∎
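The cancellation h(0)=0h(0)=0 can also be confirmed numerically. The sketch below is our own check, not part of the proof; it uses the explicit solution v(t)=sin(kt)/sin(2k)v(t)=\sin(kt)/\sin(2k) of (3.24), which appears as (4.7) below, with kk standing for |Γ˙|\lvert\dot{\Gamma}\rvert.

```python
import numpy as np

k = 0.7                                  # k stands for |Gamma'|, 0 < k < pi/2
t = np.linspace(-1.0, 1.0, 20001)
dt = t[1] - t[0]
v = lambda s: np.sin(k * s) / np.sin(2 * k)   # solves (3.24)

for sign in (+1.0, -1.0):                # the K_2 and K_3 cases
    # g(t) = 2 k^2 (v(1-t) +/- v(1+t))^2, which is symmetric about t = 0
    g = 2 * k**2 * (v(1 - t) + sign * v(1 + t))**2
    # cumulative trapezoidal rule: G[i] approximates int_{-1}^{t_i} g
    G = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * dt)))
    h = G - (1 + t) / 2 * G[-1]          # formula (3.30)
    assert abs(h[0]) < 1e-12 and abs(h[-1]) < 1e-12   # boundary values
    assert abs(h[len(t) // 2]) < 1e-8    # h(0) = 0 by symmetry of g
```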

4. Proof of Theorem 1.1

To prove Theorem 1.1, we need to prove that the two-point function

(4.1) Z(x,y)=u(γx,y(0))12(u(x)+u(y))\displaystyle Z(x,y)=u(\gamma_{x,y}(0))-\frac{1}{2}(u(x)+u(y))

defined on Ω¯×Ω¯\bar{\Omega}\times\bar{\Omega} is non-negative at its minimal points. Since ZZ vanishes on the diagonal, we may assume that xyx\neq y at a minimizing pair. If one of the minimizing points, say xx, lies on the boundary, then we show that by moving xx and yy towards each other at the same speed (i.e. while fixing the mid-point zz), we can decrease the value of ZZ. To see this, consider the function

(4.2) c(t)=u(z)12(u(γx,y(t1))+u(γx,y(1t))).\displaystyle c(t)=u(z)-\frac{1}{2}(u(\gamma_{x,y}(t-1))+u(\gamma_{x,y}(1-t))).

There holds

(4.3) 2c˙(0)=Dux(γ˙x,y(1))Duy(γ˙x,y(1))>0.\displaystyle-2\dot{c}(0)=Du_{x}(\dot{\gamma}_{x,y}(-1))-Du_{y}(\dot{\gamma}_{x,y}(1))>0.

Hence c˙(0)\dot{c}(0) is strictly negative by the boundary condition (1.16), and ZZ must achieve its minimum at interior points.

Let z=γx,y(0)z=\gamma_{x,y}(0). The first order conditions on ZZ are (suppressing the spatial arguments for JJ)

(4.4) 0=Duz(Jxi(0))12iux=vi(1)iuz12iux\displaystyle 0=Du_{z}(J_{x^{i}}(0))-\frac{1}{2}\partial_{i}u_{x}=v_{i}(1)\partial_{{i}}u_{z}-\frac{1}{2}\partial_{i}u_{x}

and

(4.5) 0=Duz(Jyi(0))12iuy=vi(1)iuz12iuy,\displaystyle 0=Du_{z}(J_{y^{i}}(0))-\frac{1}{2}\partial_{i}u_{y}=v_{i}(1)\partial_{{i}}u_{z}-\frac{1}{2}\partial_{i}u_{y},

with

(4.6) v0(1)=12,vα(1)=:v(1),\displaystyle v_{0}(1)=\frac{1}{2},\quad v_{\alpha}(1)=:v(1),

and the latter solves (3.24) and hence

(4.7) v(t)=sin(|Γ˙|t)sin(2|Γ˙|).\displaystyle v(t)=\frac{\sin(\lvert\dot{\Gamma}\rvert t)}{\sin(2\lvert\dot{\Gamma}\rvert)}.

We use the bases

(4.8) Ei(1)=xi,Ei(0),Ei(1)=yi,\displaystyle E_{i}(-1)=\partial_{x^{i}},\quad E_{i}(0),\quad E_{i}(1)=\partial_{y^{i}},

which are orthonormal at the respective points x,z,yx,z,y, to identify all three tangent spaces canonically with a single Euclidean n\mathbb{R}^{n}. In these coordinates, define an endomorphism VV through its matrix representation,

(4.9) V=diag(1,12v(1),,12v(1)).\displaystyle V=\mathrm{diag}\left(1,\frac{1}{2v(1)},\dots,\frac{1}{2v(1)}\right).

Then

(4.10) |uz|=|V(uy)|=|V(ux)||ux|=|uy|,\displaystyle|\nabla u_{z}|=|V(\nabla u_{y})|=|V(\nabla u_{x})|\leq\lvert\nabla u_{x}\rvert=\lvert\nabla u_{y}\rvert,

since

(4.11) v(1)=12cos|Γ˙|12.\displaystyle v(1)=\frac{1}{2\cos\lvert\dot{\Gamma}\rvert}\geq\frac{1}{2}.
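The elementary facts about vv used here can be verified directly. The following sketch is our own check, with kk standing for |Γ˙|(0,π/2)\lvert\dot{\Gamma}\rvert\in(0,\pi/2); it confirms that (4.7) solves (3.24) and that the identity (4.11) holds.

```python
import numpy as np

for k in (0.1, 0.5, 1.0, 1.4):           # sample values of |Gamma'| in (0, pi/2)
    v = lambda s: np.sin(k * s) / np.sin(2 * k)   # candidate solution (4.7)
    # v'' = -k^2 v, via a symmetric difference quotient (checks (3.24))
    eps, s0 = 1e-4, 0.3
    vpp = (v(s0 + eps) - 2 * v(s0) + v(s0 - eps)) / eps**2
    assert np.isclose(vpp, -k**2 * v(s0), rtol=1e-3)
    assert v(0) == 0 and np.isclose(v(2), 1)      # boundary values in (3.24)
    # the identity (4.11): v(1) = 1/(2 cos k) >= 1/2
    assert np.isclose(v(1), 1 / (2 * np.cos(k))) and v(1) >= 0.5
```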

The second order condition implies that for all ξn\xi\in\mathbb{R}^{n} and all a,bGLn()a,b\in\mathrm{GL}_{n}(\mathbb{R})

(4.12) \displaystyle 0\leq\xi^{i}\xi^{j}(a_{i}^{p}\partial_{x^{p}}+b_{i}^{p}\partial_{y^{p}})(a_{j}^{q}\partial_{x^{q}}+b_{j}^{q}\partial_{y^{q}})Z
=\xi^{i}\xi^{j}\left(a_{i}^{p}a_{j}^{q}\partial_{x^{p}}\partial_{x^{q}}Z+a_{i}^{p}b_{j}^{q}\partial_{x^{p}}\partial_{y^{q}}Z+b^{p}_{i}a^{q}_{j}\partial_{y^{p}}\partial_{x^{q}}Z+b_{i}^{p}b_{j}^{q}\partial_{y^{p}}\partial_{y^{q}}Z\right)
=\xi^{i}\xi^{j}a_{i}^{p}a_{j}^{q}\left(D^{2}u_{z}(J_{x^{p}},J_{x^{q}})+Du_{z}(K_{x^{p}x^{q}})-\frac{1}{2}D^{2}_{pq}u_{x}\right)
+\xi^{i}\xi^{j}a_{i}^{p}b_{j}^{q}(D^{2}u_{z}(J_{x^{p}},J_{y^{q}})+Du_{z}(K_{x^{p}y^{q}}))
+\xi^{i}\xi^{j}b^{p}_{i}a^{q}_{j}(D^{2}u_{z}(J_{y^{p}},J_{x^{q}})+Du_{z}(K_{y^{p}x^{q}}))
+\xi^{i}\xi^{j}b_{i}^{p}b_{j}^{q}\left(D^{2}u_{z}(J_{y^{p}},J_{y^{q}})+Du_{z}(K_{y^{p}y^{q}})-\frac{1}{2}D^{2}_{pq}u_{y}\right)
=\frac{1}{4}\xi^{i}\xi^{j}(V^{-1}\circ c)_{i}^{p}(V^{-1}\circ c)_{j}^{q}D^{2}_{pq}u_{z}-\frac{1}{2}\xi^{i}\xi^{j}\left(a_{i}^{p}a_{j}^{q}D^{2}_{pq}u_{x}+b_{i}^{p}b_{j}^{q}D^{2}_{pq}u_{y}\right)
+\xi^{i}\xi^{j}Du_{z}\left(a_{i}^{p}a_{j}^{q}K_{x^{p}x^{q}}+a_{i}^{p}b_{j}^{q}K_{x^{p}y^{q}}+b^{p}_{i}a^{q}_{j}K_{y^{p}x^{q}}+b_{i}^{p}b_{j}^{q}K_{y^{p}y^{q}}\right),

where

(4.13) \displaystyle c:=a+b.
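
To see how the four $D^{2}u_{z}$-terms in (4.12) collapse into the single term involving $V^{-1}\circ c$, the following sketch assumes, as the first order conditions (4.4)-(4.6) suggest, that the Jacobi fields at $t=0$ satisfy $J_{x^{i}}(0)=J_{y^{i}}(0)=v_{i}(1)E_{i}(0)$ (no summation), so that $\mathrm{diag}(v_{0}(1),v(1),\dots,v(1))=\tfrac{1}{2}V^{-1}$ by (4.9):

```latex
\xi^{i}\xi^{j}\bigl(a_{i}^{p}a_{j}^{q}+a_{i}^{p}b_{j}^{q}
    +b_{i}^{p}a_{j}^{q}+b_{i}^{p}b_{j}^{q}\bigr)D^{2}u_{z}(J_{p},J_{q})
  \;=\; \xi^{i}\xi^{j}\,D^{2}u_{z}\bigl(c_{i}^{p}J_{p},\,c_{j}^{q}J_{q}\bigr)
  \;=\; \frac{1}{4}\,\xi^{i}\xi^{j}\,
    (V^{-1}\circ c)_{i}^{p}(V^{-1}\circ c)_{j}^{q}\,D^{2}_{pq}u_{z},
```

where $J_{p}:=J_{x^{p}}(0)=J_{y^{p}}(0)=\tfrac{1}{2}(V^{-1})_{p}^{\,q}E_{q}(0)$.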

We test this relation twice with suitable $a$ and $b$ to deduce a concavity relation. First let $a=-b=\operatorname{id}$. Then we obtain, in the sense of bilinear forms on $\mathbb{R}^{n}$,

(4.14) \displaystyle D^{2}_{ij}u_{x}+D^{2}_{ij}u_{y}\leq 0,

since in this case $c=0$ and the term involving $K$ also vanishes, in view of the $\mathcal{K}_{3}(0)=0$ case in Lemmata 3.2 and 3.3. By the same reasoning, now using $\mathcal{K}_{2}(0)=0$, setting $a=b=V/2$ yields

(4.15) \displaystyle-D^{2}_{ij}u_{z}\leq-\frac{1}{2}(D^{2}u_{x}+D^{2}u_{y})(V_{i},V_{j}).
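
As a check of (4.15): with $a=b=V/2$ we have $c=V$ and $V^{-1}\circ c=\operatorname{id}$, so (omitting the $K$-terms, which vanish by $\mathcal{K}_{2}(0)=0$) the inequality (4.12) reduces to

```latex
0 \;\leq\; \frac{1}{4}\,\xi^{i}\xi^{j}D^{2}_{ij}u_{z}
   \;-\;\frac{1}{2}\cdot\frac{1}{4}\,\xi^{i}\xi^{j}\,
        V_{i}^{p}V_{j}^{q}\bigl(D^{2}_{pq}u_{x}+D^{2}_{pq}u_{y}\bigr),
```

and multiplying by $4$ and rearranging gives exactly (4.15).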

As we are working in Euclidean coordinates, we may view both sides as lying in $\mathcal{D}_{\bar{\Gamma}}$. Hence we may apply $f(\lvert\nabla u_{z}\rvert,\cdot)$ after raising an index with respect to the Euclidean scalar product, and use the naturality of $f$ to obtain

(4.16) \displaystyle b(z,u_{z},\lvert\nabla u_{z}\rvert)=f(\lvert\nabla u_{z}\rvert,-\nabla^{2}u_{z})\leq f\left(\lvert\nabla u_{z}\rvert,-\frac{1}{2}(D^{2}u_{x}+D^{2}u_{y})^{\sharp}\right),

where we have used the monotonicity of $f$ in the second variable and Lemma 2.1. Using the convexity and the other monotonicity properties, we get

(4.17) \displaystyle b(z,u_{z},\lvert\nabla u_{z}\rvert)\leq\frac{1}{2}f(\lvert\nabla u_{x}\rvert,-\nabla^{2}u_{x})+\frac{1}{2}f(\lvert\nabla u_{y}\rvert,-\nabla^{2}u_{y})
=\frac{1}{2}b(x,u_{x},\lvert\nabla u_{x}\rvert)+\frac{1}{2}b(y,u_{y},\lvert\nabla u_{y}\rvert)
\leq\frac{1}{2}b(x,u_{x},\lvert\nabla u_{z}\rvert)+\frac{1}{2}b(y,u_{y},\lvert\nabla u_{z}\rvert)
\leq b\left(z,\frac{1}{2}(u_{x}+u_{y}),\lvert\nabla u_{z}\rvert\right).

Hence, due to the strict monotonicity in the second variable,

(4.18) \displaystyle u_{z}\geq\frac{1}{2}(u_{x}+u_{y})

and the proof is complete.

4.1 Remark.

The method of introducing different variational directions in the $x$ and $y$ variables was already used in [7, Thm. 3.13], although in a simplified (quasilinear Euclidean) setting, where it yielded considerably more information.

Acknowledgments

This work was made possible through a research scholarship which JS received from the DFG and carried out at Columbia University in New York. JS would like to thank the DFG, Columbia University and especially Prof. Simon Brendle for their support.

References

  • [1] Olivier Alvarez, Jean-Michel Lasry, and Pierre-Louis Lions, Convex viscosity solutions and state constraints, J. Math. Pures Appl. 76 (1997), no. 3, 265–288.
  • [2] Ben Andrews and Julie Clutterbuck, Proof of the fundamental gap conjecture, J. Am. Math. Soc. 24 (2011), no. 3, 899–916.
  • [3] Theodora Bourni and Mat Langford, Differential Harnack inequalities via concavity of the arrival time, arxiv:1912.06623, 2019.
  • [4] Herm Brascamp and Elliott Lieb, On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation, J. Funct. Anal. 22 (1976), no. 4, 366–389.
  • [5] Johan Jensen, Sur les fonctions convexes et les inégalités entre les valeurs moyennes, Acta Math. 30 (1906), 175–193.
  • [6] Petri Juutinen, Concavity maximum principle for viscosity solutions of singular equations, NoDEA 17 (2010), no. 5, 601–618.
  • [7] Bernd Kawohl, Rearrangements and convexity of level sets in PDE, Lecture notes in mathematics, vol. 1150, Springer, Berlin-Heidelberg, 1985.
  • [8] Alan Kennington, Power concavity and boundary value problems, Indiana Univ. Math. J. 34 (1985), no. 3, 687–704.
  • [9] Nicholas Korevaar, Convex solutions to nonlinear elliptic and parabolic boundary value problems, Indiana Univ. Math. J. 32 (1983), no. 4, 603–614.
  • [10] John Lee, Riemannian manifolds: An introduction to curvature, Graduate Texts in Mathematics, vol. 176, Springer New York, 1997.
  • [11] Shigeru Sakaguchi, Concavity properties of solutions to some degenerate quasilinear elliptic Dirichlet problems, Ann. Sc. Norm. Super. Pisa Cl. Sci. (4) 14 (1987), no. 3, 403–421.
  • [12] Shoo Seto, Lili Wang, and Guofang Wei, Sharp fundamental gap estimate on convex domains of sphere, J. Differ. Geom. 112 (2019), no. 2, 347–389.
  • [13] Constantin Udriste, Convex functions and optimization methods on Riemannian manifolds, Mathematics and its applications, vol. 297, Springer Science+Business Media, Dordrecht, 1994.