
First and second derivative Hölder estimates for generated Jacobian equations

Cale Rankin
Abstract.

We prove two Hölder regularity results for solutions of generated Jacobian equations. First, that under the A3 condition and the assumption of nonnegative L^{p} data, solutions are C^{1,\alpha} for an \alpha that is sharp. Then, under the additional assumption of positive Dini continuous data, we prove a C^{2} estimate. Thus the equation is uniformly elliptic, and when the data is Hölder continuous, solutions are in C^{2,\alpha}.

This research is supported by ARC DP 200101084

1. Introduction

Generated Jacobian equations are a class of PDE which model problems in geometric optics and have recently seen applications to monopolist problems in economics [19, 15]. These equations are of the form

(1) \det DY(\cdot,u,Du)=\psi(\cdot,u,Du)\text{ in }\Omega,

where Y:\mathbf{R}^{n}\times\mathbf{R}\times\mathbf{R}^{n}\rightarrow\mathbf{R}^{n}, \Omega is a domain, and u:\Omega\rightarrow\mathbf{R} satisfies a convexity condition that ensures the PDE is elliptic.

The precise form of this condition on u and the structure of Y requires a number of definitions which we introduce in Section 2. However, the framework we work in includes, in addition to the applications just listed, two well-known special cases. First, the Monge–Ampère equation (take Y(\cdot,u,Du)=Du). Second, the Monge–Ampère type equations from optimal transport (take Y(\cdot,u,Du)=Y(\cdot,Du) as the optimal transport map depending on the corresponding potential function u for the optimal transport problem). Indeed, generated Jacobian equations (GJEs) were introduced to treat the aforementioned new applications using the techniques from optimal transport. That is what we do here: two fundamental results for the regularity theory in optimal transport are the C^{1,\alpha} result of Liu [10] and the C^{2,\alpha} results of Liu, Trudinger, and Wang [11]. These are based on the corresponding results of Caffarelli in the Monge–Ampère case, respectively [1] and [2]. In this paper we extend the results of [10, 11] to generated Jacobian equations. Our main results are stated precisely at the conclusion of Section 2, once the necessary definitions have been introduced. The importance of our results is that, first, the C^{1,\alpha} (or at least some C^{1}) result is a necessary assumption in the derivation of models in geometric optics and, second, the C^{2,\alpha} result puts us in the regime of classical elliptic PDE and lets us bootstrap to higher regularity.

We note that the corresponding C^{1,\alpha} result of Loeper in the optimal transport setting [13] has been extended to generated Jacobian equations by Jeong [8]. Thus our key contribution for the C^{1,\alpha} result is improving the value of \alpha so that it is sharp. As we explain at the conclusion of Section 2 this yields a corresponding improvement on the C^{2,\alpha} result. An outline of the paper is provided by the table of contents.

2. Generating functions, g-convexity, and GJEs

In this section we state the essential definitions. Further introductory material can be found in the expository article of Guillen [6], and more detailed outlines of the whole theory in [19, 20, 7, 16].

The framework for GJEs was introduced by Trudinger [19] and is built around a generalized notion of convexity. The generating function, which we now define, plays a central role, essentially that of affine hyperplanes in classical convexity.

Definition 1.

A generating function is a function, which we denote by g, satisfying the conditions A0, A1, A1*, and A2.

A0. g\in C^{4}(\overline{\Gamma}) where \Gamma is a bounded domain of the form

\Gamma=\{(x,y,z);x\in U,y\in V,z\in I_{x,y}\},

for domains U,V\subset\mathbf{R}^{n} and I_{x,y} an open interval for each (x,y)\in U\times V. Moreover we assume there is an open interval J such that g(x,y,I_{x,y})\supset J for each (x,y)\in U\times V.

A1. For each (x,u,p)\in\mathcal{U} defined by

\mathcal{U}=\{(x,g(x,y,z),g_{x}(x,y,z));(x,y,z)\in\Gamma\},

there is a unique (x,y,z)\in\Gamma, whose y,z components we denote by Y(x,u,p), Z(x,u,p), satisfying

(2) g(x,y,z)=u\qquad g_{x}(x,y,z)=p.

A1*. For each fixed y,z the mapping x\mapsto\frac{g_{y}}{g_{z}}(x,y,z) is injective on its domain of definition.

A2. On \overline{\Gamma} there holds g_{z}<0 and the matrix

E:=g_{i,j}-g_{z}^{-1}g_{i,z}g_{,j}

satisfies \det E\neq 0. Here subscripts before and after a comma denote, respectively, differentiation in x and y.

Two examples are g(x,y,z)=x\cdot y-z, which generates (in accordance with Definition 3) the Monge–Ampère equation and standard convexity, and g(x,y,z)=c(x,y)-z, where c is a cost function from optimal transport, which generates the Monge–Ampère type equation from optimal transport and the cost convexity theory [21, Chapter 5]. By a duality structure, which we do not need and thus do not introduce here, the condition A1* is dual to A1, thereby justifying the name. The A0 condition is weakened by some authors who treat C^{2} generating functions [7, 9]. However our interior Pogorelov estimate, an essential tool for the C^{2,\alpha} result, is obtained by differentiating the PDE twice, which relies on a C^{4} generating function.
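As a sanity check (our computation, not reproduced from the paper), the first example can be run through condition A1 and the definitions (7), (8):

```latex
\text{For } g(x,y,z)=x\cdot y-z \text{ the system (2) reads }
x\cdot y-z=u,\quad y=p,
\text{ so }
Y(x,u,p)=p,\qquad Z(x,u,p)=x\cdot p-u.
\text{Moreover } g_{z}=-1<0,\qquad
E_{ij}=g_{i,j}-g_{z}^{-1}g_{i,z}g_{,j}=\delta_{ij},\qquad
A_{ij}(x,u,p)=g_{ij}=0,
\text{ and (6) reduces to } \det D^{2}u=\psi(\cdot,u,Du).
```

In this case g-convexity reduces to ordinary convexity, consistent with the remark above that this generating function produces the Monge–Ampère equation.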

Definition 2.

Let \Omega\subset U be a domain and u\in C^{0}(\overline{\Omega}). We call u g-convex provided for every x_{0}\in\Omega there is y_{0},z_{0} such that

(3) u(x_{0})=g(x_{0},y_{0},z_{0}),
(4) u(x)\geq g(x,y_{0},z_{0})\text{ for all }x\in\Omega\text{ satisfying }x\neq x_{0},
(5) g(\overline{\Omega},y_{0},z_{0})\subset J.

When the inequality in (4) is strict u is called strictly g-convex. When the function x\mapsto g(x,y_{0},z_{0}) satisfies (3), (4), and (5) it is called a g-support of u at x_{0}.

The set of all y such that there is z for which g(\cdot,y,z) is a g-support at x is denoted Yu(x). When u is differentiable and g(\cdot,y,z) is a g-support at x then u-g(\cdot,y,z) has a minimum at x implying by (2) that y=Y(x,u,Du). Similarly, if g(\cdot,y,z) is a g-support at x and u is C^{2} then D^{2}u(x)-g_{xx}(x,y,z) is a nonnegative definite matrix. When u is not differentiable at x, Yu(x) is not a singleton.

Definition 3.

A generated Jacobian equation is an equation of the form (1) where the mapping Y derives from a generating function as in the A1 condition.

For a GJE to make sense we must have (\text{Id},u,Du)(\Omega)\subset\mathcal{U}. By calculations which are now standard [19], a C^{2} solution of (1) satisfies the Monge–Ampère type equation

(6) \det[D^{2}u-A(\cdot,u,Du)]=B(\cdot,u,Du)\text{ in }\Omega,

where A,B are defined on U by

(7) A_{ij}(x,u,p)=g_{ij}(x,Y(x,u,p),Z(x,u,p)),
(8) B(x,u,p)=\det E(x,u,p)\,\psi(x,u,p).

This equation is degenerate elliptic provided u is g-convex.

The following definition extends the definition of Aleksandrov solution for the Monge–Ampère equation to generated Jacobian equations.

Definition 4.

A g-convex function u:\Omega\rightarrow\mathbf{R} is called an Aleksandrov solution of

(9) \det DY(\cdot,u,Du)=\nu\text{ in }\Omega,

for \nu a Borel measure on \Omega, provided for every Borel E\subset\Omega

|Yu(E)|=\nu(E).

Whilst (9) would, classically, require a C^{2} function, we have defined Aleksandrov solutions for merely g-convex functions. However it is a consequence of the change of variables formula and the Lebesgue differentiation theorem that C^{2} Aleksandrov solutions are classical solutions.

We introduce one final condition on the generating function. It was introduced by Ma, Trudinger, and Wang [14] and was extended to GJEs in [19]. The necessity of the weakened form, for even C^{1} regularity, was proved by Loeper [13].

A3. There is c>0 such that

D_{p_{k}p_{l}}A_{ij}(x,u,p)\xi_{i}\xi_{j}\eta_{k}\eta_{l}\geq c,

for all unit vectors \xi,\eta satisfying \xi\cdot\eta=0.
The A3 weak (A3w) condition is the same but with c=0.
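For orientation (a standard observation, our addition): the first example in Section 2 shows the gap between A3 and A3w. With g(x,y,z)=x\cdot y-z one has A\equiv 0, so

```latex
D_{p_{k}p_{l}}A_{ij}(x,u,p)\,\xi_{i}\xi_{j}\eta_{k}\eta_{l}=0
\qquad\text{for all unit vectors }\xi\perp\eta,
```

and A3w holds (with c=0) while A3 fails. The Monge–Ampère case is thus covered by Caffarelli's results [1, 2] rather than by the theorems below.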

2.1. Statement of main theorems

Theorem 1.

Let g be a generating function satisfying A3. Let u:\Omega\rightarrow\mathbf{R} be a g-convex Aleksandrov solution of (9) with \nu=f\ dx\in L^{p}(U) for p>(n+1)/2. Then u\in C^{1,\alpha}(\Omega) with

\alpha=\frac{\beta(n+1)}{2n^{2}+\beta(n-1)}\text{ where }\beta=1-\frac{n+1}{2p}.
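As a consistency check (our arithmetic), taking p=\infty recovers the exponent quoted as sharp in Remark 1:

```latex
p=\infty \;\Rightarrow\; \beta=1-\frac{n+1}{2p}=1, \qquad
\alpha=\frac{n+1}{2n^{2}+n-1}=\frac{n+1}{(2n-1)(n+1)}=\frac{1}{2n-1}.
```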
Remark 1.

Liu proved this value, \alpha=(2n-1)^{-1} when p=\infty, is sharp. That is, there exists a function u which solves a GJE satisfying the hypothesis of the theorem, and which is in C^{1,\alpha} for the stated \alpha, but not in C^{1,\alpha+\varepsilon} for any \varepsilon>0. We note that Loeper [13] and Jeong [8] have proved the Hölder regularity for the right-hand side a measure \nu satisfying \nu(B_{\varepsilon}(x))\leq C\varepsilon^{n(1-1/p)}. Our proof is easily adapted to this condition (which is more general than above but comes at the expense of a smaller \alpha). We indicate the necessary changes in a remark after the proof of Theorem 1.

Our C^{2} estimate is for \nu=f\ dx where f is Dini continuous.

Definition 5 (Dini continuity).

Let f:\Omega\rightarrow\mathbf{R}. The oscillation of f is

\omega_{f}(r):=\sup\{|f(x)-f(y)|;x,y\in\Omega\text{ with }|x-y|<r\}.

Then f is called Dini continuous if

\int_{0}^{1}\frac{\omega_{f}(r)}{r}\ dr<\infty.
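For orientation (a routine verification, our addition): every Hölder continuous function is Dini continuous, since f\in C^{\alpha} gives \omega_{f}(r)\leq[f]_{C^{\alpha}}r^{\alpha} and

```latex
\int_{0}^{1}\frac{\omega_{f}(r)}{r}\,dr
\leq[f]_{C^{\alpha}}\int_{0}^{1}r^{\alpha-1}\,dr
=\frac{[f]_{C^{\alpha}}}{\alpha}<\infty.
```

Thus the Hölder hypothesis in the second part of Theorem 2 below is a special case of the Dini hypothesis.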
Theorem 2.

Let g be a generating function satisfying A3. Let u:\Omega\rightarrow\mathbf{R} be an Aleksandrov solution of (9) with \nu=f\ dx. If \lambda\leq f\leq\Lambda and f is Dini continuous then u\in C^{2}(\Omega). If f\in C^{\alpha}(\Omega) then u\in C^{2,\alpha}_{loc}(\Omega).

Remark 2.

Our result can be stated more precisely as follows. Under the above hypotheses, for each \Omega^{\prime}\subset\subset\Omega there is C>0 depending on g,\Omega^{\prime},\Omega,\|u\|_{C^{1}(\Omega)} such that

(1) If f is Dini continuous we have

    (10) |D^{2}u(x)-D^{2}u(y)|\leq C\left[d+\int_{0}^{d}\frac{\omega_{f}(r)}{r}\ dr+d\int_{d}^{1}\frac{\omega_{f}(r)}{r^{2}}\ dr\right],

    where x,y\in\Omega^{\prime} and d:=|x-y|.

(2) If f\in C^{\alpha}(\Omega) for some \alpha\in(0,1) then u\in C^{2,\alpha}(\Omega^{\prime}) with

    (11) \|u\|_{C^{2,\alpha}(\Omega^{\prime})}\leq C+\frac{C}{\alpha(1-\alpha)}\|f\|_{C^{\alpha}(\Omega)}.

We do not prove (10) and (11) directly; we prove only a C^{2} estimate. This ensures the equation is uniformly elliptic, and then (10) and (11) follow from [22, Theorem 3.1] (details in Section 6).

Remark 3.

A common form of \nu arising in applications is

\nu=\frac{f(x)}{f^{*}(Y(x,u,Du))}\ dx.

Theorem 1 applies whenever f\in L^{p} and f^{*}>\lambda for some positive constant \lambda. Then the assumption of Hölder continuous right-hand side in Theorem 2 is satisfied when \lambda<f,f^{*}<\Lambda and f,f^{*} are Hölder continuous. More precisely if f\in C^{\beta},f^{*}\in C^{\gamma} then Theorem 1 implies the sharp exponent with p=\infty and we subsequently obtain u\in C^{2,\alpha^{\prime}} for \alpha^{\prime}=\min\{\beta,\gamma/(2n-1)\}.

As is standard we prove a priori estimates for smooth solutions. The results then hold by approximation and uniqueness of the Dirichlet problem in the small [19, Lemma 4.6]. The results above all hold without boundary conditions, that is, they are interior local results. This is possible because we are considering Aleksandrov solutions under A3. With A3 weakened to A3w such results are not possible. Moreover for applications to optimal transport, to conclude that a potential function is an Aleksandrov solution we require also a boundary condition: the second boundary value problem with a target satisfying a convexity condition [14].

3. Background results and normalization lemma

We will use a number of background results. These originally appeared in [7], though we use the formulation in [18].

We assume g is a generating function satisfying A3w and u is a strictly g-convex function with \det DYu=:\nu in the Aleksandrov sense. Let x_{0} be given and g(\cdot,y_{0},z_{0}) a support at x_{0}. Put z_{h}=z_{0}-h, where h>0, and define the section

S_{h}=S_{h}(x_{0}):=\{u<g(\cdot,y_{0},z_{h})\},

which by the strict g-convexity is compactly contained in \Omega for sufficiently small h.

A lemma [18, Theorems 3, 5], which we employ repeatedly, is the following.

Lemma 1.

There exist C,d>0, which depend only on g, such that if \text{diam}(S_{h})<d and \nu is a doubling measure, then

C^{-1}|S_{h}|\nu(S_{h})\leq\sup_{S_{h}}|u(\cdot)-g(\cdot,y_{0},z_{h})|^{n}\leq C|S_{h}|\nu(S_{h}).

Note the requirement that \nu is a doubling measure is only necessary for the lower bound. Also, C^{-1}h\leq\sup_{\Omega}|u(\cdot)-g(\cdot,y_{0},z_{h})|\leq Ch for C>0 depending only on \sup|g_{z}|,\inf|g_{z}|. In the special case where \nu=f\ dx and \lambda\leq f\leq\Lambda we have

(12) C^{-1}\lambda|S_{h}|^{2}\leq h^{n}\leq C\Lambda|S_{h}|^{2}.

We introduce new coordinates

(13) \tilde{x}:=\frac{g_{y}}{g_{z}}(x,y_{0},z_{h}).

When A3w is satisfied S_{h}(x_{0}) is convex in the \tilde{x} coordinates [20, Lemma 2.3]. We often use this result in conjunction with the minimum ellipsoid. The minimum ellipsoid of an arbitrary open convex set U is the unique ellipsoid of minimal volume, denoted E, containing U. It satisfies \frac{1}{n}E\subset U\subset E, where this is dilation with respect to the centre of the ellipsoid [12, Lemma 2.1]. We assume, after a rotation and translation, that the minimum ellipsoid of S_{h} is

(14) E=\left\{x;\sum_{i=1}^{n}\frac{x_{i}^{2}}{r_{i}^{2}}\leq 1\right\},\text{ where }r_{1}\geq\dots\geq r_{n}.

Then elementary convex geometry implies c_{n}r_{1}\dots r_{n}\leq|S_{h}|\leq C_{n}r_{1}\dots r_{n} and (12) becomes

(15) C^{-1}r_{1}\dots r_{n}\leq h^{n/2}\leq Cr_{1}\dots r_{n}.

We also define here the good shape constant. Let U be any convex set, and assume its minimum ellipsoid is given by (14). Then a good shape constant is any C satisfying C\geq r_{1}/r_{n}, and the good shape constant is r_{1}/r_{n} itself, that is, the infimum of all good shape constants. For solutions of generated Jacobian equations the good shape constant of S_{h}(x_{0}) carries information about |D^{2}u(x_{0})| (see Lemmas 5 and 6).

When A3w is strengthened to A3 we have a particularly strong estimate concerning the geometry of sections and their height. It is used repeatedly throughout this paper. In optimal transport this estimate is due to Liu [10, Lemma 4] and we largely follow his proof.

Lemma 2.

Assume g is a generating function satisfying A3 and u:\Omega\rightarrow\mathbf{R} is a C^{2} g-convex function. Assume that S_{h}(x_{0})\subset\subset\Omega and that the minimum ellipsoid of S_{h}(x_{0}) (in the \tilde{x} coordinates) is (14). Then there is C depending only on g such that

\frac{hr_{1}^{2}}{r_{n}^{2}}\leq C.
Proof.

We work in the \tilde{x} coordinates, though keep the notation x. Define T:\mathbf{R}^{n}\rightarrow\mathbf{R}^{n} by

(16) T(x)=(x_{1}/r_{1},\dots,x_{n}/r_{n}).

Note

(17) B_{1/n}\subset U:=TS_{h}\subset B_{1}.

Let \hat{x} be the boundary point of U on the negative x_{n} axis. In a neighbourhood of \hat{x}, denoted \mathcal{N}, we represent \partial U as a graph of some function \rho, that is

\partial U\cap\mathcal{N}=\{(x^{\prime},\rho(x^{\prime}));x^{\prime}=(x_{1},\dots,x_{n-1})\}.

Using (17) we may assume \rho is defined for |x^{\prime}|<1/n. Similarly by (17) and the convexity of U we conclude |D\rho|\leq c(n) when |x^{\prime}|\leq 1/2n. Let \gamma be the curve

\gamma=\{\gamma(t)=(t,0,\dots,0,\rho(te_{1}));-1/4n<t<1/4n\}.

Because (17) implies |\rho_{11}(\gamma(t))|\leq C(n) at some t\in(-1/4n,1/4n), the proof will be complete provided we can show that

(18) C|\rho_{11}(\gamma(t))|\geq hr_{1}^{2}/r_{n}^{2},

for every t\in(-1/4n,1/4n).

Let v:U\rightarrow\mathbf{R} be defined for \overline{x}\in U by

v(\overline{x})=u(T^{-1}\overline{x})-g(T^{-1}\overline{x},y_{0},z_{h}).

Differentiating v(\gamma(t))=0 once, then twice, with respect to t gives

(19) D_{k}v\,\dot{\gamma}_{k}=0,
(20) D_{\dot{\gamma}\dot{\gamma}}v=-D_{k}v\,\ddot{\gamma}_{k}=-D_{n}v\,\rho_{11}.

To estimate D_{\dot{\gamma}\dot{\gamma}}v=D_{kl}v\,\dot{\gamma}_{k}\dot{\gamma}_{l} we compute at \overline{x}=Tx

D_{kl}v=[u_{kl}-g_{kl}(x,y_{0},z_{h})]r_{k}r_{l}
=[u_{kl}-g_{kl}(x,Yu(x),Zu(x))]r_{k}r_{l}
\quad+[g_{kl}(x,Yu(x),Zu(x))-g_{kl}(x,y_{0},z_{h})]r_{k}r_{l}
\geq[g_{kl}(x,Yu(x),Zu(x))-g_{kl}(x,y_{0},z_{h})]r_{k}r_{l}.

The inequality is by g-convexity of u. Put g_{h}(\cdot):=g(\cdot,y_{0},z_{h}). Then use the definition of A (equation (7)), u(x)=g_{h}(x) on \partial S_{h}, and a Taylor series to obtain

D_{kl}v\geq[A_{kl}(x,u(x),Du(x))-A_{kl}(x,u(x),Dg_{h}(x))]r_{k}r_{l}
(21) =A_{kl,p_{m}}(x,u(x),Dg_{h}(x))D_{m}(u-g_{h})r_{k}r_{l}
(22) \quad+A_{kl,p_{m}p_{n}}(x,u(x),p_{\tau})D_{m}(u-g_{h})D_{n}(u-g_{h})r_{k}r_{l},

where p_{\tau}=\tau Du(x)+(1-\tau)Dg_{h}(x) for some \tau\in(0,1). A direct, but involved, calculation which we relegate to Appendix A.1 implies

A_{kl,p_{m}}(x,u(x),Dg_{h}(x))D_{m}(u-g_{h})r_{k}r_{l}\dot{\gamma}_{k}\dot{\gamma}_{l}=0.

Thus, using also D_{m}(u-g_{h})=D_{m}v/r_{m}, (21) becomes

D_{kl}v\geq A_{kl,p_{m}p_{n}}D_{m}vD_{n}v\frac{r_{k}r_{l}}{r_{m}r_{n}}.

Now returning to D_{\dot{\gamma}\dot{\gamma}}v we have

(23) D_{\dot{\gamma}\dot{\gamma}}v\geq A_{kl,p_{m}p_{n}}\frac{D_{m}v}{r_{m}}\frac{D_{n}v}{r_{n}}(\dot{\gamma}_{k}r_{k})(\dot{\gamma}_{l}r_{l}).

Since by (19) \dot{\gamma} is orthogonal to Dv we also have orthogonality of \xi:=(r_{1}\dot{\gamma}_{1},\dots,r_{n}\dot{\gamma}_{n}) and \eta:=(D_{1}v/r_{1},\dots,D_{n}v/r_{n}). Thus employing the A3 condition in (23) yields

D_{\dot{\gamma}\dot{\gamma}}v\geq c|\xi|^{2}|\eta|^{2}\geq c\frac{|D_{n}v|^{2}r_{1}^{2}}{r_{n}^{2}}.

Now we substitute this into (20) to obtain

\rho_{11}\geq c\frac{|D_{n}v|r_{1}^{2}}{r_{n}^{2}}.

If we can show |D_{n}v|\geq Ch then we've obtained (18).

For this final estimate fix x_{1}\in\partial S_{h} and set

h(\theta):=g(x_{\theta},y_{1},z_{1})-g(x_{\theta},y_{0},z_{h})\text{ where }x_{\theta}=\theta x_{1}+(1-\theta)x_{0},

where g(\cdot,y_{1},z_{1}) supports u at x_{1} so Du(x_{1})=g_{x}(x_{1},y_{1},z_{1}). A standard argument [16, Eq. A.14] using the A3w condition implies that for K depending only on g,

h^{\prime\prime}(\theta)\geq-K|h^{\prime}(\theta)|.

Then we follow [17, Eq. 19] (there are similar arguments in [21, 7, 20, 19]) to obtain

h^{\prime}(t_{1})\leq e^{K(t_{2}-t_{1})}h^{\prime}(t_{2}),

for t_{1}<t_{2}. Choosing t_{2}=1 and integrating from t_{1}=0 to 1 we have

(24) -h(0)\leq C(K)h^{\prime}(1).
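The passage from the differential inequality to (24) can be sketched as follows (our expansion of the cited argument, written in the case h^{\prime}\geq 0 so the absolute value can be dropped):

```latex
h^{\prime\prime}\geq-Kh^{\prime}
\;\Longrightarrow\;
\frac{d}{dt}\big(e^{Kt}h^{\prime}(t)\big)
=e^{Kt}\big(h^{\prime\prime}+Kh^{\prime}\big)\geq 0
\;\Longrightarrow\;
h^{\prime}(t_{1})\leq e^{K(t_{2}-t_{1})}h^{\prime}(t_{2}).
```

Integrating h^{\prime}(t)\leq e^{K(1-t)}h^{\prime}(1) over t\in(0,1) and using h(1)=0 (which holds because x_{1}\in\partial S_{h}, so u(x_{1})=g(x_{1},y_{0},z_{h})) gives -h(0)\leq C(K)h^{\prime}(1), which is (24).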

Now -h(0)=g(x_{0},y_{0},z_{h})-g(x_{0},y_{1},z_{1})\geq\inf|g_{z}|h and

|h^{\prime}(1)|=|D_{i}u(x_{1}-x_{0})_{i}|=\left|D_{i}v\frac{(x_{1}-x_{0})_{i}}{r_{i}}\right|\leq|Dv|,

where we have used that by the minimum ellipsoid |(x_{1}-x_{0})_{i}|/r_{i}\leq 2. Thus (24) becomes |Dv|\geq ch. To conclude we control |Dv| by |D_{n}v|. Indeed, by (19), D_{1}v+\rho_{1}D_{n}v=0, and similar reasoning implies D_{i}v+\rho_{i}D_{n}v=0 for i=2,3,\dots,n-1. Thus

Dv=(-\rho_{1}D_{n}v,\dots,-\rho_{n-1}D_{n}v,D_{n}v).

Recalling |D\rho|\leq C we complete the proof with the observation |Dv|\leq C|D_{n}v|. ∎

4. C^{1,\alpha} regularity

The C^{1,\alpha} regularity is essentially an immediate consequence of Lemmas 1 and 2. The proof of Theorem 1 is as follows.

Proof.

Step 1. [Proof for strictly g-convex functions] Fix x_{0}, without loss of generality equal to 0, and then h sufficiently small to ensure S_{h}(x_{0})\subset\subset\Omega. By Lemma 1

(25) h^{n}\leq C|S_{h}|\nu(S_{h}).

Then assuming the minimum ellipsoid of S_{h}(x_{0}) is given by (14), (25) becomes

h^{n}\leq C(r_{1}\dots r_{n})\int_{S_{h}}f\ dx.

Using Hölder’s inequality (with the second function equal to 1) we have

h^{n}\leq C(r_{1}\dots r_{n})|S_{h}|^{1-1/p}\|f\|_{L^{p}}
(26) \leq C(r_{1}\dots r_{n})^{2-1/p}\|f\|_{L^{p}}.

Now we conclude as in [10] (which is where Lemma 2 is used). More precisely (26) is the 5th inequality on [10, pg. 446], so the rest of this step is exactly as given there.

Step 2. [Proof for g-convex functions] When u is not strictly g-convex, we may consider on a small enough neighbourhood of x_{0} the function u+\varepsilon|x-x_{0}|^{2}. Indeed by the proof of [16, Theorem 2.22] this function is strictly g-convex on a neighbourhood of x_{0} depending only on g (in particular, independent of u and \varepsilon). Moreover it is an Aleksandrov solution of a generated Jacobian equation with right-hand side in the original L^{p} space. This is a consequence of the identity Yu(x)=Y(x,u(x),\partial u(x)) for \partial u(x) the subgradient. Thus by the previous proof

0\leq u(x)-g(x,y_{0},z_{0})\leq u(x)+\varepsilon|x-x_{0}|^{2}-g(x,y_{0},z_{0})\leq C|x-x_{0}|^{1+\alpha},

as required. ∎

Remark 4.

Loeper [13] and Jeong [8] proved the C^{1,\alpha} regularity for Aleksandrov solutions of

\det DYu=\nu,

where \nu satisfies that for some p\in(n,+\infty] and C_{\nu}>0 there holds

\nu(B_{\varepsilon}(x))\leq C_{\nu}\varepsilon^{n(1-1/p)}.

Our proof is easily adapted to this condition. In (25) we use

\nu(S_{h})\leq\nu(B_{2r_{1}}(x_{0}))\leq Cr_{1}^{n(1-1/p)},

and combine with Lemma 2.

5. Interior Pogorelov estimate for constant right-hand side

Now we start work on the C^{2,\alpha} estimate. Recall we will prove this by establishing a C^{2} (i.e. uniform ellipticity) estimate when the right-hand side is Dini continuous. Then we obtain the C^{2,\alpha} estimate from the elliptic theory [22]. The C^{2} estimate for Dini continuous right-hand side is a perturbation of the same result for the special case of constant right-hand side, the proof of which is the goal of this section. First we introduce a strengthening of the C^{1,\alpha} result and a strict g-convexity estimate that holds by duality.

Lemma 3.

Let g be a generating function satisfying A3 and u:\Omega\rightarrow\mathbf{R} be a g-convex solution of \lambda\leq\det DYu\leq\Lambda. For each \Omega^{\prime}\subset\subset\Omega there are C,d,\beta,\gamma>0 depending on \lambda,\Lambda,g,\Omega^{\prime},\Omega for which the following holds. Whenever x_{0}\in\Omega^{\prime}, g(\cdot,y_{0},z_{0}) is the g-support at x_{0} and x\in B_{d}(x_{0}) there holds

(27) C^{-1}|x-x_{0}|^{1+\gamma}\leq u(x)-g(x,y_{0},z_{0})\leq C|x-x_{0}|^{1+\beta}.

The right-hand inequality is the C^{1,\alpha} estimate of Guillen and Kitagawa [7] which follows from the strict convexity in [20, Lemma 4.1]. The left-hand inequality follows from the right-hand one by duality. We give the proof in Appendix A.2.

Lemma 4.

Assume that u\in C^{4}(\Omega)\cap C^{2}(\overline{\Omega}) is a g-convex solution of

\det[D^{2}u-A(\cdot,u,Du)]=f_{0}\text{ in }\Omega,
(28) u=g(\cdot,y,z-h)\text{ on }\partial\Omega,

where g(\cdot,y,z) is a support at some \overline{x}\in\Omega and f_{0}>0 is a real number. We assume g satisfies A3w, h>0 is sufficiently small (determined in the proof), and C_{0} is the good shape constant of \Omega. For each \tau\in(0,1) there is C>0 depending on \|A\|_{C^{2}},\|u\|_{C^{1}(\Omega)},\tau,C_{0},h such that in

S_{\tau h}:=\{x;u(x)<g(x,y,z-\tau h)\},

we have

(29) \sup_{S_{\tau h}}|D^{2}u|\leq C.
Proof.

This is essentially the estimate

(30) (g(\cdot,y,z-h)-u(\cdot))^{\beta}|D^{2}u(\cdot)|\leq C,

which was given in [20]. Since we need to ensure the constant only depends on \|A\|_{C^{2}} we will provide full details. The proof is via a Pogorelov type estimate: we consider a certain test function which attains a maximum in \Omega. Our choice of test function ensures it both controls the second derivatives and is controlled at its maximum point.

We let \varphi:=|x-\overline{x}|^{2} and introduce both the function

v(x,\xi)=\kappa\varphi+\tau|Du|^{2}/2+\log(w_{\xi\xi})+\beta\log[g(\cdot,y,z-h)-u],

and the differential operator (here differentiation is with respect to x, never \xi)

(31) L(v):=w^{ij}[D_{ij}v-D_{p_{k}}A_{ij}D_{k}v],

where w_{ij}=u_{ij}-A_{ij}(x,u,Du) and D_{p_{k}}A_{ij} is evaluated at (x,u,Du).

We use the notation u_{0}=g(\cdot,y,z-h) and \eta=u_{0}-u. Because the nonnegative function e^{v} is 0 on \partial\Omega, v attains an interior maximum at x_{0}\in\Omega and some \xi assumed without loss of generality to be e_{1}. We also assume, again without loss of generality, that at x_{0} w is diagonal. At an interior maximum Dv=0 and D^{2}v\leq 0 so that

(32) 0\geq Lv=\kappa L\varphi+\tau L(|Du|^{2}/2)+L(\log(w_{11}))+\beta L\log\eta.

We will compute each term in (32) and from this obtain (30).

Term 1: L\varphi. This is immediate: provided the domain (and subsequently |D\varphi|) is chosen sufficiently small depending on |A_{ij,p}|, we have

(33) L\varphi\geq w^{ii}-C.

Term 2: L(|Du|^{2}/2). We compute

(34) L(|Du|^{2}/2)=w^{ii}u_{ki}u_{ki}+u_{k}[w^{ii}(u_{kii}-D_{p_{l}}A_{ii}u_{lk})].

We note (using u_{ij}=w_{ij}+A_{ij}) that

(35) w^{ii}u_{ki}u_{ki}=w^{ii}(w_{ki}+A_{ki})(w_{ki}+A_{ki})\geq w_{ii}-C(1+w^{ii}),

and by differentiating the PDE in the direction e_{k} at x_{0}

(36) w^{ij}[u_{ijk}-A_{ij,p_{l}}u_{lk}]=w^{ij}(A_{ij,k}+A_{ij,u}u_{k}).

Hence (34) becomes

(37) L(|Du|^{2}/2)\geq w_{ii}-C(1+w^{ii}).

Term 3: L(\log(w_{11})). To begin, we differentiate the PDE twice in the e_{1} direction and obtain, with the notation w_{ij,k}:=D_{x_{k}}w_{ij}, that

w^{ii}[u_{ii11}-D_{p_{k}}A_{ii}u_{k11}]=w^{ii}w^{jj}w_{ij,1}^{2}+w^{ii}\big[A_{ii,11}-2A_{ii,1u}u_{1}+2A_{ii,1p_{k}}u_{k1}+A_{ii,uu}u_{1}^{2}+A_{ii,u}u_{11}+2A_{ii,p_{k}}u_{1}u_{1k}+A_{ii,p_{k}p_{l}}u_{1k}u_{1l}\big]
(38) \geq w^{ii}w^{jj}w_{ij,1}^{2}+w^{ii}A_{ii,p_{k}p_{l}}u_{1k}u_{1l}-C(w_{ii}+w^{ii}+w_{ii}w^{ii}).

We use A3w to deal with the second term. Use u_{ij}=w_{ij}+A_{ij} to write

w^{ii}A_{ii,p_{k}p_{l}}u_{1k}u_{1l}\geq w^{ii}A_{ii,p_{1}p_{1}}w_{11}^{2}-C(w^{ii}+w^{ii}w_{ii}).

Then by applying A3w with \xi=e_{i},\ \eta=e_{j} for i\neq j we see A_{ii,p_{j}p_{j}}\geq 0 so that

w^{ii}A_{ii,p_{1}p_{1}}w_{11}^{2}=w^{11}A_{11,p_{1}p_{1}}w_{11}^{2}+\sum_{i=2}^{n}w^{ii}A_{ii,p_{1}p_{1}}w_{11}^{2}\geq-Cw_{11}.

Thus (38) becomes

L(u_{11})\geq w^{ii}w^{jj}w_{ij,1}^{2}-C(w_{ii}+w^{ii}+w_{ii}w^{ii}).

We perform similar calculations for LA_{11} to obtain

(39) L(w_{11})\geq w^{ii}w^{jj}w_{ij,1}^{2}-C(w_{ii}+w^{ii}+w_{ii}w^{ii}).

We use this to compute L(\log w_{11}) as follows. First,

L(\log w_{11})=-\frac{w^{ii}w_{11,i}^{2}}{w_{11}^{2}}+\frac{L(w_{11})}{w_{11}}.

Hence by (39)

(40) L(\log w_{11})\geq\frac{w^{ii}w^{jj}w_{ij,1}^{2}}{w_{11}}-\frac{w^{ii}w_{11,i}^{2}}{w_{11}^{2}}-\frac{C}{w_{11}}(w_{ii}+w^{ii}+w_{ii}w^{ii}).

When i,j=1 in the first term and i=1 in the second, these terms cancel. At the expense of an inequality we discard terms with neither i nor j equal to 1. Thus

(41) \frac{w^{ii}w^{jj}w_{ij,1}^{2}}{w_{11}}-\frac{w^{ii}w_{11,i}^{2}}{w_{11}^{2}}\geq\sum_{i>1}\frac{w^{ii}w_{i1,1}^{2}}{w_{11}^{2}}+\sum_{j>1}\frac{w^{jj}w_{1j,1}^{2}}{w_{11}^{2}}-\sum_{i>1}\frac{w^{ii}w_{11,i}^{2}}{w_{11}^{2}}
=\frac{1}{w_{11}^{2}}\sum_{i>1}w^{ii}\big[2w_{i1,1}^{2}-w_{11,i}^{2}\big]
=\frac{1}{w_{11}^{2}}\sum_{i>1}w^{ii}w_{11,i}^{2}+\frac{2}{w_{11}^{2}}\sum_{i>1}w^{ii}[w_{i1,1}^{2}-w_{11,i}^{2}]
=\frac{1}{w_{11}^{2}}\sum_{i>1}w^{ii}w_{11,i}^{2}+\frac{2}{w_{11}^{2}}\sum_{i>1}w^{ii}(w_{i1,1}-w_{11,i})(w_{i1,1}+w_{11,i}).

Rewriting the second sum in terms of the AA matrix yields

wiiwjjwij,12w11wiiw11,i2w112=1w112i>1wiiw11,i2\displaystyle\frac{w^{ii}w^{jj}w_{ij,1}^{2}}{w_{11}}-\frac{w^{ii}w_{11,i}^{2}}{w_{11}^{2}}=\frac{1}{w_{11}^{2}}\sum_{i>1}w^{ii}w_{11,i}^{2}
+2w112i>1wii(DiA11D1Ai1)(2w11,i+DiA11D1Ai1)\displaystyle\quad\quad+\frac{2}{w_{11}^{2}}\sum_{i>1}w^{ii}(D_{i}A_{11}-D_{1}A_{i1})(2w_{11,i}+D_{i}A_{11}-D_{1}A_{i1})
=1w112i>1wii[w11,i2+4w11,i(DiA11D1Ai1)+2(DiA11D1Ai1)2].\displaystyle=\frac{1}{w_{11}^{2}}\sum_{i>1}w^{ii}\big{[}w_{11,i}^{2}+4w_{11,i}(D_{i}A_{11}-D_{1}A_{i1})+2(D_{i}A_{11}-D_{1}A_{i1})^{2}\big{]}.

Now, Cauchy’s inequality implies

4w11,i(DiA11D1Ai1)w11,i228(DiA11D1Ai1)2.\displaystyle 4w_{11,i}(D_{i}A_{11}-D_{1}A_{i1})\geq-\frac{w_{11,i}^{2}}{2}-8(D_{i}A_{11}-D_{1}A_{i1})^{2}.

Thus

wiiwjjwij,12w11wiiw11,i2w11212w112i=2nwiiw11,i2Cwii,\frac{w^{ii}w^{jj}w_{ij,1}^{2}}{w_{11}}-\frac{w^{ii}w_{11,i}^{2}}{w_{11}^{2}}\geq\frac{1}{2w_{11}^{2}}\sum_{i=2}^{n}w^{ii}w_{11,i}^{2}-Cw^{ii},

and on returning to (40)

(42) L(log(w11))12w112i=2nwiiw11,i2C(1+wii+wii).L(\log(w_{11}))\geq\frac{1}{2w_{11}^{2}}\sum_{i=2}^{n}w^{ii}w_{11,i}^{2}-C(1+w_{ii}+w^{ii}).
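As a numerical sanity check (not part of the proof), the absorption used to reach (42) — Cauchy's inequality 4bd ≥ −b²/2 − 8d² applied with b = w_{11,i} and d = D_iA_{11} − D_1A_{i1}, giving b² + 4bd + 2d² ≥ b²/2 − 6d² — can be tested on random inputs; here b and d are generic reals, not quantities from the proof:

```python
import random

# The absorption step: Cauchy's inequality 4*b*d >= -b**2/2 - 8*d**2 gives
#   b**2 + 4*b*d + 2*d**2 >= b**2/2 - 6*d**2,
# where b stands for w_{11,i} and d for D_iA_11 - D_1A_i1.
def absorbed(b, d):
    lhs = b**2 + 4*b*d + 2*d**2
    rhs = 0.5*b**2 - 6*d**2
    return lhs >= rhs - 1e-6  # small slack for floating-point rounding

random.seed(0)
assert all(absorbed(random.uniform(-100, 100), random.uniform(-100, 100))
           for _ in range(10_000))
```

The difference of the two sides is the perfect square (b/√2 + 2√2·d)², so the inequality is in fact exact.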

Now, using (33),(37), and (42) in (32) implies

(43) 0\displaystyle 0 κ(wiiC)+τ[wiiC(1+wii)]+12w112i=2nwiiw11,i2\displaystyle\geq\kappa(w^{ii}-C)+\tau[w_{ii}-C(1+w^{ii})]+\frac{1}{2w_{11}^{2}}\sum_{i=2}^{n}w^{ii}w_{11,i}^{2}
C(1+wii+wii)+βL(logη).\displaystyle\quad\quad-C(1+w_{ii}+w^{ii})+\beta L(\log\eta).

Term 3: L(logη)L(\log\eta). First, write

(44) L(logη)=Lηηi=1nwii(Diηη)2.L(\log\eta)=\frac{L\eta}{\eta}-\sum_{i=1}^{n}w^{ii}\left(\frac{D_{i}\eta}{\eta}\right)^{2}.

We compute (using Diiu0=Aii(,u0,Du0)D_{ii}u_{0}=A_{ii}(\cdot,u_{0},Du_{0}))

Lη=wii\displaystyle L\eta=w^{ii} [Diiu0DiiuDpkAii(,u,Du)Dkη]\displaystyle[D_{ii}u_{0}-D_{ii}u-D_{p_{k}}A_{ii}(\cdot,u,Du)D_{k}\eta]
=wii\displaystyle=w^{ii} [wii+Aii(,u0,Du0)Aii(,u,Du)DpkAii(,u,Du)Dkη]\displaystyle[-w_{ii}+A_{ii}(\cdot,u_{0},Du_{0})-A_{ii}(\cdot,u,Du)-D_{p_{k}}A_{ii}(\cdot,u,Du)D_{k}\eta]
wii\displaystyle\geq w^{ii} [Aii,uη+Aii(,u,Du0)Aii(,u,Du)DpkAii(,u,Du)Dkη]C\displaystyle[A_{ii,u}\eta+A_{ii}(\cdot,u,Du_{0})-A_{ii}(\cdot,u,Du)-D_{p_{k}}A_{ii}(\cdot,u,Du)D_{k}\eta]-C
(45) wiiDpkplAiiDkηDlηCCwiiη.\displaystyle\geq w^{ii}D_{p_{k}p_{l}}A_{ii}D_{k}\eta D_{l}\eta-C-Cw^{ii}\eta.

For each ii write

DpkplAiiDkηDlη\displaystyle D_{p_{k}p_{l}}A_{ii}D_{k}\eta D_{l}\eta =k,liDpkplAiiDkηDlη+2liDpiplAiiDiηDlη\displaystyle=\sum_{k,l\neq i}D_{p_{k}p_{l}}A_{ii}D_{k}\eta D_{l}\eta+2\sum_{l\neq i}D_{p_{i}p_{l}}A_{ii}D_{i}\eta D_{l}\eta
+DpipiAiiDiηDiη\displaystyle\quad\quad+D_{p_{i}p_{i}}A_{ii}D_{i}\eta D_{i}\eta

Then by A3w the first term is nonnegative, so that

DpkplAiiDkηDlη\displaystyle D_{p_{k}p_{l}}A_{ii}D_{k}\eta D_{l}\eta CDiηC(Diη)2.\displaystyle\geq-CD_{i}\eta-C(D_{i}\eta)^{2}.

Returning to (45) we see

LηC(1+wiiη)CwiiDiηCwii(Diη)2.L\eta\geq-C(1+w^{ii}\eta)-Cw^{ii}D_{i}\eta-Cw^{ii}(D_{i}\eta)^{2}.

Substituting this into (44) implies

(46) L(logη)CηCwiiCi=1nwii(Diηη)2.L(\log\eta)\geq-\frac{C}{\eta}-Cw^{ii}-C\sum_{i=1}^{n}w^{ii}\left(\frac{D_{i}\eta}{\eta}\right)^{2}.

Here we’ve used that we can assume η<1\eta<1, and also used Cauchy’s inequality to note

wiiDiηη=wiiwiiDiηηwii+wii(Diηη)2.w^{ii}\frac{D_{i}\eta}{\eta}=\sqrt{w^{ii}}\sqrt{w^{ii}}\frac{D_{i}\eta}{\eta}\leq w^{ii}+w^{ii}\left(\frac{D_{i}\eta}{\eta}\right)^{2}.

Now we deal with the final term in (46). We can assume that w11(D1η/η)21w^{11}(D_{1}\eta/\eta)^{2}\leq 1, for if not we have (29) with β=2\beta=2. Since we are at a maximum Div=0D_{i}v=0, that is

Diηη=1β[w11,iw11+κDiφ+τDkuwik+τDkuAik].\frac{D_{i}\eta}{\eta}=-\frac{1}{\beta}\left[\frac{w_{11,i}}{w_{11}}+\kappa D_{i}\varphi+\tau D_{k}uw_{ik}+\tau D_{k}uA_{ik}\right].

This implies

i=2nwii(Diηη)2C[i=2nwiiw11,i2w112β2+κ2β2wii|Diφ|2+τ2β2(wii+wii)].\sum_{i=2}^{n}w^{ii}\left(\frac{D_{i}\eta}{\eta}\right)^{2}\leq C\left[\sum_{i=2}^{n}\frac{w^{ii}w_{11,i}^{2}}{w_{11}^{2}\beta^{2}}+\frac{\kappa^{2}}{\beta^{2}}w^{ii}|D_{i}\varphi|^{2}+\frac{\tau^{2}}{\beta^{2}}(w_{ii}+w^{ii})\right].

Choosing \beta\geq\max\{1,2C\} and returning to (46) we obtain

\beta L(\log\eta)\geq\frac{-C\beta}{\eta}-\kappa^{2}|D\varphi|^{2}w^{ii}-\frac{C\tau^{2}}{\beta}[w^{ii}+w_{ii}]-C\beta w^{ii}-\frac{1}{2w_{11}^{2}}\sum_{i=2}^{n}w^{ii}w_{11,i}^{2}.

Substituting into (43) completes the proof:

0\displaystyle 0 κ(wiiC)+τ[wiiC(1+wii)]C(1+wii+wii)Cβη\displaystyle\geq\kappa(w^{ii}-C)+\tau[w_{ii}-C(1+w^{ii})]-C(1+w_{ii}+w^{ii})-\frac{C\beta}{\eta}
Cκ2|Dφ|2wiiCτ2β[wii+wii]Cβwii\displaystyle\quad\quad-C\kappa^{2}|D\varphi|^{2}w^{ii}-C\frac{\tau^{2}}{\beta}[w^{ii}+w_{ii}]-C\beta w^{ii}
=wii[κτCCCκ2|Dφ|2Cτ2βCβ]\displaystyle=w^{ii}[\kappa-\tau C-C-C\kappa^{2}|D\varphi|^{2}-\frac{C\tau^{2}}{\beta}-C\beta]
+wii[τCCτ2β]C[κ+τ+βη].\displaystyle\quad\quad+w_{ii}[\tau-C-\frac{C\tau^{2}}{\beta}]-C[\kappa+\tau+\frac{\beta}{\eta}].

Take diam(Ω)\text{diam}(\Omega), and subsequently |Dφ||D\varphi|, small enough to ensure κ2|Dφ|21\kappa^{2}|D\varphi|^{2}\leq 1 (our choice of κ\kappa will only depend on allowed quantities). A further choice of βτ2\beta\geq\tau^{2}, τ\tau large depending only on CC, and finally κ\kappa large depending on τ,C,β\tau,C,\beta implies

0wii+wiiCη.0\geq w^{ii}+w_{ii}-\frac{C}{\eta}.

This implies ηwiiC\eta w_{ii}\leq C at the maximum point, and the proof is complete. ∎

6. C2,αC^{2,\alpha} regularity

In this section we prove the C2,αC^{2,\alpha} estimate via a C2C^{2} estimate. We adapt the method of proof used by Liu, Trudinger, and Wang in the optimal transport case [11], and use some details from Figalli’s exposition in the Monge–Ampère case [3, Section 4.10]. The C2C^{2} estimate is obtained as follows. When the right-hand side is constant the C2C^{2} estimate holds by the interior Pogorelov estimate of the previous section. The argument for Dini-continuous ff is then to perturb the argument for constant ff: we zoom in, treating a sequence of normalized approximating problems with constant right-hand side.

6.1. Normalization of sections

Here we explain the procedure for normalizing a solution on a section. We assume we are given a strictly gg-convex function u:Ω𝐑u:\Omega\rightarrow\mathbf{R} which is an Aleksandrov solution of

detDYu=f,\displaystyle\det DYu=f,

as well as a point x0x_{0} and corresponding gg-support g(,y0,z0)g(\cdot,y_{0},z_{0}). As usual, we consider the section Sh(x0)S_{h}(x_{0}). The definition of Aleksandrov solution is coordinate independent so we may assume we are in the coordinates given by (13) and ShS_{h} is convex. Assume the minimum ellipsoid is given by (14) and TT is given by (16). We want to consider the PDE solved by

(47) v(x¯)=1h[u(T1x¯)g(T1x¯,y0,z0h)].v(\overline{x})=\frac{1}{h}[u(T^{-1}\overline{x})-g(T^{-1}\overline{x},y_{0},z_{0}-h)].

on U:=TS_{h}. Importantly B_{1/n}\subset U\subset B_{1}, v|_{\partial U}=0, and C^{-1}\leq|\inf_{U}v|\leq C for C depending only on g_{z}. Thus this is a natural generalization of the normalization procedure for the Monge–Ampère equation. We show that v solves a Monge–Ampère type equation (MATE)

(48) det[D2vA¯(,v,Dv)]=B¯(,v,Dv),\displaystyle\det[D^{2}v-\overline{A}(\cdot,v,Dv)]=\overline{B}(\cdot,v,Dv),

for A¯,B¯\overline{A},\overline{B} satisfying A¯C2CAC2\|\overline{A}\|_{C^{2}}\leq C\|A\|_{C^{2}} and C1BB¯CBC^{-1}B\leq\overline{B}\leq CB for CC depending only on gg. When vv is defined by (47) we use the notation

A~ij(x¯,v,Dv)\displaystyle\tilde{A}_{ij}(\overline{x},v,Dv) =rirjhAij(T1x¯,u,Du),\displaystyle=\frac{r_{i}r_{j}}{h}A_{ij}\left(T^{-1}\overline{x},u,Du\right),
Yv=Y\left(T^{-1}\overline{x},u,Du\right)

and similarly for Zv. We compute directly that v solves equation (48) for

A¯ij(x¯,v,Dv)\displaystyle\overline{A}_{ij}(\overline{x},v,Dv) =rirjh[gij(T1x¯,Yv,Zv)gij(T1x¯,y0,z0h)]\displaystyle=\frac{r_{i}r_{j}}{h}\left[g_{ij}(T^{-1}\overline{x},Yv,Zv)-g_{ij}(T^{-1}\overline{x},y_{0},z_{0}-h)\right]
=rirjh[gij(T1x¯,Yv,Zv)gij(T1x¯,y0,z0)+O(h)]\displaystyle=\frac{r_{i}r_{j}}{h}\left[g_{ij}(T^{-1}\overline{x},Yv,Zv)-g_{ij}(T^{-1}\overline{x},y_{0},z_{0})+O(h)\right]
=A~ij(x¯,v(x¯),Dv(x¯))A~ij(x¯,v(x¯0),Dv(x¯0))+O(h)\displaystyle=\tilde{A}_{ij}(\overline{x},v(\overline{x}),Dv(\overline{x}))-\tilde{A}_{ij}(\overline{x},v(\overline{x}_{0}),Dv(\overline{x}_{0}))+O(h)
=A~ij(x¯,v(x¯),Dv(x¯))A~ij(x¯,v(x¯),Dv(x¯0))+O(h).\displaystyle=\tilde{A}_{ij}(\overline{x},v(\overline{x}),Dv(\overline{x}))-\tilde{A}_{ij}(\overline{x},v(\overline{x}),Dv(\overline{x}_{0}))+O(h).

In this form we can argue exactly as in [10, pg. 440] to obtain

A¯ij(x¯,v,Dv)=hrirjrkrlAij,pkpl(T1x¯,hv,hrivi)vkvl+O(h).\overline{A}_{ij}(\overline{x},v,Dv)=\frac{hr_{i}r_{j}}{r_{k}r_{l}}A_{ij,p_{k}p_{l}}(T^{-1}\overline{x},hv,\frac{h}{r_{i}}v_{i})v_{k}v_{l}+O(h).

Then A¯\overline{A} is bounded by Lemma 2 when A3 is satisfied. For the C2C^{2} estimates for A¯\overline{A} note hri\frac{h}{r_{i}} is bounded by the strict convexity and differentiability estimate (27). Finally the pinching estimate on BB follows from B¯(,v,Dv)=hn(detT)2B(,u,Du)\overline{B}(\cdot,v,Dv)=h^{n}(\det T)^{2}B(\cdot,u,Du), (detT)1=r1rn(\det T)^{-1}=r_{1}\dots r_{n}, and (15).
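To fix ideas, in the model Monge–Ampère case (g(x,y,z) = x·y − z, so A ≡ 0) the normalization is completely explicit for a quadratic solution. The following sketch, an illustration under these simplifying assumptions rather than the general construction, checks that the normalized solution has a Hessian independent of both the eccentricity of the section and its height h:

```python
import numpy as np

# Model Monge-Ampere case g(x,y,z) = x.y - z (A = 0): for the quadratic
# solution u(x) = x^T M x / 2 the section S_h = {u < h} is an ellipsoid with
# semi-axes r_i = sqrt(2h/lambda_i) along the eigenvectors of M.  Taking
# T = diag(1/r_i) in that basis, the normalized v(xb) = u(T^{-1} xb)/h has
# Hessian 2*I: the data M and h are entirely absorbed into T.
lam = np.array([100.0, 0.04])      # very eccentric example
M = np.diag(lam)
h = 1e-3
r = np.sqrt(2.0 * h / lam)         # semi-axes of S_h
T = np.diag(1.0 / r)
Tinv = np.diag(r)
D2v = Tinv.T @ M @ Tinv / h        # Hessian of v(xb) = u(T^{-1} xb)/h
assert np.allclose(D2v, 2.0 * np.eye(2))
```

All the information about M and h is absorbed into T, which is why the normalized problems can then be treated uniformly.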

6.2. Lemmas for the C2,αC^{2,\alpha} estimate

Here we show that the good shape constant of a section can be estimated in terms of the second derivatives of the solution, and vice versa. A subtlety is that sections of different height are convex in different coordinates. We assume at the outset some initial coordinates are fixed. We say a section has good shape constant C if, after performing the change of variables (13), the section has good shape constant C. Note, because the Jacobian of the transformation (13) is E (from A2), if S_{h} has good shape constant C then it is still true, in the initial coordinates with respect to which S_{h} may not be convex, that S_{h} contains a ball of radius r and is contained in a ball of radius R with R/r\leq C|\Lambda|/|\lambda|, where \Lambda,\lambda are respectively the maximum and minimum eigenvalues of E over \overline{\Gamma}.

Lemma 5.

Let uu be a strictly gg-convex solution of

(49) det[D2uA(,u,Du)]=f in Ω.\displaystyle\det[D^{2}u-A(\cdot,u,Du)]=f\text{ in }\Omega.

Fix x0Ωx_{0}\in\Omega and a support g(,y0,z0)g(\cdot,y_{0},z_{0}) at x0x_{0}. Assume Sh(x0)ΩS_{h}(x_{0})\subset\subset\Omega, and |D2u(x0)|M|D^{2}u(x_{0})|\leq M for some MM. Then Sh(x0)S_{h}(x_{0}) has a good shape constant which depends only on M,h,f,gM,h,f,g and the constant in the Pogorelov Lemma 4.

Proof.

We normalize the section ShS_{h} and solution as in Section 6.1 and let the normalized solution be denoted by vv. By the Pogorelov interior estimates we have

cIw¯(Tx0)CI,cI\leq\overline{w}(Tx_{0})\leq CI,

for w¯=D2vA¯(x,v,Dv)\overline{w}=D^{2}v-\overline{A}(x,v,Dv). Similarly by the PDE and |D2u(x0)|M|D^{2}u(x_{0})|\leq M we have cIw(x0)CIcI\leq w(x_{0})\leq CI for w(x0)=D2u(x0)A(x0,u(x0),Du(x0))w(x_{0})=D^{2}u(x_{0})-A(x_{0},u(x_{0}),Du(x_{0})). Using in addition

1hw(x0)=TTw¯(Tx0)T,\frac{1}{h}w(x_{0})=T^{T}\overline{w}(Tx_{0})T,

we obtain (see [3, Eq. 4.26]) a bi-Lipschitz estimate T,T1C\|T\|,\|T^{-1}\|\leq C. Since we can assume TT is of the form (16) we obtain r1/rnC2r_{1}/r_{n}\leq C^{2}, the desired good shape estimate on Sh(x0)S_{h}(x_{0}). ∎
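The linear-algebra step behind this bi-Lipschitz estimate is the inequality λ_max(TᵀWT) ≥ λ_min(W)‖T‖²: with w̄ pinched between cI and CI and Tᵀw̄T = w(x₀)/h also pinched, it forces ‖T‖ ≤ √(C/c), and the same argument applied to T⁻¹ gives the other bound. A numerical check of this inequality on random matrices (illustrative values c = 1/2, C = 2):

```python
import numpy as np

# Key step in the bi-Lipschitz estimate: if w/h = T^T wb T with
# c*I <= wb and w/h <= C*I, then c*||T||^2 <= lmax(T^T wb T) <= C,
# so ||T|| <= sqrt(C/c).  The inequality lmax(T^T W T) >= lmin(W)*||T||^2
# is checked below on random symmetric W with eigenvalues in [0.5, 2].
rng = np.random.default_rng(1)
for _ in range(1000):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    W = Q @ np.diag(rng.uniform(0.5, 2.0, 3)) @ Q.T
    T = rng.standard_normal((3, 3))
    lmax = np.linalg.eigvalsh(T.T @ W @ T)[-1]   # eigvalsh sorts ascending
    opnorm = np.linalg.norm(T, 2)                # spectral norm of T
    assert lmax >= 0.5 * opnorm**2 - 1e-9
```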

Lemma 6.

Assume uC3(Ω)u\in C^{3}(\Omega) is a strictly gg-convex solution of

det[D2uA(,u,Du)]=f in Ω.\det[D^{2}u-A(\cdot,u,Du)]=f\text{ in }\Omega.

Fix x0x_{0} and assume g(,y0,z0)g(\cdot,y_{0},z_{0}) is a gg-support at x0x_{0}. If there is a sequence hk0h_{k}\rightarrow 0 such that each

Shk:={u<g(,y0,z0hk)},S_{h_{k}}:=\{u<g(\cdot,y_{0},z_{0}-h_{k})\},

has a good shape constant less than some C0C_{0} then |D2u(x0)|C|D^{2}u(x_{0})|\leq C for CC depending on C0,sup|f|C_{0},\sup|f| and gg.

Proof.

Without loss of generality x_{0}=0 and the minimum ellipsoid of S_{h_{k}} has axes r_{1}^{(k)}\geq\dots\geq r_{n}^{(k)}. Our assumption is r_{1}^{(k)}\leq C_{0}r_{n}^{(k)}. Then, by (15)

C0n+1(r1(k))nr1(k)rn(k)C|Shk|Chkn/2,C_{0}^{-n+1}(r_{1}^{(k)})^{n}\leq r_{1}^{(k)}\dots r_{n}^{(k)}\leq C|S_{h_{k}}|\leq Ch_{k}^{n/2},

that is r1(k)Chkr_{1}^{(k)}\leq C\sqrt{h_{k}}. Moreover, because r1(k)r_{1}^{(k)} is the largest axis of the minimum ellipsoid ShkB2r1(k)(0)S_{h_{k}}\subset B_{2r_{1}^{(k)}}(0) so that

(50) ShkBChk(0).S_{h_{k}}\subset B_{C\sqrt{h_{k}}}(0).

Let wij=uij(0)gij(0,y0,z0)w_{ij}=u_{ij}(0)-g_{ij}(0,y_{0},z_{0}) and denote the minimum eigenvalue of ww by λ\lambda and corresponding normalized eigenvector by xλx_{\lambda}. Using a Taylor series we have

u(tx_{\lambda})-g(tx_{\lambda},y_{0},z_{0}-h_{k})\leq t^{2}\lambda+O(t^{3})-\inf|g_{z}|h_{k}.

Thus provided kk is taken sufficiently large txλShktx_{\lambda}\in S_{h_{k}} for t=inf|gz|hk2λt=\sqrt{\inf|g_{z}|\frac{h_{k}}{2\lambda}}. That is, there is xShkx\in S_{h_{k}} with

(51) |x|\displaystyle|x| Chkλ.\displaystyle\geq C\sqrt{\frac{h_{k}}{\lambda}}.

Combining (50) and (51) we obtain a lower bound on λ\lambda which implies an upper bound on the largest eigenvalue of ww by the PDE. ∎

Before proving the C2C^{2} estimate we state one final lemma.

Lemma 7.

Assume uiu_{i} for i=1,2i=1,2 is a C2,αC^{2,\alpha} solution of

det[D2uiA(,ui,Dui)]=fi in Ω\displaystyle\det[D^{2}u_{i}-A(\cdot,u_{i},Du_{i})]=f_{i}\text{ in }\Omega

where fif_{i} is a Hölder continuous function and uiC4K.\|u_{i}\|_{C^{4}}\leq K. For any ΩΩ\Omega^{\prime}\subset\subset\Omega there is C>0C>0 depending on K,g,dist(Ω,Ω)K,g,\text{dist}(\partial\Omega,\Omega^{\prime}) such that

\|u_{1}-u_{2}\|_{C^{3}(\Omega^{\prime})}\leq C\left(\|f_{1}-f_{2}\|_{C^{\alpha}(\Omega)}+\sup_{\Omega}|u_{1}-u_{2}|\right).
Proof.

We linearise the PDE (as in [11, Lemma 4.2]), obtaining L(u_{1}-u_{2})=f_{1}-f_{2} for a suitable linear operator whose coefficients and ellipticity constants are controlled by the estimate \|u_{i}\|_{C^{4}}\leq K. The lemma then follows from the classical Schauder theory [5, Theorem 6.2]. ∎

6.3. Proof of the C2,αC^{2,\alpha} estimate

As stated at the start of this section Theorem 2 follows from an interior C2C^{2} estimate. This is what we now prove.

Theorem 3.

Let gg be a generating function satisfying A3 and uC2(Ω)u\in C^{2}(\Omega) be a gg-convex solution of

det[D2uA(,u,Du)]=f in Ω.\det[D^{2}u-A(\cdot,u,Du)]=f\text{ in }\Omega.

If ff is Dini-continuous with 0<λfΛ<0<\lambda\leq f\leq\Lambda<\infty then for each ΩΩ\Omega^{\prime}\subset\subset\Omega we have an estimate supΩ|D2u|C(f,g,Ω,Ω)\sup_{\Omega^{\prime}}|D^{2}u|\leq C(f,g,\Omega,\Omega^{\prime}).

Proof.

Step 1. [Setup of approximating problems] At the outset we fix x0Ωx_{0}\in\Omega, where without loss of generality x0=0x_{0}=0. Consider

Sh={x;u(x)<g(x,y0,z0h)},S_{h}=\{x;u(x)<g(x,y_{0},z_{0}-h)\},

where g(,y0,z0)g(\cdot,y_{0},z_{0}) is the support at 0 and h>0h>0 is chosen small enough to ensure Lemma 4 applies. We normalize so that B1/nShB1B_{1/n}\subset S_{h}\subset B_{1} with B1B_{1} the minimum ellipsoid. Moreover, we assume hh is chosen small enough to ensure that after this normalization

(52) \int_{0}^{1}\frac{\omega(r)}{r}\,dr<\varepsilon,

for an ε\varepsilon to be chosen in the proof (recall ω\omega is from Definition 5). Note such a choice of hh is controlled by (27) and thus up to rescaling we assume h=1h=1.

We introduce a sequence of approximating problems. Define the domains

Uk={x;u(x)<g(x,y0,z01/4k)},U_{k}=\{x;u(x)<g(x,y_{0},z_{0}-1/4^{k})\},

and let fk=infUkff_{k}=\inf_{U_{k}}f.

Let ukC4(Uk)u_{k}\in C^{4}(U_{k}) be the solution (whose existence is guaranteed by the Perron method) of

(53) det[D2ukA(,uk,Duk)]=fk in Uk,\displaystyle\det[D^{2}u_{k}-A(\cdot,u_{k},Du_{k})]=f_{k}\text{ in }U_{k},
(54) uk=u on Uk.\displaystyle u_{k}=u\text{ on }\partial U_{k}.

In addition put

νk=supx,yUk|f(x)f(y)|.\displaystyle\nu_{k}=\sup_{x,y\in U_{k}}|f(x)-f(y)|.

Using (15) the section U_{k} is contained in a ball of radius C2^{-k} for C depending on the good shape constant of U_{k}. Thus when the good shape constant of U_{k} is controlled we have \nu_{k}\leq C\omega(2^{-k})=:C\omega_{k}, which we use at the conclusion of the proof. Lemmas 5 and 6 suggest the structure of our proof: it suffices to show the good shape constant of each U_{k} is controlled by a fixed constant independent of k, and this, in turn, follows from a uniform estimate on each |D^{2}u_{k}| inside a subsection. More precisely, we prove by induction that for each k=0,1,2,\dots we have

(55) |D2uk(x)|supV0τ0|D2u0|+1=:M,|D^{2}u_{k}(x)|\leq\sup_{V^{\tau_{0}}_{0}}|D^{2}u_{0}|+1=:M,

for any xVkτ0x\in V^{\tau_{0}}_{k} which is defined by

Vkτ={x;uk(x)<g(x,y0,z01τ4k)}.V^{\tau}_{k}=\{x;u_{k}(x)<g(x,y_{0},z_{0}-\frac{1-\tau}{4^{k}})\}.

By the uniform estimates in [18] (use the upper bound [18, Theorem 4] and the corresponding lower bound obtained as in [4, Theorem 6.2]) there is a choice of \tau_{0} sufficiently close to 0 such that

(56) x0\displaystyle x_{0} Vkτ0,\displaystyle\in V^{\tau_{0}}_{k},
(57) Uk+1\displaystyle U_{k+1} Vkτ0.\displaystyle\subset V^{\tau_{0}}_{k}.

Step 2. [Induction base case: k=0] It is clear that (55) holds for k=0. However we note here that M is controlled by the interior Pogorelov estimate, that is, in terms of \tau_{0},h and our initial normalizing transformation which is, in turn, controlled by (15).

Step 3. [Inductive step] Now we assume (55) up to some fixed kk. We rescale our solution and domain by introducing

u¯k(x¯):=4kuk(x¯2k)\displaystyle\overline{u}_{k}(\overline{x}):=4^{k}u_{k}\left(\frac{\overline{x}}{2^{k}}\right) u¯k+1(x¯):=4kuk+1(x¯2k),\displaystyle\overline{u}_{k+1}(\overline{x}):=4^{k}u_{k+1}\left(\frac{\overline{x}}{2^{k}}\right),

where x¯=2kx\overline{x}=2^{k}x. The function u¯k\overline{u}_{k} solves

det[D2u¯kA¯(,u¯k,Du¯k)]=fk in U¯k\displaystyle\det[D^{2}\overline{u}_{k}-\overline{A}(\cdot,\overline{u}_{k},D\overline{u}_{k})]=f_{k}\text{ in }\overline{U}_{k}
\overline{u}_{k}=4^{k}g(2^{-k}\cdot,y_{0},z_{0}-h/4^{k})\text{ on }\partial\overline{U}_{k},

for \overline{U}_{k}:=2^{k}U_{k} and \overline{A}(x,u,p)=A(2^{-k}x,4^{-k}u,2^{-k}p), and similarly for \overline{u}_{k+1}. Note this transformation does not change the magnitude of the second derivatives. Thus the inductive hypothesis (55) and Lemma 5 imply \overline{U}_{k} has a good shape constant depending only on M,f and the constant in the Pogorelov lemma. We claim that \overline{U}_{k+1} has a good shape constant depending on the same parameters and the constants in (15). To see this assume the minimum ellipsoids of \overline{U}_{k} and \overline{U}_{k+1} have axes R_{1}\geq\dots\geq R_{n} and r_{1}\geq\dots\geq r_{n} respectively. By (15) applied to the section \overline{U}_{k+1} we have

C4n(k+1)/2C|U¯k+1|r1rnr1n1rn.C4^{-n(k+1)/2}\leq C|\overline{U}_{k+1}|\leq r_{1}\dots r_{n}\leq r_{1}^{n-1}r_{n}.

Using this to compute an upper bound on 1/rn1/r_{n}, we obtain

(58) r1rnC4n(k+1)/2r1nC4n(k+1)/2R1n,\frac{r_{1}}{r_{n}}\leq C4^{n(k+1)/2}r_{1}^{n}\leq C4^{n(k+1)/2}R_{1}^{n},

where the final inequality is because \overline{U}_{k} is the larger section. Now, let c_{0} be the good shape constant of \overline{U}_{k}, that is R_{1}\leq c_{0}R_{n}. Using (15) again, this time applied to the section \overline{U}_{k}, we have

(59) C4^{-nk/2}\geq R_{1}\dots R_{n}\geq R_{1}R_{n}^{n-1}\geq c_{0}^{-(n-1)}R_{1}^{n}.

Combining (58) and (59) implies the claimed fact that when \overline{U}_{k} has good shape so does \overline{U}_{k+1}. (To be explicit: because we are proving (55) by induction, not a good shape estimate by induction, it does not matter that the good shape constant of \overline{U}_{k+1} is worse than that of \overline{U}_{k}.)
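The reason combining (58) and (59) gives a k-independent bound is that the factors 4^{n(k+1)/2} and 4^{-nk/2} cancel up to 4^{n/2}. A quick arithmetic check of this cancellation, with illustrative values n = 3, c₀ = 10 and all structural constants set to C = 1:

```python
# Combining (58) and (59): r1/rn <= C * 4**(n*(k+1)/2) * R1**n together with
# R1**n <= C * c0**(n-1) * 4**(-n*k/2) gives
#   r1/rn <= C**2 * c0**(n-1) * 4**(n/2),
# with all dependence on k cancelling.
def shape_bound(n, k, c0, C=1.0):
    return C * 4**(n * (k + 1) / 2) * (C * c0**(n - 1) * 4**(-n * k / 2))

n, c0 = 3, 10.0
bounds = [shape_bound(n, k, c0) for k in range(20)]
# the same bound for every k: here c0**2 * 4**1.5 = 800 (up to rounding)
assert all(abs(b - bounds[0]) < 1e-6 * bounds[0] for b in bounds)
```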

Noting that A¯C2AC2\|\overline{A}\|_{C^{2}}\leq\|A\|_{C^{2}} we obtain by Lemma 4 a C2C^{2} estimate depending on allowed quantities and MM in V¯kτ0/4\overline{V}^{\tau_{0}/4}_{k}. Then the Evans–Krylov interior estimates imply C2,αC^{2,\alpha} estimates in V¯kτ0/2\overline{V}^{\tau_{0}/2}_{k} and subsequently higher estimates by the elliptic theory. Similarly for u¯k+1\overline{u}_{k+1} in the corresponding sections V¯k+1τ0/4,V¯k+1τ0/2\overline{V}^{\tau_{0}/4}_{k+1},\overline{V}^{\tau_{0}/2}_{k+1} .

Linearising and using the maximum principle on small domains (see [16, Lemma A.3]) we obtain

|u¯ku¯|,|u¯k+1u¯|Cνk.|\overline{u}_{k}-\overline{u}|,|\overline{u}_{k+1}-\overline{u}|\leq C\nu_{k}.

Thus |u¯ku¯k+1|Cνk|\overline{u}_{k}-\overline{u}_{k+1}|\leq C\nu_{k} in V¯k+1τ0/2\overline{V}^{\tau_{0}/2}_{k+1}. Then, by Lemma 7, in Vk+1τ0V^{\tau_{0}}_{k+1} we have

|D2uk(x)D2uk+1(x)|Cνk.|D^{2}u_{k}(x)-D^{2}u_{k+1}(x)|\leq C\nu_{k}.

(Note we first obtain this for u¯k,u¯k+1\overline{u}_{k},\overline{u}_{k+1} then use D2u¯k=D2ukD^{2}\overline{u}_{k}=D^{2}u_{k}.) We’ve used that by (57) the estimates for u¯k\overline{u}_{k} in V¯kτ0\overline{V}^{\tau_{0}}_{k} hold on the entire smaller section U¯k+1\overline{U}_{k+1}. Since all we’ve used is the induction hypothesis we can conclude the same inequality for kk replaced by i=0,1,2,,ki=0,1,2,\dots,k. Moreover because these sections have a controlled good shape constant we have

(60) |D2ui(x)D2ui+1(x)|Cωi=Cω(2i),|D^{2}u_{i}(x)-D^{2}u_{i+1}(x)|\leq C\omega_{i}=C\omega(2^{-i}),

for i=0,1,2,,ki=0,1,2,\dots,k.

Now, using (60), the calculations are standard:

|D2uk+1(x)|\displaystyle|D^{2}u_{k+1}(x)| |D2u0(x)|+i=0k|D2ui(x)D2ui+1(x)|\displaystyle\leq|D^{2}u_{0}(x)|+\sum_{i=0}^{k}|D^{2}u_{i}(x)-D^{2}u_{i+1}(x)|
|D2u0(x)|+Ci=0k2iω(2i)2i\displaystyle\leq|D^{2}u_{0}(x)|+C\sum_{i=0}^{k}2^{-i}\frac{\omega(2^{-i})}{2^{-i}}
|D2u0(x)|+C2k1ω(r)r𝑑r.\displaystyle\leq|D^{2}u_{0}(x)|+C\int_{2^{-k}}^{1}\frac{\omega(r)}{r}\ dr.

Thus provided ε\varepsilon in (52) is taken sufficiently small we conclude by induction that (55) holds for all kk. By the rescaling used in the proof this implies a fixed good shape estimate for each UkU_{k}. Then the desired C2C^{2} estimate holds by Lemma 6. ∎
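The dyadic-sum-to-integral comparison used in the final display is the standard one: for a nondecreasing modulus ω, each block [2^{−(i+1)}, 2^{−i}] contributes at least ω(2^{−(i+1)}) log 2 to the Dini integral, so Σ_{i=1}^{k} ω(2^{−i}) ≤ (log 2)^{−1} ∫_{2^{−k}}^{1} ω(r)/r dr. A numerical check with the illustrative Dini modulus ω(r) = √r:

```python
import math

# For nondecreasing omega, each dyadic block [2^-(i+1), 2^-i] contributes
# at least omega(2^-(i+1)) * log 2 to the Dini integral, hence
#   sum_{i=1..k} omega(2^-i) <= (1/log 2) * int_{2^-k}^1 omega(r)/r dr.
omega = math.sqrt          # omega(r) = sqrt(r), an illustrative Dini modulus
k = 20
dyadic_sum = sum(omega(2.0**-i) for i in range(1, k + 1))
# int_{2^-k}^1 r^(-1/2) dr = 2*(1 - 2^(-k/2)), computed in closed form here
integral = 2.0 * (1.0 - 2.0**(-k / 2))
assert dyadic_sum <= integral / math.log(2)
```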

Appendix A Omitted calculations

A.1. Proof that Akl,pmγ˙kγ˙l=0A_{kl,p_{m}}\dot{\gamma}_{k}\dot{\gamma}_{l}=0

Let some initial coordinates, denoted xx, be given and define x¯=gygz(x,y0,zh)\overline{x}=\frac{g_{y}}{g_{z}}(x,y_{0},z_{h}). For notation put gh()=g(,y0,zh)g_{h}(\cdot)=g(\cdot,y_{0},z_{h}). We compute that for ξ,η\xi,\eta in 𝐑n\mathbf{R}^{n}

(61) \frac{\partial^{2}x_{k}}{\partial\overline{x}_{i}\partial\overline{x}_{j}}\xi_{i}\xi_{j}\eta_{k}=g_{z}^{2}D_{p_{k}}A_{ij}(x,g_{h}(x),Dg_{h}(x))(E^{-1}\xi)_{i}(E^{-1}\xi)_{j}\eta_{k}
+gzgr,z[EirEjk+EikEjr]ξiξjηk\displaystyle\quad+g_{z}g_{r,z}[E^{ir}E^{jk}+E^{ik}E^{jr}]\xi_{i}\xi_{j}\eta_{k}

Thus if our initial coordinates are the x¯\overline{x} coordinates and ξη=0\xi\cdot\eta=0

(62) DpkAij(x,gh(x),Dgh(x))ξiξjηk=0.D_{p_{k}}A_{ij}(x,g_{h}(x),Dg_{h}(x))\xi_{i}\xi_{j}\eta_{k}=0.

We recall from [19, Eq. 2.3] that

(63) Dpkgij=Er,k[gij,rg,rgij,zgz1].D_{p_{k}}g_{ij}=E^{r,k}[g_{ij,r}-g_{,r}g_{ij,z}g_{z}^{-1}].

From the definition of x¯\overline{x} we compute

xkx¯j=gzEjk,\displaystyle\frac{\partial x_{k}}{\partial\overline{x}_{j}}=g_{z}E^{jk},
(64) x¯i=gzEirxr.\displaystyle\frac{\partial}{\partial\overline{x}_{i}}=g_{z}E^{ir}\frac{\partial}{\partial x_{r}}.

Applying (64) twice and using the identity for differentiating an inverse matrix gives

(65) 1gz2xkx¯ix¯j=EirEjkgr,zgzEirEja(DrEab)Ebk.\frac{1}{g_{z}}\frac{\partial^{2}x_{k}}{\partial\overline{x}_{i}\partial\overline{x}_{j}}=E^{ir}E^{jk}g_{r,z}-g_{z}E^{ir}E^{ja}(D_{r}E_{ab})E^{bk}.

Now, by direct calculation

DrEab=ga,zgzErb+gar,bgar,zg,bgz1,D_{r}E_{ab}=-\frac{g_{a,z}}{g_{z}}E_{rb}+g_{ar,b}-g_{ar,z}g_{,b}g_{z}^{-1},

which when substituted into (65) implies

1gz2xkx¯ix¯j\displaystyle\frac{1}{g_{z}}\frac{\partial^{2}x_{k}}{\partial\overline{x}_{i}\partial\overline{x}_{j}} =EirEjkgr,z+EikEjrgr,z\displaystyle=E^{ir}E^{jk}g_{r,z}+E^{ik}E^{jr}g_{r,z}
+gzEjrEiaEbk[gar,bgar,zg,bgz1].\displaystyle\quad\quad+g_{z}E^{jr}E^{ia}E^{bk}[g_{ar,b}-g_{ar,z}g_{,b}g_{z}^{-1}].

Combined with (63) this implies (61). We note if our initial coordinates are the x¯\overline{x} coordinates then

Eij(x,gh(x),Dgh(x))=gij(x,y0,zh)gi,zgjgz(x,y0,zh)=gzx¯jx¯i=δij.\displaystyle E_{ij}(x,g_{h}(x),Dg_{h}(x))=g_{ij}(x,y_{0},z_{h})-\frac{g_{i,z}g_{j}}{g_{z}}(x,y_{0},z_{h})=g_{z}\frac{\partial\overline{x}_{j}}{\partial\overline{x}_{i}}=\delta_{ij}.

This implies (62).

A.2. Quantitative convexity via duality

Here we prove the first inequality in (27) assuming the second inequality. We follow the duality argument in [11] simplified by the transformation in [18]. Indeed by [18, Lemma 3] it suffices to prove the result at the origin for the generating function

(66) g(x,y,z)=xyz+ai,jkxiyjykf(x,y,z)z,\displaystyle g(x,y,z)=x\cdot y-z+a_{i,jk}x_{i}y_{j}y_{k}-f(x,y,z)z,
f(x,y,z)=bij(1)xixj+bij(2)xiyj+bij(3)yiyj+f(2)(x,y,z)z,\displaystyle f(x,y,z)=b^{(1)}_{ij}x_{i}x_{j}+b^{(2)}_{ij}x_{i}y_{j}+b^{(3)}_{ij}y_{i}y_{j}+f^{(2)}(x,y,z)z,

where a_{i,jk},b^{(k)}_{ij},f^{(2)} are C^{1} functions. Moreover we assume u is a strictly g-convex function satisfying u\geq 0 and u(0)=0, Du(0)=0. We need to prove u(x)\geq C|x|^{1+\gamma}. Throughout the proof we use the notation O(x^{p}) to denote any function h(x,y,z) satisfying an estimate |h(x,y,z)|\leq C|x|^{p} on a neighbourhood of the origin for C depending only on \|g\|_{C^{4}}. Similarly for the notation O(y^{p}).

The gg^{*}-transform of uu is v:Yu(Ω)𝐑v:Yu(\Omega)\rightarrow\mathbf{R} defined by

v(y)=g(x,y,u(x)),v(y)=g^{*}(x,y,u(x)),

for xx satisfying y=Yu(x)y=Yu(x) and gg^{*} the dual generating function (see [18]). The function vv is gg^{*}-convex with gg^{*}-support g(0,,0)g^{*}(0,\cdot,0) at 0 and the C1,αC^{1,\alpha} result, which holds by duality, implies |v(y)|C¯|y|1+α|v(y)|\leq\overline{C}|y|^{1+\alpha}. Thus, by duality,

(67) u(x)=supyg(x,y,v(y))supyg(x,y,C¯|y|1+α).\displaystyle u(x)=\sup_{y}g(x,y,v(y))\geq\sup_{y}g(x,y,\overline{C}|y|^{1+\alpha}).

We also note by the C1,αC^{1,\alpha} estimate for uu, |Y(x,u(x),Du(x))|C|x|α|Y(x,u(x),Du(x))|\leq C|x|^{\alpha}, so we can assume throughout that the neighbourhood over which the supremum is taken, and subsequently |y||y|, is sufficiently small. The supremum is obtained for yy satisfying

(68) gy(x,y,C¯|y|1+α)+C¯(1+α)gz(x,y,C¯|y|1+α)|y|α1y=0.g_{y}(x,y,\overline{C}|y|^{1+\alpha})+\overline{C}(1+\alpha)g_{z}(x,y,\overline{C}|y|^{1+\alpha})|y|^{\alpha-1}y=0.

Now using (66)

u(x)\displaystyle u(x) g(x,y,C¯|y|1+α)\displaystyle\geq g(x,y,\overline{C}|y|^{1+\alpha})
=x\cdot y-\overline{C}|y|^{1+\alpha}+O(y^{2})-\overline{C}f(x,y,\overline{C}|y|^{1+\alpha})|y|^{\alpha+1}
(69) xyC¯|y|1+α+O(y2)+[O(x)+O(y)]|y|α+1.\displaystyle\geq x\cdot y-\overline{C}|y|^{1+\alpha}+O(y^{2})+[O(x)+O(y)]|y|^{\alpha+1}.

Now we use (68) to estimate xyx\cdot y from below. First note

(70) gy(x,y,C¯|y|1+α)=x+O(x)O(y)+[O(x)+O(y)]|y|1+α.g_{y}(x,y,\overline{C}|y|^{1+\alpha})=x+O(x)O(y)+[O(x)+O(y)]|y|^{1+\alpha}.

Thus substituting into (68) and taking an inner product with yy we obtain (near the origin in x,yx,y)

xy=C¯(1+α)|gz(x,y,C¯|y|1+α)||y|α+1+O(y2)+[O(x)+O(y)]|y|α+1,\displaystyle x\cdot y=\overline{C}(1+\alpha)|g_{z}(x,y,\overline{C}|y|^{1+\alpha})||y|^{\alpha+1}+O(y^{2})+[O(x)+O(y)]|y|^{\alpha+1},

and subsequently returning to (69) implies

u(x)\displaystyle u(x) C¯|y|α+1[(1+α)|gz(x,y,C¯|y|1+α)|1]\displaystyle\geq\overline{C}|y|^{\alpha+1}\left[(1+\alpha)|g_{z}(x,y,\overline{C}|y|^{1+\alpha})|-1\right]
+O(y2)+[O(x)+O(y)]|y|α+1.\displaystyle\quad+O(y^{2})+[O(x)+O(y)]|y|^{\alpha+1}.

Since |gz||g_{z}| is as close to 11 as desired on a sufficiently small neighbourhood of the origin we have

u(x)C(α)|y|α+1.u(x)\geq C(\alpha)|y|^{\alpha+1}.

However also from (68) and (70)

|x|\displaystyle|x| =C(1+α)|gz||y|α+O(x)O(y)+O(yα+1)\displaystyle=C(1+\alpha)|g_{z}||y|^{\alpha}+O(x)O(y)+O(y^{\alpha+1})
C|y|α.\displaystyle\leq C|y|^{\alpha}.

Combining this with u(x)\geq C(\alpha)|y|^{\alpha+1} yields u(x)\geq C|x|^{(1+\alpha)/\alpha}, an estimate of the desired form u(x)\geq C|x|^{1+\gamma}. This completes the proof.

References

  • [1] L. A. Caffarelli. A localization property of viscosity solutions to the Monge-Ampère equation and their strict convexity. Ann. of Math. (2), 131(1):129–134, 1990.
  • [2] Luis A. Caffarelli. Interior W2,pW^{2,p} estimates for solutions of the Monge-Ampère equation. Ann. of Math. (2), 131(1):135–150, 1990.
  • [3] Alessio Figalli. The Monge-Ampère equation and its applications. Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich, 2017.
  • [4] Alessio Figalli, Young-Heon Kim, and Robert J. McCann. Hölder continuity and injectivity of optimal maps. Arch. Ration. Mech. Anal., 209(3):747–795, 2013.
  • [5] David Gilbarg and Neil S. Trudinger. Elliptic partial differential equations of second order. Classics in Mathematics. Springer-Verlag, Berlin, 2001.
  • [6] Nestor Guillen. A primer on generated Jacobian equations: geometry, optics, economics. Notices Amer. Math. Soc., 66(9):1401–1411, 2019.
  • [7] Nestor Guillen and Jun Kitagawa. Pointwise estimates and regularity in geometric optics and other generated Jacobian equations. Comm. Pure Appl. Math., 70(6):1146–1220, 2017.
  • [8] Seonghyeon Jeong. Local Hölder regularity of solutions to generated Jacobian equations. Pure Appl. Anal., 3(1):163–188, 2021.
  • [9] Yash Jhaveri. Partial regularity of solutions to the second boundary value problem for generated Jacobian equations. Methods Appl. Anal., 24(4):445–475, 2017.
  • [10] Jiakun Liu. Hölder regularity of optimal mappings in optimal transportation. Calc. Var. Partial Differential Equations, 34(4):435–451, 2009.
  • [11] Jiakun Liu, Neil S. Trudinger, and Xu-Jia Wang. Interior C2,αC^{2,\alpha} regularity for potential functions in optimal transportation. Comm. Partial Differential Equations, 35(1):165–184, 2010.
  • [12] Jiakun Liu and Xu-Jia Wang. Interior a priori estimates for the Monge-Ampère equation. In Surveys in differential geometry 2014. Regularity and evolution of nonlinear equations, volume 19 of Surv. Differ. Geom., pages 151–177. Int. Press, Somerville, MA, 2015.
  • [13] Grégoire Loeper. On the regularity of solutions of optimal transportation problems. Acta Math., 202(2):241–283, 2009.
  • [14] Xi-Nan Ma, Neil S. Trudinger, and Xu-Jia Wang. Regularity of potential functions of the optimal transportation problem. Arch. Ration. Mech. Anal., 177(2):151–183, 2005.
  • [15] Robert J. McCann and Kelvin Shuangjian Zhang. On concavity of the monopolist’s problem facing consumers with nonlinear price preferences. Comm. Pure Appl. Math., 72(7):1386–1423, 2019.
  • [16] Cale Rankin. Regularity and uniqueness results for generated Jacobian equations. PhD thesis, Australian National University, 2021.
  • [17] Cale Rankin. Strict convexity and C1C^{1} regularity of solutions to generated Jacobian equations in dimension two. Calc. Var. Partial Differential Equations, 60(6):Paper No. 221, 14, 2021.
  • [18] Cale Rankin. Strict gg-convexity for generated Jacobian equations with applications to global regularity. arXiv:2111.00448, 2021.
  • [19] Neil S. Trudinger. On the local theory of prescribed Jacobian equations. Discrete Contin. Dyn. Syst., 34(4):1663–1681, 2014.
  • [20] Neil S. Trudinger. On the local theory of prescribed Jacobian equations revisited. Math. Eng., 3(6):Paper No. 048, 17, 2021.
  • [21] Cédric Villani. Optimal transport, volume 338 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag, Berlin, 2009. Old and new.
  • [22] Xu-Jia Wang. Schauder estimates for elliptic and parabolic equations. Chinese Ann. Math. Ser. B, 27(6):637–642, 2006.