
Asymptotic expansion of the hard-to-soft edge transition

Luming Yao and Lun Zhang

School of Mathematical Sciences, Fudan University, Shanghai 200433, China. E-mail: {lumingyao, lunzhang}@fudan.edu.cn.
Abstract

By showing that the symmetrically transformed Bessel kernel admits a full asymptotic expansion in the large parameter, we establish an expansion for the hard-to-soft edge transition. This resolves a conjecture recently proposed by Bornemann.

1 Introduction and statement of results

This paper is concerned with a universal phenomenon arising in random matrix theory, namely, the hard-to-soft edge transition [8]. As a concrete example, let $X_1$ and $X_2$ be two $n\times(n+\nu)$, $\nu\geq 0$, random matrices whose entries are independent standard normal random variables. The $n\times n$ complex Wishart matrix $X$, or the Laguerre unitary ensemble (LUE), which plays an important role in statistics and signal processing (cf. [27, 30] and the references therein), is defined by

X=(X_{1}+\mathrm{i}X_{2})(X_{1}+\mathrm{i}X_{2})^{\ast},

where the superscript $\ast$ stands for the conjugate transpose. As $n\to\infty$ with $\nu$ fixed, the smallest eigenvalue of $X$ accumulates near the hard edge $0$. After proper scaling, the limiting process is a determinantal point process characterized by the Bessel kernel [15, 18]

K_{\nu}^{\mathrm{Bes}}(x,y):=\frac{J_{\nu}(\sqrt{x})\sqrt{y}J_{\nu}^{\prime}(\sqrt{y})-J_{\nu}(\sqrt{y})\sqrt{x}J_{\nu}^{\prime}(\sqrt{x})}{2(x-y)},\qquad x,y>0, (1.1)

where $J_\nu$ is the Bessel function of the first kind of order $\nu$ (cf. [24]). If the parameter $\nu$ grows simultaneously with $n$ in such a way that $\nu/n$ approaches a positive constant, it turns out that the smallest eigenvalue is pushed away from the origin, creating a soft edge. The fluctuation around the soft edge is then governed by the Airy point process [15, 18], which is determined by the Airy kernel

K^{\mathrm{Ai}}(x,y)=\frac{\mathrm{Ai}(x)\mathrm{Ai}^{\prime}(y)-\mathrm{Ai}^{\prime}(x)\mathrm{Ai}(y)}{x-y},\qquad x,y\in\mathbb{R}, (1.2)

where $\mathrm{Ai}$ is the standard Airy function. One encounters the same limiting process by considering the scaled cumulative distribution of the largest eigenvalue of large random Hermitian matrices with complex Gaussian entries; the limiting law is also known as the Tracy-Widom distribution [28].

Besides the above interpretation of the hard-to-soft edge transition, the present work is also highly motivated by its connection with the distribution of the length of longest increasing subsequences. Let $S_n$ be the set of all permutations of $\{1,2,\ldots,n\}$. Given $\sigma\in S_n$, we denote by $L_n(\sigma)$ the length of the longest increasing subsequence of $\sigma$, that is, the maximum of all $k$ for which there exist indices $1\leq i_1<i_2<\cdots<i_k\leq n$ with $\sigma(i_1)<\sigma(i_2)<\cdots<\sigma(i_k)$. Equipping $S_n$ with the uniform measure, the question of the distribution of the discrete random variable $L_n(\sigma)$ for large $n$ was posed by Ulam in the early 1960s [29]. After the efforts of many people (cf. [1, 26] and the references therein), Baik, Deift and Johansson finally answered this question in a celebrated work [2] by showing

\lim_{n\to\infty}\mathbb{P}\left(\frac{L_{n}-2\sqrt{n}}{n^{1/6}}\leq t\right)=F(t), (1.3)

where

F(t):=\det(I-K^{\mathrm{Ai}})\big|_{L^{2}(t,\infty)} (1.4)

is the aforementioned Tracy-Widom distribution, with $K^{\mathrm{Ai}}$ being the Airy kernel in (1.2).

To establish (1.3), a key ingredient of the proof is the exponential generating function of $L_n$ defined by

P(r;l)=e^{-r}\sum_{n=0}^{\infty}\mathbb{P}(L_{n}\leq l)\frac{r^{n}}{n!},\qquad r>0,

which is known as Hammersley's Poissonization of the random variable $L_n$. The quantity itself can be interpreted as the cumulative distribution function of $L(r)$, the length of the longest up/right path from $(0,0)$ to $(1,1)$ with nodes chosen randomly according to a Poisson process of intensity $r$; cf. [3, Chapter 2]. By representing $P(r;l)$ as a Toeplitz determinant, it was proved in [2] that

\lim_{r\to\infty}\mathbb{P}\left(\frac{L(r)-2\sqrt{r}}{r^{1/6}}\leq t\right)=F(t). (1.5)

This, together with Johansson's de-Poissonization lemma [20], leads to (1.3).
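As a side remark on the objects above, $L_n(\sigma)$ can be computed in $O(n\log n)$ time by patience sorting, so the limit law (1.3) is easy to probe numerically. The following sketch is ours, not from [2]; the helper name `lis_length` is hypothetical.

```python
import bisect
import math
import random

def lis_length(seq):
    """Length of the longest strictly increasing subsequence of seq,
    via patience sorting: tails[k] is the smallest possible tail of an
    increasing subsequence of length k + 1."""
    tails = []
    for v in seq:
        k = bisect.bisect_left(tails, v)
        if k == len(tails):
            tails.append(v)
        else:
            tails[k] = v
    return len(tails)

# Monte Carlo check of L_n ~ 2 sqrt(n): the ratio sits slightly below 1
# for finite n, consistent with the negative mean of F in (1.3).
random.seed(0)
n, trials = 4000, 200
total = 0
for _ in range(trials):
    sigma = list(range(n))
    random.shuffle(sigma)
    total += lis_length(sigma)
ratio = total / trials / (2.0 * math.sqrt(n))
print(ratio)
```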

Alternatively, one has (see [8, 16])

P(r;l)=E_{2}^{\mathrm{hard}}(4r;l), (1.6)

where

E_{2}^{\mathrm{hard}}(s;\nu):=\det(I-K_{\nu}^{\mathrm{Bes}})\big|_{L^{2}(0,s)} (1.7)

with $K_\nu^{\mathrm{Bes}}$ being the Bessel kernel defined in (1.1), is the scaled hard-edge gap probability of the LUE over $(0,s)$. Thus, by showing the hard-to-soft edge transition

\lim_{\nu\to\infty}E_{2}^{\mathrm{hard}}\left(\left(\nu-t(\nu/2)^{1/3}\right)^{2};\nu\right)=F(t), (1.8)

Borodin and Forrester recovered (1.5) in [8].
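The transition (1.8) can be checked in floating point by Nyström discretization of both Fredholm determinants on Gauss-Legendre nodes, in the spirit of Bornemann's numerical method for Fredholm determinants. The sketch below is ours, not from the paper: it substitutes $x=u^2$ on the hard-edge side (which leaves the determinant invariant), uses the standard confluent diagonal values $K_\nu^{\mathrm{Bes}}(x,x)=\frac14\big[J_\nu'(\sqrt x)^2+(1-\nu^2/x)J_\nu(\sqrt x)^2\big]$ and $K^{\mathrm{Ai}}(x,x)=\mathrm{Ai}'(x)^2-x\,\mathrm{Ai}(x)^2$ (not stated in the paper), and the node counts and Airy-side truncation point are ad hoc choices.

```python
import numpy as np
from scipy.special import jv, jvp, airy

def fredholm_det(kernel, a, b, m):
    """Nystrom approximation of det(I - K) on L^2(a, b) using
    m-point Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(m)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)
    w = 0.5 * (b - a) * w
    sw = np.sqrt(w)
    K = kernel(x[:, None], x[None, :])
    return np.linalg.det(np.eye(m) - sw[:, None] * K * sw[None, :])

def bessel_kernel(nu):
    """The Bessel kernel (1.1); on the diagonal, the confluent limit."""
    def K(x, y):
        sx, sy = np.sqrt(x), np.sqrt(y)
        with np.errstate(divide="ignore", invalid="ignore"):
            off = (jv(nu, sx) * sy * jvp(nu, sy)
                   - jv(nu, sy) * sx * jvp(nu, sx)) / (2.0 * (x - y))
        diag = 0.25 * (jvp(nu, sx) ** 2 + (1.0 - nu ** 2 / x) * jv(nu, sx) ** 2)
        return np.where(np.isclose(x, y), diag, off)
    return K

def airy_kernel(x, y):
    """The Airy kernel (1.2); diagonal value Ai'(x)^2 - x Ai(x)^2."""
    ax, axp, _, _ = airy(x)
    ay, ayp, _, _ = airy(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        off = (ax * ayp - axp * ay) / (x - y)
    return np.where(np.isclose(x, y), axp ** 2 - x * ax ** 2, off)

def e2_hard(nu, s, m=200):
    """E_2^hard(s; nu) of (1.7), after the substitution x = u^2,
    which leaves the Fredholm determinant invariant."""
    K = bessel_kernel(nu)
    Kt = lambda u, v: 2.0 * np.sqrt(u * v) * K(u ** 2, v ** 2)
    return fredholm_det(Kt, 0.0, np.sqrt(s), m)

nu, t = 50.0, -1.0
s = (nu - t * (nu / 2.0) ** (1.0 / 3.0)) ** 2   # the argument in (1.8)
d_hard = e2_hard(nu, s)
d_airy = fredholm_det(airy_kernel, t, 14.0, 60)  # F(t), tail truncated
print(d_hard, d_airy)
```

Already at $\nu=50$ the two determinants agree to about $h_\nu\approx 0.06$ times the size of the first correction term, as (1.16) below predicts.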

An interesting question now is to refine (1.3) and (1.5) by establishing the first few finite-size correction terms, or even a full asymptotic expansion. Such results are also known as Edgeworth expansions in the literature; we refer in particular to [7, 10, 13, 14, 19, 25] for relevant results on Laguerre ensembles. In the context of the distribution of the length of longest increasing subsequences, the relationship (1.6) plays an important role in a recent work of Bornemann [5], among various studies toward this aim [4, 6, 17]. Instead of working with the Fredholm determinant directly, the idea in [5] is to establish an expansion of the Bessel kernel around the Airy kernel which can be lifted to trace class operators. It is the aim of this paper to resolve some conjectures posed therein.

To proceed, we set

h_{\nu}:=2^{-\frac{1}{3}}\nu^{-\frac{2}{3}}, (1.9)

and define, as in [5], the symmetrically transformed Bessel kernel

\hat{K}_{\nu}^{\mathrm{Bes}}(x,y):=\sqrt{\phi_{\nu}^{\prime}(x)\phi_{\nu}^{\prime}(y)}\,K_{\nu}^{\mathrm{Bes}}(\phi_{\nu}(x),\phi_{\nu}(y)), (1.10)

where

\phi_{\nu}(t):=\nu^{2}(1-h_{\nu}t)^{2}. (1.11)

Our main result is stated as follows.

Theorem 1.1.

With $\hat{K}_\nu^{\mathrm{Bes}}(x,y)$ defined in (1.10), we have, for any $\mathfrak{m}\in\mathbb{N}$,

\hat{K}_{\nu}^{\mathrm{Bes}}(x,y)=K^{\mathrm{Ai}}(x,y)+\sum_{j=1}^{\mathfrak{m}}K_{j}(x,y)h_{\nu}^{j}+h_{\nu}^{\mathfrak{m}+1}\cdot\mathcal{O}\left(e^{-(x+y)}\right),\qquad h_{\nu}\to 0^{+}, (1.12)

uniformly valid for $t_0\leq x,y<h_\nu^{-1}$ with $t_0$ being any fixed real number. Preserving uniformity, the expansion can be repeatedly differentiated with respect to the variables $x$ and $y$. Here, $K^{\mathrm{Ai}}$ is the Airy kernel given in (1.2) and

K_{j}(x,y)=\sum_{\kappa,\lambda\in\{0,1\}}p_{j,\kappa\lambda}(x,y)\mathrm{Ai}^{(\kappa)}(x)\mathrm{Ai}^{(\lambda)}(y) (1.13)

with $p_{j,\kappa\lambda}(x,y)$ being polynomials in $x$ and $y$. Moreover, we have

K_{1}(x,y)=\frac{1}{10}\left(-3(x^{2}+xy+y^{2})\mathrm{Ai}(x)\mathrm{Ai}(y)+2(\mathrm{Ai}(x)\mathrm{Ai}^{\prime}(y)+\mathrm{Ai}^{\prime}(x)\mathrm{Ai}(y))+3(x+y)\mathrm{Ai}^{\prime}(x)\mathrm{Ai}^{\prime}(y)\right), (1.14)

and

K_{2}(x,y)=\frac{1}{1400}\left((56-235(x^{2}+y^{2})-319xy(x+y))\mathrm{Ai}(x)\mathrm{Ai}(y)\right.
\quad+(63(x^{4}+x^{3}y-x^{2}y^{2}-xy^{3}-y^{4})-55x+239y)\mathrm{Ai}(x)\mathrm{Ai}^{\prime}(y)
\quad+(63(y^{4}+xy^{3}-x^{2}y^{2}-x^{3}y-x^{4})-55y+239x)\mathrm{Ai}^{\prime}(x)\mathrm{Ai}(y)
\quad\left.+(340(x^{2}+y^{2})+256xy)\mathrm{Ai}^{\prime}(x)\mathrm{Ai}^{\prime}(y)\right). (1.15)
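Theorem 1.1 can be tested directly in floating point: for moderate $\nu$, the difference quotient $(\hat K_\nu^{\mathrm{Bes}}-K^{\mathrm{Ai}})/h_\nu$ should already be close to $K_1$, with an $\mathcal{O}(h_\nu)$ discrepancy coming from $K_2$. A minimal sketch (ours; function names hypothetical):

```python
import numpy as np
from scipy.special import jv, jvp, airy

def k_bessel(nu, x, y):
    """The Bessel kernel (1.1) for x != y."""
    sx, sy = np.sqrt(x), np.sqrt(y)
    return (jv(nu, sx) * sy * jvp(nu, sy)
            - jv(nu, sy) * sx * jvp(nu, sx)) / (2.0 * (x - y))

def k_hat(nu, x, y):
    """Transformed kernel (1.10), with h_nu and phi_nu from (1.9), (1.11).
    Note phi_nu' < 0, but the product under the square root is positive."""
    h = 2.0 ** (-1.0 / 3.0) * nu ** (-2.0 / 3.0)
    phi = lambda t: nu ** 2 * (1.0 - h * t) ** 2
    dphi = lambda t: -2.0 * nu ** 2 * h * (1.0 - h * t)
    return np.sqrt(dphi(x) * dphi(y)) * k_bessel(nu, phi(x), phi(y))

def k_airy(x, y):
    """The Airy kernel (1.2) for x != y."""
    ax, axp, _, _ = airy(x)
    ay, ayp, _, _ = airy(y)
    return (ax * ayp - axp * ay) / (x - y)

def k1(x, y):
    """The first correction term (1.14)."""
    ax, axp, _, _ = airy(x)
    ay, ayp, _, _ = airy(y)
    return (-3.0 * (x ** 2 + x * y + y ** 2) * ax * ay
            + 2.0 * (ax * ayp + axp * ay)
            + 3.0 * (x + y) * axp * ayp) / 10.0

x, y = 0.5, 0.7
for nu in (100.0, 400.0):
    h = 2.0 ** (-1.0 / 3.0) * nu ** (-2.0 / 3.0)
    print(nu, (k_hat(nu, x, y) - k_airy(x, y)) / h, k1(x, y))
```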

Based on a uniform version of the transient asymptotic expansion of Bessel functions of large order [23], the above theorem is stated in [5] under the condition that $0\leq\mathfrak{m}\leq\mathfrak{m}_*=100$, where the upper bound $\mathfrak{m}_*$ is obtained through a numerical inspection. It is conjectured therein that (1.12) is valid without such a restriction; Theorem 1.1 thus gives an affirmative answer to this conjecture.

As long as the Bessel kernel admits an expansion of the form (1.12), it is generally believed that one can lift the expansion to the associated Fredholm determinants. By carefully estimating trace norms in terms of kernel bounds, this is rigorously established in [5, Theorem 2.1] for perturbed Airy kernel determinants, which allows us to obtain the following hard-to-soft edge transition expansion with the aid of Theorem 1.1.

Corollary 1.2.

With $E_2^{\mathrm{hard}}$ defined in (1.7), we have, for any $\mathfrak{m}\in\mathbb{N}$,

E_{2}^{\mathrm{hard}}(\phi_{\nu}(t);\nu)=F(t)+\sum_{j=1}^{\mathfrak{m}}F_{j}(t)h_{\nu}^{j}+h_{\nu}^{\mathfrak{m}+1}\cdot\mathcal{O}\left(e^{-3t/2}\right),\qquad h_{\nu}\to 0^{+}, (1.16)

uniformly valid for $t_0\leq t<h_\nu^{-1}$ with $t_0$ being any fixed real number. Preserving uniformity, the expansion can be repeatedly differentiated with respect to the variable $t$. Here, $F$ denotes the Tracy-Widom distribution (1.4) and the $F_j$ are certain smooth functions.

Again, the above result is stated in [5] under a restriction on the number of terms in the sum, but with explicit expressions for $F_1$ and $F_2$ in terms of the derivatives of $F$. The expansion (1.16) serves as a preparatory step in establishing the expansion of the limit law (1.3) in [5]. Finally, we also refer to [9] for exponential moments, central limit theorems and rigidity of the hard-to-soft edge transition.

The rest of this paper is devoted to the proof of Theorem 1.1. The difficulty in using the transient asymptotic expansion of the Bessel functions of large order to prove Theorem 1.1 lies in checking the divisibility of a certain sequence of polynomials. Indeed, it was commented in [5] that one probably needs some hidden symmetry of the coefficients in the expansion. The approach we adopt here, however, is based on a Riemann-Hilbert (RH) characterization of the Bessel kernel, as described in Section 2. By performing a Deift-Zhou nonlinear steepest descent analysis [12] of the associated RH problem in Section 3, the initial RH problem is transformed into a small norm problem, for which one can find a uniform estimate. The proof of Theorem 1.1 is an outcome of our asymptotic analysis, and is presented in Section 4.

Notations

Throughout this paper, the following notations are frequently used.

  • If $A$ is a matrix, then $(A)_{ij}$ stands for its $(i,j)$-th entry. We use $I$ to denote the $2\times 2$ identity matrix.

  • As usual, the three Pauli matrices $\{\sigma_j\}_{j=1}^{3}$ are defined by

    \sigma_{1}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\qquad\sigma_{2}=\begin{pmatrix}0&-\mathrm{i}\\ \mathrm{i}&0\end{pmatrix},\qquad\sigma_{3}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}. (1.17)

2 An RH characterization of the Bessel kernel

Our starting point is the following Bessel parametrix [21], which in particular characterizes the Bessel kernel $K_\nu^{\mathrm{Bes}}(x,y)$ in (1.1).

For $z\in\mathbb{C}\setminus[0,+\infty)$ and $\nu>-1$, we set

\Psi_{\nu}(z)=\sqrt{\pi}e^{-\frac{\pi\mathrm{i}}{4}}\begin{pmatrix}I_{\nu}\left((-z)^{\frac{1}{2}}\right)&-\frac{\mathrm{i}}{\pi}K_{\nu}\left((-z)^{\frac{1}{2}}\right)\\ (-z)^{\frac{1}{2}}I_{\nu}^{\prime}\left((-z)^{\frac{1}{2}}\right)&-\frac{\mathrm{i}}{\pi}(-z)^{\frac{1}{2}}K^{\prime}_{\nu}\left((-z)^{\frac{1}{2}}\right)\end{pmatrix}, (2.1)

where $I_\nu$ and $K_\nu$ denote the modified Bessel functions of order $\nu$ (see [24]), and define

\Psi(z;\nu)=\Psi_{\nu}(z)\begin{cases}\begin{pmatrix}1&0\\ -e^{-\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad\arg z\in(0,\frac{\pi}{3}),\\ I,&\qquad\arg z\in(\frac{\pi}{3},\frac{5\pi}{3}),\\ \begin{pmatrix}1&0\\ e^{\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad\arg z\in(\frac{5\pi}{3},2\pi).\end{cases} (2.2)

Then $\Psi(z):=\Psi(z;\nu)$ satisfies the following RH problem.

RH problem 2.1.
  • (a)

    $\Psi(z)$ is defined and analytic for $z\in\mathbb{C}\setminus\left\{\cup_{j=1}^{3}\Gamma_{j}\cup\{0\}\right\}$, where

    \Gamma_{1}:=e^{\frac{\pi\mathrm{i}}{3}}(0,+\infty),\qquad\Gamma_{2}:=(0,+\infty),\qquad\Gamma_{3}:=e^{-\frac{\pi\mathrm{i}}{3}}(0,+\infty); (2.3)

    see Figure 1 for an illustration.

  • (b)

    For $z\in\Gamma_j$, $j=1,2,3$, the limiting values of $\Psi$ exist and satisfy the jump condition

    \Psi_{+}(z)=\Psi_{-}(z)\begin{cases}\begin{pmatrix}1&0\\ e^{-\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in\Gamma_{1},\\ \begin{pmatrix}0&1\\ -1&0\end{pmatrix},&\qquad z\in\Gamma_{2},\\ \begin{pmatrix}1&0\\ e^{\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in\Gamma_{3}.\end{cases} (2.4)
  • (c)

    As $z\to\infty$, we have

    \Psi(z)=\begin{pmatrix}1&0\\ -\frac{4\nu^{2}+3}{8}&1\end{pmatrix}\left(I+\mathcal{O}(z^{-1})\right)(-z)^{-\frac{1}{4}\sigma_{3}}\frac{1}{\sqrt{2}}\begin{pmatrix}1&-1\\ 1&1\end{pmatrix}e^{((-z)^{1/2}-\frac{\pi\mathrm{i}}{4})\sigma_{3}}, (2.5)

    where $\sigma_3$ is defined in (1.17), and the branch cuts of $(-z)^{\pm\frac{1}{4}}$ and $(-z)^{\pm\frac{1}{2}}$ are taken along $[0,+\infty)$.

  • (d)

    As $z\to 0$, we have, for $\nu\notin\mathbb{Z}$,

    \Psi(z)=\widehat{\Psi}(z)(-z)^{\frac{\nu}{2}\sigma_{3}}\begin{pmatrix}1&\frac{\mathrm{i}}{2\sin\pi\nu}\\ 0&1\end{pmatrix}\begin{cases}\begin{pmatrix}1&0\\ -e^{-\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad\arg z\in(0,\frac{\pi}{3}),\\ I,&\qquad\arg z\in(\frac{\pi}{3},\frac{5\pi}{3}),\\ \begin{pmatrix}1&0\\ e^{\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad\arg z\in(\frac{5\pi}{3},2\pi),\end{cases} (2.6)

    and for $\nu\in\mathbb{Z}$,

    \Psi(z)=\widehat{\Psi}(z)(-z)^{\frac{\nu}{2}\sigma_{3}}\begin{pmatrix}1&-\frac{e^{\pi\mathrm{i}\nu}}{2\pi\mathrm{i}}\ln(-z)\\ 0&1\end{pmatrix}\begin{cases}\begin{pmatrix}1&0\\ -e^{-\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad\arg z\in(0,\frac{\pi}{3}),\\ I,&\qquad\arg z\in(\frac{\pi}{3},\frac{5\pi}{3}),\\ \begin{pmatrix}1&0\\ e^{\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad\arg z\in(\frac{5\pi}{3},2\pi),\end{cases} (2.7)

    where $\widehat{\Psi}(z)$ is analytic at $z=0$ and we choose principal branches for $(-z)^{\pm\frac{\nu}{2}}$ and $\ln(-z)$.

Figure 1: The jump contours of the RH problem for $\Psi$.

The Bessel kernel then admits the following representation:

K_{\nu}^{\mathrm{Bes}}(x,y)=\frac{1}{2\pi\mathrm{i}(x-y)}\begin{pmatrix}-e^{-\frac{\pi\mathrm{i}\nu}{2}}&e^{\frac{\pi\mathrm{i}\nu}{2}}\end{pmatrix}\Psi_{+}(y)^{-1}\Psi_{+}(x)\begin{pmatrix}e^{\frac{\pi\mathrm{i}\nu}{2}}\\ e^{-\frac{\pi\mathrm{i}\nu}{2}}\end{pmatrix}, (2.8)

where the subscript $+$ means taking limits from the upper half-plane.
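The representation (2.8) can be sanity-checked against the original formula (1.1) by evaluating $\Psi_+$ slightly above the positive real axis, where $\arg z\in(0,\frac{\pi}{3})$ and the first case of (2.2) applies. A minimal numerical sketch (ours; the offset $\varepsilon$ and the function names are ad hoc):

```python
import numpy as np
from scipy.special import iv, ivp, kv, kvp, jv, jvp

def psi_plus(x, nu, eps=1e-8):
    """Boundary value Psi_+(x), x > 0, via (2.1)-(2.2)."""
    z = x + 1j * eps
    u = (-z) ** 0.5  # principal branch: cut along z in [0, +inf)
    pref = np.sqrt(np.pi) * np.exp(-0.25j * np.pi)
    psi_nu = pref * np.array(
        [[iv(nu, u), -1j / np.pi * kv(nu, u)],
         [u * ivp(nu, u), -1j / np.pi * u * kvp(nu, u)]])
    # arg z in (0, pi/3): multiply by the first matrix in (2.2)
    return psi_nu @ np.array([[1.0, 0.0], [-np.exp(-1j * np.pi * nu), 1.0]])

def k_rh(x, y, nu):
    """Bessel kernel via the RH representation (2.8)."""
    row = np.array([-np.exp(-0.5j * np.pi * nu), np.exp(0.5j * np.pi * nu)])
    col = np.array([np.exp(0.5j * np.pi * nu), np.exp(-0.5j * np.pi * nu)])
    M = np.linalg.inv(psi_plus(y, nu)) @ psi_plus(x, nu)
    return (row @ M @ col) / (2j * np.pi * (x - y))

def k_direct(x, y, nu):
    """The Bessel kernel (1.1)."""
    sx, sy = np.sqrt(x), np.sqrt(y)
    return (jv(nu, sx) * sy * jvp(nu, sy)
            - jv(nu, sy) * sx * jvp(nu, sx)) / (2.0 * (x - y))

for nu in (0.0, 0.5, 3.0):
    print(nu, k_rh(2.0, 3.0, nu), k_direct(2.0, 3.0, nu))
```

By the Wronskian of $I_\nu$ and $K_\nu$, $\det\Psi_\nu\equiv 1$, which the sketch can check as well.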

3 Asymptotic analysis of the RH problem for Ψ\Psi with large ν\nu

In this section, we perform a Deift-Zhou steepest descent analysis [11] of the RH problem for $\Psi$. It consists of a series of explicit and invertible transformations, with the final goal of arriving at an RH problem that tends to the identity matrix as $\nu\to+\infty$. Without loss of generality, we assume $\nu>0$ in what follows.

3.1 First transformation: ΨY\Psi\to Y

In view of the scaled variable (1.11), the first transformation is a scaling. In addition, we multiply by constant matrices from the left to simplify the asymptotic behavior at infinity.

Define the matrix-valued function

Y(z)=\nu^{\frac{1}{2}\sigma_{3}}\begin{pmatrix}1&0\\ \frac{4\nu^{2}+3}{8}&1\end{pmatrix}\Psi(\nu^{2}z). (3.1)

It is then readily seen from RH problem 2.1 that $Y$ satisfies the following RH problem.

RH problem 3.1.
  • (a)

    $Y(z)$ is defined and analytic for $z\in\mathbb{C}\setminus\left\{\cup_{j=1}^{3}\Gamma_{j}\cup\{0\}\right\}$, where $\Gamma_i$, $i=1,2,3$, is defined in (2.3).

  • (b)

    $Y(z)$ satisfies the jump condition

    Y_{+}(z)=Y_{-}(z)\begin{cases}\begin{pmatrix}1&0\\ e^{-\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in\Gamma_{1},\\ \begin{pmatrix}0&1\\ -1&0\end{pmatrix},&\qquad z\in\Gamma_{2},\\ \begin{pmatrix}1&0\\ e^{\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in\Gamma_{3}.\end{cases} (3.2)
  • (c)

    As $z\to\infty$, we have

    Y(z)=\left(I+\mathcal{O}(z^{-1})\right)(-z)^{-\frac{1}{4}\sigma_{3}}\frac{1}{\sqrt{2}}\begin{pmatrix}1&-1\\ 1&1\end{pmatrix}e^{(\nu(-z)^{1/2}-\frac{\pi\mathrm{i}}{4})\sigma_{3}}, (3.3)

    where we take the principal branch for fractional exponents.

3.2 Second transformation: YTY\to T

In the second transformation we deform the contours. The rays $\Gamma_1$ and $\Gamma_3$ emanating from the origin are replaced by the parallel rays $\widetilde{\Gamma}_1$ and $\widetilde{\Gamma}_3$ emanating from $1$. Let I and II be the two regions bounded by $\Gamma_1\cup\widetilde{\Gamma}_1\cup[0,1]$ and $\Gamma_3\cup\widetilde{\Gamma}_3\cup[0,1]$, respectively; see Figure 2 for an illustration. We now define

T(z)=Y(z)\begin{cases}\begin{pmatrix}1&0\\ e^{-\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in\mathrm{I},\\ \begin{pmatrix}1&0\\ -e^{\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in\mathrm{II},\\ I,&\qquad\text{elsewhere}.\end{cases} (3.4)
Figure 2: The jump contour $\Gamma_T$ of the RH problem for $T$.

One readily checks that $T$ satisfies the following RH problem.

RH problem 3.2.
  • (a)

    $T(z)$ is defined and analytic in $\mathbb{C}\setminus\Gamma_T$, where

    \Gamma_{T}:=\cup_{i=1}^{3}\widetilde{\Gamma}_{i}\cup[0,1], (3.5)

    with

    \widetilde{\Gamma}_{1}:=e^{\frac{\pi\mathrm{i}}{3}}(1,+\infty),\qquad\widetilde{\Gamma}_{2}:=(1,+\infty),\qquad\widetilde{\Gamma}_{3}:=e^{-\frac{\pi\mathrm{i}}{3}}(1,+\infty); (3.6)

    see Figure 2.

  • (b)

    $T(z)$ satisfies the jump condition

    T_{+}(z)=T_{-}(z)\begin{cases}\begin{pmatrix}1&0\\ e^{-\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in\widetilde{\Gamma}_{1},\\ \begin{pmatrix}0&1\\ -1&0\end{pmatrix},&\qquad z\in\widetilde{\Gamma}_{2},\\ \begin{pmatrix}1&0\\ e^{\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in\widetilde{\Gamma}_{3},\\ \begin{pmatrix}e^{-\pi\mathrm{i}\nu}&1\\ 0&e^{\pi\mathrm{i}\nu}\end{pmatrix},&\qquad z\in(0,1).\end{cases} (3.7)
  • (c)

    As $z\to\infty$, we have

    T(z)=\left(I+\mathcal{O}(z^{-1})\right)(-z)^{-\frac{1}{4}\sigma_{3}}\frac{1}{\sqrt{2}}\begin{pmatrix}1&-1\\ 1&1\end{pmatrix}e^{(\nu(-z)^{1/2}-\frac{\pi\mathrm{i}}{4})\sigma_{3}}. (3.8)

3.3 Third transformation: TST\to S

To normalize the behavior at infinity, we apply the third transformation $T\to S$ by introducing the so-called $g$-function:

g(z):=-(1-z)^{\frac{1}{2}}+\frac{1}{2}\ln\left(\frac{1+(1-z)^{1/2}}{1-(1-z)^{1/2}}\right)\pm\frac{\pi\mathrm{i}}{2},\qquad\pm\mathrm{Im}\,z>0. (3.9)

As before, we take the branch cut of $(1-z)^{\frac{1}{2}}$ along $[1,+\infty)$. The following properties of $g$ are immediate from its definition.

Proposition 3.3.
  • (i)

    The function $g(z)$ is analytic in $\mathbb{C}\setminus[0,+\infty)$.

  • (ii)

    For $z\in(-\infty,0)$, we have

    g(z)=-\sqrt{1-z}+\frac{1}{2}\ln\left(\frac{1+\sqrt{1-z}}{\sqrt{1-z}-1}\right). (3.10)
  • (iii)

    For $z\in(0,1)$, we have

    g_{\pm}(z)=-\sqrt{1-z}+\frac{1}{2}\ln\left(\frac{1+\sqrt{1-z}}{1-\sqrt{1-z}}\right)\pm\frac{\pi\mathrm{i}}{2}. (3.11)
  • (iv)

    For $z\in(1,+\infty)$, we have

    g_{\pm}(z)=\pm\mathrm{i}\sqrt{z-1}\pm\frac{\mathrm{i}}{2}\arg\left(\frac{1-\mathrm{i}\sqrt{z-1}}{1+\mathrm{i}\sqrt{z-1}}\right)\pm\frac{\pi\mathrm{i}}{2}. (3.12)
  • (v)

    As $z\to\infty$, we have

    g(z)=-\left(-z\right)^{\frac{1}{2}}+\frac{1}{2}\left(-z\right)^{-\frac{1}{2}}+\mathcal{O}(z^{-1}). (3.13)
Figure 3: Sign of $\mathrm{Re}\,g$: the solid line is the contour where $\mathrm{Re}\,g(z)=0$, the “$-$” sign marks the region where $\mathrm{Re}\,g(z)<0$, and the “$+$” sign marks the region where $\mathrm{Re}\,g(z)>0$.
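Properties (ii)-(v) are straightforward to confirm numerically from the definition (3.9). The following sketch (ours) checks the asymptotics (3.13) at $z=-100$ and the boundary value (3.11) on $(0,1)$, approaching the cut from the upper half-plane with a tiny imaginary offset (an ad hoc device):

```python
import cmath

def g(z):
    """The g-function (3.9); cmath principal branches reproduce the
    stated branch cut along [0, +inf)."""
    w = cmath.sqrt(1 - z)
    val = -w + 0.5 * cmath.log((1 + w) / (1 - w))
    return val + (0.5j * cmath.pi if z.imag > 0 else -0.5j * cmath.pi)

# (3.13): g(z) = -(-z)^{1/2} + (1/2)(-z)^{-1/2} + O(|z|^{-3/2})
z = complex(-100.0, 1e-12)
print(g(z), -cmath.sqrt(-z) + 0.5 / cmath.sqrt(-z))

# (3.11): on (0,1) the boundary value from above is real plus i*pi/2,
# and Re g > 0 there, in accordance with Figure 3
gp = g(complex(0.5, 1e-10))
print(gp)
```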

By setting

S(z)=T(z)e^{\nu g(z)\sigma_{3}}, (3.14)

it is readily seen from Proposition 3.3 and RH problem 3.2 that $S$ satisfies the following RH problem.

RH problem 3.4.
  • (a)

    $S(z)$ is defined and analytic in $\mathbb{C}\setminus\Gamma_T$, where the contour $\Gamma_T$ is defined in (3.5).

  • (b)

    $S(z)$ satisfies the jump condition

    S_{+}(z)=S_{-}(z)\begin{cases}\begin{pmatrix}1&0\\ e^{2\nu g(z)-\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in\widetilde{\Gamma}_{1},\\ \begin{pmatrix}0&1\\ -1&0\end{pmatrix},&\qquad z\in\widetilde{\Gamma}_{2},\\ \begin{pmatrix}1&0\\ e^{2\nu g(z)+\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in\widetilde{\Gamma}_{3},\\ \begin{pmatrix}1&e^{2\nu\left(\sqrt{1-z}-\frac{1}{2}\ln\left(\frac{1+\sqrt{1-z}}{1-\sqrt{1-z}}\right)\right)}\\ 0&1\end{pmatrix},&\qquad z\in(0,1).\end{cases} (3.15)
  • (c)

    As $z\to\infty$, we have

    S(z)=\left(I+\mathcal{O}(z^{-1})\right)(-z)^{-\frac{1}{4}\sigma_{3}}\frac{1}{\sqrt{2}}\begin{pmatrix}1&-1\\ 1&1\end{pmatrix}e^{-\frac{\pi\mathrm{i}}{4}\sigma_{3}}. (3.16)

3.4 Global parametrix

As $\nu\to+\infty$, we see from the sign of $\mathrm{Re}\,g$ depicted in Figure 3 that all the jump matrices of $S$ tend to $I$ exponentially fast, except for the one along $(1,+\infty)$. Ignoring the exponentially small terms in the jump matrices for $S$, we arrive at the following global parametrix.

RH problem 3.5.
  • (a)

    $N(z)$ is defined and analytic in $\mathbb{C}\setminus[1,+\infty)$.

  • (b)

    $N(z)$ satisfies the jump condition

    N_{+}(z)=N_{-}(z)\begin{pmatrix}0&1\\ -1&0\end{pmatrix},\qquad z\in[1,+\infty). (3.17)
  • (c)

    As $z\to\infty$, we have

    N(z)=\left(I+\mathcal{O}(z^{-1})\right)(-z)^{-\frac{1}{4}\sigma_{3}}\frac{1}{\sqrt{2}}\begin{pmatrix}1&-1\\ 1&1\end{pmatrix}e^{-\frac{\pi\mathrm{i}}{4}\sigma_{3}}. (3.18)

An explicit solution to the RH problem for $N$ is given by

N(z)=(1-z)^{-\frac{1}{4}\sigma_{3}}\frac{1}{\sqrt{2}}\begin{pmatrix}1&-1\\ 1&1\end{pmatrix}e^{-\frac{\pi\mathrm{i}}{4}\sigma_{3}}. (3.19)
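That (3.19) satisfies the jump (3.17) can be confirmed numerically by comparing boundary values on $(1,+\infty)$ from the two half-planes. A small sketch (ours), using principal branches and an ad hoc offset $\varepsilon$:

```python
import numpy as np

def N(z):
    """Global parametrix (3.19); principal branch of (1 - z)**(1/4),
    so (1-z)^{-sigma_3/4} = diag(1/d, d) with d = (1-z)**0.25."""
    d = (1.0 - z) ** 0.25
    A = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2.0)
    E = np.diag([np.exp(-0.25j * np.pi), np.exp(0.25j * np.pi)])
    return np.diag([1.0 / d, d]) @ A @ E

# jump (3.17) on (1, +inf): N_+ = N_- [[0, 1], [-1, 0]]
eps = 1e-9
Np = N(2.0 + 1j * eps)
Nm = N(2.0 - 1j * eps)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.abs(Nm @ J - Np).max())
```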

3.5 Local parametrix

Since the jump matrices for $S$ and $N$ are not uniformly close to each other near the point $z=1$, we next construct a local parametrix near this point. In a disc $D(1,\varepsilon)$ centered at $1$ with some fixed radius $0<\varepsilon<1$, we seek a $2\times 2$ matrix-valued function $P(z)$ satisfying the following RH problem.

RH problem 3.6.
  • (a)

    $P(z)$ is defined and analytic in $D(1,\varepsilon)\setminus\Gamma_T$, where the contour $\Gamma_T$ is defined in (3.5).

  • (b)

    $P(z)$ satisfies the jump condition

    P_{+}(z)=P_{-}(z)\begin{cases}\begin{pmatrix}1&0\\ e^{2\nu g(z)-\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in D(1,\varepsilon)\cap\widetilde{\Gamma}_{1},\\ \begin{pmatrix}0&1\\ -1&0\end{pmatrix},&\qquad z\in D(1,\varepsilon)\cap\widetilde{\Gamma}_{2},\\ \begin{pmatrix}1&0\\ e^{2\nu g(z)+\pi\mathrm{i}\nu}&1\end{pmatrix},&\qquad z\in D(1,\varepsilon)\cap\widetilde{\Gamma}_{3},\\ \begin{pmatrix}1&e^{2\nu\left(\sqrt{1-z}-\frac{1}{2}\ln\left(\frac{1+\sqrt{1-z}}{1-\sqrt{1-z}}\right)\right)}\\ 0&1\end{pmatrix},&\qquad z\in D(1,\varepsilon)\cap(0,1).\end{cases} (3.20)
  • (c)

    As $\nu\to\infty$, we have

    P(z)=\left(I+\mathcal{O}(\nu^{-1})\right)N(z),\qquad z\in\partial D(1,\varepsilon), (3.21)

    where $N$ is given in (3.19).

This local parametrix can be constructed by using the Airy parametrix $\Phi^{(\mathrm{Ai})}$ introduced in Appendix A. To do this, we introduce the function

f(z)=\left(\frac{3g(z)}{2}\mp\frac{3\pi\mathrm{i}}{4}\right)^{\frac{2}{3}},\qquad\pm\mathrm{Im}\,z>0,
=-2^{-\frac{2}{3}}(z-1)\left(1-\frac{2}{5}(z-1)+\frac{43}{175}(z-1)^{2}+\mathcal{O}\left((z-1)^{3}\right)\right),\qquad z\to 1. (3.22)

In particular,

f(1)=0,\qquad f^{\prime}(1)=-2^{-\frac{2}{3}}<0. (3.23)
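The Taylor expansion in (3.22) can be confirmed numerically by computing $f$ from $g$ on the ray $z=1+\mathrm{i}t$, $t>0$, where the principal branches of the square root, logarithm and $2/3$-power happen to produce the correct branch of $f$ (this is not true on all of a neighborhood of $1$, so the check below is deliberately restricted to that ray). A small sketch (ours):

```python
import cmath

def g(z):
    """The g-function (3.9) for Im z > 0."""
    w = cmath.sqrt(1 - z)
    return -w + 0.5 * cmath.log((1 + w) / (1 - w)) + 0.5j * cmath.pi

def f(z):
    """The conformal map (3.22) for Im z > 0; the principal 2/3-power
    gives the correct branch near the ray z = 1 + i t, t > 0."""
    return (1.5 * g(z) - 0.75j * cmath.pi) ** (2.0 / 3.0)

dz = 0.1j  # stay on the upper vertical ray through z = 1
series = -2.0 ** (-2.0 / 3.0) * dz * (1 - 0.4 * dz + (43.0 / 175.0) * dz ** 2)
print(f(1 + dz), series)
```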

We then set

P(z)=E(z)\Phi^{(\mathrm{Ai})}\left(\nu^{\frac{2}{3}}f(z)\right)e^{\nu\left(g(z)\mp\frac{\pi\mathrm{i}}{2}\right)\sigma_{3}}\sigma_{3} (3.24)

with

E(z):=N(z)\sigma_{3}\frac{1}{\sqrt{2}}\begin{pmatrix}1&-\mathrm{i}\\ -\mathrm{i}&1\end{pmatrix}f(z)^{\frac{1}{4}\sigma_{3}}\nu^{\frac{1}{6}\sigma_{3}}. (3.25)
Proposition 3.7.

The matrix-valued function $P(z)$ defined in (3.24) solves RH problem 3.6.

Proof.

We first show that the prefactor $E(z)$ is analytic near $z=1$. According to its definition in (3.25), the only possible jump is on $(1,1+\varepsilon)$. It follows from (3.17) and (3.22) that, for $z\in(1,1+\varepsilon)$,

E_{-}(z)^{-1}E_{+}(z)=\nu^{-\frac{1}{6}\sigma_{3}}f_{-}(z)^{-\frac{1}{4}\sigma_{3}}\frac{1}{\sqrt{2}}\begin{pmatrix}1&\mathrm{i}\\ \mathrm{i}&1\end{pmatrix}\sigma_{3}\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\sigma_{3}\frac{1}{\sqrt{2}}\begin{pmatrix}1&-\mathrm{i}\\ -\mathrm{i}&1\end{pmatrix}f_{+}(z)^{\frac{1}{4}\sigma_{3}}\nu^{\frac{1}{6}\sigma_{3}}
=\nu^{-\frac{1}{6}\sigma_{3}}f_{-}(z)^{-\frac{1}{4}\sigma_{3}}\begin{pmatrix}\mathrm{i}&0\\ 0&-\mathrm{i}\end{pmatrix}f_{+}(z)^{\frac{1}{4}\sigma_{3}}\nu^{\frac{1}{6}\sigma_{3}}=I, (3.26)

since $f_+(z)=e^{-2\pi\mathrm{i}}f_-(z)$ for $z\in(1,1+\varepsilon)$. Thus, $E(z)$ is analytic in $D(1,\varepsilon)\setminus\{1\}$. Note that, as $z\to 1$, by using (3.22),

E(z)=(2h_{\nu})^{-\frac{1}{4}\sigma_{3}}e^{-\frac{\pi\mathrm{i}}{4}\sigma_{3}}\sigma_{3}\left(I-\frac{\sigma_{3}}{10}(z-1)+\begin{pmatrix}\frac{13}{280}&0\\ 0&-\frac{51}{1400}\end{pmatrix}(z-1)^{2}+\mathcal{O}((z-1)^{3})\right) (3.27)

with $h_\nu$ given in (1.9). We conclude that $z=1$ is a removable singularity. The jump conditions of $P(z)$ in (3.20) follow directly from the analyticity of $E(z)$ and (A.1). Finally, as $\nu\to+\infty$, we apply (A.2) and obtain after a straightforward computation that

P(z)N(z)^{-1}=N(z)\sigma_{3}\left(I+\mathcal{O}(\nu^{-1})\right)\sigma_{3}N(z)^{-1}=I+\mathcal{O}(\nu^{-1}),\qquad z\in\partial D(1,\varepsilon). (3.28)

This completes the proof of Proposition 3.7. ∎

3.6 Final transformation

We define the final transformation

R(z)=\begin{cases}S(z)P(z)^{-1},&\qquad z\in D(1,\varepsilon),\\ S(z)N(z)^{-1},&\qquad\text{elsewhere}.\end{cases} (3.29)

It is then readily seen that $R(z)$ satisfies the following RH problem.

RH problem 3.8.
  • (a)

    $R(z)$ is defined and analytic in $\mathbb{C}\setminus\Gamma_R$, where

    \Gamma_{R}:=\partial D(1,\varepsilon)\cup\left(\Gamma_{T}\setminus D(1,\varepsilon)\right);

    see Figure 4 for an illustration.

  • (b)

    $R(z)$ satisfies the jump condition

    R_{+}(z)=R_{-}(z)J_{R}(z),\qquad z\in\Gamma_{R}, (3.30)

    where

    J_{R}(z)=\begin{cases}P(z)N(z)^{-1},&\qquad z\in\partial D(1,\varepsilon),\\ N_{-}(z)S_{-}(z)^{-1}S_{+}(z)N_{+}(z)^{-1},&\qquad z\in\Gamma_{R}\setminus\partial D(1,\varepsilon).\end{cases} (3.31)
  • (c)

    As $z\to\infty$, we have

    R(z)=I+\mathcal{O}(z^{-1}). (3.32)
Figure 4: The jump contours of the RH problem for $R$.

For $z\in\Gamma_R\setminus\partial D(1,\varepsilon)$, we have the estimate

J_{R}(z)=I+\mathcal{O}(e^{-c\nu}),\qquad\nu\to+\infty, (3.33)

for some constant $c>0$.

For $z\in\partial D(1,\varepsilon)$, substituting the full expansion (A.4) of $\Phi^{(\mathrm{Ai})}$ into (3.24) and (3.31), we have

J_{R}(z)\sim I+\sum_{k=1}^{\infty}J_{k}(z)h_{\nu}^{\frac{3k}{2}},\qquad\nu\to+\infty, (3.34)

where

J_{k}(z)=\begin{cases}-\frac{3^{k}}{2^{k/2}f(z)^{3k/2}}\begin{pmatrix}0&(1-z)^{-\frac{1}{2}}\mathfrak{u}_{k}\\ (1-z)^{\frac{1}{2}}\mathfrak{v}_{k}&0\end{pmatrix},&\quad\text{for odd }k,\\ \frac{3^{k}}{2^{k/2}f(z)^{3k/2}}\begin{pmatrix}\mathfrak{u}_{k}&0\\ 0&\mathfrak{v}_{k}\end{pmatrix},&\quad\text{for even }k,\end{cases} (3.35)

with $f(z)$ and $\mathfrak{u}_k$, $\mathfrak{v}_k$ given in (3.22) and (A.5), respectively. By a standard argument [11, 12], we conclude that, as $\nu\to+\infty$,

R(z)\sim I+\sum_{k=1}^{\infty}R_{k}(z)h_{\nu}^{\frac{3k}{2}} (3.36)

uniformly for $z\in\mathbb{C}\setminus\Gamma_R$. A combination of (3.36) and RH problem 3.8 shows that $R_1$ satisfies the following RH problem.

RH problem 3.9.
  • (a)

    $R_1(z)$ is defined and analytic in $\mathbb{C}\setminus\partial D(1,\varepsilon)$.

  • (b)

    $R_1(z)$ satisfies the jump condition

    R_{1,+}(z)=R_{1,-}(z)+J_{1}(z),\qquad z\in\partial D(1,\varepsilon), (3.37)

    where

    J_{1}(z)=-\frac{\sqrt{2}}{48f(z)^{3/2}}\begin{pmatrix}0&5(1-z)^{-\frac{1}{2}}\\ -7(1-z)^{\frac{1}{2}}&0\end{pmatrix}. (3.38)
  • (c)

    As zz\to\infty, we have R1(z)=𝒪(z1)R_{1}(z)=\mathcal{O}(z^{-1}).

From the local behavior of f(z)f(z) near z=1z=1 given in (3.5), we obtain that

J1(z)=1(z1)2(0522400)+1z1(02872240)(027072400)+𝒪(z1).\displaystyle J_{1}(z)=\frac{1}{(z-1)^{2}}\begin{pmatrix}0&\frac{5\sqrt{2}}{24}\\ 0&0\end{pmatrix}+\frac{1}{z-1}\begin{pmatrix}0&\frac{\sqrt{2}}{8}\\ -\frac{7\sqrt{2}}{24}&0\end{pmatrix}-\begin{pmatrix}0&\frac{\sqrt{2}}{70}\\ \frac{7\sqrt{2}}{40}&0\end{pmatrix}+\mathcal{O}(z-1). (3.39)

By Cauchy’s residue theorem, we have

R1(z)\displaystyle R_{1}(z) =12πiD(1,ε)J1(s)zsds\displaystyle=\frac{1}{2\pi\mathrm{i}}\oint_{\partial D(1,\varepsilon)}\frac{J_{1}(s)}{z-s}\,\mathrm{d}s
={1(z1)2(0522400)+1z1(02872240),zD(1,ε),1(z1)2(0522400)+1z1(02872240)J1(z),zD(1,ε).\displaystyle=\begin{cases}\frac{1}{(z-1)^{2}}\begin{pmatrix}0&\frac{5\sqrt{2}}{24}\\ 0&0\end{pmatrix}+\frac{1}{z-1}\begin{pmatrix}0&\frac{\sqrt{2}}{8}\\ -\frac{7\sqrt{2}}{24}&0\end{pmatrix},\quad&z\in\mathbb{C}\setminus D(1,\varepsilon),\\ \frac{1}{(z-1)^{2}}\begin{pmatrix}0&\frac{5\sqrt{2}}{24}\\ 0&0\end{pmatrix}+\frac{1}{z-1}\begin{pmatrix}0&\frac{\sqrt{2}}{8}\\ -\frac{7\sqrt{2}}{24}&0\end{pmatrix}-J_{1}(z),\quad&z\in D(1,\varepsilon).\end{cases} (3.40)

Similarly, R2R_{2} satisfies the following RH problem.

RH problem 3.10.
  • (a)

    R2(z)R_{2}(z) is defined and analytic in D(1,ε)\mathbb{C}\setminus\partial D(1,\varepsilon).

  • (b)

    R2(z)R_{2}(z) satisfies the jump condition

    R2,+(z)=R2,(z)+R1,(z)J1(z)+J2(z),zD(1,ε),\displaystyle R_{2,+}(z)=R_{2,-}(z)+R_{1,-}(z)J_{1}(z)+J_{2}(z),\qquad z\in\partial D(1,\varepsilon), (3.41)

    where

    J2(z)=92f(z)3(𝔲200𝔳2)\displaystyle J_{2}(z)=\frac{9}{2f(z)^{3}}\begin{pmatrix}\mathfrak{u}_{2}&0\\ 0&\mathfrak{v}_{2}\end{pmatrix} (3.42)

    with 𝔲2\mathfrak{u}_{2} and 𝔳2\mathfrak{v}_{2} given in (A.5).

  • (c)

    As zz\to\infty, we have R2(z)=𝒪(z1)R_{2}(z)=\mathcal{O}(z^{-1}).

From (3.6), (3.38), (3.42) and Cauchy’s residue theorem, it follows that

R2(z)=12πiD(1,ε)R1,(s)J1(s)+J2(s)zsds\displaystyle R_{2}(z)=\frac{1}{2\pi\mathrm{i}}\oint_{\partial D(1,\varepsilon)}\frac{R_{1,-}(s)J_{1}(s)+J_{2}(s)}{z-s}\,\mathrm{d}s (3.43)

is a diagonal matrix. For general k3k\geq 3, the functions RkR_{k} are analytic in D(1,ε)\mathbb{C}\setminus\partial D(1,\varepsilon) with asymptotic behavior 𝒪(1/z)\mathcal{O}(1/z) as zz\to\infty, and satisfy

Rk,+(z)=Rk,(z)+l=1kRkl,(z)Jl(z),zD(1,ε),\displaystyle R_{k,+}(z)=R_{k,-}(z)+\sum_{l=1}^{k}R_{k-l,-}(z)J_{l}(z),\qquad z\in\partial D(1,\varepsilon), (3.44)

where the functions Jk(z)J_{k}(z) are given in (3.35). By Cauchy’s residue theorem, we have

Rk(z)=12πiD(1,ε)l=1kRkl,(s)Jl(s)dszs.R_{k}(z)=\frac{1}{2\pi\mathrm{i}}\oint_{\partial D(1,\varepsilon)}\sum_{l=1}^{k}R_{k-l,-}(s)J_{l}(s)\frac{\,\mathrm{d}s}{z-s}. (3.45)

By the structure of Jk(z)J_{k}(z) and mathematical induction, one can check that each RkR_{k} has the following structure:

Rk(z)={(0(Rk(z))12(Rk(z))210),for odd k,((Rk(z))1100(Rk(z))22),for even k.R_{k}(z)=\begin{cases}\begin{pmatrix}0&\left(R_{k}(z)\right)_{12}\\ \left(R_{k}(z)\right)_{21}&0\end{pmatrix},&\textrm{for odd $k$,}\\ \begin{pmatrix}\left(R_{k}(z)\right)_{11}&0\\ 0&\left(R_{k}(z)\right)_{22}\end{pmatrix},&\textrm{for even $k$.}\end{cases} (3.46)

This, together with (3.36), gives us

hν14σ3R(z)hν14σ3\displaystyle h_{\nu}^{\frac{1}{4}\sigma_{3}}R(z)h_{\nu}^{-\frac{1}{4}\sigma_{3}} I+k=0(hν3k+1(00(R2k+1(z))210)+hν3k+2(0(R2k+1(z))1200)\displaystyle\sim I+\sum_{k=0}^{\infty}\left(h_{\nu}^{3k+1}\begin{pmatrix}0&0\\ \left(R_{2k+1}(z)\right)_{21}&0\end{pmatrix}+h_{\nu}^{3k+2}\begin{pmatrix}0&\left(R_{2k+1}(z)\right)_{12}\\ 0&0\end{pmatrix}\right.
+hν3k+3((R2k+2(z))1100(R2k+2(z))22)),ν+.\displaystyle~{}~{}~{}\left.+h_{\nu}^{3k+3}\begin{pmatrix}\left(R_{2k+2}(z)\right)_{11}&0\\ 0&\left(R_{2k+2}(z)\right)_{22}\end{pmatrix}\right),\qquad\nu\to+\infty. (3.47)
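As an independent sanity check of the parity bookkeeping behind (3.46) and (3.47) (a toy model, not part of the proof: the Cauchy transform in (3.45) preserves entrywise zero patterns and is therefore dropped), the following Python sketch verifies that the recursion forces $R_k$ to be off-diagonal for odd $k$ and diagonal for even $k$ whenever each $J_l$ carries the checkerboard structure of (3.35).

```python
import random

random.seed(0)

def support(parity):
    # parity 0: diagonal entries allowed; parity 1: off-diagonal entries allowed
    return {(0, 0), (1, 1)} if parity == 0 else {(0, 1), (1, 0)}

def random_structured(parity):
    # positive random entries on the allowed positions (avoids accidental cancellation)
    M = [[0.0, 0.0], [0.0, 0.0]]
    for i, j in support(parity):
        M[i][j] = random.uniform(0.5, 1.5)
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def pattern(M):
    return {(i, j) for i in range(2) for j in range(2) if abs(M[i][j]) > 1e-12}

K = 8
# J_l mimics (3.35): off-diagonal for odd l, diagonal for even l
J = [None] + [random_structured(l % 2) for l in range(1, K + 1)]
R = [[[1.0, 0.0], [0.0, 1.0]]]  # R_0 = I
for k in range(1, K + 1):
    # emulate the additive structure of (3.45): R_k built from sum of R_{k-l} J_l
    S = [[0.0, 0.0], [0.0, 0.0]]
    for l in range(1, k + 1):
        S = matadd(S, matmul(R[k - l], J[l]))
    R.append(S)

for k in range(K + 1):
    # off-diagonal for odd k, diagonal for even k, as in (3.46)
    assert pattern(R[k]) <= support(k % 2)
```

Replacing the random entries by the actual $J_k$ would not change the support pattern, which is all that (3.46) asserts.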

We are now ready to prove our main result.

4 Proof of Theorem 1.1

Recalling the RH characterization of the Bessel kernel given in (2.8), we follow the series of transformations ΨYTS\Psi\to Y\to T\to S in (3.1), (3.4) and (3.14) to obtain that

ν2KνBes(ν2u,ν2v)=12πi(uv)(01)eν(g+(v)πi/2)σ3S+(v)1S+(u)×eν(g+(u)πi/2)σ3(10),u,v>0,\nu^{2}K_{\nu}^{\mathrm{Bes}}(\nu^{2}u,\nu^{2}v)=\frac{1}{2\pi\mathrm{i}(u-v)}\begin{pmatrix}0&1\end{pmatrix}e^{\nu(g_{+}(v)-\pi\mathrm{i}/2)\sigma_{3}}S_{+}(v)^{-1}S_{+}(u)\\ \times e^{-\nu(g_{+}(u)-\pi\mathrm{i}/2)\sigma_{3}}\begin{pmatrix}1\\ 0\end{pmatrix},\qquad u,v>0, (4.1)

where gg is given in (3.9). In what follows, we split the discussion into several cases according to the ranges of uu and vv.

If u(0,1ε]u\in(0,1-\varepsilon] and v(1ε,1+ε)v\in(1-\varepsilon,1+\varepsilon), applying the final transformation (3.29) and (3.24) shows that

ν2KνBes(ν2u,ν2v)\displaystyle\nu^{2}K_{\nu}^{\mathrm{Bes}}(\nu^{2}u,\nu^{2}v)
=12πi(uv)(01)eν(g+(v)πi/2)σ3P+(v)1R(v)1R+(u)N(u)eν(g+(u)πi/2)σ3(10)\displaystyle=\frac{1}{2\pi\mathrm{i}(u-v)}\begin{pmatrix}0&1\end{pmatrix}e^{\nu(g_{+}(v)-\pi\mathrm{i}/2)\sigma_{3}}P_{+}(v)^{-1}R(v)^{-1}R_{+}(u)N(u)e^{-\nu(g_{+}(u)-\pi\mathrm{i}/2)\sigma_{3}}\begin{pmatrix}1\\ 0\end{pmatrix}
=eν(g+(u)πi/2)i2π(uv)(iAi(ν23f(v))Ai(ν23f(v)))E(v)1R(v)1R+(u)N(u)(10),\displaystyle=-\frac{e^{-\nu(g_{+}(u)-\pi\mathrm{i}/2)}}{\mathrm{i}\sqrt{2\pi}(u-v)}\begin{pmatrix}\mathrm{i}\mathrm{Ai}^{\prime}(\nu^{\frac{2}{3}}f(v))&\mathrm{Ai}(\nu^{\frac{2}{3}}f(v))\end{pmatrix}E(v)^{-1}R(v)^{-1}R_{+}(u)N(u)\begin{pmatrix}1\\ 0\end{pmatrix}, (4.2)

where the functions E,RE,R and NN are given in (3.25), (3.29) and (3.19), respectively. By (3.11), it is readily seen that

g+(x)=1x2x<0,x(0,1ε].g_{+}^{\prime}(x)=-\frac{\sqrt{1-x}}{2x}<0,\qquad x\in(0,1-\varepsilon]. (4.3)

Thus, for u(0,1ε]u\in(0,1-\varepsilon],

eν(g+(u)πi/2)eν(g+(1ε)πi/2)=eν(12ln1+ε1εε)<e32hν1,ν+,e^{-\nu(g_{+}(u)-\pi\mathrm{i}/2)}\leq e^{-\nu(g_{+}(1-\varepsilon)-\pi\mathrm{i}/2)}=e^{-\nu\left(\frac{1}{2}\ln\frac{1+\sqrt{\varepsilon}}{1-\sqrt{\varepsilon}}-\sqrt{\varepsilon}\right)}<e^{-\frac{3}{2}h_{\nu}^{-1}},\qquad\nu\to+\infty, (4.4)

and the other terms in (4.2) are bounded for large positive ν\nu, which follows from (3.19), (3.25) and (3.36). By taking

u=(1xhν)2,v=(1yhν)2u=(1-xh_{\nu})^{2},\qquad v=(1-yh_{\nu})^{2} (4.5)

in (4.2), where

(11ε)hν1x<hν1,t0y<(11ε)hν1(1-\sqrt{1-\varepsilon})h_{\nu}^{-1}\leq x<h_{\nu}^{-1},\qquad t_{0}\leq y<(1-\sqrt{1-\varepsilon})h_{\nu}^{-1} (4.6)

with t0t_{0} being any fixed real number, we have, for any 𝔪\mathfrak{m}\in\mathbb{N},

ϕν(x)ϕν(y)KνBes(ϕν(x),ϕν(y))<e32hν1𝒪(ey)\displaystyle\sqrt{\phi_{\nu}^{\prime}(x)\phi_{\nu}^{\prime}(y)}K_{\nu}^{\mathrm{Bes}}(\phi_{\nu}(x),\phi_{\nu}(y))<e^{-\frac{3}{2}h_{\nu}^{-1}}\cdot\mathcal{O}(e^{-y})
<e12hν1𝒪(e(x+y))=hν𝔪+1𝒪(e(x+y)),hν0+.\displaystyle<e^{-\frac{1}{2}h_{\nu}^{-1}}\cdot\mathcal{O}\left(e^{-(x+y)}\right)=h_{\nu}^{\mathfrak{m}+1}\cdot\mathcal{O}\left(e^{-(x+y)}\right),\qquad h_{\nu}\to 0^{+}. (4.7)

Here, ϕν\phi_{\nu} is defined in (1.11) and the error term 𝒪(ey)\mathcal{O}(e^{-y}) in the first inequality comes from the estimate [24, Formula 9.7.15]

|p(ζ)|max(|Ai(ζ)|,|Ai(ζ)|)cpeζ,ζ,|p(\zeta)|\cdot\max\left(|\mathrm{Ai}(\zeta)|,|\mathrm{Ai}^{\prime}(\zeta)|\right)\leq c_{p}e^{-\zeta},\qquad\zeta\in\mathbb{R}, (4.8)

where pp is an arbitrary polynomial and the constant cpc_{p} only depends on pp. As a consequence, the transformed Bessel kernel (1.10) will be absorbed into the error term of (1.12) in this case. As for the expansion terms in (1.12), noting that xx is large as ν+\nu\to+\infty, we obtain again from [24, Formula 9.7.15] that, for an arbitrary polynomial qq,

q(x)Ai(x)q(x)e23x3/22πx1/4<e32hν1<hν𝔪+1𝒪(ex),q(x)\cdot\mathrm{Ai}(x)\leq q(x)\cdot\frac{e^{-\frac{2}{3}x^{3/2}}}{2\sqrt{\pi}x^{1/4}}<e^{-\frac{3}{2}h_{\nu}^{-1}}<h_{\nu}^{\mathfrak{m}+1}\cdot\mathcal{O}\left(e^{-x}\right), (4.9)

and

q(x)|Ai(x)|q(x)x1/4e23x3/22π(1+748x3/2)<e32hν1<hν𝔪+1𝒪(ex).q(x)\cdot|\mathrm{Ai}^{\prime}(x)|\leq q(x)\cdot\frac{x^{1/4}e^{-\frac{2}{3}x^{3/2}}}{2\sqrt{\pi}}\left(1+\frac{7}{48x^{3/2}}\right)<e^{-\frac{3}{2}h_{\nu}^{-1}}<h_{\nu}^{\mathfrak{m}+1}\cdot\mathcal{O}\left(e^{-x}\right). (4.10)

This, together with the estimate (4.8) for Ai(y)\mathrm{Ai}(y) and Ai(y)\mathrm{Ai}^{\prime}(y), implies that the expansion terms in (1.12) are also absorbed into the error term, which shows that (1.12) is valid under the condition (4.6).

A similar argument holds if u(1ε,1+ε)u\in(1-\varepsilon,1+\varepsilon) and v(0,1ε]v\in(0,1-\varepsilon], which implies (1.12) for t0x<(11ε)hν1t_{0}\leq x<(1-\sqrt{1-\varepsilon})h_{\nu}^{-1} and (11ε)hν1y<hν1(1-\sqrt{1-\varepsilon})h_{\nu}^{-1}\leq y<h_{\nu}^{-1}.

If 0<u,v1ε0<u,v\leq 1-\varepsilon, we obtain from (3.29) that

ν2KνBes(ν2u,ν2v)\displaystyle\nu^{2}K_{\nu}^{\mathrm{Bes}}(\nu^{2}u,\nu^{2}v)
=12πi(uv)(01)eν(g+(v)πi/2)σ3N(v)1R+(v)1R+(u)N(u)eν(g+(u)πi/2)σ3(10)\displaystyle=\frac{1}{2\pi\mathrm{i}(u-v)}\begin{pmatrix}0&1\end{pmatrix}e^{\nu(g_{+}(v)-\pi\mathrm{i}/2)\sigma_{3}}N(v)^{-1}R_{+}(v)^{-1}R_{+}(u)N(u)e^{-\nu(g_{+}(u)-\pi\mathrm{i}/2)\sigma_{3}}\begin{pmatrix}1\\ 0\end{pmatrix}
=eν(g+(u)πi/2)eν(g+(v)πi/2)2πi(uv)(01)N(v)1R+(v)1R+(u)N(u)(10).\displaystyle=\frac{e^{-\nu(g_{+}(u)-\pi\mathrm{i}/2)}e^{-\nu(g_{+}(v)-\pi\mathrm{i}/2)}}{2\pi\mathrm{i}(u-v)}\begin{pmatrix}0&1\end{pmatrix}N(v)^{-1}R_{+}(v)^{-1}R_{+}(u)N(u)\begin{pmatrix}1\\ 0\end{pmatrix}. (4.11)

From (4.3), it follows that

eν(g+(u)πi/2)eν(g+(v)πi/2)e2ν(g+(1ε)πi/2)=eν(ln1+ε1ε2ε)<e3hν1,ν+,e^{-\nu(g_{+}(u)-\pi\mathrm{i}/2)}e^{-\nu(g_{+}(v)-\pi\mathrm{i}/2)}\leq e^{-2\nu(g_{+}(1-\varepsilon)-\pi\mathrm{i}/2)}\\ =e^{-\nu\left(\ln\frac{1+\sqrt{\varepsilon}}{1-\sqrt{\varepsilon}}-2\sqrt{\varepsilon}\right)}<e^{-3h_{\nu}^{-1}},\qquad\nu\to+\infty, (4.12)

and the other terms in (4.11) are bounded for large positive ν\nu, as can be seen from their definitions in (3.19) and (3.36). Substituting (4.5) into (4.11) with

(11ε)hν1x,y<hν1,(1-\sqrt{1-\varepsilon})h_{\nu}^{-1}\leq x,y<h_{\nu}^{-1}, (4.13)

we have, for any 𝔪\mathfrak{m}\in\mathbb{N},

ϕν(x)ϕν(y)KνBes(ϕν(x),ϕν(y))<e3hν1𝒪(1)\displaystyle\sqrt{\phi_{\nu}^{\prime}(x)\phi_{\nu}^{\prime}(y)}K_{\nu}^{\mathrm{Bes}}(\phi_{\nu}(x),\phi_{\nu}(y))<e^{-3h_{\nu}^{-1}}\cdot\mathcal{O}(1)
<ehν1𝒪(e(x+y))=hν𝔪+1𝒪(e(x+y)),hν0+.\displaystyle<e^{-h_{\nu}^{-1}}\cdot\mathcal{O}\left(e^{-(x+y)}\right)=h_{\nu}^{\mathfrak{m}+1}\cdot\mathcal{O}\left(e^{-(x+y)}\right),\qquad h_{\nu}\to 0^{+}. (4.14)

The transformed Bessel kernel is again completely absorbed into the error term in this case. The removability of the singularity at x=yx=y comes from the symmetric structure of the expansion (4.11). Since the Airy functions in the expansion (1.12) related to xx and yy decay superexponentially as ν+\nu\to+\infty (see (4.9) and (4.10)), we conclude (1.12) under the condition (4.13).

It remains to consider the final case, namely, 1ε<u,v<1+ε1-\varepsilon<u,v<1+\varepsilon, which corresponds to

t0x,y<(11ε)hν1t_{0}\leq x,y<(1-\sqrt{1-\varepsilon})h_{\nu}^{-1} (4.15)

through (4.5). To proceed, we again observe from (3.29) and (3.24) that

ν2KνBes(ν2u,ν2v)\displaystyle\nu^{2}K_{\nu}^{\mathrm{Bes}}(\nu^{2}u,\nu^{2}v)
=12πi(uv)(01)eν(g+(v)πi/2)σ3P+(v)1R(v)1R(u)P+(u)eν(g+(u)πi/2)σ3(10)\displaystyle=\frac{1}{2\pi\mathrm{i}(u-v)}\begin{pmatrix}0&1\end{pmatrix}e^{\nu(g_{+}(v)-\pi\mathrm{i}/2)\sigma_{3}}P_{+}(v)^{-1}R(v)^{-1}R(u)P_{+}(u)e^{-\nu(g_{+}(u)-\pi\mathrm{i}/2)\sigma_{3}}\begin{pmatrix}1\\ 0\end{pmatrix}
=1i(uv)(iAi(ν23f(v))Ai(ν23f(v)))E(v)1R(v)1R(u)E(u)(Ai(ν23f(u))iAi(ν23f(u))).\displaystyle=-\frac{1}{\mathrm{i}(u-v)}\begin{pmatrix}\mathrm{i}\mathrm{Ai}^{\prime}(\nu^{\frac{2}{3}}f(v))&\mathrm{Ai}(\nu^{\frac{2}{3}}f(v))\end{pmatrix}E(v)^{-1}R(v)^{-1}R(u)E(u)\begin{pmatrix}\mathrm{Ai}(\nu^{\frac{2}{3}}f(u))\\ -\mathrm{i}\mathrm{Ai}^{\prime}(\nu^{\frac{2}{3}}f(u))\end{pmatrix}. (4.16)

Inserting (4.5) into the above formula gives us that

ϕν(x)ϕν(y)KνBes(ϕν(x),ϕν(y))\displaystyle\sqrt{\phi_{\nu}^{\prime}(x)\phi_{\nu}^{\prime}(y)}K_{\nu}^{\mathrm{Bes}}(\phi_{\nu}(x),\phi_{\nu}(y))
=(1hνx)(1hνy)xyhν2(x2y2)(Ai(ν23f((1hνy)2))iAi(ν23f((1hνy)2)))E((1hνy)2)1\displaystyle=\frac{\sqrt{(1-h_{\nu}x)(1-h_{\nu}y)}}{x-y-\frac{h_{\nu}}{2}(x^{2}-y^{2})}\begin{pmatrix}\mathrm{Ai}^{\prime}(\nu^{\frac{2}{3}}f((1-h_{\nu}y)^{2}))&-\mathrm{i}\mathrm{Ai}(\nu^{\frac{2}{3}}f((1-h_{\nu}y)^{2}))\end{pmatrix}E((1-h_{\nu}y)^{2})^{-1}
×R((1hνy)2)1R((1hνx)2)E((1hνx)2)(Ai(ν23f((1hνx)2))iAi(ν23f((1hνx)2))).\displaystyle\quad\times R((1-h_{\nu}y)^{2})^{-1}R((1-h_{\nu}x)^{2})E((1-h_{\nu}x)^{2})\begin{pmatrix}\mathrm{Ai}(\nu^{\frac{2}{3}}f((1-h_{\nu}x)^{2}))\\ -\mathrm{i}\mathrm{Ai}^{\prime}(\nu^{\frac{2}{3}}f((1-h_{\nu}x)^{2}))\end{pmatrix}. (4.17)

We now derive expansions for the different parts on the right-hand side of the above formula. According to [5, Lemma 3.1], one has

(1hνx)(1hνy)xyhν2(x2y2)=1xy(xy)j=2rj(x,y)hνj,\frac{\sqrt{(1-h_{\nu}x)(1-h_{\nu}y)}}{x-y-\frac{h_{\nu}}{2}(x^{2}-y^{2})}=\frac{1}{x-y}-(x-y)\sum_{j=2}^{\infty}r_{j}(x,y)h_{\nu}^{j}, (4.18)

where each rj(x,y)r_{j}(x,y) is a certain polynomial of degree j2j-2.
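As a quick numerical sanity check of (4.18) (a sketch, not part of the proof), note that the expansion contains no $h_\nu^{1}$ term; equivalently, $(x-y)$ times the left-hand side should deviate from $1$ only at order $h_\nu^{2}$:

```python
import math

def prefactor(x, y, h):
    # left-hand side of (4.18)
    return math.sqrt((1 - h * x) * (1 - h * y)) / (x - y - 0.5 * h * (x ** 2 - y ** 2))

x, y = 0.7, -0.4
for h in (1e-2, 1e-3, 1e-4):
    dev = abs((x - y) * prefactor(x, y, h) - 1.0)
    # absence of an h^1 term: the deviation is O(h^2)
    assert dev < h ** 2
```

The quadratic decay of the deviation as $h$ shrinks by factors of ten is exactly the statement that the $h_\nu$-series on the right-hand side of (4.18) starts at $j=2$.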

Next, it is observed that if x=yx=y,

E((1hνy)2)1R((1hνy)2)1R((1hνx)2)E((1hνx)2)=I.E((1-h_{\nu}y)^{2})^{-1}R((1-h_{\nu}y)^{2})^{-1}R((1-h_{\nu}x)^{2})E((1-h_{\nu}x)^{2})=I. (4.19)

This, together with (3.6) and the fact that E(z)E(z) is an analytic function in D(1,ε)D(1,\varepsilon), implies that, as ν+\nu\to+\infty,

E((1hνy)2)1R((1hνy)2)1R((1hνx)2)E((1hνx)2)I+(xy)j=1ej(x,y)hνj,E((1-h_{\nu}y)^{2})^{-1}R((1-h_{\nu}y)^{2})^{-1}R((1-h_{\nu}x)^{2})E((1-h_{\nu}x)^{2})\\ \sim I+(x-y)\sum_{j=1}^{\infty}e_{j}(x,y)h_{\nu}^{j}, (4.20)

where ej(x,y)e_{j}(x,y) are certain matrices whose entries are polynomials in xx and yy. Indeed, by (3.27) and (3.6), it follows that, as hν0+h_{\nu}\to 0^{+},

E((1hνx)2)\displaystyle E((1-h_{\nu}x)^{2}) =(2hν)14σ3eπi4σ3σ3(I+x5σ3hν+(335x2008175x2)hν2+𝒪(hν3)),\displaystyle=(2h_{\nu})^{-\frac{1}{4}\sigma_{3}}e^{-\frac{\pi\mathrm{i}}{4}\sigma_{3}}\sigma_{3}\left(I+\frac{x}{5}\sigma_{3}h_{\nu}+\begin{pmatrix}\frac{3}{35}x^{2}&0\\ 0&-\frac{8}{175}x^{2}\end{pmatrix}h_{\nu}^{2}+\mathcal{O}(h_{\nu}^{3})\right), (4.21)
R((1hνx)2)\displaystyle R((1-h_{\nu}x)^{2}) =hν14σ3(I+(0072400)hν+(0270225x0)hν2+𝒪(hν3))hν14σ3.\displaystyle=h_{\nu}^{-\frac{1}{4}\sigma_{3}}\left(I+\begin{pmatrix}0&0\\ \frac{7\sqrt{2}}{40}&0\end{pmatrix}h_{\nu}+\begin{pmatrix}0&\frac{\sqrt{2}}{70}\\ \frac{\sqrt{2}}{25}x&0\end{pmatrix}h_{\nu}^{2}+\mathcal{O}(h_{\nu}^{3})\right)h_{\nu}^{\frac{1}{4}\sigma_{3}}. (4.22)

Thus, we have

e1(x,y)=15σ3,e2(x,y)=(15x+8y1750i258x+15y175).\displaystyle e_{1}(x,y)=\frac{1}{5}\sigma_{3},\qquad e_{2}(x,y)=\begin{pmatrix}\frac{15x+8y}{175}&0\\ \frac{\mathrm{i}}{25}&-\frac{8x+15y}{175}\end{pmatrix}. (4.23)

Then, we consider the parts involving the Airy functions. From the expansion of ff near z=1z=1 given in (3.5), we have

ν23f((1hνx)2)=x+j=1p1,j(x)hνj,ν+,\displaystyle\nu^{\frac{2}{3}}f((1-h_{\nu}x)^{2})=x+\sum_{j=1}^{\infty}p_{1,j}(x)h_{\nu}^{j},\qquad\textrm{$\nu\to+\infty$}, (4.24)

where p1,j(x)p_{1,j}(x) are certain polynomials of degree j+1j+1 with

p1,1(x)=310x2,p1,2(x)=32175x3.\displaystyle p_{1,1}(x)=\frac{3}{10}x^{2},\qquad p_{1,2}(x)=\frac{32}{175}x^{3}. (4.25)

Moreover, by writing

1m!(ν23f((1hνx)2)x)m=j=mpm,j(x)hνj,hν0+,\displaystyle\frac{1}{m!}\left(\nu^{\frac{2}{3}}f((1-h_{\nu}x)^{2})-x\right)^{m}=\sum_{j=m}^{\infty}p_{m,j}(x)h_{\nu}^{j},\qquad h_{\nu}\to 0^{+}, (4.26)

it is easily seen that

pm,j(x)=(mn)!n!m!k=mnjnpmn,k(x)pn,jk(x)\displaystyle p_{m,j}(x)=\frac{(m-n)!n!}{m!}\sum_{k=m-n}^{j-n}p_{m-n,k}(x)p_{n,j-k}(x) (4.27)

for 1\leq n<m\leq j. We then obtain from the analyticity of the Airy function and (4.26) that

Ai(ν23f((1hνx)2))=Ai(x)+j=1m=1jpm,j(x)Ai(m)(x)hνj,ν+,\displaystyle\mathrm{Ai}(\nu^{\frac{2}{3}}f((1-h_{\nu}x)^{2}))=\mathrm{Ai}(x)+\sum_{j=1}^{\infty}\sum_{m=1}^{j}p_{m,j}(x)\mathrm{Ai}^{(m)}(x)h_{\nu}^{j},\qquad\nu\to+\infty, (4.28)

where Ai(m)(x)\mathrm{Ai}^{(m)}(x) denotes the mm-th derivative of Ai(x)\mathrm{Ai}(x) with respect to xx. From the differential equation

d2Ai(z)dz2=zAi(z)\frac{\,\mathrm{d}^{2}\mathrm{Ai}(z)}{\,\mathrm{d}z^{2}}=z\mathrm{Ai}(z) (4.29)

satisfied by the Airy function, a direct calculation gives

Ai(m)(x)=Pm(x)Ai(x)+Qm(x)Ai(x),\displaystyle\mathrm{Ai}^{(m)}(x)=P_{m}(x)\mathrm{Ai}(x)+Q_{m}(x)\mathrm{Ai}^{\prime}(x), (4.30)

where Pm(x)P_{m}(x) and Qm(x)Q_{m}(x) are polynomials with P0(x)=1P_{0}(x)=1 and Q0(x)=0Q_{0}(x)=0. They satisfy the recurrence relations (cf. [22])

Pn+1(x)=Pn(x)+xQn(x),Qn+1(x)=Qn(x)+Pn(x).\displaystyle P_{n+1}(x)=P_{n}^{\prime}(x)+xQ_{n}(x),\qquad Q_{n+1}(x)=Q_{n}^{\prime}(x)+P_{n}(x). (4.31)
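The recurrence (4.31) is straightforward to implement. The following Python sketch generates $P_n$ and $Q_n$ as integer coefficient lists and confirms the first few cases $\mathrm{Ai}''=x\,\mathrm{Ai}$, $\mathrm{Ai}^{(3)}=\mathrm{Ai}+x\,\mathrm{Ai}'$ and $\mathrm{Ai}^{(4)}=x^{2}\mathrm{Ai}+2\,\mathrm{Ai}'$:

```python
def dpoly(p):
    # derivative of a polynomial stored as a coefficient list, p[i] ~ x**i
    return [i * c for i, c in enumerate(p)][1:]

def xpoly(p):
    # multiplication by x
    return [0] + p

def addpoly(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def trim(p):
    while p and p[-1] == 0:
        p = p[:-1]
    return p

P, Q = [[1]], [[]]  # P_0 = 1, Q_0 = 0
for n in range(12):
    P.append(addpoly(dpoly(P[n]), xpoly(Q[n])))  # P_{n+1} = P_n' + x Q_n
    Q.append(addpoly(dpoly(Q[n]), P[n]))         # Q_{n+1} = Q_n' + P_n

# Ai'' = x Ai, Ai''' = Ai + x Ai', Ai'''' = x^2 Ai + 2 Ai'
assert (trim(P[2]), trim(Q[2])) == ([0, 1], [])
assert (trim(P[3]), trim(Q[3])) == ([1], [0, 1])
assert (trim(P[4]), trim(Q[4])) == ([0, 0, 1], [2])
```

Any representation of polynomials with exact derivative and sum operations works here; coefficient lists keep the computation free of floating-point error.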

With the aid of the expansions (4.28) and (4.30), we obtain

(Ai(ν23f((1hνy)2))iAi(ν23f((1hνy)2)))(Ai(ν23f((1hνx)2))iAi(ν23f((1hνx)2)))\displaystyle\begin{pmatrix}\mathrm{Ai}^{\prime}(\nu^{\frac{2}{3}}f((1-h_{\nu}y)^{2}))&-\mathrm{i}\mathrm{Ai}(\nu^{\frac{2}{3}}f((1-h_{\nu}y)^{2}))\end{pmatrix}\begin{pmatrix}\mathrm{Ai}(\nu^{\frac{2}{3}}f((1-h_{\nu}x)^{2}))\\ -\mathrm{i}\mathrm{Ai}^{\prime}(\nu^{\frac{2}{3}}f((1-h_{\nu}x)^{2}))\end{pmatrix}
=(Ai(x)+j=1m=1jpm,j(x)Ai(m)(x)hνj)(Ai(y)+j=1m=1jpm,j(y)Ai(m+1)(y)hνj)\displaystyle=\left(\mathrm{Ai}(x)+\sum_{j=1}^{\infty}\sum_{m=1}^{j}p_{m,j}(x)\mathrm{Ai}^{(m)}(x)h_{\nu}^{j}\right)\cdot\left(\mathrm{Ai}^{\prime}(y)+\sum_{j=1}^{\infty}\sum_{m=1}^{j}p_{m,j}(y)\mathrm{Ai}^{(m+1)}(y)h_{\nu}^{j}\right)
(Ai(x)+j=1m=1jpm,j(x)Ai(m+1)(x)hνj)(Ai(y)+j=1m=1jpm,j(y)Ai(m)(y)hνj)\displaystyle\quad-\left(\mathrm{Ai}^{\prime}(x)+\sum_{j=1}^{\infty}\sum_{m=1}^{j}p_{m,j}(x)\mathrm{Ai}^{(m+1)}(x)h_{\nu}^{j}\right)\cdot\left(\mathrm{Ai}(y)+\sum_{j=1}^{\infty}\sum_{m=1}^{j}p_{m,j}(y)\mathrm{Ai}^{(m)}(y)h_{\nu}^{j}\right)
=Ai(x)Ai(y)Ai(x)Ai(y)+N=1(aN,00(x,y)Ai(x)Ai(y)+aN,01(x,y)Ai(x)Ai(y)\displaystyle=\mathrm{Ai}(x)\mathrm{Ai}^{\prime}(y)-\mathrm{Ai}^{\prime}(x)\mathrm{Ai}(y)+\sum_{N=1}^{\infty}\left(a_{N,00}(x,y)\mathrm{Ai}(x)\mathrm{Ai}(y)+a_{N,01}(x,y)\mathrm{Ai}(x)\mathrm{Ai}^{\prime}(y)\right.
aN,01(y,x)Ai(x)Ai(y)+aN,11(x,y)Ai(x)Ai(y))hνN,\displaystyle\quad\left.-a_{N,01}(y,x)\mathrm{Ai}^{\prime}(x)\mathrm{Ai}(y)+a_{N,11}(x,y)\mathrm{Ai}^{\prime}(x)\mathrm{Ai}^{\prime}(y)\right)h_{\nu}^{N}, (4.32)

where

aN,00(x,y)=n=1N(pn,N(y)Pn+1(y)pn,N(x)Pn+1(x))\displaystyle a_{N,00}(x,y)=\sum_{n=1}^{N}\left(p_{n,N}(y)P_{n+1}(y)-p_{n,N}(x)P_{n+1}(x)\right)
+j+k=Nj1,k1m=1jn=1k(pm,j(x)pn,k(y)Pm(x)Pn+1(y)pm,j(y)pn,k(x)Pm(y)Pn+1(x)),\displaystyle\quad+\sum_{\begin{subarray}{c}j+k=N\\ j\geq 1,k\geq 1\end{subarray}}\sum_{m=1}^{j}\sum_{n=1}^{k}\left(p_{m,j}(x)p_{n,k}(y)P_{m}(x)P_{n+1}(y)-p_{m,j}(y)p_{n,k}(x)P_{m}(y)P_{n+1}(x)\right), (4.33)
aN,01(x,y)=n=2N(pn,N(y)Qn+1(y)+pn,N(x)Pn(x))\displaystyle a_{N,01}(x,y)=\sum_{n=2}^{N}\left(p_{n,N}(y)Q_{n+1}(y)+p_{n,N}(x)P_{n}(x)\right)
+j+k=Nj1,k1m=1jn=1kpm,j(x)pn,k(y)(Pm(x)Qn+1(y)Pm+1(x)Qn(y)),\displaystyle\quad+\sum_{\begin{subarray}{c}j+k=N\\ j\geq 1,k\geq 1\end{subarray}}\sum_{m=1}^{j}\sum_{n=1}^{k}p_{m,j}(x)p_{n,k}(y)\left(P_{m}(x)Q_{n+1}(y)-P_{m+1}(x)Q_{n}(y)\right), (4.34)

and

aN,11(x,y)=n=1N(pn,N(x)Qn(x)pn,N(y)Qn(y))\displaystyle a_{N,11}(x,y)=\sum_{n=1}^{N}\left(p_{n,N}(x)Q_{n}(x)-p_{n,N}(y)Q_{n}(y)\right)
+j+k=Nj1,k1m=1jn=1k(pm,j(x)pn,k(y)Qm(x)Qn+1(y)pm,j(y)pn,k(x)Qm(y)Qn+1(x)),\displaystyle\quad+\sum_{\begin{subarray}{c}j+k=N\\ j\geq 1,k\geq 1\end{subarray}}\sum_{m=1}^{j}\sum_{n=1}^{k}\left(p_{m,j}(x)p_{n,k}(y)Q_{m}(x)Q_{n+1}(y)-p_{m,j}(y)p_{n,k}(x)Q_{m}(y)Q_{n+1}(x)\right), (4.35)

are polynomials in xx and yy. Since the polynomials aN,00(x,y)a_{N,00}(x,y) and aN,11(x,y)a_{N,11}(x,y) are anti-symmetric in xx, yy, they must have the form

(xy)×(polynomials in x and y).(x-y)\times(\textrm{polynomials in $x$ and $y$}). (4.36)

We would like to show that the polynomials aN,01(x,y)a_{N,01}(x,y) admit the same structure. In other words, we want to show that

aN,01(x,x)=0,a_{N,01}(x,x)=0, (4.37)

or equivalently,

n=2Npn,N(x)(Qn+1(x)+Pn(x))\displaystyle\sum_{n=2}^{N}p_{n,N}(x)\left(Q_{n+1}(x)+P_{n}(x)\right)
\displaystyle+\sum_{\begin{subarray}{c}j+k=N\\ j\geq 1,k\geq 1\end{subarray}}\sum_{m=1}^{j}\sum_{n=1}^{k}p_{m,j}(x)p_{n,k}(x)\left(P_{m}(x)Q_{n+1}(x)-P_{m+1}(x)Q_{n}(x)\right)=0. (4.38)

To prove the above equality, we need the following lemma.

Lemma 4.1.

With polynomials PmP_{m} and QmQ_{m} defined through (4.30), we have

j=0N1j!(Nj)!(Pj(x)QN+1j(x)Qj(x)PN+1j(x))=0,N1.\displaystyle\sum_{j=0}^{N}\frac{1}{j!(N-j)!}\left(P_{j}(x)Q_{N+1-j}(x)-Q_{j}(x)P_{N+1-j}(x)\right)=0,\qquad N\geq 1. (4.39)
Proof.

We use the method of mathematical induction to prove the above identity. It is clear that (4.39) holds for N=1N=1. Assume that (4.39) is valid for N=k≥1N=k\geq 1, i.e.,

j=0k1j!(kj)!(Pj(x)Qk+1j(x)Qj(x)Pk+1j(x))=0.\displaystyle\sum_{j=0}^{k}\frac{1}{j!(k-j)!}\left(P_{j}(x)Q_{k+1-j}(x)-Q_{j}(x)P_{k+1-j}(x)\right)=0. (4.40)

Taking the derivative on both sides with respect to xx, it follows that

j=0k1j!(kj)!(Pj(x)Qk+1j(x)+Pj(x)Qk+1j(x)Qj(x)Pk+1j(x)Qj(x)Pk+1j(x))=0,\displaystyle\sum_{j=0}^{k}\frac{1}{j!(k-j)!}\left(P_{j}^{\prime}(x)Q_{k+1-j}(x)+P_{j}(x)Q_{k+1-j}^{\prime}(x)-Q_{j}^{\prime}(x)P_{k+1-j}(x)-Q_{j}(x)P_{k+1-j}^{\prime}(x)\right)=0,

Applying the recurrence relations (4.31) to the above formula, we arrive at

j=0k1j!(kj)!(Pj+1(x)Qk+1j(x)Qj+1(x)Pk+1j(x)+Pj(x)Qk+2j(x)Qj(x)Pk+2j(x))\displaystyle\sum_{j=0}^{k}\frac{1}{j!(k-j)!}\left(P_{j+1}(x)Q_{k+1-j}(x)-Q_{j+1}(x)P_{k+1-j}(x)+P_{j}(x)Q_{k+2-j}(x)-Q_{j}(x)P_{k+2-j}(x)\right)
=j=0k+1jj!(k+1j)!(Pj(x)Qk+2j(x)Qj(x)Pk+2j(x))\displaystyle=\sum_{j=0}^{k+1}\frac{j}{j!(k+1-j)!}\left(P_{j}(x)Q_{k+2-j}(x)-Q_{j}(x)P_{k+2-j}(x)\right)
+j=0k+1k+1jj!(k+1j)!(Pj(x)Qk+2j(x)Qj(x)Pk+2j(x))\displaystyle\qquad+\sum_{j=0}^{k+1}\frac{k+1-j}{j!(k+1-j)!}\left(P_{j}(x)Q_{k+2-j}(x)-Q_{j}(x)P_{k+2-j}(x)\right)
=j=0k+1k+1j!(k+1j)!(Pj(x)Qk+2j(x)Qj(x)Pk+2j(x))=0,\displaystyle=\sum_{j=0}^{k+1}\frac{k+1}{j!(k+1-j)!}\left(P_{j}(x)Q_{k+2-j}(x)-Q_{j}(x)P_{k+2-j}(x)\right)=0,

which is (4.39) with N=k+1N=k+1. ∎
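Lemma 4.1 can also be confirmed by exact symbolic computation. The sketch below regenerates $P_n$, $Q_n$ from (4.31) with integer coefficients and checks that the left-hand side of (4.39) is the zero polynomial for $N$ up to $12$:

```python
from fractions import Fraction
from math import factorial

def dpoly(p):
    return [i * c for i, c in enumerate(p)][1:]

def addpoly(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def mulpoly(p, q):
    out = [0] * (len(p) + len(q) + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def scale(c, p):
    return [c * a for a in p]

NMAX = 12
P, Q = [[1]], [[]]  # P_0 = 1, Q_0 = 0, recurrence (4.31)
for n in range(NMAX + 1):
    P.append(addpoly(dpoly(P[n]), [0] + Q[n]))
    Q.append(addpoly(dpoly(Q[n]), P[n]))

def lhs(N):
    # left-hand side of (4.39), with exact rational coefficients
    s = []
    for j in range(N + 1):
        c = Fraction(1, factorial(j) * factorial(N - j))
        term = addpoly(mulpoly(P[j], Q[N + 1 - j]),
                       scale(-1, mulpoly(Q[j], P[N + 1 - j])))
        s = addpoly(s, scale(c, term))
    return s

for N in range(1, NMAX + 1):
    assert all(c == 0 for c in lhs(N))  # the zero polynomial, as (4.39) claims
```

Since all arithmetic is over the rationals, a passing run verifies the identity exactly for these $N$, not merely up to rounding.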

Using (4.27) and (4.39), we can rewrite the left-hand side of (4.38) as

n=2Npn,N(x)(Qn+1(x)+Pn(x))+m+n=2m1,n1Nj+k=Njm,kn(pm,j(x)pn,k(x)(Pm(x)Qn+1(x)Pm+1(x)Qn(x)))\displaystyle\sum_{n=2}^{N}p_{n,N}(x)\left(Q_{n+1}(x)+P_{n}(x)\right)+\sum_{\begin{subarray}{c}m+n=2\\ m\geq 1,n\geq 1\end{subarray}}^{N}\sum_{\begin{subarray}{c}j+k=N\\ j\geq m,k\geq n\end{subarray}}\left(p_{m,j}(x)p_{n,k}(x)(P_{m}(x)Q_{n+1}(x)-P_{m+1}(x)Q_{n}(x))\right)
=t=2N(pt,N(x)(Qt+1(x)+Pt(x))+m+n=tm1,n1j+k=N,jm,kn(pm,j(x)pn,k(x)(Pm(x)Qn+1(x)Pm+1(x)Qn(x))))\displaystyle=\sum_{t=2}^{N}\left(p_{t,N}(x)\left(Q_{t+1}(x)+P_{t}(x)\right)+\sum_{\begin{subarray}{c}m+n=t\\ m\geq 1,n\geq 1\end{subarray}}\sum_{\begin{subarray}{c}j+k=N,\\ j\geq m,k\geq n\end{subarray}}\left(p_{m,j}(x)p_{n,k}(x)(P_{m}(x)Q_{n+1}(x)-P_{m+1}(x)Q_{n}(x))\right)\right)
=t=2N(pt,N(x)(Qt+1(x)+Pt(x))+m+n=tm1,n1(t!m!n!pt,N(x)(Pm(x)Qn+1(x)Pm+1(x)Qn(x))))\displaystyle=\sum_{t=2}^{N}\left(p_{t,N}(x)\left(Q_{t+1}(x)+P_{t}(x)\right)+\sum_{\begin{subarray}{c}m+n=t\\ m\geq 1,n\geq 1\end{subarray}}\left(\frac{t!}{m!n!}p_{t,N}(x)(P_{m}(x)Q_{n+1}(x)-P_{m+1}(x)Q_{n}(x))\right)\right)
=t=2Npt,N(x)(Qt+1(x)+Pt(x)+t!m=1t1(1m!(tm)!(Pm(x)Qt+1m(x)Pt+1m(x)Qm(x))))\displaystyle=\sum_{t=2}^{N}p_{t,N}(x)\left(Q_{t+1}(x)+P_{t}(x)+t!\sum_{m=1}^{t-1}\left(\frac{1}{m!(t-m)!}(P_{m}(x)Q_{t+1-m}(x)-P_{t+1-m}(x)Q_{m}(x))\right)\right)
=t=2Nt!pt,N(x)(m=0t(1m!(tm)!(Pm(x)Qt+1m(x)Pt+1m(x)Qm(x))))=0,\displaystyle=\sum_{t=2}^{N}t!p_{t,N}(x)\left(\sum_{m=0}^{t}\left(\frac{1}{m!(t-m)!}(P_{m}(x)Q_{t+1-m}(x)-P_{t+1-m}(x)Q_{m}(x))\right)\right)=0, (4.41)

as required. Since the polynomials aN,00,aN,01a_{N,00},a_{N,01} and aN,11a_{N,11} all take the same structure (4.36), we obtain from (4.32) and the estimates of the Airy functions in (4.8) that, for any 𝔪\mathfrak{m}\in\mathbb{N},

1xy(Ai(ν23f((1hνy)2))iAi(ν23f((1hνy)2)))(Ai(ν23f((1hνx)2))iAi(ν23f((1hνx)2)))\displaystyle\frac{1}{x-y}\begin{pmatrix}\mathrm{Ai}^{\prime}(\nu^{\frac{2}{3}}f((1-h_{\nu}y)^{2}))&-\mathrm{i}\mathrm{Ai}(\nu^{\frac{2}{3}}f((1-h_{\nu}y)^{2}))\end{pmatrix}\begin{pmatrix}\mathrm{Ai}(\nu^{\frac{2}{3}}f((1-h_{\nu}x)^{2}))\\ -\mathrm{i}\mathrm{Ai}^{\prime}(\nu^{\frac{2}{3}}f((1-h_{\nu}x)^{2}))\end{pmatrix}
=Ai(x)Ai(y)Ai(x)Ai(y)xy+N=1𝔪(a~N,00(x,y)Ai(x)Ai(y)+a~N,01(x,y)Ai(x)Ai(y)\displaystyle=\frac{\mathrm{Ai}(x)\mathrm{Ai}^{\prime}(y)-\mathrm{Ai}^{\prime}(x)\mathrm{Ai}(y)}{x-y}+\sum_{N=1}^{\mathfrak{m}}\left(\tilde{a}_{N,00}(x,y)\mathrm{Ai}(x)\mathrm{Ai}(y)+\tilde{a}_{N,01}(x,y)\mathrm{Ai}(x)\mathrm{Ai}^{\prime}(y)\right.
+a~N,01(y,x)Ai(x)Ai(y)+a~N,11(x,y)Ai(x)Ai(y))hνN+hν𝔪+1𝒪(e(x+y)),\displaystyle\quad\left.+\tilde{a}_{N,01}(y,x)\mathrm{Ai}^{\prime}(x)\mathrm{Ai}(y)+\tilde{a}_{N,11}(x,y)\mathrm{Ai}^{\prime}(x)\mathrm{Ai}^{\prime}(y)\right)h_{\nu}^{N}+h_{\nu}^{\mathfrak{m}+1}\cdot\mathcal{O}\left(e^{-(x+y)}\right), (4.42)

where a~N,00(x,y)\tilde{a}_{N,00}(x,y), a~N,01(x,y)\tilde{a}_{N,01}(x,y) and a~N,11(x,y)\tilde{a}_{N,11}(x,y) are certain polynomials in xx and yy.

Combining (4.18), (4.20) and (4.42) gives us that, under the condition (4.15),

ϕν(x)ϕν(y)KνBes(ϕν(x),ϕν(y))\displaystyle\sqrt{\phi_{\nu}^{\prime}(x)\phi_{\nu}^{\prime}(y)}K_{\nu}^{\mathrm{Bes}}(\phi_{\nu}(x),\phi_{\nu}(y))
=Ai(x)Ai(y)Ai(x)Ai(y)xy+j=1𝔪(pj,00(x,y)Ai(x)Ai(y)+pj,01(x,y)Ai(x)Ai(y)\displaystyle=\frac{\mathrm{Ai}(x)\mathrm{Ai}^{\prime}(y)-\mathrm{Ai}^{\prime}(x)\mathrm{Ai}(y)}{x-y}+\sum_{j=1}^{\mathfrak{m}}\left(p_{j,00}(x,y)\mathrm{Ai}(x)\mathrm{Ai}(y)+p_{j,01}(x,y)\mathrm{Ai}(x)\mathrm{Ai}^{\prime}(y)\right.
+pj,10(x,y)Ai(x)Ai(y)+pj,11(x,y)Ai(x)Ai(y))hνj+hν𝔪+1𝒪(e(x+y)),\displaystyle\quad\left.+p_{j,10}(x,y)\mathrm{Ai}^{\prime}(x)\mathrm{Ai}(y)+p_{j,11}(x,y)\mathrm{Ai}^{\prime}(x)\mathrm{Ai}^{\prime}(y)\right)h_{\nu}^{j}+h_{\nu}^{\mathfrak{m}+1}\cdot\mathcal{O}\left(e^{-(x+y)}\right), (4.43)

as hν0+h_{\nu}\to 0^{+}, where pj,κλ(x,y)p_{j,\kappa\lambda}(x,y), κ,λ{0,1}\kappa,\lambda\in\{0,1\}, are polynomials in xx and yy. Although the calculations involved are inherently complicated, it is possible to derive precise formulas for pj,κλp_{j,\kappa\lambda} using the explicit expressions (3.5), (3.27), (3.6) and (4.18). The first few polynomials are:

p1,00(x,y)\displaystyle p_{1,00}(x,y) =310(x2+xy+y2),\displaystyle=-\frac{3}{10}(x^{2}+xy+y^{2}),
p1,01(x,y)\displaystyle p_{1,01}(x,y) =p1,10(x,y)=15,\displaystyle=p_{1,10}(x,y)=\frac{1}{5},
p1,11(x,y)\displaystyle p_{1,11}(x,y) =310(x+y),\displaystyle=\frac{3}{10}(x+y),

and

p2,00(x,y)\displaystyle p_{2,00}(x,y) =56235(x2+y2)319xy(x+y)1400,\displaystyle=\frac{56-235(x^{2}+y^{2})-319xy(x+y)}{1400},
p2,01(x,y)\displaystyle p_{2,01}(x,y) =p2,10(y,x)=63(x4+x3yx2y2xy3y4)55x+239y1400,\displaystyle=p_{2,10}(y,x)=\frac{63(x^{4}+x^{3}y-x^{2}y^{2}-xy^{3}-y^{4})-55x+239y}{1400},
p2,11(x,y)\displaystyle p_{2,11}(x,y) =340(x2+y2)+256xy1400,\displaystyle=\frac{340(x^{2}+y^{2})+256xy}{1400},

which lead to the explicit formulas for the first two expansion terms stated in Section 1.

Finally, since all the terms Ai()\mathrm{Ai}(\cdot), Ai()\mathrm{Ai}^{\prime}(\cdot), E()E(\cdot) and R()R(\cdot) are analytic functions for t0x,y<(11ε)hν1t_{0}\leq x,y<(1-\sqrt{1-\varepsilon})h_{\nu}^{-1}, all the expansions above can be differentiated repeatedly with respect to the variables xx and yy while preserving uniformity; in particular, this applies to the kernel expansion (4.43).

This completes the proof of Theorem 1.1. ∎

Appendix A The Airy parametrix

The Airy parametrix Φ(Ai)\Phi^{({\mathrm{Ai}})} is the unique solution of the following RH problem.

RH problem A.1.
  • (a)

    Φ(Ai)(z)\Phi^{(\mathrm{Ai})}(z) is analytic in {j=14Σj{0}}\mathbb{C}\setminus\{\cup_{j=1}^{4}\Sigma_{j}\cup\{0\}\}, where the contours Σj\Sigma_{j}, j=1,2,3,4j=1,2,3,4, are indicated in Figure 5.

  • (b)

    Φ(Ai)(z)\Phi^{(\mathrm{Ai})}(z) satisfies the jump condition

    Φ+(Ai)(z)=Φ(Ai)(z){(1101),zΣ1,(1011),zΣ2Σ4,(0110),zΣ3.\displaystyle\Phi^{({\mathrm{Ai}})}_{+}(z)=\Phi^{({\mathrm{Ai}})}_{-}(z)\begin{cases}\begin{pmatrix}1&1\\ 0&1\end{pmatrix},&\qquad z\in\Sigma_{1},\\ \begin{pmatrix}1&0\\ 1&1\end{pmatrix},&\qquad z\in\Sigma_{2}\cup\Sigma_{4},\\ \begin{pmatrix}0&1\\ -1&0\end{pmatrix},&\qquad z\in\Sigma_{3}.\end{cases} (A.1)
  • (c)

    As zz\to\infty, we have

    Φ(Ai)(z)=12(z1400z14)(1ii1)(I+𝒪(z32))e23z3/2σ3.\displaystyle\Phi^{({\mathrm{Ai}})}(z)=\frac{1}{\sqrt{2}}\begin{pmatrix}z^{-\frac{1}{4}}&0\\ 0&z^{\frac{1}{4}}\end{pmatrix}\begin{pmatrix}1&\mathrm{i}\\ \mathrm{i}&1\end{pmatrix}\left(I+\mathcal{O}(z^{-\frac{3}{2}})\right)e^{-\frac{2}{3}z^{3/2}\sigma_{3}}. (A.2)
  • (d)

    Φ(Ai)(z)\Phi^{({\mathrm{Ai}})}(z) is bounded near the origin.

Figure 5: The jump contours of the RH problem for Φ(Ai)\Phi^{({\mathrm{Ai}})}.

Denoting ω:=e2πi/3\omega:=e^{2\pi\mathrm{i}/3}, the unique solution is given by (cf. [11])

Φ(Ai)(z)=2π{(Ai(z)ω2Ai(ω2z)iAi(z)iωAi(ω2z)),argz(0,3π4),(ωAi(ωz)ω2Ai(ω2z)iω2Ai(ωz)iωAi(ω2z)),argz(3π4,π),(ω2Ai(ω2z)ωAi(ωz)iωAi(ω2z)iω2Ai(ωz)),argz(π,3π4),(Ai(z)ωAi(ωz)iAi(z)iω2Ai(ωz)),argz(3π4,0).\displaystyle\Phi^{({\mathrm{Ai}})}(z)=\sqrt{2\pi}\begin{cases}\begin{pmatrix}\mathrm{Ai}(z)&-\omega^{2}\mathrm{Ai}(\omega^{2}z)\\ -\mathrm{i}\mathrm{Ai}^{\prime}(z)&\mathrm{i}\omega\mathrm{Ai}^{\prime}(\omega^{2}z)\end{pmatrix},&\arg z\in\left(0,\frac{3\pi}{4}\right),\\ \begin{pmatrix}-\omega\mathrm{Ai}(\omega z)&-\omega^{2}\mathrm{Ai}(\omega^{2}z)\\ \mathrm{i}\omega^{2}\mathrm{Ai}^{\prime}(\omega z)&\mathrm{i}\omega\mathrm{Ai}^{\prime}(\omega^{2}z)\end{pmatrix},&\arg z\in\left(\frac{3\pi}{4},\pi\right),\\ \begin{pmatrix}-\omega^{2}\mathrm{Ai}(\omega^{2}z)&\omega\mathrm{Ai}(\omega z)\\ \mathrm{i}\omega\mathrm{Ai}^{\prime}(\omega^{2}z)&-\mathrm{i}\omega^{2}\mathrm{Ai}^{\prime}(\omega z)\end{pmatrix},&\arg z\in\left(-\pi,-\frac{3\pi}{4}\right),\\ \begin{pmatrix}\mathrm{Ai}(z)&\omega\mathrm{Ai}(\omega z)\\ -\mathrm{i}\mathrm{Ai}^{\prime}(z)&-\mathrm{i}\omega^{2}\mathrm{Ai}^{\prime}(\omega z)\end{pmatrix},&\arg z\in\left(-\frac{3\pi}{4},0\right).\end{cases} (A.3)
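On $\Sigma_1$, the jump (A.1) for the solution (A.3) reduces to the scalar connection formula $\mathrm{Ai}(z)+\omega\mathrm{Ai}(\omega z)+\omega^{2}\mathrm{Ai}(\omega^{2}z)=0$. As a numerical sanity check (a sketch valid for moderate $|z|$ only), this identity can be tested with a direct Maclaurin-series implementation of $\mathrm{Ai}$, whose normalizing constants are $\mathrm{Ai}(0)$ and $-\mathrm{Ai}'(0)$:

```python
import cmath
import math

C1 = 3 ** (-2.0 / 3.0) / math.gamma(2.0 / 3.0)  # Ai(0)
C2 = 3 ** (-1.0 / 3.0) / math.gamma(1.0 / 3.0)  # -Ai'(0)

def airy_ai(z, terms=60):
    """Maclaurin series of Ai(z); Ai is entire, so this converges for all z,
    and 60 terms are ample for moderate |z|."""
    z = complex(z)
    f = 0j  # sum of the z^{3k} terms
    g = 0j  # sum of the z^{3k+1} terms
    tf, tg = 1 + 0j, z
    for k in range(terms):
        f += tf
        g += tg
        tf *= z ** 3 / ((3 * k + 2) * (3 * k + 3))
        tg *= z ** 3 / ((3 * k + 3) * (3 * k + 4))
    return C1 * f - C2 * g

omega = cmath.exp(2j * cmath.pi / 3)
for z in (0.5, 1 + 0.3j, -0.8, 1.2j):
    # the connection formula behind the jump of (A.3) across Sigma_1
    s = airy_ai(z) + omega * airy_ai(omega * z) + omega ** 2 * airy_ai(omega ** 2 * z)
    assert abs(s) < 1e-10
```

In fact the formula is term-by-term exact for this series, since the $z^{3k}$ part is invariant under $z\mapsto\omega z$ while the $z^{3k+1}$ part picks up a factor $\omega$; the numerical check merely confirms the implementation.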

Furthermore, applying the asymptotics of Ai\mathrm{Ai} and Ai\mathrm{Ai}^{\prime} in [24, Chapter 9], we have, as zz\to\infty,

Φ(Ai)(z)122(z1400z14)(1ii1)(k=0(32)k𝔲k+𝔳kz3k/2ik=0(32)k𝔲k𝔳kz3k/2ik=0(32)k𝔲k𝔳kz3k/2k=0(32)k𝔲k+𝔳kz3k/2)e23z3/2σ3,\displaystyle\Phi^{({\mathrm{Ai}})}(z)\sim\frac{1}{2\sqrt{2}}\begin{pmatrix}z^{-\frac{1}{4}}&0\\ 0&z^{\frac{1}{4}}\end{pmatrix}\begin{pmatrix}1&\mathrm{i}\\ \mathrm{i}&1\end{pmatrix}\begin{pmatrix}\sum\limits_{k=0}^{\infty}\left(-\frac{3}{2}\right)^{k}\frac{\mathfrak{u}_{k}+\mathfrak{v}_{k}}{z^{3k/2}}&\mathrm{i}\sum\limits_{k=0}^{\infty}\left(\frac{3}{2}\right)^{k}\frac{\mathfrak{u}_{k}-\mathfrak{v}_{k}}{z^{3k/2}}\\ -\mathrm{i}\sum\limits_{k=0}^{\infty}\left(-\frac{3}{2}\right)^{k}\frac{\mathfrak{u}_{k}-\mathfrak{v}_{k}}{z^{3k/2}}&\sum\limits_{k=0}^{\infty}\left(\frac{3}{2}\right)^{k}\frac{\mathfrak{u}_{k}+\mathfrak{v}_{k}}{z^{3k/2}}\end{pmatrix}e^{-\frac{2}{3}z^{3/2}\sigma_{3}}, (A.4)

where 𝔲0=𝔳0=1\mathfrak{u}_{0}=\mathfrak{v}_{0}=1 and

𝔲k=(6k5)(6k3)(6k1)216(2k1)k𝔲k1,𝔳k=6k+116k𝔲k.\displaystyle\mathfrak{u}_{k}=\frac{(6k-5)(6k-3)(6k-1)}{216(2k-1)k}\mathfrak{u}_{k-1},\qquad\mathfrak{v}_{k}=\frac{6k+1}{1-6k}\mathfrak{u}_{k}. (A.5)
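The coefficients (A.5) are easy to tabulate exactly. The following sketch reproduces the classical values $\mathfrak{u}_1=5/72$, $\mathfrak{v}_1=-7/72$, $\mathfrak{u}_2=385/10368$, $\mathfrak{v}_2=-455/10368$, consistent with the entries $5\sqrt{2}/48$ and $7\sqrt{2}/48$ of $J_1$ in (3.38):

```python
from fractions import Fraction

u = [Fraction(1)]  # u_0 = 1
v = [Fraction(1)]  # v_0 = 1
for k in range(1, 6):
    # recurrence (A.5)
    u.append(Fraction((6 * k - 5) * (6 * k - 3) * (6 * k - 1),
                      216 * (2 * k - 1) * k) * u[k - 1])
    v.append(Fraction(6 * k + 1, 1 - 6 * k) * u[k])

assert u[1] == Fraction(5, 72) and v[1] == Fraction(-7, 72)
assert u[2] == Fraction(385, 10368) and v[2] == Fraction(-455, 10368)
# odd-k formula in (3.35) with k = 1: 3*u_1 = 5/24 and -3*v_1 = 7/24,
# i.e. 5*sqrt(2)/48 and 7*sqrt(2)/48 after dividing by sqrt(2), matching (3.38)
assert 3 * u[1] == Fraction(5, 24) and -3 * v[1] == Fraction(7, 24)
```

Exact rational arithmetic avoids any rounding issue when these values are fed back into the residue computations (3.39) and (3.40).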

Acknowledgements

We thank Folkmar Bornemann for helpful comments on the preliminary version of this paper. This work was partially supported by National Natural Science Foundation of China under grant numbers 12271105, 11822104, and “Shuguang Program” supported by Shanghai Education Development Foundation and Shanghai Municipal Education Commission.

References

  • [1] D. Aldous and P. Diaconis, Longest increasing subsequences: from patience sorting to the Baik-Deift-Johansson theorem, Bull. Amer. Math. Soc. 36 (1999), 413–432.
  • [2] J. Baik, P. Deift and K. Johansson, On the distribution of the length of the longest increasing subsequence of random permutations, J. Amer. Math. Soc. 12 (1999), 1119–1178.
  • [3] J. Baik, P. Deift and T. Suidan, Combinatorics and Random Matrix Theory, Amer. Math. Soc., Providence, 2016.
  • [4] J. Baik and R. Jenkins, Limiting distribution of maximal crossing and nesting of Poissonized random matchings, Ann. Probab. 41 (2013), 4359–4406.
  • [5] F. Bornemann, Asymptotic expansions relating to the distribution of the length of longest increasing subsequences, preprint arXiv:2301.02022v5.
  • [6] F. Bornemann, A Stirling-type formula for the distribution of the length of longest increasing subsequences, Found. Comput. Math. (2023).
  • [7] F. Bornemann, A note on the expansion of the smallest eigenvalue distribution of the LUE at the hard edge, Ann. Appl. Probab. 26 (2016), 1942–1946.
  • [8] A. Borodin and P. J. Forrester, Increasing subsequences and the hard-to-soft edge transition in matrix ensembles, J. Phys. A 36 (2003), 2963–2981.
  • [9] C. Charlier and J. Lenells, The hard-to-soft edge transition: exponential moments, central limit theorems and rigidity, J. Approx. Theory 285 (2023), 105833, 50pp.
  • [10] L. N. Choup, Edgeworth expansion of the largest eigenvalue distribution function of GUE and LUE, Int. Math. Res. Not. 2006 (2006), 61049, 32pp.
  • [11] P. Deift, Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes, vol. 3, New York University, 1999.
  • [12] P. Deift and X. Zhou, A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation, Ann. Math. 137 (1993), 295–368.
  • [13] A. Edelman, A. Guionnet and S. Péché, Beyond universality in random matrix theory, Ann. Appl. Probab. 26 (2016), 1659–1697.
  • [14] N. El Karoui, A rate of convergence result for the largest eigenvalue of complex white Wishart matrices, Ann. Probab. 34 (2006), 2077–2117.
  • [15] P. J. Forrester, The spectrum edge of random matrix ensembles, Nuclear Phys. B 402 (1993), 709–728.
  • [16] P. J. Forrester and T. D. Hughes, Complex Wishart matrices and conductance in mesoscopic systems: exact results, J. Math. Phys. 35 (1994), 6736–6747.
  • [17] P. J. Forrester and A. Mays, Finite size corrections relating to distributions of the length of longest increasing subsequences, Adv. Appl. Math. 145 (2023), 102482, 33pp.
  • [18] P. J. Forrester and T. Nagao, Asymptotic correlations at the spectrum edge of random matrices, Nuclear Phys. B 435 (1995), 401–420.
  • [19] P. J. Forrester and A. K. Trinh, Finite-size corrections at the hard edge for the Laguerre β\beta ensemble, Stud. Appl. Math. 143 (2019), 315–336.
  • [20] K. Johansson, The longest increasing subsequence in a random permutation and a unitary random matrix model, Math. Res. Lett. 5 (1998), 63–82.
  • [21] A. B. J. Kuijlaars, K. T.-R. McLaughlin, W. Van Assche and M. Vanlessen, The Riemann-Hilbert approach to strong asymptotics for orthogonal polynomials on [1,1][-1,1], Adv. Math. 188 (2004), 337–398.
  • [22] B. J. Laurenzi, Polynomials associated with the higher derivatives of the Airy functions Ai(z)\mathrm{Ai}(z) and Ai(z)\mathrm{Ai}^{\prime}(z), preprint arXiv:1110.2025.
  • [23] F. W. J. Olver, Some new asymptotic expansions for Bessel functions of large orders, Proc. Cambridge Philos. Soc. 48 (1952), 414–427.
  • [24] F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds, NIST Digital Library of Mathematical Functions, http://dlmf.nist.gov/, Release 1.1.10 of 2023-06-15.
  • [25] A. Perret and G. Schehr, Finite NN corrections to the limiting distribution of the smallest eigenvalue of Wishart complex matrices, Random Matrices Theory Appl. 5 (2016), 1650001, 27pp.
  • [26] D. Romik, The Surprising Mathematics of Longest Increasing Subsequences, Cambridge Univ. Press, New York, 2015.
  • [27] J. W. Silverstein and P. L. Combettes, Signal detection via spectral theory of large dimensional random matrices, IEEE Trans. Signal Processing 40 (1992), 2100–2105.
  • [28] C. A. Tracy and H. Widom, Level-spacing distributions and the Airy kernel, Commun. Math. Phys. 159 (1994), 151–174.
  • [29] S. M. Ulam, Monte Carlo calculations in problems of mathematical physics. In: Modern mathematics for the engineer: Second series, 261–281, McGraw-Hill, New York, 1961.
  • [30] J. Wishart, The generalized product moment distribution in samples from a normal multivariate population, Biometrika A 20 (1928), 32–52.