
On the distribution of the time-integral of the geometric Brownian motion

Péter Nándori Department of Mathematical Sciences, Yeshiva University, New York, NY 10016 [email protected]  and  Dan Pirjol School of Business, Stevens Institute of Technology, Hoboken, NJ 07030 [email protected]
(Date: August 2021)
Abstract.

We study the numerical evaluation of several functions appearing in the small time expansion of the distribution of the time-integral of the geometric Brownian motion as well as its joint distribution with the terminal value of the underlying Brownian motion. A precise evaluation of these distributions is relevant for the simulation of stochastic volatility models with log-normally distributed volatility, and Asian option pricing in the Black-Scholes model. We derive series expansions for these distributions, which can be used for numerical evaluations. Using tools from complex analysis, we determine the convergence radius and large order asymptotics of the coefficients in these expansions. We construct an efficient numerical approximation of the joint distribution of the time-integral of the gBM and its terminal value, and illustrate its application to Asian option pricing in the Black-Scholes model.

Key words and phrases:
Complex analysis, asymptotic expansions, numerical approximation

1. Introduction

The distributional properties of the time integral of the geometric Brownian motion (gBM) appear in many problems of applied probability, actuarial science and mathematical finance. The probability distribution function is not known in closed form, although several integral representations have been given in the literature, as we describe next.

A closed form result for the joint distribution of the time integral of a gBM and its terminal value was obtained by Yor [34]. Denoting $A_{t}^{(\mu)}=\int_{0}^{t}e^{2(W_{s}+\mu s)}ds$, where $W_{t}$ is a standard Brownian motion, this result reads

(1) $\mathbb{P}\left(\frac{1}{t}A_{t}^{(\mu)}\in da,\,W_{t}+\mu t\in dx\right)=e^{\mu x-\frac{1}{2}\mu^{2}t}\,e^{-\frac{1+e^{2x}}{2at}}\,\theta_{\frac{e^{x}}{at}}(t)\,\frac{da\,dx}{a},$

where $\theta_{r}(t)$ is the Hartman-Watson integral defined by

(2) $\theta_{r}(t)=\frac{r}{\sqrt{2\pi^{3}t}}\,e^{\frac{\pi^{2}}{2t}}\int_{0}^{\infty}e^{-\frac{\xi^{2}}{2t}}e^{-r\cosh\xi}\sinh\xi\,\sin\frac{\pi\xi}{t}\,d\xi,$

and $r$ is a positive parameter. This is related to the Hartman-Watson distribution [14], which is defined by its probability density function $f_{r}(t)=\frac{\theta_{r}(t)}{\int\theta_{r}(s)ds}$.
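
For illustration, the integral in (2) can be evaluated by direct quadrature as in the following minimal Python sketch (not part of the original analysis; the function name theta is ours). The large prefactor $e^{\pi^{2}/(2t)}$ combined with the oscillatory factor $\sin(\pi\xi/t)$ makes this naive approach unreliable for small $t$, which is exactly the numerical difficulty discussed below.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def theta(r, t):
    """Hartman-Watson integral theta_r(t) of Eq. (2), by direct quadrature."""
    def integrand(xi):
        return np.exp(-xi**2/(2*t) - r*np.cosh(xi)) * np.sinh(xi) * np.sin(np.pi*xi/t)
    # beyond xi_max the factor exp(-r*cosh(xi)) underflows to zero, so truncate there
    xi_max = np.arccosh(745.0/r + 1.0)
    val, _ = quad(integrand, 0.0, xi_max, limit=1000)
    return r/np.sqrt(2*np.pi**3*t) * np.exp(np.pi**2/(2*t)) * val

# usable for moderate t; for t << 1 the integrand oscillates rapidly and cancels
print(theta(1.0, 1.0), theta(1.0, 0.5))
\end{verbatim}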

Integrating (1) over $x$ gives an expression for the distribution of the time-average of the gBM as a double integral. For certain values of $\mu$ this can be reduced to a single integral [4]. Chapter 4 in [21] gives a detailed overview of the cases for which such simplifications are available.

The numerical evaluation of the integral (2) is challenging for small values of $t$ [1], which motivated the search for alternative numerical and analytical approximations. We summarize below limit theorems in the literature for the joint distribution of the time integral of the gBM and its terminal value.

An asymptotic expansion for the Hartman-Watson integral $\theta_{\rho/t}(t)$ as $t\to 0$ at fixed $\rho$ was derived in [28]. The leading order of this expansion is

(3) $\theta_{\rho/t}(t)=\frac{1}{2\pi t}e^{-\frac{1}{t}(F(\rho)-\frac{\pi^{2}}{2})}G(\rho)+O(t^{0})$

and depends on two real functions $F(\rho),G(\rho)$. The exact definition of $F$ and $G$ is technical and so we postpone it to Section 4.2, see formulas (57) and (60).

Upon substitution into (1), the expansion (3) gives the leading $t\to 0$ asymptotics of the joint distribution of the time-integral of a gBM and its terminal value

(4) $\mathbb{P}\left(\frac{1}{t}A_{t}^{(\mu)}\in da,\,e^{W_{t}+\mu t}\in dv\right)=\frac{1}{2\pi t}v^{\mu}e^{-\frac{1}{2}\mu^{2}t}\,G(v/a)\,e^{-\frac{1}{t}I(a,v)}\,\frac{da\,dv}{av}+O(t^{0})$

with

(5) $I(a,v)=\frac{1+v^{2}}{2a}+F\left(\frac{v}{a}\right)-\frac{\pi^{2}}{2}\,.$

The function $I(a,v)$ appears also in the asymptotic expansion of the log-normal SABR stochastic volatility model discretized in time, as the number of time steps $n\to\infty$ under appropriate rescaling of the model parameters, see Sec. 3.2 in [29].

The application of Laplace asymptotic methods gives an expansion for the distribution function of the time integral of the gBM, see Proposition 4 in [28]

(6) $\mathbb{P}\left(\frac{1}{t}A_{t}^{(\mu)}\in da\right)=\frac{1}{\sqrt{2\pi t}}g(a,\mu)\,e^{-\frac{1}{t}J(a)}\,\frac{da}{a}(1+O(t))$

where $g(a,\mu)$ is given in Proposition 4 of [28] and $J(a)=\inf_{v\geq 0}I(a,v)$.

1.1. Literature review on the small time asymptotics of the distribution of $(A_{t}^{(\mu)},W_{t})$

In order to put these results into context, we summarize briefly the known short maturity asymptotics of the continuous time average of the geometric Brownian motion $a_{T}:=\frac{1}{T}\int_{0}^{T}e^{2(W_{s}+\mu s)}ds$. Dufresne [5] proved convergence in law of this quantity to a normal distribution, see Theorem 2.1 in [5]

(7) $\lim_{T\to 0}\frac{a_{T}-1}{\sqrt{T}}\stackrel{d}{=}N\left(0,\frac{4}{3}\right)\,.$

This can be formulated equivalently as a log-normal limiting distribution in the same $T\to 0$ limit, see Theorem 2.2 in [5]. The log-normal approximation for the time-integral of the gBM is widely used in financial practice, see [5] for a literature review. When applied to the pricing of Asian options, this is known as the Lévy approximation [17].

The log-normal approximation is also used for the conditional distribution of $A_{t}^{(\mu)}$ at a given terminal value of the gBM, as discussed for example in [22, 23]. This conditional distribution appears in conditional Monte Carlo simulations of stochastic volatility models [33], see [2] for an overview and recent results.

The result (7) can be immediately extended to the joint distribution of $(a_{T},z_{T})$ with $z_{T}=e^{W_{T}+\mu T}$. We state it without proof; a proof can be given along the lines of the proof of Theorem 2.1 in [5].

Proposition 1.1.

Denote $\hat{x}_{T}=(a_{T},z_{T})$ with $z_{T}:=e^{W_{T}+\mu T}$.

As $T\to 0$ we have:

(8) $\lim_{T\to 0}\frac{\hat{x}_{T}-\hat{1}}{\sqrt{T}}\stackrel{d}{=}N(0,\hat{\Sigma})$

where $\hat{1}=(1,1)$ and

(9) $\hat{\Sigma}=\left(\begin{array}{cc}4/3&1\\ 1&1\end{array}\right)$

The covariance matrix $\hat{\Sigma}$ is computed as (with $W_{sT}=\sqrt{T}B_{s}$ and $0\leq s\leq 1$)

(10) $\Sigma_{11}=\mathbb{E}\Big[\Big(\int_{0}^{1}2B_{s}ds\Big)^{2}\Big]=4\int_{0}^{1}\int_{0}^{1}\min(s,u)\,ds\,du=\frac{4}{3}$
(11) $\Sigma_{12}=\mathbb{E}\Big[\Big(\int_{0}^{1}2B_{s}ds\Big)(B_{1})\Big]=2\int_{0}^{1}s\,ds=1$
(12) $\Sigma_{22}=\mathbb{E}[(B_{1})^{2}]=1\,.$

In other words, as $T\to 0$, the joint distribution of $(a_{T},z_{T})$ approaches a bivariate normal distribution with correlation $\rho_{az}=\frac{\sqrt{3}}{2}$. In analogy with Theorem 2.2 in [5], this result can be formulated equivalently as a bivariate log-normal limiting distribution.
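
As a quick illustration (not part of the paper), the covariance matrix $\hat{\Sigma}$ in (9) can be checked by a small Monte Carlo simulation; the maturity, drift, grid and path counts below are arbitrary choices made for the sketch.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T, mu, n_steps, n_paths = 0.01, 0.1, 100, 100_000
dt = T / n_steps

dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)
s = np.arange(1, n_steps + 1) * dt
X = np.exp(2 * (W + mu * s))        # e^{2(W_s + mu*s)} on the time grid
a_T = X.mean(axis=1)                # Riemann-sum approximation of (1/T) * integral
z_T = np.exp(W[:, -1] + mu * T)

u = (a_T - 1) / np.sqrt(T)
v = (z_T - 1) / np.sqrt(T)
print(np.cov(u, v))                 # should be close to [[4/3, 1], [1, 1]]
\end{verbatim}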

Another asymptotic result in the $T\to 0$ limit is the large deviations property of the time-average $a_{T}$. Using large deviations theory methods it was shown in [26] that, for a wide class of diffusions which includes the geometric Brownian motion, the time average of the diffusion satisfies a large deviations property in the small time limit, see Theorem 2 in [26]. For the particular case of a gBM this result reads, in our notation,

(13) $\mathbb{P}\left(\frac{1}{T}\int_{0}^{T}e^{2(W_{s}+\mu s)}ds\in da\right)=e^{-\frac{1}{4T}J_{\rm BS}(a)+o(1/T)}\,,\quad\mbox{as }T\to 0$

with $J_{\rm BS}(a)$ a rate function given in closed form in Proposition 12 of [26]. For convenience the result is reproduced below in Eq. (22). In contrast to the fluctuations result of (7), which holds in a region of width $a-1\sim\sqrt{T}$ around the central value $a=1$, the large deviations result (13) holds for $a-1\sim O(1)$.

1.2. Complete leading small time asymptotics

The explicit asymptotic result (4) for the small time asymptotics of the joint distribution of $(A_{t}^{(\mu)},W_{t})$ satisfies all these limiting results and improves on them by including all contributions up to terms of $O(t)$. It is instructive to see explicitly the connection to the asymptotic results presented above.

The exponential factor in (6) reproduces the large deviations result (13) by identifying $J(a)=\frac{1}{4}J_{\rm BS}(a)$. The emergence of the log-normal limit implied by Proposition 1.1 can also be seen explicitly by expanding the exponent in (4) around $a=v=1$ using the results of Propositions 2 and 3 in [28]. Keeping the leading order terms in the expansion of $F(\rho)=\frac{\pi^{2}}{2}-1-\log\rho+\log^{2}\rho+O(\log^{3}\rho)$ in powers of $\log\rho$ gives the quadratic approximation

(14) $I(a,v)=\frac{3}{2}\log^{2}a-3\log a\log v+2\log^{2}v+O(|\log^{3}a|+|\log^{3}v|)\,.$

The joint distribution (4) is thus approximated in a neighborhood of $(v,a)=(1,1)$ as

(15) $\mathbb{P}\left(\frac{1}{t}A_{t}^{(\mu)}\in da,\,e^{W_{t}+\mu t}\in dv\right)=\frac{\sqrt{3}}{2\pi t}e^{-\frac{1}{2t}[3\log^{2}a-6\log a\log v+4\log^{2}v+\cdots]}\,C_{\rm LN}(a,v)\,\frac{da\,dv}{av}(1+O(t))\,,$

where the $(\cdots)$ in the exponent are the error terms in (14) and we defined

(16) $C_{\rm LN}(a,v):=v^{\mu}e^{-\frac{1}{2}\mu^{2}t}\frac{1}{\sqrt{3}}G(v/a)=v^{\mu}e^{-\frac{1}{2}\mu^{2}t}(1+O(\log(v/a)))\,.$

Up to the factor $C_{\rm LN}(a,v)$ and the error terms in the exponent, the expression (15) coincides with a bivariate log-normal distribution in the variables $(a,v)$. Recall the bivariate normal distribution of two random variables $N(X,Y;\sigma_{X},\sigma_{Y},\rho)$ with volatility and correlation parameters $\sigma_{X},\sigma_{Y},\rho$

(17) $\phi_{2}(x,y|\sigma_{X},\sigma_{Y},\rho)=\frac{1}{2\pi\sigma_{X}\sigma_{Y}\sqrt{1-\rho^{2}}}e^{-\frac{1}{2(1-\rho^{2})}\left(\frac{x^{2}}{\sigma_{X}^{2}}-2\rho\frac{xy}{\sigma_{X}\sigma_{Y}}+\frac{y^{2}}{\sigma_{Y}^{2}}\right)}\,.$

The density (15) corresponds to parameters

(18) $\sigma_{X}=\frac{2}{\sqrt{3}}\sqrt{t}\,,\quad\sigma_{Y}=\sqrt{t}\,,\quad\rho=\frac{\sqrt{3}}{2}\,.$

These parameters correspond precisely to the covariance matrix $\hat{\Sigma}$ appearing in the fluctuations result of Proposition 1.1.
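
For concreteness, the leading-order log-normal approximation defined by (15)-(18) can be evaluated as in the sketch below (illustrative code; the function name is ours, and the coefficient $C_{\rm LN}$ is kept at its leading value $v^{\mu}e^{-\mu^{2}t/2}$).

\begin{verbatim}
import numpy as np

def joint_density_lognormal(a, v, t, mu):
    """Bivariate log-normal approximation (15)-(18) to the joint density of
    ((1/t) A_t, e^{W_t + mu t}) in the variables (a, v)."""
    sx, sy, rho = 2.0/np.sqrt(3.0)*np.sqrt(t), np.sqrt(t), np.sqrt(3.0)/2.0
    x, y = np.log(a), np.log(v)
    quad_form = (x/sx)**2 - 2*rho*(x/sx)*(y/sy) + (y/sy)**2
    phi2 = np.exp(-quad_form/(2*(1 - rho**2))) / (2*np.pi*sx*sy*np.sqrt(1 - rho**2))
    c_ln = v**mu * np.exp(-0.5*mu**2*t)      # leading-order C_LN of Eq. (16)
    return phi2 * c_ln / (a*v)

print(joint_density_lognormal(1.05, 1.02, t=0.1, mu=0.0))
\end{verbatim}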

Improving on the log-normal approximation (15) requires adding higher order terms in the coefficient $C_{\rm LN}(a,v)$ and in the exponent. All these corrections are expressed in terms of $F(\rho),G(\rho)$, which can be evaluated by expansion around $\rho=1$. In this paper we present series expansions of $F(\rho),G(\rho)$ in powers of $\log\rho$ that can be used for such evaluation, and we study their properties in detail. We determine their convergence radii and coefficient asymptotics in Proposition 4.2. We note that alternative series expansions which improve on the log-normal approximation have been considered in the literature (in the context of the expansion of the density of $A_{t}^{(\mu)}$), based on Gram-Charlier series [6].

Although closed form results are available for the functions $F(\rho),G(\rho)$, they are inefficient for practical use because of the need to solve one non-linear equation for each evaluation. The series expansions for $F(\rho),G(\rho)$ allow much faster and more reliable numerical evaluation inside their convergence domain. In Section 5 we construct efficient numerical approximations for $F(\rho),G(\rho)$, based on the series expansions derived here within the convergence domain, which are easy to use in numerical applications. This gives a numerical approximation for the joint distribution of the time integral of the gBM and its terminal value, which appears in many problems of mathematical finance, such as the simulation of the SABR model [3] and option pricing in the Hull-White model [15]. As an application we discuss the numerical pricing of Asian options in the Black-Scholes model, and demonstrate good agreement with standard benchmark cases used in the literature.

2. Preliminaries

The mathematics of the series expansions for the functions $J(a),F(\rho),G(\rho)$ involves the study of the analyticity domain and of the singularities of a function defined through the inverse of a complex entire function. Such functions appear in several problems of applied probability, for example in applications of large deviations theory to the short maturity asymptotics of option prices in stochastic volatility models [9] and Asian options in the local volatility model [26, 27]. A similar problem is encountered in the evaluation of functions defined by the Legendre transform of a function

(19) $F(x)=\inf_{\theta}(\theta x-f(\theta))=\theta_{*}x-f(\theta_{*})$

where $\theta_{*}$ is the solution of the equation $f^{\prime}(\theta)=x$. If the optimizer $\theta_{*}$ can be found exactly at one particular point $x_{0}$, then expansion of $f(\theta)$ around $\theta_{*}$ gives a series expansion for $F(x)$ in $x-x_{0}$ which can be used for numerical evaluation of the Legendre transform.

For definiteness, consider the evaluation of the rate function $J_{\rm BS}$ appearing in the short maturity asymptotics of the distribution of the time-average of the gBM. This also determines the leading short maturity asymptotics of Asian options in the Black-Scholes model. (More precisely, in the short maturity limit $T\to 0$ the prices of out-of-the-money Asian options approach zero at an exponential rate $C(K,T)\sim e^{-\frac{1}{T}J_{\rm BS}(K/S_{0})}$, determined by the rate function $J_{\rm BS}(x)$.) The rate function can be computed explicitly [26] and is given by $J_{\rm BS}(x):(0,\infty)\to\mathbb{R}_{+}$, defined by

(22) $J_{\rm BS}(x)=\left\{\begin{array}{cc}\frac{1}{2}\xi^{2}-\xi\tanh\left(\frac{\xi}{2}\right)\,,&x\geq 1\\ \zeta\tan\left(\frac{\zeta}{2}\right)-\frac{1}{2}\zeta^{2}\,,&x\leq 1,\end{array}\right.$

where $\xi$ is the solution of the equation

(23) $\frac{\sinh\xi}{\xi}=x$

and $\zeta$ is the solution in $[0,\pi]$ of the equation

(24) $\frac{\sin\zeta}{\zeta}=x\,.$

See Proposition 12 in [26]. The function $J_{\rm BS}(x)$ vanishes at $x=1$ and can be expanded in a series of powers of $x-1$ and $\log x$, see [26] (similar expansions were given for the rate functions of forward start Asian options in [27]). In practical applications one is interested in the range and rate of convergence of these series expansions. We answer these questions in Proposition 4.1.
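
As a reference point for the series expansions studied below, a minimal sketch of the direct evaluation of (22), solving the defining equations (23)-(24) with a bracketing root finder, is given here. This is the "one non-linear equation per evaluation" route whose cost motivates the expansions; the function name J_BS is ours.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def J_BS(x):
    """Rate function of Eq. (22), evaluated by solving Eqs. (23)-(24)."""
    if x == 1.0:
        return 0.0
    if x > 1.0:
        # sinh(xi)/xi = x has a unique positive root; enlarge the bracket until it holds
        hi = 1.0
        while np.sinh(hi)/hi < x:
            hi *= 2.0
        xi = brentq(lambda u: np.sinh(u)/u - x, 1e-12, hi)
        return 0.5*xi**2 - xi*np.tanh(0.5*xi)
    # 0 < x < 1: sin(zeta)/zeta = x with zeta in (0, pi)
    zeta = brentq(lambda u: np.sin(u)/u - x, 1e-12, np.pi - 1e-12)
    return zeta*np.tan(0.5*zeta) - 0.5*zeta**2

for x in (0.5, 0.9, 1.1, 2.0):
    print(x, J_BS(x))
\end{verbatim}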

Section 3 studies the analyticity properties of the complex inverse of the function determining $\xi,\zeta$ in (22). These properties are used in Section 4 to study the singularity structure of the rate function $J_{\rm BS}(x)$ and of the functions $F(\rho),G(\rho)$ appearing in Eq. (3). The nature of the singularity on the circle of convergence determines the large-order asymptotics of the coefficients of the series expansions of a complex function around a regular point. This is made precise by the transfer results (see Flajolet and Odlyzko [7, 8]) which relate the expansions around the singular point to the large-order asymptotics of the expansion coefficients. We apply these results in Section 4 to derive the leading large-order asymptotics of the expansion coefficients for the functions $J_{\rm BS}(x)$ and $F(\rho),G(\rho)$ around $x=\rho=1$, see Propositions 4.1 and 4.2. Besides the rigorous proofs, numerical tests also confirm a general decreasing trend of the error terms of the leading order asymptotics for the expansion coefficients.

3. On the complex inverse of $x(\xi)=\frac{\sinh\xi}{\xi}$

An important role in the study of the expansions considered in this paper is played by the singularities of a function defined through the inverse of another function, as in the rate function $J_{\rm BS}(x)$, see Eq. (22). The main issues appear already in the study of the functions $\xi(x),\zeta(x)$. For this reason, we discuss the analytic structure of these functions in detail in this section.

Let

(25) $f(\xi)=\frac{\sinh\xi}{\xi}\,.$

Clearly, $f$ is an entire function with Taylor series around zero given by

(26) $f(\xi)=\sum_{n=0}^{\infty}\frac{\xi^{2n}}{(2n+1)!}.$

We want to make sense of the inverse of $f$, restricted to some complex domains. First observe that the equation $\sinh\xi=\xi$ has a unique real solution $\xi=0$ and infinitely many complex solutions. This shows that $f$ is not invertible on the entire complex plane. In this section we discuss how to make sense of an inverse. Although the results will not surprise experts in complex analysis, this question has not been discussed in the present context before.

3.1. Local analysis

Let us now restrict $f$ to a small neighborhood of the origin. Then the range of $f$ is in a small neighborhood of $1$. Clearly, $f$ is not locally invertible as $f$ is an even function, that is $f(\xi)=f(-\xi)$, and thus is not injective. A natural way to make the function at least locally invertible is to define

(27) $g(z)=\sum_{n=0}^{\infty}\frac{z^{n}}{(2n+1)!}\,.$

In light of (26), we have $g(z)=f(\sqrt{z})=\sinh(\sqrt{z})/\sqrt{z}$. Here, and in the sequel, unless stated otherwise, the notation $\sqrt{z}$ will only be used in the argument of even functions, so that both values of $\sqrt{z}$ give the same result. Recall the following simple fact: if $g^{\prime}(z)\neq 0$, then there is a small neighborhood $D$ of $z$ so that $g:D\to g(D)$ is biholomorphic (see for example Theorem 3.4.1 in [30]). Since $g^{\prime}(0)\neq 0$, $g$ is locally invertible at zero. We also note that $g$ is an entire function as its Taylor series converges everywhere. We will show next that $g:\mathbb{C}\to\mathbb{C}$ is surjective. One says that an entire function $f$ has finite order $\rho$ if constants $A,B$ can be found such that $|f(z)|\leq Ae^{B|z|^{\rho}}$ for all $z\in\mathbb{C}$. See for example Chapter 5 in [31] and Section 1.3 in [16] for a detailed discussion of entire functions of finite order.

The order of the entire function $g(z)=\sum_{n=0}^{\infty}a_{n}z^{n}$ can be expressed in terms of the coefficients of its Taylor expansion as, see Theorem 2 in Sec. 1.3 of [16],

$\rho=\limsup_{n\to\infty}\frac{n\log n}{-\log|a_{n}|}\,.$

Using the expansion (27) it follows that the function $g$ has order $\rho=\frac{1}{2}$. Any entire function with order $<1$ is surjective, see Section 5.1 of [16]. Since in our case $\rho=1/2$, $g:\mathbb{C}\to\mathbb{C}$ is surjective.
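
A quick numerical illustration (not from the paper) of the coefficient formula for the order: with $a_{n}=1/(2n+1)!$, the ratio $n\log n/(-\log|a_{n}|)$ indeed approaches $1/2$.

\begin{verbatim}
import numpy as np
from scipy.special import gammaln

for n in (10, 100, 1000, 10000):
    minus_log_an = gammaln(2*n + 2)        # -log a_n = log((2n+1)!) = log Gamma(2n+2)
    print(n, n*np.log(n) / minus_log_an)   # tends to 0.5 as n grows
\end{verbatim}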

We study next the critical points of $f$ and $g$, which are given by the zeros of $f^{\prime}$ and $g^{\prime}$, respectively. To identify the zeros of $g^{\prime}$, let us write $0=\eta_{0}<\eta_{1}<\eta_{2}<...$ for the non-negative solutions of the equation $\tan\eta=\eta$ (and note that all real solutions are given by $\eta_{k},k\in\mathbb{Z}$ with $\eta_{-k}=-\eta_{k}$). We also write

(28) $\omega_{k}=\frac{\sin(\eta_{k})}{\eta_{k}},\quad\xi_{k}=i\eta_{k},\quad z_{k}=-\eta_{k}^{2}$

(see Table 1 for the numerical values for $k=1,...,5$). Now we have

Lemma 3.1.

The zero sets of $f^{\prime}$ and $g^{\prime}$ are given by

$\{\xi:f^{\prime}(\xi)=0\}=\{\xi_{k},k\in\mathbb{Z}\}$
$\{z:g^{\prime}(z)=0\}=\{z_{k},k\in\mathbb{Z}_{+}\}$
Proof.

If $\xi\neq 0$, then the equation $f^{\prime}(\xi)=0$ reduces to $\xi=\tanh(\xi)$, whose solutions are exactly the numbers $\xi_{k}$, $k\neq 0$. In case $\xi=0$, we see by (26) that $f^{\prime}(0)=0$. The first statement follows. For the second statement, observe that $g^{\prime}(z)=f^{\prime}(\sqrt{z})/(2\sqrt{z})$ and $g^{\prime}(0)=1/6\neq 0$. ∎

Lemma 3.2.

For every $k\in\mathbb{Z}_{+}$, $g^{\prime\prime}(z_{k})\neq 0$.

Proof.

Since $g$ maps $\mathbb{R}$ to $\mathbb{R}$, in this proof we write $\sqrt{z}$ for the square root function from $\mathbb{R}_{+}$ to $\mathbb{R}_{+}$. By Lemma 3.1, $g^{\prime}(z_{k})=0$, that is $\tanh(\sqrt{z}_{k})=\sqrt{z}_{k}$. Next, we compute

$g^{\prime\prime}(z)=\frac{(z+3)\sinh(\sqrt{z})-3\sqrt{z}\cosh(\sqrt{z})}{4z^{2}\sqrt{z}}$

Assuming $g^{\prime\prime}(z_{k})=0$, we obtain $\tanh(\sqrt{z}_{k})=\frac{3\sqrt{z}_{k}}{z_{k}+3}$, which contradicts $\tanh(\sqrt{z}_{k})=\sqrt{z}_{k}$ as $z_{k}\neq 0$. ∎

We note that $\omega_{k}\in\mathbb{R}$ for all $k\geq 1$. Furthermore, $\omega_{1}\approx-0.217234$, and $|\omega_{k}|<|\omega_{1}|$ for all $k\geq 2$.

Let $B_{\rho}(A)$ be the open $\rho$-neighborhood of the set $A\subset\mathbb{C}$ and $B_{\rho}(\omega)=B_{\rho}(\{\omega\})$. Now we are ready to state the analyticity of the local inverse of $g$ on the region that is most important for our applications.

Theorem 3.1.

There is a function $h$, analytic on $B_{1-\omega_{1}}(1)$, so that $h(1)=0$ and $g(h(\omega))=\omega$ for all $\omega\in B_{1-\omega_{1}}(1)$.

Proof.

Fix an arbitrary $\varepsilon>0$. It suffices to show that $h$ can be defined analytically on ${\bf B}:=B_{1-\omega_{1}-\varepsilon}(1)$. First observe that the real interval $I=[z_{1}+\varepsilon_{-},\varepsilon_{+}]$ is mapped, under the function $g$, to the real interval $g(I)=[\omega_{1}+\varepsilon,1+\varepsilon]$ for suitable $\varepsilon_{\pm}$. Furthermore, the derivative of $g$ is non-zero on $I$. Thus, restricting the domain of $g$ to a small neighborhood of $I$, the restricted function provides a bijection between a neighborhood of $I$ and a neighborhood of $g(I)$. More precisely, since the derivative of $g$ on $I$ is non-zero, for every $\omega\in g(I)$ there is some $\delta_{\omega}$ so that $g$ can be analytically inverted on $B_{\delta_{\omega}}(\omega)$. By compactness of $g(I)$, we can choose a finite cover of $g(I)$ by the sets $B_{\delta_{\omega}}(\omega)$, showing that the inverse function $h$ is well defined on a small neighborhood $B_{\delta^{\prime}}(g(I))$ of $g(I)$.

Next we extend the function $h$ analytically to the ball $\bf B$. This can be done by a standard procedure. Recall that a point $a\in\mathbb{C}$ is an asymptotic value of $g$ if there is a curve $\Gamma$ tending to $\infty$ so that $g(z)\rightarrow a$ as $z\rightarrow\infty$ along $\Gamma$. Furthermore, if $g^{\prime}(c)=0$, then $g(c)$ is a critical value. The singularity set is defined as the union of asymptotic values and critical values. It is known that for any open set $D$ that is disjoint from the singularity set, the map $g:g^{-1}(D)\rightarrow D$ is a covering (see [25]).

In our case, we have already identified the set of critical values $\{\omega_{k},k\geq 1\}$. An elementary computation shows that the only asymptotic value is $\omega=0$. Now let $D={\bf B}\setminus\overline{B_{\delta^{\prime}/2}(g(I))}$ (where $\bar{B}$ is the closure of $B$). Then $D$ is open and so $g:g^{-1}(D)\rightarrow D$ is a covering. This implies that along any simple polygonal arc $\gamma$ starting from $B_{\delta^{\prime}/2}(g(I))$ and staying inside $\bf B$, $h$ has an analytic extension. Indeed, by the covering map property, every $\omega\in\gamma$ has a neighborhood $U_{\omega}$ so that $g^{-1}(U_{\omega})$ is a union of disjoint open sets, each of which is mapped homeomorphically onto $U_{\omega}$ by $g$. Let us now choose a finite cover of $\gamma$ by neighborhoods $U_{\omega_{1}},...,U_{\omega_{n}}$ so that $U_{\omega_{1}}\cap B_{\delta^{\prime}/2}(g(I))\neq\emptyset$ and $U_{\omega_{k-1}}\cap U_{\omega_{k}}\neq\emptyset$, and by induction on $k$ we can extend $h$ analytically to $\cup U_{\omega_{k}}$. Observing that $D$ is simply connected, the result now follows from the monodromy theorem. ∎

3.2. Global analysis

Theorem 3.1 tells us that the inverse $h(\omega)$ can be defined with $h(1)=0$. Furthermore, $h$ is analytic and the Taylor expansion

(29) $h(\omega)=\sum_{n=1}^{\infty}c_{n}(\omega-1)^{n}$

has a radius of convergence of at least $1-\omega_{1}$. We note that the inverse $g^{-1}$ can be defined in the sense of a sheaf (also known as a global analytic function, or Riemann surface). In this general sense, $g^{-1}$ has a second order algebraic branch point at $\omega_{1}=g(z_{1})$ by Lemma 3.2. Thus the function $h$ cannot be analytically extended to $\omega_{1}$ and so the power series (29) has radius of convergence exactly $1-\omega_{1}$. We will be mostly interested in this expansion (see Section 3.4). However, for completeness we first discuss how to visualize the sheaf $g^{-1}$.

First, observe that $h$ can be analytically extended to $\mathbb{C}\setminus\Gamma_{1}$, where $\Gamma_{1}$ is any curve starting at $\omega_{1}$ and tending to $\infty$ (for example $\Gamma_{1}=\omega_{1}+i\mathbb{R}_{\geq 0}$; however, as we will see later, sometimes other choices are more convenient). Indeed, replacing $D$ by

$\mathbb{C}\setminus(\overline{B_{\delta^{\prime}/2}(g(I))}\cup\{x+iy:|x-\omega_{1}|\leq\varepsilon,\,y\geq-\varepsilon\})$

in the last step of the proof of Theorem 3.1 and noting that this domain is simply connected, analyticity of $h$ follows (of course the expansion (29) is not valid on the larger domain). The function $h$ can be identified with a chart of the Riemann structure of the sheaf $g^{-1}$ (in this identification, its domain is sometimes referred to as a Riemann sheet).

Next, we can define another analytic function $h_{2}$ on $\mathbb{C}\setminus(\Gamma_{1}\cup\Gamma_{2})$, where $\Gamma_{2}=\omega_{2}+i\mathbb{R}_{\geq 0}$. Since the function $z\mapsto g(z)-\omega_{1}$ has a zero of order $2$ at $z_{1}$, we can extend $h$ analytically from

$\{\omega_{1}+re^{i\theta}:r<\varepsilon,\,\theta\in(\pi/2,3\pi/4)\}$

to

$\{\omega_{1}+re^{i\theta}:r<\varepsilon,\,\theta\notin[3\pi/4,\pi]\}.$

Let the resulting function be denoted by $\tilde{h}$. Clearly, $\tilde{h}$ takes values in a small neighborhood of $z_{1}$ and since there is a branch point at $\omega_{1}$, $\tilde{h}(\omega)\neq h(\omega)$ for $\omega=\omega_{1}+t$ with $t\in\mathbb{R}_{+}$, $t<\varepsilon$. Now observe that $g((z_{2},z_{1}])=[\omega_{1},\omega_{2})$. Since the branch point is of order $2$, we conclude that $\tilde{h}(\omega_{1}+t)$ for $t$ as above has to be real, and slightly smaller than $z_{1}$. Thus we can extend $\tilde{h}$ analytically to $B_{\delta}(g(I_{2}))$ where $I_{2}=[z_{2}+\varepsilon_{-},z_{1}-\varepsilon_{+}]$ and $g(I_{2})=[\omega_{1}+\varepsilon/2,\omega_{2}-\varepsilon/2]$. Now restrict $\tilde{h}$ to

$\{\omega_{1}+re^{i\theta}:r<\varepsilon,\,\theta\notin[\pi/2,\pi]\}\cup B_{\delta}(g(I_{2}))$

and denote this restricted function by $h_{2}$. Then $h_{2}$ can be extended analytically to $\mathbb{C}\setminus(\Gamma_{1}\cup\Gamma_{2})$ by the monodromy theorem, as in the proof of Theorem 3.1.

Furthermore, we can "glue together" the domains of $h_{1}:=h$ and $h_{2}$ along $\Gamma_{1}$ since $\lim_{x\rightarrow\omega_{1}\pm}h_{1}(x+yi)=\lim_{x\rightarrow\omega_{1}\mp}h_{2}(x+yi)$ and the gluing is analytic (see the analyticity of $\tilde{h}$ and the fact that the branch point is algebraic of order $2$). Since $g$ preserves the real line and all branch points are of second order, this construction can be continued inductively, i.e. we can define the Riemann sheet $k\geq 2$ by $h_{k}$ and glue it together with the sheets $k\pm 1$ along $\Gamma_{k\pm 1}$. This gives a complete elementary description of the sheaf $g^{-1}$.

3.3. Visualizing the sheaf

For better visualization of the sheaf constructed in the previous section, we include some figures. We start by listing the numerical values of the first few critical points $\eta_{k}$, $\eta_{k}^{2}$, $\omega_{k}$ and $|i\pi+\log|\omega_{k}||$ (the latter will be needed later) in Table 1.

Table 1. Numerical values of the first few numbers of interest as defined in (28).
$k$    $\eta_{k}$    $z_{k}=-\eta_{k}^{2}$    $\omega_{k}=\sin(\eta_{k})/\eta_{k}$    $|i\pi+\log|\omega_{k}||$
1      4.4934        -20.19                   -0.2172                                 3.4929
2      7.7252        -59.68                    0.1284                                 2.0528
3     10.9041       -118.90                   -0.0913                                 3.9494
4     14.0662       -197.86                    0.0709                                 2.6463
5     17.2208       -296.55                   -0.0580                                 4.2402
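
The entries of Table 1 can be reproduced with a few lines of code (an illustrative sketch): the $k$-th positive root of $\tan\eta=\eta$ lies in $(k\pi,(k+1/2)\pi)$, so a bracketing solver applies directly.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def eta_k(k):
    # tan(eta) - eta changes sign on (k*pi, (k+1/2)*pi)
    lo, hi = k*np.pi + 1e-9, (k + 0.5)*np.pi - 1e-9
    return brentq(lambda e: np.tan(e) - e, lo, hi)

for k in range(1, 6):
    eta = eta_k(k)
    omega = np.sin(eta)/eta
    print(k, round(eta, 4), round(-eta**2, 2), round(omega, 4),
          round(abs(1j*np.pi + np.log(abs(omega))), 4))
\end{verbatim}
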
Figure 3.1. Branches of $g^{-1}(x)$ for $x\in(\omega_{1},1)$ along the real axis.

Given $\omega$, there may be infinitely many $z$'s so that $g(z)=\omega$. A plot of some of these $z$'s for $\omega\in(\omega_{1},1)$ is shown in Fig. 3.1 (note that both $\omega$ and $z$ are real). The first branch corresponds to $z\in(z_{1},\infty)$ when $\omega>\omega_{1}$. On this branch $h_{1}(1)=0$. We call this Riemann sheet the principal branch. The second branch corresponds to $z\in(z_{2},z_{1})$, where $\omega\in(\omega_{2},\omega_{1})$, and the $k$-th branch corresponds to $z\in(z_{k},z_{k-1})$, where $\omega$ is in between $\omega_{k-1}$ and $\omega_{k}$.

Figure 3.2. Schematic representation of the first three Riemann sheets of the function $g^{-1}(x)$. The blue dots show $g^{-1}(1)$ on the various sheets. On the main sheet $g^{-1}(1)=0$. This is joined to the second sheet at the branch cut $\Gamma_{1}$ starting at the branch point $\omega_{1}\approx-0.217$. This is joined to the third sheet at the branch cut $\Gamma_{2}$ starting at the branch point $\omega_{2}\approx 0.128$, and so on. There is an infinite number of Riemann sheets joined successively at the branch points $\omega_{k}$, which accumulate towards zero as $k\to\infty$.

Figure 3.2 is a schematic representation of the first three Riemann sheets. This figure illustrates the apparently counterintuitive result that, although the branch point at $\omega_{2}=0.1284$ is closer to $\omega=1$ than the branch point at $\omega_{1}=-0.2172$, it is the latter which determines the radius of convergence of the series expansion around $\omega=1$, rather than the former. The explanation for this fact is that the branch point at $\omega_{2}$ is not "visible" on the main Riemann sheet, and thus the branch point at $\omega_{1}$ is the dominant singularity around $\omega=1$.

3.4. Properties of the Taylor series

Next, we study the coefficients $c_{n}$ of the Taylor expansion (29). The first few coefficients are easily computed by the Lagrange inversion theorem using (27) and Theorem 3.1. In particular, we obtain

(30) $h(\omega)=6(\omega-1)-\frac{9}{5}(\omega-1)^{2}+\frac{144}{175}(\omega-1)^{3}+O((\omega-1)^{4})$

Writing $e^{y}=\omega$ we can also expand $h$ as a series in $y=\log\omega$ (here, $\log\omega$ means the principal value of the logarithm). In applications to Asian options pricing [26], the parameter $\omega$ is related to the option strike. In practice, it is usual to expand implied volatilities in log-strike, which corresponds to expanding in $y=\log\omega$. See for example [11].

Substituting $\omega=e^{y}$ into (30) and re-expanding in powers of $y$, the expansion becomes

(31) $h(e^{y})=\sum_{n=1}^{\infty}d_{n}y^{n}=6y+\frac{6}{5}y^{2}+\frac{4}{175}y^{3}-\frac{2}{175}y^{4}+O(y^{5}).$

Explicitly computing high order terms of these expansions is difficult (cf. [24], pages 411-413). However, we can study the asymptotic properties of the sequences $c_{n}$ and $d_{n}$.
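
The low-order coefficients in (30) and (31) can nevertheless be generated symbolically. The following sketch (not from the paper) uses the Lagrange inversion formula $c_{n}=\frac{1}{n}[z^{n-1}](z/\varphi(z))^{n}$ with $\varphi(z)=g(z)-1$, and then re-expands in $y$.

\begin{verbatim}
import sympy as sp

z, y, N = sp.Symbol('z'), sp.Symbol('y'), 6
# phi(z) = g(z) - 1, truncated to enough terms for the orders computed below
phi = sum(z**n / sp.factorial(2*n + 1) for n in range(1, N + 2))

# Lagrange inversion: c_n = (1/n) * [z^(n-1)] (z/phi(z))^n, cf. Eqs. (29)-(30)
c = {}
for n in range(1, N + 1):
    s = sp.series((z / phi)**n, z, 0, n).removeO()
    c[n] = sp.Rational(1, n) * s.coeff(z, n - 1)
print([c[n] for n in range(1, 5)])     # [6, -9/5, 144/175, ...]

# d_n of Eq. (31): substitute omega = exp(y) and re-expand in powers of y
h_trunc = sum(c[n] * (sp.exp(y) - 1)**n for n in range(1, N + 1))
print(sp.series(h_trunc, y, 0, 5))     # 6*y + 6*y**2/5 + 4*y**3/175 - 2*y**4/175 + O(y**5)
\end{verbatim}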

Proposition 3.1.

(i) The coefficients $c_{n}$ of the Taylor expansion (29) satisfy

(32) $c_{n}=c_{\infty}\frac{1}{(1-\omega_{1})^{n}}(-1)^{n}n^{-\frac{3}{2}}+O(n^{-\frac{5}{2}})\,,\quad c_{\infty}=-\eta_{1}\sqrt{\frac{2(1-\omega_{1})}{\pi|\ell^{\prime\prime}(\eta_{1})|}}\approx-8.48671,$

where $\ell(\eta)=\frac{\sin\eta}{\eta}$.

(ii) The Taylor expansion (31) has a radius of convergence $\rho_{x}=|\log|\omega_{1}|+i\pi|\approx 3.49295$. Furthermore, the coefficients $d_{n}$ satisfy

(33) $d_{n}=d_{\infty}\frac{1}{\rho_{x}^{n}}\cos\left[\theta_{x}\left(n-\frac{1}{2}\right)\right]n^{-\frac{3}{2}}+O(n^{-\frac{5}{2}})\,,\quad d_{\infty}=-2\eta_{1}\sqrt{\frac{2(-\omega_{1})\rho_{x}}{\pi|\ell^{\prime\prime}(\eta_{1})|}}\approx-13.4011,$

where $\theta_{x}=\arg(i\pi+\log|\omega_{1}|)\approx 2.02317$.

Figure 3.3. Left: plot of $[c_{n}]^{1/n}$ vs $1/n$ for the series (30), using the first $100$ terms (blue dots). The red line is at $1/(1-\omega_{1})=0.821$. The black solid curve shows the leading $n\to\infty$ asymptotics (32) for the coefficients $c_{n}$. Right: plot of $[d_{n}]^{1/n}$ for the series (31), using the first $100$ terms (blue dots). The red line is at $1/\rho_{x}=0.286$ with $\rho_{x}=|\log|\omega_{1}|+i\pi|=3.49295$.
Proof.

(i) We first prove (32). The asymptotics of the coefficients $c_{n}$ for $n\to\infty$ depend on the nature of the singularity on the circle of convergence. The leading asymptotics of $h(\omega)$ around this singularity can be translated into asymptotics for the coefficients of the series expansion using the "transfer results" (see Flajolet and Odlyzko [7, 8]). Theorems VI.1 and VI.3 of [8] imply that if $h$ is analytic on $\mathbb{C}\setminus\Gamma^{\prime}$ where $\Gamma^{\prime}=\mathbb{R}_{\leq\omega_{1}}$ and furthermore

(34) $h(\omega)=h(\omega_{1})+C_{1}(\omega-\omega_{1})^{1/2}+C_{2}(\omega-\omega_{1})+O((\omega-\omega_{1})(\omega-\omega_{1})^{1/2})$

as $\omega\rightarrow\omega_{1}$ (here $(\omega-\omega_{1})^{1/2}$ means an analytic function on $B_{\varepsilon}(0)\setminus\mathbb{R}_{\leq 0}$ whose square is $\omega-\omega_{1}$), then (32) holds with

(35) $c_{\infty}=-C_{1}\frac{\sqrt{1-\omega_{1}}}{2\sqrt{\pi}}.$

We defined here $h(\omega_{1})=z_{1}$. With this definition, $h$ is continuous at $\omega_{1}$ (of course there is no open neighborhood of $\omega_{1}$ where $h$ is defined).

Indeed, Theorem VI.1 of [8] says that the Taylor coefficients $a_{n}$ of the function $(1-\omega)^{1/2}=\sum a_{n}\omega^{n}$ have the large $n$ asymptotics $a_{n}=\frac{n^{-3/2}}{\Gamma(-1/2)}+O(n^{-5/2})$. The term $C_{2}(\omega-\omega_{1})$ is a polynomial and so it has a finite Taylor expansion, and by Theorem VI.3 of [8], the Taylor coefficients of $(1-\omega)(1-\omega)^{1/2}$ are of $O(n^{-5/2})$. We obtain (35) by rescaling the singularity from $1$ to $\omega_{1}$.

To finish the proof of (32), it remains to verify (34). We will prove that $\omega_{1}$ is an algebraic branch point of order 2, meaning that near $\omega_{1}$, the function $h$ "behaves like a branch of the square root function near 0". This behavior results from the inversion of the function $f(\sqrt{z})$ around the critical point $z_{1}$. It is known, see for example Theorem 3.5.1 in [30], that the inversion of a non-constant analytic function $f(z)=w$ around a point $z_{0}$ with $f^{\prime}(z_{0})=0$ gives a multi-valued function which can be expanded in a convergent series in fractional powers of $w-w_{0}$ with $w_{0}=f(z_{0})$, called a Puiseux series.

To make these statements mathematically precise, first note that $h$ can be analytically defined on $\mathbb{C}\setminus\Gamma^{\prime}$ exactly as in Section 3.2 (replacing $\Gamma_{1}$ by $\Gamma^{\prime}$; with a slight abuse of notation we also denote this function by $h$). Then by Theorem 3.5.1 in [30] there is some $\varepsilon>0$ and an analytic function $\phi$ on $D^{\prime}:=B_{\varepsilon}(0)\setminus\mathbb{R}_{\leq 0}$ so that $\phi^{2}(x)=x$ (i.e. $\phi$ is one branch of the square root function), and another function $\psi$, analytic on $B_{\varepsilon}(0)$ with $\psi(0)=z_{1}$, $\psi^{\prime}(0)\neq 0$, so that $h(\omega)=\psi(\phi(\omega-\omega_{1}))$ for all $\omega$ with $\omega-\omega_{1}\in D^{\prime}$. Furthermore, the function $h$ has a Puiseux expansion near $\omega_{1}$, that is

(36) $h(\omega)=z_{1}+C_{1}\phi(\omega-\omega_{1})+C_{2}(\omega-\omega_{1})+O((\omega-\omega_{1})\phi(\omega-\omega_{1}))$

where $C_{1}$ satisfies $C_{1}^{2}=-8z_{1}/f^{\prime\prime}(i\xi_{1})$. Thus we obtain (34), which completes the proof of (32).

(ii) First we prove that the Taylor expansion (31) has a radius of convergence $\rho_{x}:=|\log|\omega_{1}|+i\pi|$. First note that the right hand side of (31) converges to an analytic function $\tilde{h}$ if $|\Im y|<\pi$. Indeed, in this case, $e^{y}\notin\mathbb{R}_{-}$ and so $\tilde{h}=h(e^{y})$ is well defined as $h$ is analytic on $\mathbb{C}\setminus\Gamma^{\prime}$. Now we claim that $\tilde{h}$ can be analytically extended to $\tilde{D}=B_{\rho_{x}}(0)$. To prove this claim, let $\tilde{D}_{+}=\tilde{D}\cap\{y:\Im y>\pi\}$ and $\tilde{D}_{-}=\tilde{D}\cap\{y:\Im y<-\pi\}$. Let us denote by $\mathbb{H}$ the upper half plane, that is $\mathbb{H}=\{y\in\mathbb{C}:\Im y>0\}$. Now observe that the image of $\tilde{D}\cap\mathbb{H}$ under the exponential function is disjoint from $\Gamma_{+}:=\omega_{1}+e^{i(2\pi-\epsilon)}\mathbb{R}_{\geq 0}$ for $\epsilon$ small enough. Thus, defining $h$ with the branch cut $\Gamma_{+}$, $h(e^{y})$ is well defined and analytic on $\tilde{D}\cap\mathbb{H}$. Likewise, we can analytically extend $\tilde{h}$ to $\tilde{D}_{-}$ using the branch cut $\Gamma_{-}:=\omega_{1}+e^{i\epsilon}\mathbb{R}_{\geq 0}$ in the definition of $h$. Clearly, there are singularities at $y_{\pm}=\pm i\pi+\log|\omega_{1}|$ as $e^{y_{\pm}}=\omega_{1}$. Thus we have verified that the right hand side of (31) converges to an analytic function $\tilde{h}$ on $B_{\rho_{x}}(0)$.

Next, we prove (33). The new feature here is the presence of two singularities on the circle of convergence in the complex $y$ plane, with radius $\rho_{x}$. This situation is discussed in Chapter VI.5 of [8]. One feature specific to this situation is the oscillatory pattern of the leading asymptotics for the coefficients of the series expansion (31).

From (34), we have for $y\to y_{\pm}$

$\tilde{h}(y)=z_{1}+C_{1}(e^{y}-e^{y_{\pm}})^{1/2}+C_{2}(e^{y}-e^{y_{\pm}})+O((e^{y}-e^{y_{\pm}})(e^{y}-e^{y_{\pm}})^{1/2})$
$\qquad\;=z_{1}+C_{1}(y-y_{\pm})^{1/2}\sqrt{\omega_{1}}+C_{2}(e^{y}-e^{y_{\pm}})+O((e^{y}-e^{y_{\pm}})(y-y_{\pm})^{1/2}).$

Applying again the transfer result, we obtain

(38) $d_{n}=C_{1}[(y_{+})^{\frac{1}{2}-n}+(y_{-})^{\frac{1}{2}-n}]\frac{-1}{2\sqrt{\pi n^{3}}}+O(n^{-\frac{5}{2}})$

which reproduces (33) after substituting $y_{\pm}=\pm i\pi+\log|\omega_{1}|$. ∎

We present next some numerical tests of these asymptotic results. In Fig. 3.3 we plot $|c_{k}|^{1/k}$ vs $1/k$ for the first 100 terms of the series (30) (left panel) and $|d_{k}|^{1/k}$ vs $1/k$ for the first 100 terms of the series (31) (right panel). The red lines in these plots are at $1/|1-\omega_{1}|$ (left) and $1/|\log|\omega_{1}|+i\pi|$ (right). The general trend as $k\to\infty$ towards the limiting values shown by the red lines is consistent with Proposition 3.1.

4. Applications: Series expansions

We apply in this section the results of the previous section to two specific expansions appearing in mathematical finance which were discussed in the Introduction.

4.1. The rate function $J_{\rm BS}(x)$

The rate function $J_{\rm BS}$ appears in the small-time asymptotics for the density of the time average of a gBM, and determines the leading short maturity asymptotics for Asian options in the Black-Scholes model. The explicit form of this function for real positive argument $x>0$ was given above in equation (22). Complexifying the function $J_{\rm BS}(x)$ and denoting the complex variable by $\omega$, $J_{\rm BS}$ can be written as

(39) $J_{\rm BS}(\omega)=\mathcal{J}(h(\omega)),$

where $h(\omega)$ is the function introduced in Theorem 3.1, and we define

$\mathcal{J}(z)=\frac{1}{2}z-\sqrt{z}\tanh(\sqrt{z}/2).$

Note that $\mathcal{J}$ is well defined as the function $z\mapsto z\tanh(z/2)$ is even.

Proposition 4.1.

(i) The series expansion of the rate function $J_{\rm BS}(\omega)=\sum_{n=0}^{\infty}c_{J,n}(\omega-1)^{n}$ has convergence radius $R_{J}=1$. The first few terms are given by

(40) $J_{\rm BS}(\omega)=\frac{3}{2}(\omega-1)^{2}-\frac{9}{5}(\omega-1)^{3}+\frac{333}{175}(\omega-1)^{4}+O((\omega-1)^{5})$

and the $n\to\infty$ asymptotics of the coefficients $c_{J,n}$ are given by

(41) $c_{J,n}=2(-1)^{n}+O((1-\omega_{1})^{-n})\,.$

(ii) The series expansion $J_{\rm BS}(e^{y})=\sum_{n=0}^{\infty}d_{J,n}y^{n}$ around $y=0$ converges for $|y|<\rho_{x}=|i\pi+\log|\omega_{1}||\approx 3.49295$. The first few terms are

(42) $J_{\rm BS}(e^{y})=\frac{3}{2}y^{2}-\frac{3}{10}y^{3}+\frac{109}{1400}y^{4}+O(y^{5})\,.$

The large $n$ asymptotics of the coefficients is

(43) $d_{J,n}=d_{J}\rho_{x}^{-n}\cos\left[\theta_{x}\left(\frac{3}{2}-n\right)\right]n^{-\frac{5}{2}}+O(n^{-\frac{7}{2}})$

with $d_{J}\approx-23.4048$ and $\theta_{x}\approx 2.02317$.

Proof.

(i) First, observe that the function $\mathcal{J}$ has isolated singularities (simple poles) along the negative real axis at $z_{n}=-(2n+1)^{2}\pi^{2}$ for $n\in\mathbb{Z}$, and is analytic everywhere else.

The residues of $\mathcal{J}(z)$ at the simple poles $z_{n}$ are $\mbox{Res }\mathcal{J}(z=z_{n})=-4z_{n}$. Recall that by Theorem 3.1, $h$ is analytic on $B_{1-\omega_{1}}(1)$. Thus for any $\omega\in B_{1-\omega_{1}}(1)$ with $h(\omega)\neq-(2n+1)^{2}\pi^{2}$, the function $J_{\rm BS}(\omega)$ is analytic at $\omega$. Noting that the only solution of the equation $h(\omega)=-(2n+1)^{2}\pi^{2}$ is $\omega=0$, we conclude that $J_{\rm BS}(\omega)$ is analytic on $B_{1-\omega_{1}}(1)\setminus\{0\}$ and has a singularity at $0$. Thus $R_{J}=1$.

Next, we prove (41). In order to apply the transfer results (cf. Flajolet and Odlyzko) we need to study the behavior of $J_{\rm BS}(\omega)$ around the singularity on the circle of convergence. This is a simple pole at $\omega=0$, so it is sufficient to compute its residue.

Recall that $\mbox{Res }\mathcal{J}(z=-\pi^{2})=4\pi^{2}$. Thus, as $\omega\to 0$ we have

(44) $J_{\rm BS}(\omega)=\mathcal{J}(h(\omega))=\frac{4\pi^{2}}{h(\omega)+\pi^{2}}+\mbox{ a bounded term as }\omega\to 0\,.$

The Taylor expansion of $h(\omega)$ around $\omega=0$ is

(45) $h(\omega)=h(0)+h^{\prime}(0)\omega+O(\omega^{2})$

where $h(0)=-\pi^{2}$ and

(46) $h^{\prime}(0)=\frac{dh}{d\omega}\Big|_{\omega=0}=\frac{1}{\frac{d}{dh}\frac{\sinh\sqrt{h}}{\sqrt{h}}}\Big|_{h=-\pi^{2}}=2\pi^{2}\,.$

Substituting into (44) gives

(47) $J_{\rm BS}(\omega)=\frac{2}{\omega}+\mbox{ a bounded term as }\omega\to 0\,.$

This means that the residue of $J_{\rm BS}(\omega)$ at $\omega=0$ is equal to 2.

To derive (41), we write the Laurent series of $J_{\rm BS}(\omega)$ on a punctured ball around $0$ as $\frac{2}{\omega}+\sum_{n=0}^{\infty}a_{n}\omega^{n}$. With the notation $\phi(\omega)=\sum_{n=0}^{\infty}a_{n}\omega^{n}$, we see that $\phi$ is analytic in a neighborhood of $0$ and

(48) $J_{\rm BS}(\omega)=\frac{2}{\omega}+\phi(\omega).$

By uniqueness of analytic extension, $\phi$ is analytic and (48) holds on the domain $B_{1-\omega_{1}}(1)$. Transferring this result to the asymptotics of the coefficients $c_{J,n}$ yields (41).

(ii) To prove the second part of the proposition, let us write $\psi(y)=J_{\rm BS}(e^{y})$. By the first part of the proposition, $\psi$ is analytic as long as $h$ is analytic at $e^{y}$. Indeed, since $e^{y}\neq 0$ for all complex numbers $y$, the only singularities of $\mathcal{J}(h(e^{y}))$ are due to singularities of $h$.

The equation $e^{y}=\omega_{1}$ has infinitely many solutions of the form $i(2k+1)\pi+\log|\omega_{1}|$. Let us write $y_{\pm}=\pm i\pi+\log|\omega_{1}|$ and $\rho_{x}=|y_{\pm}|$ for the singularities which are closest to the origin $y=0$. Then, exactly as in the proof of Proposition 3.1, we see that $\psi$ is analytic on $B_{\rho_{x}}(0)$ and there are branch points at $y_{\pm}$. The picture is now similar to that of Proposition 3.1 (ii) and is different from point (i) discussed above as the singularities closest to the origin are branch points (and are not isolated).

It remains to verify (43). We again use the transfer results (cf. [7]). As in the proof of Proposition 3.1, we define $h(\omega_{1})=z_{1}$. Next, we consider the Taylor expansion of $\mathcal{J}$ around $z_{1}=-\eta_{1}^{2}$ (recall that $\mathcal{J}$ is analytic at $z_{1}$).

First, we claim that $\mathcal{J}^{\prime}(z_{1})=0$. Indeed, for any $\eta$ with $\eta=\tan\eta$, we have $\eta=\frac{2\tan(\eta/2)}{1-\tan^{2}(\eta/2)}$ and so

$\mathcal{J}^{\prime}(-\eta^{2})=\frac{1}{2}-\frac{1}{2\eta}\tan\frac{\eta}{2}-\frac{1}{4\cos^{2}\frac{\eta}{2}}=\frac{1}{4}-\frac{1}{2\eta}\tan\frac{\eta}{2}-\frac{1}{4}\tan^{2}\frac{\eta}{2}=0$

Thus we have the Taylor expansion

(49) $\mathcal{J}(z)=\mathcal{J}(z_{1})+\frac{1}{2}\mathcal{J}^{\prime\prime}(z_{1})(z-z_{1})^{2}+\frac{1}{6}\mathcal{J}^{\prime\prime\prime}(z_{1})(z-z_{1})^{3}+O((z-z_{1})^{4})$

as $z\rightarrow z_{1}$. Substituting here $z-z_{1}\to h(\omega)-h(\omega_{1})$ and using the Puiseux expansion (34) gives an expansion of the form

(50) $J_{\rm BS}(\omega)=\mathcal{J}(h(\omega))=C_{1}^{J}(\omega-\omega_{1})+C_{3/2}^{J}(\omega-\omega_{1})^{3/2}+O((\omega-\omega_{1})^{2})$

as $\omega\rightarrow\omega_{1}$. The coefficients in this expansion can be obtained in terms of the derivatives of $\mathcal{J}$ at $z_{1}$ and the $C_{i}$ in (34) by expanding $h(\omega)$ for $\omega$ close to $\omega_{1}$.

The branch point singularity arises from the second term in (50) with coefficient

(51) $C_{3/2}^{J}=\frac{1}{6}\mathcal{J}^{\prime\prime\prime}(z_{1})C_{1}^{3}+\mathcal{J}^{\prime\prime}(z_{1})C_{1}C_{2}.$

Finally, changing the expansion variable from $\omega-\omega_{1}$ to $y$ with $\omega=e^{y}$ is done in the same way as in the proof of Proposition 3.1(ii). The transfer result now says that the Taylor coefficients $A_{n}$ of the function $(1-z)^{3/2}=\sum_{n=0}^{\infty}A_{n}z^{n}$ satisfy

(52) $A_{n}=\frac{1}{\sqrt{\pi n^{5}}}\left(\frac{3}{4}+O(n^{-1})\right)\,.$

Thus

(53) $d_{J,n}=\frac{3}{2}C_{3/2}^{J}\frac{(-\omega_{1})^{3/2}}{\sqrt{\pi}}[(y_{+})^{\frac{3}{2}-n}+(y_{-})^{\frac{3}{2}-n}]n^{-\frac{5}{2}}+O(n^{-\frac{7}{2}}),$

whence (43) follows by substituting $y_{\pm}=\rho_{x}e^{\pm i\theta_{x}}$. ∎

The left plot in Figure 4.1 shows $|d_{J,n}|^{1/n}$ vs $1/n$ for the first 100 coefficients of the expansion in $y^{n}$, which illustrates convergence to $1/\rho_{x}$, the inverse of the convergence radius in (42).

We also study the approximation error of the asymptotic expansion of the (signed) coefficients

(54) $\varepsilon_{J,n}=d_{J,n}/d_{J,n}^{\rm asympt}-1\sim O(1/n)$

where $d_{J,n}^{\rm asympt}$ denotes the leading asymptotic expression for the coefficients in Eq. (43). The error is formally of order $O(n^{-1})$, so we expect that $\varepsilon_{J,n}$ vs $1/n$ approaches zero linearly.

The right plot in Figure 4.1 shows $\varepsilon_{J,n}$ vs $1/n$, which confirms the expected general trend of the error terms towards zero. This is seen in more detail in the left plot in Figure 4.2, which zooms into the region close to the horizontal axis.

We note also the presence of large error outliers. For example, for $n=97$ the relative error is $|\varepsilon_{J,97}|=5.54$, so the subleading correction to the asymptotic result can be comparable with the leading order coefficient. The right plot in Fig. 4.2 plots $\varepsilon_{J,n}$ vs $\cos(\theta_{x}(n-\frac{3}{2}))$, showing that large errors are mostly associated with small values of the cosine factor, which leads to an accidental suppression of the leading order contribution.

Figure 4.1. Left: plot of $|d_{J,n}|^{1/n}$ vs $1/n$ for $n\leq 100$. The horizontal line is at $1/\rho_{x}=0.286$. Right: the relative error $\varepsilon_{J,n}=d_{J,n}/d^{\rm asympt}_{J,n}-1$ of the asymptotic coefficients vs $1/n$.
Figure 4.2. Left: zoomed-in plot of the error $\varepsilon_{J,n}$ of the asymptotic coefficients. Right: the relative error $\varepsilon_{J,n}$ of the asymptotic coefficients vs $\cos(\theta_{x}(n-\frac{3}{2}))$.

4.2. Asymptotics of the Hartman-Watson distribution

A closed form result for the joint distribution of the time integral of the gBM and its terminal value was obtained by Yor [34], see (1).

The numerical evaluation of the integral in (2) is challenging for small $t\ll 1$. Alternative ways of numerical evaluation of this distribution have been studied in terms of analytic approximations [1, 13].

An asymptotic expansion for the Hartman-Watson distribution $\theta_{\rho/t}(t)$ as $t\to 0$ at fixed $\rho=rt$ was derived in [28]. The leading order result is expressed by (3) and the real functions $F(\rho),G(\rho)$ are defined as

(57) $F(\rho)=\left\{\begin{array}{cc}\frac{1}{2}\kappa^{2}-\frac{\kappa}{\tanh\kappa}+\frac{\pi^{2}}{2},&\,0<\rho<1\\ -\frac{1}{2}\lambda^{2}+\frac{\pi-\lambda}{\tan\lambda}+\pi\lambda,&\,\rho>1\end{array}\right.$

and

(60) $G(\rho)=\left\{\begin{array}{cc}\frac{\rho\sinh\kappa}{\sqrt{\rho\cosh\kappa-1}},&\,0<\rho<1\\ \frac{\rho\sin\lambda}{\sqrt{1+\rho\cos\lambda}},&\,\rho>1.\end{array}\right.$

Here $\kappa\geq 0$ is the positive solution of the equation

(61) $\rho\frac{\sinh\kappa}{\kappa}=1$

and $\lambda\in(0,\pi)$ is the solution of the equation

(62) $\lambda+\rho\sin\lambda=\pi\,.$

The constraints on $\kappa$ and $\lambda$ ensure that $G(\rho)$ is positive for positive real $\rho$.
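
For reference, $F(\rho)$ and $G(\rho)$ can be evaluated directly from (57)-(62) by solving the defining equations numerically. The sketch below (not part of the paper; the function name is ours) is the straightforward, but per-evaluation expensive, approach that the series expansions of Proposition 4.2 are designed to replace.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def F_and_G(rho):
    """Evaluate F(rho), G(rho) of Eqs. (57), (60) by solving Eqs. (61)-(62)."""
    if rho == 1.0:
        return np.pi**2/2 - 1.0, np.sqrt(3.0)    # limiting values at rho = 1
    if rho < 1.0:
        # kappa > 0 solves rho*sinh(kappa)/kappa = 1; enlarge the bracket as needed
        hi = 1.0
        while rho*np.sinh(hi)/hi < 1.0:
            hi *= 2.0
        kappa = brentq(lambda k: rho*np.sinh(k)/k - 1.0, 1e-12, hi)
        F = 0.5*kappa**2 - kappa/np.tanh(kappa) + np.pi**2/2
        G = rho*np.sinh(kappa)/np.sqrt(rho*np.cosh(kappa) - 1.0)
    else:
        # lambda in (0, pi) solves lambda + rho*sin(lambda) = pi
        lam = brentq(lambda l: l + rho*np.sin(l) - np.pi, 1e-12, np.pi - 1e-12)
        F = -0.5*lam**2 + (np.pi - lam)/np.tan(lam) + np.pi*lam
        G = rho*np.sin(lam)/np.sqrt(1.0 + rho*np.cos(lam))
    return F, G

print(F_and_G(0.9), F_and_G(1.1))
\end{verbatim}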

We will study in this section the series expansions of $F$ and $G$ around $\rho=1$, which are relevant for the computation of the distribution (4) around its maximum at $a=v=1$.

We mention that the functions $F(\rho),J_{\rm BS}(a)$ are related as $J_{\rm BS}(a)=4\inf_{\rho\geq 0}(F(\rho)+\frac{1+a^{2}\rho^{2}}{2a}-\frac{\pi^{2}}{2})$. This relation follows from Proposition 6 in [28] and the relation $J(a)=\frac{1}{4}J_{\rm BS}(a)$. However, it does not seem easy to use this relation to connect the expansions of these two functions, so we treat them separately.

The functions $\kappa(\rho),\lambda(\rho)$ are related to the function $h(\omega)$ studied in Section 3 as $\kappa(\rho)=\sqrt{h(1/\rho)}$ for $0<\rho\leq 1$ and $\pi-\lambda(\rho)=i\sqrt{h(1/\rho)}$ for $\rho\geq 1$, with the usual square root function from $\mathbb{R}_{+}$ to $\mathbb{R}_{+}$.

Proposition 4.2.

(i) The function $F(e^{-\tau})$ is expanded around $\tau=0$ as $F(e^{-\tau})=\sum_{n=0}^{\infty}d_{F,n}\tau^{n}$. The first few terms are

(63) $F(e^{-\tau})=\frac{\pi^{2}}{2}-1+\tau+\tau^{2}-\frac{2}{15}\tau^{3}+O(\tau^{4}).$

The large $n$ asymptotics of the coefficients is

(64) $d_{F,n}=d_{F}\rho_{x}^{-n}\cos\left(\theta_{x}\left(n-\frac{3}{2}\right)\right)n^{-\frac{5}{2}}+O(n^{-\frac{7}{2}})$

with $d_{F}\approx-23.4047$.

(ii) The function $G(e^{-\tau})$ has the expansion around $\tau=0$ as $G(e^{-\tau})=\sum_{n=0}^{\infty}d_{G,n}\tau^{n}$. The first few terms are

(65) $G(e^{-\tau})=\sqrt{3}\left(1+\frac{1}{5}\tau-\frac{1}{70}\tau^{2}-\frac{1}{1050}\tau^{3}+O(\tau^{4})\right).$

The large $n$ asymptotics of the coefficients is

(66) $d_{G,n}=d_{G}\rho_{x}^{-n}\sin\left(\theta_{x}\left(n+\frac{1}{4}\right)\right)n^{-\frac{3}{4}}+O(n^{-\frac{5}{4}})$

with $d_{G}\approx 0.719253$.

Both series expansions converge for $|\tau|<\rho_{x}=|\log|\omega_{1}|+i\pi|\approx 3.49295$.
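
Inside the convergence domain, the truncated expansions (63) and (65) can be checked against a direct evaluation of (57), (60) via the defining equation (61). The following is a small self-contained sketch for one value of $\rho<1$ (illustration only; variable names are ours).

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

rho = 0.9
tau = -np.log(rho)
# kappa solves rho*sinh(kappa)/kappa = 1, cf. Eq. (61)
kappa = brentq(lambda k: rho*np.sinh(k)/k - 1.0, 1e-12, 10.0)

F_direct = 0.5*kappa**2 - kappa/np.tanh(kappa) + np.pi**2/2
F_series = np.pi**2/2 - 1 + tau + tau**2 - (2.0/15.0)*tau**3        # Eq. (63)
G_direct = rho*np.sinh(kappa)/np.sqrt(rho*np.cosh(kappa) - 1.0)
G_series = np.sqrt(3.0)*(1 + tau/5.0 - tau**2/70.0 - tau**3/1050.0)  # Eq. (65)

print(F_direct, F_series)   # agree up to O(tau^4)
print(G_direct, G_series)
\end{verbatim}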

Proof.

(i) The function $F(\rho)$ can be written as $F(\rho)=\mathcal{F}(h(1/\rho))$ with $\mathcal{F}(z)=\frac{1}{2}z-\frac{\sqrt{z}}{\tanh\sqrt{z}}+\frac{\pi^{2}}{2}$. Now the situation (and hence the proof) is similar to Proposition 4.1.

The function $\mathcal{F}(z)$ has poles at $z_{k}=-k^{2}\pi^{2}$, $k\in\mathbb{Z}$, and is analytic everywhere else. Solving the equation $h(1/\rho)=-k^{2}\pi^{2}$ we find that the unique solution is $\rho=\infty$. Thus the function $\tilde{F}(\tau):=F(e^{-\tau})$ is analytic on $B_{\pi}(0)$ and can be extended analytically to $B_{\rho_{x}}(0)$ as in Proposition 4.1.

In order to find the large $n$ asymptotics of $d_{F,n}$ we study again the asymptotics of $F(e^{-y})$ around its branch points at $y_{\pm}$. This requires the Taylor expansion of $\mathcal{F}(z)$ around $z_{1}=-\eta_{1}^{2}$, which is a regular point for this function. The analysis closely parallels that in Prop. 4.1(ii). It turns out that $\mathcal{F}^{\prime}(z_{1})=0$, just as for $\mathcal{J}(z)$, and so the expansion of $\mathcal{F}(z)$ around $z_{1}$ reads

(67) $\mathcal{F}(z)=\mathcal{F}(z_{1})+\frac{1}{2}\mathcal{F}^{\prime\prime}(z_{1})(z-z_{1})^{2}+\frac{1}{6}\mathcal{F}^{\prime\prime\prime}(z_{1})(z-z_{1})^{3}+O((z-z_{1})^{4}).$

Substituting here the Puiseux expansion (34) gives that the leading singularity is a branch point at $\omega_{1}$ arising from the second term in the expansion

(68) $\mathcal{F}(h(\omega))-\mathcal{F}(h(\omega_{1}))=C_{1}^{F}(\omega-\omega_{1})+C_{3/2}^{F}(\omega-\omega_{1})^{3/2}+O((\omega-\omega_{1})^{2}),$

where

(69) $C_{3/2}^{F}=\frac{1}{6}\mathcal{F}^{\prime\prime\prime}(z_{1})C_{1}^{3}+\mathcal{F}^{\prime\prime}(z_{1})C_{1}C_{2}\,.$

Changing variables to $\tau=\log\omega=-\log\rho$ gives a similar Puiseux expansion in fractional powers of $\tau-\tau_{\pm}$. The asymptotics of the coefficients $d_{F,n}$ follows again by the transfer result

(70) $d_{F,n}=\frac{3}{2}C_{3/2}^{F}\frac{(-\omega_{1})^{3/2}}{\sqrt{\pi}}[(y_{+})^{\frac{3}{2}-n}+(y_{-})^{\frac{3}{2}-n}]n^{-\frac{5}{2}}+O(n^{-\frac{7}{2}})$

and has the form (64) with

(71) $d_{F}=\frac{3}{2}\sqrt{\frac{(-\omega_{1}\rho_{x})^{3}}{\pi}}C_{3/2}^{F}\approx-23.4047\,.$

(ii) The function G(ρ)G(\rho) can be written as G(ρ)=𝒢(h(1/ρ))G(\rho)=\mathcal{G}(h(1/\rho)), where we defined

(72) 𝒢(z):=zztanhz1.\mathcal{G}(z):=\frac{\sqrt{z}}{\sqrt{\frac{\sqrt{z}}{\tanh\sqrt{z}}-1}}\,.

Writing \mathcal{G}(z) as this ratio of square roots ensures that G(\rho) is positive for real and positive \rho, which is the case required for the application to the asymptotics of the Hartman-Watson distribution (3). (This condition was ensured in the original expression by imposing the constraints \kappa\geq 0 and \lambda\in(0,\pi).) This condition replaces the even property of \mathcal{F}(z) (as a function of \sqrt{z}), which was used to resolve the sign ambiguity before.

The function \mathcal{G}(z) has branch points on the negative real axis at z_{k}=-\eta_{k}^{2}, where the denominator in (72) vanishes. The branch point closest to the origin z=0 is at z_{1}, and it is the relevant one for our purpose, since z_{1}=h(\omega_{1}) where \omega_{1} is the branch point of h.

In the neighborhood of z1z_{1} the function 𝒢(z)\mathcal{G}(z) diverges as

(73) 𝒢(z)=2z1(zz1)12+O((zz1)12).\mathcal{G}(z)=\sqrt{2z_{1}}(z-z_{1})^{-\frac{1}{2}}+{O((z-z_{1})^{\frac{1}{2}})}\,.
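As a quick numerical sanity check of the local statements used in this proof (the vanishing of \mathcal{F}^{\prime}(z_{1}) claimed in part (i) and the divergence (73)), both functions can be evaluated near z_{1}=-\eta_{1}^{2} with complex arithmetic; here \eta_{1}\approx 4.4934 is the smallest positive root of \tan\eta=\eta, which is the condition for the denominator in (72) to vanish. The following is a minimal sketch (Python/NumPy), an illustration only and not part of the paper's implementation:

import numpy as np
from scipy.optimize import brentq

# eta_1: smallest positive root of tan(eta) = eta, so that z_1 = -eta_1^2
eta1 = brentq(lambda e: np.tan(e) - e, np.pi + 1e-6, 1.5*np.pi - 1e-6)
z1 = -eta1**2

def W(z):                      # sqrt(z)*coth(sqrt(z)), complex-valued
    s = np.sqrt(z + 0j)
    return s/np.tanh(s)

def Fcal(z):                   # \mathcal{F}(z) from part (i) of the proof
    return 0.5*z - W(z) + 0.5*np.pi**2

def Gcal(z):                   # \mathcal{G}(z), eq. (72)
    return np.sqrt(z + 0j)/np.sqrt(W(z) - 1)

eps = 1e-6
# central difference at z_1: should be ~0, i.e. F'(z_1) = 0
print((Fcal(z1 + eps) - Fcal(z1 - eps))/(2*eps))
# G(z)*sqrt(z - z_1) should approach sqrt(2*z_1) as z -> z_1, cf. (73)
print(Gcal(z1 + eps)*np.sqrt(eps + 0j), np.sqrt(2*z1 + 0j))

The first printed number is numerically zero (up to floating-point error) and the last two printed values agree, as expected.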

The asymptotics of 𝒢(h(1/ρ))\mathcal{G}(h(1/\rho)) around 1/ρ=ω11/\rho=\omega_{1} are obtained by substituting the Puiseux series (34) into (73). Denoting x=1/ρx=1/\rho for simplicity, we have as xω1x\to\omega_{1}

(74) 𝒢(h(x))=2z1C1(xω1)14+O((xω1)14).\mathcal{G}(h(x))=\sqrt{\frac{2z_{1}}{C_{1}}}(x-\omega_{1})^{-\frac{1}{4}}+{O((x-\omega_{1})^{\frac{1}{4}})}\,.

This asymptotics can be transferred to the large nn asymptotics of the coefficients dG,nd_{G,n} as

(75) dG,n=1Γ(1/4)n3/42η12C1(ω1)14[(y+)14n(y)14n]+O(n5/4).d_{G,n}=\frac{1}{\Gamma(1/4)n^{3/4}}\sqrt{\frac{-2\eta_{1}^{2}}{C_{1}}}(-\omega_{1})^{-\frac{1}{4}}[(y_{+})^{-\frac{1}{4}-n}-(y_{-})^{-\frac{1}{4}-n}]+O(n^{-5/4})\,.

We chose the branches of the (y-y_{\pm})^{-1/4} functions such that they add up to a real function along the positive real axis, which introduces a relative minus sign between the two terms. This reproduces the stated result, with

(76) dG=1Γ(1/4)2η12C1(ω1ρx)140.719253.d_{G}=\frac{1}{\Gamma(1/4)}\sqrt{\frac{2\eta_{1}^{2}}{C_{1}}}(-\omega_{1}\rho_{x})^{-\frac{1}{4}}\approx 0.719253\,.
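The numerical value of the convergence radius \rho_{x} quoted in the proposition can be cross-checked directly. The following minimal sketch (Python/SciPy) assumes, as an input recalled from Section 3 and not restated here, that the branch point of h closest to the origin is the critical value \omega_{1}=\sin\eta_{1}/\eta_{1}<0 of the map z\mapsto\sinh\sqrt{z}/\sqrt{z} at z_{1}=-\eta_{1}^{2}; the agreement with the quoted value 3.49295 serves as a consistency check of this identification.

import numpy as np
from scipy.optimize import brentq

# eta_1: smallest positive root of tan(eta) = eta
eta1 = brentq(lambda e: np.tan(e) - e, np.pi + 1e-6, 1.5*np.pi - 1e-6)
omega1 = np.sin(eta1)/eta1          # assumed location of the branch point of h
rho_x = abs(np.log(abs(omega1)) + 1j*np.pi)
print(rho_x)                        # should come out as approximately 3.49295

Since \tau=\log\omega and \omega_{1}<0, the same quantity also fixes the branch points y_{\pm}=\log|\omega_{1}|\pm i\pi entering the transfer formulas (70) and (75), with |y_{\pm}|=\rho_{x}.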

We present next some numerical tests of the series coefficients d_{F,n},d_{G,n} and of their large-n asymptotics (64), (66). Figures 4.3 and 4.4 illustrate the results for F(\rho) and G(\rho), respectively. The two cases are similar, so we comment in detail only on the plots for F(\rho).

The left plot in Figure 4.3 shows |d_{F,n}|^{1/n} vs 1/n; the values converge to 1/\rho_{x}, the inverse of the convergence radius of the expansion (63). The right plot shows the relative error of the asymptotic approximation of the coefficients, \varepsilon_{F,n}=d_{F,n}/d_{F,n}^{\rm asympt}-1\sim O(1/n), vs 1/n, where d_{F,n}^{\rm asympt} is the first term on the right-hand side of (64). The numerical results confirm the expected overall decrease of this error with n.

Figure 4.3. Left: plot of |dF,n|1/n|d_{F,n}|^{1/n} vs 1/n1/n for n100n\leq 100. The horizontal line is at 1/ρx=0.2861/\rho_{x}=0.286. Right: The relative error dF,n/dF,nasympt1d_{F,n}/d^{\rm asympt}_{F,n}-1 of the asymptotic coefficients vs 1/n1/n.
Figure 4.4. Left: plot of |dG,n|1/n|d_{G,n}|^{1/n} vs 1/n1/n (blue points) and the asymptotic coefficients |dG,nasympt|1/n|d^{\rm asympt}_{G,n}|^{1/n} (red points) vs 1/n1/n for n50n\leq 50. The horizontal line is at 1/ρx=0.2861/\rho_{x}=0.286. Right: The relative error dG,n/dG,nasympt1d_{G,n}/d^{\rm asympt}_{G,n}-1 of the asymptotic coefficients vs 1/n1/n.

5. Application: Asian options pricing in the Black-Scholes model

Here, we provide a simple application of the expansions studied in this paper, namely the pricing of an Asian option in the Black-Scholes model. Assume that the asset price follows a geometric Brownian motion

(77) dSt=rStdt+σStdWtdS_{t}=rS_{t}dt+\sigma S_{t}dW_{t}

started at S0S_{0}, or equivalently, by Itô’s formula, St=S0e(rσ2/2)t+σWtS_{t}=S_{0}e^{(r-\sigma^{2}/2)t+\sigma W_{t}}. Of course, in practice one has to consider the issue of mis-specification of the price process, which introduces model risk.

Asian option prices are given by expectations in the risk-neutral measure

(78) CA(K,T)=erT𝔼[(1T0TSt𝑑tK)+],\displaystyle C_{A}(K,T)=e^{-rT}\mathbb{E}\Big{[}\Big{(}\frac{1}{T}\int_{0}^{T}S_{t}dt-K\Big{)}^{+}\Big{]}\,,
(79) PA(K,T)=erT𝔼[(K1T0TSt𝑑t)+].\displaystyle P_{A}(K,T)=e^{-rT}\mathbb{E}\Big{[}\Big{(}K-\frac{1}{T}\int_{0}^{T}S_{t}dt\Big{)}^{+}\Big{]}\,.

CC and PP stand for call and put options and the subscript AA denotes Asian.
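For orientation, the expectations (78)-(79) can also be estimated by a brute-force Monte Carlo simulation of (77); this is not the method developed in this paper, only an independent baseline. A minimal sketch (Python/NumPy; the numbers of time steps and paths are arbitrary illustrative choices):

import numpy as np

def asian_call_mc(S0, K, r, sigma, T, n_steps=200, n_paths=100_000, seed=0):
    # Crude Monte Carlo estimate of C_A(K,T) in (78), with the time average
    # approximated by a right-point Riemann sum over n_steps steps.
    rng = np.random.default_rng(seed)
    dt = T/n_steps
    log_s = np.full(n_paths, np.log(S0))
    running_avg = np.zeros(n_paths)
    for _ in range(n_steps):
        log_s += (r - 0.5*sigma**2)*dt + sigma*np.sqrt(dt)*rng.standard_normal(n_paths)
        running_avg += np.exp(log_s)
    running_avg /= n_steps
    return np.exp(-r*T)*np.maximum(running_avg - K, 0.0).mean()

# Scenario 1 of Table 3 below (S0=2, K=2, r=0.02, sigma=0.1, T=1): the estimate
# should be close to the benchmark 0.055986, up to discretization and sampling error.
print(asian_call_mc(2.0, 2.0, 0.02, 0.10, 1.0))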

Using the scaling (self-similarity) property of the Brownian motion, the Asian option prices can be expressed in terms of expectations of the standardized time-integral of the gBM, A_{t}^{(\mu)}=\int_{0}^{t}e^{2(B_{s}+\mu s)}ds, as

(80) CA(K,T)=erTS0cA(k,τ),PA(K,T)=erTS0pA(k,τ)C_{A}(K,T)=e^{-rT}S_{0}c_{A}(k,\tau)\,,\quad P_{A}(K,T)=e^{-rT}S_{0}p_{A}(k,\tau)

where

(81) cA(k,τ):=𝔼[(1τAτ(μ)k)+],pA(k,τ):=𝔼[(k1τAτ(μ))+].c_{A}(k,\tau):=\mathbb{E}\Big{[}\Big{(}\frac{1}{\tau}A_{\tau}^{(\mu)}-k\Big{)}^{+}\Big{]}\,,\quad p_{A}(k,\tau):=\mathbb{E}\Big{[}\Big{(}k-\frac{1}{\tau}A_{\tau}^{(\mu)}\Big{)}^{+}\Big{]}\,.

and the expectations in (81) are expressed in terms of reduced parameters

(82) τ=14σ2T,μ=2rσ21,k=KS0,\tau=\frac{1}{4}\sigma^{2}T\,,\qquad\mu=\frac{2r}{\sigma^{2}}-1\,,\qquad k=\frac{K}{S_{0}}\,,

see e.g. [12]. Denote by f(a,t) the density of the standardized time-average of the gBM,

(83) (1tAt(μ)da)=f(a,t)daa.\mathbb{P}\Big{(}\frac{1}{t}A_{t}^{(\mu)}\in da\Big{)}=f(a,t)\frac{da}{a}\,.

The leading t0t\to 0 asymptotics (4) gives an approximation for this density

(84) f(a,t)=f0(a,t)(1+O(t)),f0(a,t):=1n(t)12πte12μ2taμ0ρμG(ρ)e1tI(a,aρ)dρρf(a,t)=f_{0}(a,t)(1+O(t))\,,\quad f_{0}(a,t):=\frac{1}{n(t)}\frac{1}{2\pi t}e^{-\frac{1}{2}\mu^{2}t}a^{\mu}\int_{0}^{\infty}\rho^{\mu}G(\rho)e^{-\frac{1}{t}I(a,a\rho)}\frac{d\rho}{\rho}

where n(t) is a normalization integral, defined so as to ensure the normalization condition \int_{0}^{\infty}f_{0}(a,t)\frac{da}{a}=1 for all t>0.

Recalling (5), the expectations (81) can be approximated as integrals over the f0(a,t)f_{0}(a,t) density as

(85) c_{A}^{(0)}(k,\tau)=\int_{k}^{\infty}(a-k)f_{0}(a,\tau)da=\frac{1}{n(\tau)}\frac{1}{2\pi\tau}e^{-\frac{1}{2}\mu^{2}\tau}\int_{0}^{\infty}\rho^{\mu}G(\rho)e^{-\frac{1}{\tau}[F(\rho)-\frac{\pi^{2}}{2}]}\left(\int_{k}^{\infty}a^{\mu}(a-k)e^{-\frac{1+a^{2}\rho^{2}}{2a\tau}}\frac{da}{a}\right)\frac{d\rho}{\rho}

and analogously for pA(k,τ)p_{A}(k,\tau).

The normalization factor n(τ)n(\tau) can be expressed as a one-dimensional integral

(86) n(τ)=1πτe12μ2τ0G(ρ)Kμ(ρ/τ)e1τ(F(ρ)π22)dρρ.n(\tau)=\frac{1}{\pi\tau}e^{-\frac{1}{2}\mu^{2}\tau}\int_{0}^{\infty}G(\rho)K_{-\mu}(\rho/\tau)e^{-\frac{1}{\tau}(F(\rho)-\frac{\pi^{2}}{2})}\frac{d\rho}{\rho}\,.

5.1. Details of implementation

For the evaluation of the integrals in (85) and (86) we represent the functions F(\rho),G(\rho) as follows:

i) within the convergence domain of their expansions (63) and (65) we use the series expansions derived in Proposition 4.2, truncated to an appropriate order NF,NGN_{F},N_{G}.

ii) outside the convergence domain we use the asymptotic expansions of F(\rho) and G(\rho) as \rho\to 0 and \rho\to\infty, derived in points (i) and (ii) of Propositions 4 and 5 of [28], respectively.

This gives the approximations \bar{F}(\rho),\bar{G}(\rho) of the functions F(\rho),G(\rho), defined by

(87) F¯(ρ):={FL(ρ)logρ<ρLk=0NFdF,klogk(1/ρ)logρ[ρL,ρR]FR(ρ)logρ>ρR\bar{F}(\rho):=\left\{\begin{array}[]{cc}F_{L}(\rho)&\log\rho<\rho_{L}\\ \sum_{k=0}^{N_{F}}d_{F,k}\log^{k}(1/\rho)&\log\rho\in[\rho_{L},\rho_{R}]\\ F_{R}(\rho)&\log\rho>\rho_{R}\\ \end{array}\right.

and

(88) G¯(ρ):={GL(ρ)logρ<ρL3k=0NGdG,klogk(1/ρ)logρ[ρL,ρR]GR(ρ)logρ>ρR\bar{G}(\rho):=\left\{\begin{array}[]{cc}G_{L}(\rho)&\log\rho<\rho_{L}\\ \sqrt{3}\sum_{k=0}^{N_{G}}d_{G,k}\log^{k}(1/\rho)&\log\rho\in[\rho_{L},\rho_{R}]\\ G_{R}(\rho)&\log\rho>\rho_{R}\\ \end{array}\right.

where [ρL,ρR][ρx,ρx][\rho_{L},\rho_{R}]\subset[-\rho_{x},\rho_{x}] with ρx=3.49295\rho_{x}=3.49295 the convergence radius of the series expansions. FL(ρ),GL(ρ)F_{L}(\rho),G_{L}(\rho) denote the ρ0\rho\to 0 asymptotic expansions given in equations (49) and (58) of [28] and FR(ρ),GR(ρ)F_{R}(\rho),G_{R}(\rho) denote the ρ\rho\to\infty asymptotic expansions given in equations (50), (59) of [28]. The first few coefficients dF,k,dG,kd_{F,k},d_{G,k} are tabulated in Table 2. The approximation F¯(ρ),G¯(ρ)\bar{F}(\rho),\bar{G}(\rho) can be made arbitrarily precise by increasing the truncation orders NF,GN_{F,G} and by including higher order terms in the tail asymptotics.

Table 2. The first 10 coefficients of the series expansions of F(ρ),G(ρ)F(\rho),G(\rho) in powers of log(1/ρ)\log(1/\rho). The leading terms of these series expansions are dF,0=π221d_{F,0}=\frac{\pi^{2}}{2}-1 and dG,0=1d_{G,0}=1.
kk dF,kd_{F,k} dG,kd_{G,k} kk dF,kd_{F,k} dG,kd_{G,k}
1 1 15\frac{1}{5} 6 47423,031,875\frac{4742}{3,031,875} 107,74910,032,750,000-\frac{107,749}{10,032,750,000}
2 1 170-\frac{1}{70} 7 43,636197,071,875-\frac{43,636}{197,071,875} 27,333,6191,876,124,250,000\frac{27,333,619}{1,876,124,250,000}
3 215-\frac{2}{15} 11050-\frac{1}{1050} 8 146,2876,897,515,625\frac{146,287}{6,897,515,625} 308,907,281,743109,790,791,110,000,000-\frac{308,907,281,743}{109,790,791,110,000,000}
4 19525\frac{19}{525} 299323,400\frac{299}{323,400} 9 68,14657,984,609,375-\frac{68,146}{57,984,609,375} 1,589,498,602,0634,940,585,599,950,000,000-\frac{1,589,498,602,063}{4,940,585,599,950,000,000}
5 222,625-\frac{22}{2,625} 96,917525,525,000-\frac{96,917}{525,525,000} 10 6,740,719,06638,598,324,999,609,375\frac{6,740,719,066}{38,598,324,999,609,375} 28,340,195,926,465,733103,406,456,606,953,500,000,000\frac{28,340,195,926,465,733}{103,406,456,606,953,500,000,000}
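The coefficients in Table 2 can be regenerated symbolically from the representations F(\rho)=\mathcal{F}(h(1/\rho)) and G(\rho)=\mathcal{G}(h(1/\rho)) used in the proof of Proposition 4.2. The sketch below (Python/SymPy, illustration only) takes as an assumption, as in the previous sketch, that h(\omega) is the branch with h(1)=0 of the inverse of the map z\mapsto\sinh\sqrt{z}/\sqrt{z}=\sum_{m\geq 0}z^{m}/(2m+1)!; reproducing the tabulated d_{F,k},d_{G,k} serves as a consistency check of this characterization.

import sympy as sp

tau, z = sp.symbols('tau z')
N = 8                          # compute coefficients of tau^0 ... tau^7

# Assumption: h solves sinh(sqrt(h))/sqrt(h) = exp(tau) with h(0)=0.
# Solve order by order by the fixed-point iteration h -> 6*(exp(tau)-1-tail).
omega = sp.series(sp.exp(tau), tau, 0, N).removeO()
h = sp.Integer(0)
for _ in range(N):
    tail = sum(h**m/sp.factorial(2*m + 1) for m in range(2, N))
    h = sp.series(6*(omega - 1 - tail), tau, 0, N).removeO()

# W(z) = sqrt(z)*coth(sqrt(z)) as a power series in z
W = sum(2**(2*n)*sp.bernoulli(2*n)/sp.factorial(2*n)*z**n for n in range(N))

F_series = sp.series(h/2 - W.subs(z, h) + sp.pi**2/2, tau, 0, N).removeO()
u = sp.expand(3*(W - 1)/z - 1)                 # W(z)-1 = (z/3)*(1+u(z))
G_series = sp.series((1 + u.subs(z, h))**sp.Rational(-1, 2), tau, 0, N).removeO()

# coefficients of F(exp(-tau)) and of G(exp(-tau))/sqrt(3), cf. (63), (65), (87), (88);
# they should reproduce the d_{F,k} and d_{G,k} columns of Table 2
print([sp.expand(F_series).coeff(tau, k) for k in range(N)])
print([sp.expand(G_series).coeff(tau, k) for k in range(N)])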

5.2. Numerical examples

We illustrate the application of the method on the seven test cases introduced in [10] and commonly used in the literature as benchmarks for Asian option pricing. Table 3 shows the option prices obtained with the method proposed here for each of these scenarios, compared with the precise evaluation of Linetsky [20] obtained using the spectral expansion method.

Table 3. Numerical tests for pricing Asian options in the Black-Scholes model on the seven scenarios commonly considered in the literature [10, 20]. All scenarios have K=2.0. The results of the method proposed here are shown in the column C_{A}(K,T) and are compared with the precise benchmarks obtained with the spectral expansion method of Linetsky [20].
Scenario S0S_{0} rr σ\sigma TT μ\mu τ\tau spectral [20] cA(k,τ)c_{A}(k,\tau) n(τ)n(\tau) CA(K,T)C_{A}(K,T)
1 2.0 0.02 0.10 1 3 0.0025 0.055986 0.028543 1.00004 0.055954
2 2.0 0.18 0.30 1 3 0.0225 0.218387 0.130771 1.00032 0.218388
3 2.0 0.0125 0.25 2 -0.6 0.03125 0.172269 0.088354 1.00045 0.172269
4 1.9 0.05 0.50 1 -0.6 0.0625 0.193174 0.106978 1.00089 0.193174
5 2.0 0.05 0.50 1 -0.6 0.0625 0.246416 0.12964 1.00089 0.246415
6 2.1 0.05 0.50 1 -0.6 0.0625 0.306220 0.153432 1.00089 0.306220
7 2.0 0.05 0.50 2 -0.6 0.125 0.350095 0.193799 1.00177 0.350093
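The reduced parameters \mu,\tau in Table 3 follow from (82), and the last column follows from (80); in the tabulated values the normalization n(\tau) is evidently applied together with (80), i.e. C_{A}(K,T)=e^{-rT}S_{0}\,c_{A}(k,\tau)/n(\tau), consistent with the factor 1/n(\tau) already included in the approximation (85). A short arithmetic check (Python), which reproduces the last column to within a few units in the last displayed digit (the tabulated c_{A} and n(\tau) are themselves rounded):

import math

# columns of Table 3: S0, r, sigma, T, tabulated c_A(k,tau), tabulated n(tau)
scenarios = [
    (2.0, 0.02,   0.10, 1, 0.028543, 1.00004),
    (2.0, 0.18,   0.30, 1, 0.130771, 1.00032),
    (2.0, 0.0125, 0.25, 2, 0.088354, 1.00045),
    (1.9, 0.05,   0.50, 1, 0.106978, 1.00089),
    (2.0, 0.05,   0.50, 1, 0.129640, 1.00089),
    (2.1, 0.05,   0.50, 1, 0.153432, 1.00089),
    (2.0, 0.05,   0.50, 2, 0.193799, 1.00177),
]
for S0, r, sigma, T, cA, n in scenarios:
    mu  = 2*r/sigma**2 - 1          # reduced drift, eq. (82)
    tau = sigma**2*T/4              # reduced maturity, eq. (82)
    CA  = math.exp(-r*T)*S0*cA/n    # eq. (80) with the normalization applied
    print(f"mu={mu:5.2f}  tau={tau:.5f}  C_A={CA:.6f}")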

We used the approximations \bar{F}(\rho),\bar{G}(\rho) with truncation orders N_{F}=N_{G}=6 in the series expansions. Adding more terms has no impact on the numerical values shown. The series expansions were used in the range \rho\in[0.04,32.88], which is included in their convergence domain. The impact of the tail regions on the results in Table 3 is negligible.

For the numerical evaluation we changed the integration variable ρ\rho to z=logρz=\log\rho in both integrals in (85) and (86). The 2-dimensional integration was performed in Mathematica using NIntegrate with Method -> NewtonCotesRule and MaxIterations -> 100.
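As an illustration of this step, the one-dimensional normalization integral (86) can be reproduced with an off-the-shelf quadrature. The sketch below (Python/SciPy, not the Mathematica implementation used for the paper's numbers) uses only the truncated series branch of (87)-(88) with the Table 2 coefficients on the central range \rho\in[0.04,32.88], dropping the tail contributions, which are negligible at the small values of \tau considered here:

import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# truncated series (87)-(88), central branch only, Table 2 coefficients with k <= 6
dF = [np.pi**2/2 - 1, 1, 1, -2/15, 19/525, -22/2625, 4742/3031875]
dG = [1, 1/5, -1/70, -1/1050, 299/323400, -96917/525525000, -107749/10032750000]

def F_bar(rho):
    x = np.log(1/rho)
    return sum(c*x**k for k, c in enumerate(dF))

def G_bar(rho):
    x = np.log(1/rho)
    return np.sqrt(3)*sum(c*x**k for k, c in enumerate(dG))

def n_of_tau(tau, mu):
    # normalization integral (86); K_{-mu} = K_{mu} is scipy.special.kv
    def integrand(rho):
        return G_bar(rho)*kv(-mu, rho/tau)*np.exp(-(F_bar(rho) - np.pi**2/2)/tau)/rho
    val, _ = quad(integrand, 0.04, 32.88, points=[1.0], limit=200)
    return np.exp(-0.5*mu**2*tau)*val/(np.pi*tau)

# Scenario 1 of Table 3 (tau=0.0025, mu=3): the result should be close to 1
# (the tabulated value is n(tau)=1.00004).
print(n_of_tau(0.0025, 3.0))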

The results in Table 3 show good agreement with the precise benchmark values of [20], the difference being less than 1% in all cases. For most scenarios, the impact of including terms beyond the quadratic order in the joint distribution (4) is larger than the impact of varying the volatility parameter by \Delta\sigma\sim 0.2\%, which makes these higher-order corrections relevant in practical applications, where \sigma is typically known only with a precision of this order of magnitude.

Acknowledgments. We thank two anonymous referees for helpful comments and suggestions. The research of PN was partially sponsored by NSF DMS 1952876 and the Charles Simonyi Endowment at the Institute for Advanced Study, Princeton, NJ.

References

  • [1] P. Barrieu, A. Rouault and M. Yor, A study of the Hartman-Watson distribution motivated by numerical problems related to the pricing of Asian options, J. Appl. Probab. 41, 1049-1058 (2004).
  • [2] R. Brignone, Moment based approximations for arithmetic averages with applications in derivatives pricing, credit risk and Monte Carlo simulation, PhD thesis, University of Milano-Bicocca, 2019.
  • [3] N. Cai, Y. Song and N. Chen, Exact simulation of the SABR model, Operations Research 65(4), 931-951 (2017).
  • [4] D. Dufresne, The integral of geometric Brownian motion, Adv. Appl. Probab. 33, 223-241 (2001)
  • [5] D. Dufresne, The log-normal approximation in financial and other computations, Adv. Appl. Probab. 36, 747-773 (2004)
  • [6] D. Dufresne and H.-B. Li, Pricing Asian options: Convergence of Gram-Charlier series, Actuarial Research Clearing House 2016.2
  • [7] P. Flajolet and A. Odlyzko, Singularity analysis of generating functions, SIAM J. Disc. Math. 3(2), 216-240 (1990)
  • [8] P. Flajolet and R. Sedgewick, Analytic combinatorics, Cambridge University Press, Cambridge, 2008
  • [9] M. Forde and A. Jacquier, Small-time asymptotics for the implied volatility under the Heston model, IJTAF 12(6), 861-876 (2009)
  • [10] M. Fu, D. Madan and T. Wang, Pricing continuous time Asian options: a comparison of Monte Carlo and Laplace transform inversion methods, J. Comput. Finance 2, 49-74 (1998)
  • [11] J. Gatheral, The volatility surface, Wiley, 2006.
  • [12] H. Geman and M. Yor, Bessel processes, Asian options and perpetuities, Mathematical Finance 3, 349-375 (1993)
  • [13] S. Gerhold, The Hartman-Watson distribution revisited: Asymptotics for pricing Asian options, J. Appl. Probab. 48, 892-899 (2011).
  • [14] P. Hartman and G.S. Watson, ”Normal” distribution functions on spheres and the modified Bessel functions, Ann. Probab. 2, 593-607 (1974)
  • [15] J. Hull and A. White, The pricing of options on assets with stochastic volatilities, Journal of Finance 42(2), 281-300 (1987)
  • [16] B.Y. Levin, Lectures on Entire Functions, Translations of Mathematical Monographs 150, AMS (1996).
  • [17] E. Levy, Pricing European average rate currency options, J. Internat. Money Finance 11, 474-491 (1992)
  • [18] A. Lewis, Option valuation under stochastic volatility, Finance Press, 2000.
  • [19] A. Lewis, Option valuation under stochastic volatility, vol. 2. Finance Press, 2016.
  • [20] V. Linetsky, Spectral expansions for Asian (Average price) options, Operations Research 52, 856-867 (2004)
  • [21] H. Matsumoto and M. Yor, Exponential functionals of Brownian motion: I. Probability laws at fixed time, Prob. Surveys 2, 312-347 (2005)
  • [22] W.A. McGhee, “An Efficient Implementation of Stochastic Volatility by the Method of Conditional Integration with Application to the exponential Ornstein-Uhlenbeck stochastic volatility and SABR models”, RBS Internal Paper, December 2010.
  • [23] W. A. McGhee. “An Efficient Implementation of Stochastic Volatility by the Method of Conditional Integration”. ICBI Global Derivatives and Risk Management, April 2011.
  • [24] P.M. Morse and H. Feshbach, Methods of Theoretical Physics, Part I. New York: McGraw-Hill, 1953.
  • [25] R. Nevanlinna, Analytic functions, Springer, Berlin-Heidelberg - New York, 1970.
  • [26] D. Pirjol and L. Zhu, Short maturity Asian options in the local volatility model, SIAM J.  Financial Math. 7(1), 947-992 (2016); arXiv:1609.07559[q-fin.PR]
  • [27] D. Pirjol, J. Wang and L. Zhu, Short maturity forward start Asian options in the local volatility model, Applied Mathematical Finance 26(3), 187-221 (2019); arxiv:1710.03160[q-fin.PR]
  • [28] D. Pirjol, Asymptotic expansion for the Hartman-Watson distribution, Methodology and Computing in Applied Probability (2020), https://doi.org/10.1007/s11009-020-09827-5; arXiv:2001.09579[math.PR]
  • [29] D. Pirjol and L. Zhu, Asymptotics of the time-discretized log-normal SABR model: The implied volatility surface, Probability in Engineering and Computational Sciences (2020); arXiv:2001.09850[q-fin.MF]
  • [30] B. Simon, A Comprehensive Course in Analysis, II.A Basic Complex Analysis, American Mathematical Society 2017.
  • [31] E.M. Stein and R. Shakarchi, Complex Analysis, Princeton Lectures in Analysis II, Princeton University Press, 2003
  • [32] E.T. Whittaker and G.N. Watson, A course of modern analysis, Cambridge University Press, Fourth Edition 1964
  • [33] G.A. Willard, Calculating prices and sensitivities for path-independent derivative securities in multifactor models, Journal of Derivatives 5(1), 54-61 (1997)
  • [34] M. Yor, On some exponential functionals of the Brownian motion, J. Appl. Prob. 24(3), 509-531 (1992)