This paper was converted on www.awesomepapers.org from LaTeX by an anonymous user.
Want to know more? Visit the Converter page.

Arbitrary order of convergence for Riesz fractional derivative via central difference method

P.H. Lam [email protected] H.C. So [email protected] C.F. Chan Department of Electrical Engineering, City University of Hong Kong, Kowloon, Hong Kong SAR
Abstract

We propose a novel method to compute a finite difference stencil for Riesz derivative for artibitrary speed of convergence. This method is based on applying a pre-filter to the Grünwald-Letnikov type central difference stencil. The filter is obtained by solving for the inverse of a symmetric Vandemonde matrix and exploiting the relationship between the Taylor’s series coefficients and fast Fourier transform. The filter costs O(N2)O\left(N^{2}\right) operations to evaluate for O(hN)O\left(h^{N}\right) of convergence, where hh is the sampling distance. The higher convergence speed should more than offset the overhead with the requirement of the number of nodal points for a desired error tolerance significantly reduced. The benefit of progressive generation of the stencil coefficients for adaptive grid size for dynamic problems with the Grünwald-Letnikov type difference scheme is also kept because of the application of filtering. The higher convergence rate is verified through numerical experiments.

keywords:
Non-local operator, Riesz derivative, Finite difference
journal: Journal of Computational Mathematics

1 Introduction

Riesz derivative is the symmetrical version of the fractional Riemann–Liouville derivative. Fractional calculus is an important topic because fractional derivatives can account for non-local effect, which is present in many physics problems. For example, the Riesz derivative has been applied to model wave propagation in viscoelastic material and fractional electrical impedance in human skin [1, 2]. One of the most efficient methods to approximate spatial fractional derivatives is based on Grünwald-Letnikov derivatives. The development began with the derivation of the central difference scheme to approximate the Riesz potential in [3]. The order of accuracy was found to be of second order later in [4]. In [5, 6], the order of convergence was increased to 4 by applying another finite difference on the operator such that the second order term in the Maclaurin series is eliminated. Because the matrix system resulting from the fractional difference operation is not sparse, there is practically no additional cost in using a highest order approximation in a dynamic problem aside from the potential overhead in the evaluation of the coefficients for the operator. This motivates us to study methods to increase the order of approximation to an arbitrary order so that the convergence is faster, leading to reduced number of nodes required to achieve accurate results. In this paper, we propose a prefilter approach to generalise the canceling of error terms in [5, 6]. The result is a filter which maximizes the flatness of the fractional power of sinc function at the origin.

In order to understand our approach, we first discuss the development of higher order finite difference approximation using the Grünwald-Letnikov derivatives for Riesz derivative. Starting from the very basic, Grünwald-Letnikov derivatives are obtained by applying fundamental theorem of calculus recursively as

f(x)\displaystyle f^{\prime}\left(x\right) =limh0f(x+h)f(x)h,\displaystyle=\lim_{h\rightarrow 0}\frac{f\left(x+h\right)-f\left(x\right)}{h}, (1)
f′′(x)\displaystyle f^{\prime\prime}\left(x\right) =limh0f(x+h)f(x)h,\displaystyle=\lim_{h\rightarrow 0}\frac{f^{\prime}\left(x+h\right)-f^{\prime}\left(x\right)}{h},
=limh0f(x+2h)2f(x+h)+f(x)h2,\displaystyle=\lim_{h\rightarrow 0}\frac{f\left(x+2h\right)-2f\left(x+h\right)+f\left(x\right)}{h^{2}}, (2)
f′′(x)\displaystyle f^{\prime\prime}\left(x\right) =limh0f(x+2h)2f(x+h)+f(x)h2,\displaystyle=\lim_{h\rightarrow 0}\frac{f^{\prime}\left(x+2h\right)-2f^{\prime}\left(x+h\right)+f^{\prime}\left(x\right)}{h^{2}},
=limh0f(x+3h)3f(x+2h)+3f(x+h)f(x)h3,\displaystyle=\lim_{h\rightarrow 0}\frac{f\left(x+3h\right)-3f\left(x+2h\right)+3f\left(x+h\right)-f\left(x\right)}{h^{3}}, (3)
\displaystyle\vdots

and so on. Shifting the difference to the centre, we have

f(x)\displaystyle f^{\prime}\left(x\right) =limh0f(x+h2)f(xh2)h,\displaystyle=\lim_{h\rightarrow 0}\frac{f\left(x+\frac{h}{2}\right)-f\left(x-\frac{h}{2}\right)}{h}, (4)
f′′(x)\displaystyle f^{\prime\prime}\left(x\right) =limh0f(x+h2)f(xh2)h\displaystyle=\lim_{h\rightarrow 0}\frac{f^{\prime}\left(x+\frac{h}{2}\right)-f^{\prime}\left(x-\frac{h}{2}\right)}{h}
=limh0f(x+h)2f(x)+f(xh)h2,\displaystyle=\lim_{h\rightarrow 0}\frac{f\left(x+h\right)-2f\left(x\right)+f\left(x-h\right)}{h^{2}}, (5)
f′′′(x)\displaystyle f^{\prime\prime\prime}\left(x\right) =limh0f(x+h)2f(x)+f(xh)h2\displaystyle=\lim_{h\rightarrow 0}\frac{f^{\prime}\left(x+h\right)-2f^{\prime}\left(x\right)+f^{\prime}\left(x-h\right)}{h^{2}}
=limh0f(x+3h2)3f(x+h2)+3f(xh2)f(x3h2)h3,\displaystyle=\lim_{h\rightarrow 0}\frac{f\left(x+\frac{3h}{2}\right)-3f\left(x+\frac{h}{2}\right)+3f\left(x-\frac{h}{2}\right)-f\left(x-\frac{3h}{2}\right)}{h^{3}}, (6)
\displaystyle\vdots

Without the limits, the first Grünwald-Letnikov derivative is equivalent to first order accurate finite difference. Because of this, the subsequent derivatives are also first order accurate while the central difference version has second order of accuracy.

Grünwald-Letnikov derivatives of a function can be written as a discrete convolution of the function evaluated at the discrete points with a stencil. For the central difference, the derivatives can be expressed as

Dh1f\displaystyle\mathrm{D}_{h}^{1}f =S12{f}1h[11],\displaystyle=\mathrm{S}_{\frac{1}{2}}\left\{f\right\}\ast\frac{1}{h}\begin{bmatrix}1&-1\end{bmatrix}, (7)
Dh2f\displaystyle\mathrm{D}_{h}^{2}f =Δh1{Dh1f}\displaystyle=\Delta_{h}^{1}\left\{\mathrm{D}_{h}^{1}f\right\}
=S1{f}1h[11]1h[11],\displaystyle=\mathrm{S}_{1}\left\{f\right\}\ast\frac{1}{h}\begin{bmatrix}1&-1\end{bmatrix}\ast\frac{1}{h}\begin{bmatrix}1&-1\end{bmatrix}, (8)
\displaystyle\vdots

where the discrete convolution operator is defined as (fg)[n]mf[nm]g[m]\left(f\ast g\right)\left[n\right]\triangleq\sum_{m}f\left[n-m\right]g\left[m\right], f[n]=f(nh)f\left[n\right]=f\left(nh\right), St\mathrm{S}_{t} is the shift operator defined as St{f}[n]=f[n+t]\mathrm{S}_{t}\left\{f\right\}\left[n\right]=f\left[n+t\right], and the array is 0 indexed. Now, let HkH_{k} be the stencil of kk-th derivative. Then the kk-th derivative can be written as

Dhkf=1hk(Sk2{f}Hk),\mathrm{D}_{h}^{k}f=\frac{1}{h^{k}}\left(S_{\frac{k}{2}}\left\{f\right\}\ast H_{k}\right), (9)

with the recursive relationship

Hk\displaystyle H_{k} =Hk1[11],\displaystyle=H_{k-1}\ast\begin{bmatrix}1&-1\end{bmatrix}, (10)
H1\displaystyle H_{1} =[11].\displaystyle=\begin{bmatrix}1&-1\end{bmatrix}. (11)

Applying discrete time Fourier transform to HkH_{k}, we have

{Hk}\displaystyle\mathcal{F}\left\{H_{k}\right\} =(1eiω)k=m=0k(km)(1)meiωm.\displaystyle=\left(1-\mathrm{e}^{-\mathrm{i}\omega}\right)^{k}=\sum_{m=0}^{k}\binom{k}{m}\left(-1\right)^{m}\mathrm{e}^{-\mathrm{i}\omega m}. (12)

This leads to the solution {Hk}m=(1)m(km)\{H_{k}\}_{m}=\left(-1\right)^{m}\binom{k}{m}. To understand the error convergence, we replace mm with mh2m\frac{h}{2} in (12) to obtain the Fourier transform of the kk-th derivative of ff

{Dhkf}\displaystyle\mathcal{F}\left\{\mathrm{D}_{h}^{k}f\right\} =(eiωh2eiωh2)kf~(ω)hk=(iωsinc(ωh2))kf~(ω).\displaystyle=\left(\mathrm{e}^{\mathrm{i}\frac{\omega h}{2}}-\mathrm{e}^{-\mathrm{i}\frac{\omega h}{2}}\right)^{k}\frac{\tilde{f}\left(\omega\right)}{h^{k}}=\left(\mathrm{i}\omega\operatorname{\mathrm{sinc}}\left(\frac{\omega h}{2}\right)\right)^{k}\tilde{f}\left(\omega\right). (13)

Since sinc(x)\operatorname{\mathrm{sinc}}\left(x\right) can be expanded as a Maclaurin series as 1+O(x2)1+O\left(x^{2}\right) and it would converge for xx from π-\pi to π\pi, this approximation is second order accurate, assuming ff is sufficiently smooth such that the magnitude of its frequency response drops significantly below the difference between 1 and the sinc function in its main lobe.

The binomial series (12) can be generalised to non-integer kk for fractional derivatives, as seen in [3, 7], because the magnitude of the variable in the series is not bigger than 1. Consider the Riesz derivative defined as

αf(x)|x|αCα(Dxαa+xDbα)f(x),\frac{\partial^{\alpha}f\left(x\right)}{\partial\left|x\right|^{\alpha}}\triangleq C_{\alpha}\left({}_{a}D_{x}^{\alpha}+_{x}D_{b}^{\alpha}\right)f\left(x\right), (14)

where f(x)=0f\left(x\right)=0 everywhere except x[a,b]x\in\left[a,b\right], Cα=12cosπα2C_{\alpha}=-\frac{1}{2\cos\frac{\pi\alpha}{2}}, and the left and right Riemann-Liouville fractional derivatives are defined as

Dxαaf(x){}_{a}D_{x}^{\alpha}f\left(x\right) 1Γ(mα)mxmaxf(ξ)(xξ)α+1mdξ,\displaystyle\triangleq\frac{1}{\Gamma\left(m-\alpha\right)}\frac{\partial^{m}}{\partial x^{m}}\int_{a}^{x}\frac{f\left(\xi\right)}{\left(x-\xi\right)^{\alpha+1-m}}\,\mathrm{d}\xi, (15)
Dbαxf(x){}_{x}D_{b}^{\alpha}f\left(x\right) 1Γ(mα)mxmxbf(ξ)(ξx)α+1mdξ,\displaystyle\triangleq\frac{1}{\Gamma\left(m-\alpha\right)}\frac{\partial^{m}}{\partial x^{m}}\int_{x}^{b}\frac{f\left(\xi\right)}{\left(\xi-x\right)^{\alpha+1-m}}\,\mathrm{d}\xi, (16)

where m=αm=\left\lceil\alpha\right\rceil. Since ff can be scaled by introducing dimensionless variables, we consider a=0a=0, b=1b=1 for the Riesz derivative throughout the paper. The Riesz derivative has the frequency response |ω|α-\left|\omega\right|^{\alpha}. Therefore, for a stencil which approximates the Riesz derivative to second order of accuracy, we seek a solution of the inverse transform of |ωsinc(ωh2)|α\left|\omega\operatorname{\mathrm{sinc}}\left(\frac{\omega h}{2}\right)\right|^{\alpha}. Let the fractional convolution kernel be KαK_{\alpha} such that

{Kα}=|ωsinc(ωh2)|α.\mathcal{F}\left\{K_{\alpha}\right\}=\left|\omega\operatorname{\mathrm{sinc}}\left(\frac{\omega h}{2}\right)\right|^{\alpha}. (17)

Then the solution is the Fourier series coefficient of its transform given by

Kα(|n|h)=2αh2πhα02πhsinα(ωh2)eiω|n|hdω.K_{\alpha}\left(\left|n\right|h\right)=\frac{2^{\alpha}h}{2\pi h^{\alpha}}\int_{0}^{\frac{2\pi}{h}}\sin^{\alpha}\left(\frac{\omega h}{2}\right)\mathrm{e}^{-\mathrm{i}\omega\left|n\right|h}\,\mathrm{d}\omega. (18)

The coefficients are symmetrical because the frequency response is real and symmetric about the origin. Therefore, it does not matter whether the exponent is negative or not but the negative choice will be apparent later. Owning to the binomial series and Cauchy residue theorem, each biomial coefficent can be expressed in the integral form as

(ab)=Res((1+z)azb+1,0)=12πππ(1+eiω)aeiωbdω,\binom{a}{b}=\mathrm{Res}\left(\frac{\left(1+z\right)^{a}}{z^{b+1}},0\right)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\left(1+\mathrm{e}^{\mathrm{i}\omega}\right)^{a}\mathrm{e}^{-\mathrm{i}\omega b}\,\mathrm{d}\omega, (19)

for any real aa and b>1b>-1. Eq. (18) can be rewritten in the same form to arrive at the solution as

Kα(|n|h)\displaystyle K_{\alpha}\left(\left|n\right|h\right) =(1)n2πhαππ(1+eiω)αeiω(α2+n)dω=(1)nhα(αα2+n).\displaystyle=\frac{\left(-1\right)^{n}}{2\pi h^{\alpha}}\int_{-\pi}^{\pi}\left(1+\mathrm{e}^{\mathrm{i}\omega}\right)^{\alpha}\mathrm{e}^{-\mathrm{i}\omega\left(\frac{\alpha}{2}+n\right)}\,\mathrm{d}\omega=\frac{\left(-1\right)^{n}}{h^{\alpha}}\binom{\alpha}{\frac{\alpha}{2}+n}. (20)

This method to solve binomial related problems is known as the Egorychev method[8]. Therefore, to increase the convergence order, we propose a filter with a frequency response GαG_{\alpha} such that GαKαG_{\alpha}\cdot K_{\alpha} now has a response of |ω|α(1+O(hN))\left|\omega\right|^{\alpha}\left(1+O\left(h^{N}\right)\right), where NN is the order of convergence.

This paper is organized as follows. In Section 2, we start with presenting our method, which is then followed by the details of the algorithms. In Section 3, we discuss the convergence behaviour of the method, which shows that the convergence is indeed O(hN)O\left(h^{N}\right) subject to the smoothness of the function, and we comment on the filter’s positivity and the eigenvalue problem. Numerical results are presented in Section 4 to validate the error prediction in the previous section before the conclusion in the final section.

2 Method & Algorithm

In this section, we derive the method to reduce the error of approximation of the Riesz derivative and algorithms to compute the stencil for this purpose. As discussed previously, we seek a filter to improve the flatness of the sinc function in the neighbourhood of the origin such that resulting frequency response of the finite difference operator is closer to |ω|α-\left|\omega\right|^{\alpha}. We expand the fractional power of the sinc function in (18) as the Maclaurin series as

sinc(x2)α=1+n=1Nhanx2n+O(xN),\operatorname{\mathrm{sinc}}\left(\frac{x}{2}\right)^{\alpha}=1+\sum_{n=1}^{N_{h}}a_{n}x^{2n}+O\left(x^{N}\right), (21)

where NN is the choice of convergence order, Nh=N21N_{h}=\left\lceil\frac{N}{2}\right\rceil-1, an=1(2n)!dsinc(x2)αdx|x=0a_{n}=\frac{1}{\left(2n\right)!}\frac{\mathrm{d}\operatorname{\mathrm{sinc}}\left(\frac{x}{2}\right)^{\alpha}}{\mathrm{d}x}|_{x=0}. The odd coefficients are ignored because it is obvious that they are 0 for an even function. Because this series converges up to |x|<π\left|x\right|<\pi, we know that each higher order term is always smaller in magnitude than the respective 1 lower order term below the Nyquist frequency. Then it is clear that by cancelling out the lower order ones, we will improve the flatness of this function, provided that the filter does not raise the magnitude of the coefficients of the higher order terms. Now, let us consider only the problem of eliminating the lower order terms first. Let the filter be GαG_{\alpha} such that

{Gα}=G~α=g0α+2m=1Nhgmαcos(mx).\mathcal{F}\left\{G_{\alpha}\right\}=\tilde{G}_{\alpha}=g_{0}^{\alpha}+2\sum_{m=1}^{N_{h}}g_{m}^{\alpha}\cos\left(mx\right). (22)

The transform is a cosine series because the filter is obviously real symmetric, and there are no odd terms in the sinc expansion to cancel. Expanding each cos(nx)\cos\left(nx\right) as Maclaurin series, we have

cos(mωh)=1+n=1Nh1cn,mx2k+O(xN),\cos\left(m\omega h\right)=1+\sum_{n=1}^{N_{h}-1}c_{n,m}x^{2k}+O\left(x^{N}\right), (23)

where cn,m=(1)kn2k(2k)!.c_{n,m}=\frac{\left(-1\right)^{k}n^{2k}}{\left(2k\right)!}. Let bnb_{n} be the series coefficients of G~α\tilde{G}_{\alpha}. They can be expressed in matrix form as

[1110c1,1c1,Nh0cNh,1cNh,Nh][100020002][g0αg1αgNhα]=[1b1bNh].\begin{bmatrix}1&1&\cdots&1\\ 0&c_{1,1}&\cdots&c_{1,N_{h}}\\ \vdots&\vdots&\ddots&\vdots\\ 0&c_{N_{h},1}&\cdots&c_{N_{h},N_{h}}\end{bmatrix}\begin{bmatrix}1&0&\cdots&0\\ 0&2&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ 0&0&\cdots&2\end{bmatrix}\begin{bmatrix}g_{0}^{\alpha}\\ g_{1}^{\alpha}\\ \vdots\\ g_{N_{h}}^{\alpha}\end{bmatrix}=\begin{bmatrix}1\\ b_{1}\\ \vdots\\ b_{N_{h}}\end{bmatrix}. (24)

The coefficients of the series from the product of the two polynomials can be obtained through discrete convolution, and we set the coefficients for n>0n>0 up to 2Nh2N_{h} be 0 to eliminate the lower order terms. This can be expressed in matrix form as

[100a110aNhaNh11][1b1bNh]=[100]=𝐞0.\begin{bmatrix}1&0&\cdots&0\\ a_{1}&1&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ a_{N_{h}}&a_{N_{h}-1}&\cdots&1\end{bmatrix}\begin{bmatrix}1\\ b_{1}\\ \vdots\\ b_{N_{h}}\end{bmatrix}=\begin{bmatrix}1\\ 0\\ \vdots\\ 0\end{bmatrix}=\mathbf{e}_{0}. (25)

The matrix involving cn,mc_{n,m} is not convenient to solve but it gives us insight of the actual problem size. We rewrite it as a larger problem, in terms of the expansion of the exponential function, leading to

[(i)00!000(i)11!000(i)2Nh(2Nh)!]𝐕T({Nh,,Nh})[gNhαgNh1αg0αgNhα]=[10b10bNh],\begin{bmatrix}\frac{\left(-\mathrm{i}\right)^{0}}{0!}&0&\cdots&0\\ 0&\frac{\left(-\mathrm{i}\right)^{1}}{1!}&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ 0&0&\cdots&\frac{\left(-\mathrm{i}\right)^{2N_{h}}}{\left(2N_{h}\right)!}\end{bmatrix}\mathbf{V}^{\mathrm{T}}\left(\left\{-N_{h},\ldots,N_{h}\right\}\right)\begin{bmatrix}g_{N_{h}}^{\alpha}\\ g_{N_{h}-1}^{\alpha}\\ \vdots\\ g_{0}^{\alpha}\\ \vdots\\ g_{N_{h}}^{\alpha}\end{bmatrix}=\begin{bmatrix}1\\ 0\\ b_{1}\\ \vdots\\ 0\\ b_{N_{h}}\end{bmatrix}, (26)

where 𝐕({x0,x1,,xN})\mathbf{V}\left(\left\{x_{0},x_{1},\ldots,x_{N}\right\}\right) is the Vandermonde matrix defined as

𝐕({x0,x1,,xN})=[x00x01x0Nx10x11x1NxN0xN1xNN].\mathbf{V}\left(\left\{x_{0},x_{1},\ldots,x_{N}\right\}\right)=\begin{bmatrix}x_{0}^{0}&x_{0}^{1}&\cdots&x_{0}^{N}\\ x_{1}^{0}&x_{1}^{1}&\cdots&x_{1}^{N}\\ \vdots&\vdots&\cdots&\vdots\\ x_{N}^{0}&x_{N}^{1}&\cdots&x_{N}^{N}\end{bmatrix}. (27)

Let vi,jN={𝐕1({x0,x1,,xN})}i,jv_{i,j}^{N}=\left\{\mathbf{V}^{-1}\left(\left\{x_{0},x_{1},\ldots,x_{N}\right\}\right)\right\}_{i,j}, the inverse of the Vandermonde matrix can be solved using the following recurrence equations [9]

vi,kk\displaystyle v_{i,k}^{k} =n=0k2(xk1xn)n=0k1(xkxn)(vi1,k1k1xk1vi,k1k1),for 0<ik,\displaystyle=\frac{\prod_{n=0}^{k-2}\left(x_{k-1}-x_{n}\right)}{\prod_{n=0}^{k-1}\left(x_{k}-x_{n}\right)}\left(v_{i-1,k-1}^{k-1}-x_{k-1}v_{i,k-1}^{k-1}\right),\ \text{for }0<i\leq k, (28)
v0,kk\displaystyle v_{0,k}^{k} =n=0k2(xk1xn)n=0k1(xkxn)xk1v0,k1k1,\displaystyle=-\frac{\prod_{n=0}^{k-2}\left(x_{k-1}-x_{n}\right)}{\prod_{n=0}^{k-1}\left(x_{k}-x_{n}\right)}x_{k-1}v_{0,k-1}^{k-1}, (29)

and for 0j<k0\leq j<k

vi,jk\displaystyle v_{i,j}^{k} =xkvi,jk1vi1,jk1xkxj,for 0<ik,\displaystyle=\frac{x_{k}v_{i,j}^{k-1}-v_{i-1,j}^{k-1}}{x_{k}-x_{j}},\ \text{for }0<i\leq k, (30)
v0,jk\displaystyle v_{0,j}^{k} =xkxkxjv0,jk1.\displaystyle=\frac{x_{k}}{x_{k}-x_{j}}v_{0,j}^{k-1}. (31)

where kk is looped over k=0,,Nk=0,\ldots,N. All vi,j0v_{i,j}^{0} are 0 initialised except for v0,00=1v_{0,0}^{0}=1. However, for our special case, there are some properties which can be exploited for further optimisation. In this special case, kk is even, and xk2+n=xk2n=nx_{\frac{k}{2}+n}=x_{\frac{k}{2}-n}=n for n=0,,k2n=0,\,\ldots,\,\frac{k}{2}. Since we are only concerned about even kk, we map kk2k\mapsto\frac{k}{2} and let the size grow 2 per step. To find the new recurrence relationship for each step, we write out the Lagrange polynomial basis as

Ljk(x)=(1)kjn=k,njk(xn)(k+j)!(kj)!=i=02kvi,N+jkxi.L_{j}^{k}\left(x\right)=\frac{\left(-1\right)^{k-j}\prod_{n=-k,\,n\neq j}^{k}\left(x-n\right)}{\left(k+j\right)!\left(k-j\right)!}=\sum_{i=0}^{2k}v_{i,N+j}^{k}x^{i}. (32)

Each basis has the following symmetry

Ljk(x)=Ljk(x)=i=02kvi,Njk(1)ixi,L_{j}^{k}\left(x\right)=L_{-j}^{k}\left(-x\right)=\sum_{i=0}^{2k}v_{i,N-j}^{k}\left(-1\right)^{i}x^{i}, (33)

which leads to v2i,N+jk=v2i,Njkv_{2i,N+j}^{k}=v_{2i,N-j}^{k}. This relationship also arises from the fact that gjg_{j} is symmetric. Therefore, we need not consider the left half of the Vandermonde matrix inverse, and we can shift the indexing to the left by NN.

The second property is

v2i1,jkv2i,jk=j,for i>0, 0jk,\frac{v_{2i-1,j}^{k}}{v_{2i,j}^{k}}=j,\ \text{for }i>0,\,0\leq j\leq k, (34)

so the odd terms are not required for the evaluation of future coefficients. To prove this, we write out the recurrence relationship for this special case first. For j=kj=k, we have

Lkk(x)=(x+k)(xk+1)(2k)(2k1)Lk1k1(x),L_{k}^{k}\left(x\right)=\frac{\left(x+k\right)\left(x-k+1\right)}{\left(2k\right)\left(2k-1\right)}L_{k-1}^{k-1}\left(x\right), (35)

which leads to the recurrence relationship of the coefficient after substituting in (32),

vi,kk\displaystyle v_{i,k}^{k} =vi2,k1k1+vi1,k1k1k(k1)vi,k1k1(2k)(2k1),for i=2,,2k\displaystyle=\frac{v_{i-2,k-1}^{k-1}+v_{i-1,k-1}^{k-1}-k\left(k-1\right)v_{i,k-1}^{k-1}}{\left(2k\right)\left(2k-1\right)},\ \text{for }i=2,\ldots,2k (36)
v1,kk\displaystyle v_{1,k}^{k} =v0,k1k1k(k1)v1,k1k1(2k)(2k1),v0,kk=0,for k>0,\displaystyle=\frac{v_{0,k-1}^{k-1}-k\left(k-1\right)v_{1,k-1}^{k-1}}{\left(2k\right)\left(2k-1\right)},\ v_{0,k}^{k}=0,\ \text{for }k>0, (37)

and for j<kj<k,

Ljk(x)=x2k2j2k2Ljk1(x1),L_{j}^{k}\left(x\right)=\frac{x^{2}-k^{2}}{j^{2}-k^{2}}L_{j}^{k-1}\left(x-1\right), (38)

leads to

vi,jk\displaystyle v_{i,j}^{k} =k2vi,jk1vi2,jk1(k+j)(kj),for i=2,,2k,\displaystyle=\frac{k^{2}v_{i,j}^{k-1}-v_{i-2,j}^{k-1}}{\left(k+j\right)\left(k-j\right)},\ \text{for }i=2,\ldots,2k, (39)
vi,jk\displaystyle v_{i,j}^{k} =k2vi,jk1(k+j)(kj),for i=0,1.\displaystyle=\frac{k^{2}v_{i,j}^{k-1}}{\left(k+j\right)\left(k-j\right)},\ \text{for }i=0,1. (40)

One can verify that the property (34) is true for k=1k=1 using the recurrence relationships. We assume this is also true for other kk up to N1N-1. Then for k=Nk=N, j=Nj=N, we substitute (36) into (34), we get

v2i1,NNv2i,NN=v2i3,N1N1+v2i2,N1N1N(N1)v2i1,N1N1v2i2,N1N1+v2i1,N1N1N(N1)v2i,N1N1=N(v2i2,N1N1(N1)2v2i,N1N1)(v2i2,N1k1(N1)2v2i,N1N1)=N.\frac{v_{2i-1,N}^{N}}{v_{2i,N}^{N}}=\frac{v_{2i-3,N-1}^{N-1}+v_{2i-2,N-1}^{N-1}-N\left(N-1\right)v_{2i-1,N-1}^{N-1}}{v_{2i-2,N-1}^{N-1}+v_{2i-1,N-1}^{N-1}-N\left(N-1\right)v_{2i,N-1}^{N-1}}=\frac{N\left(v_{2i-2,N-1}^{N-1}-\left(N-1\right)^{2}v_{2i,N-1}^{N-1}\right)}{\left(v_{2i-2,N-1}^{k-1}-\left(N-1\right)^{2}v_{2i,N-1}^{N-1}\right)}=N. (41)

This is similarly proven by induction for the case j<Nj<N as such

v2i1,jNv2i,jN=k2v2i1,jN1v2i3,jN1k2v2i,jN1v2i2,jN1=j(k2v2i,jN1v2i2,jN1)(k2v2i,jN1v2i2,jN1)=j.\frac{v_{2i-1,j}^{N}}{v_{2i,j}^{N}}=\frac{k^{2}v_{2i-1,j}^{N-1}-v_{2i-3,j}^{N-1}}{k^{2}v_{2i,j}^{N-1}-v_{2i-2,j}^{N-1}}=\frac{j\left(k^{2}v_{2i,j}^{N-1}-v_{2i-2,j}^{N-1}\right)}{\left(k^{2}v_{2i,j}^{N-1}-v_{2i-2,j}^{N-1}\right)}=j. (42)

For the case i=1i=1, one can see it is also true because from (37) and (40), we have v0,jk=0v_{0,j}^{k}=0 for all j>0j>0 and v0,0k=1v_{0,0}^{k}=1. Therefore, by induction, (34) is true for all k>0k>0. All together, these vi,jNv_{i,j}^{N} at even ii and right half of the Vandermonde matrix inverse constitute the inverse of the matrix product involving cn,mc_{n,m} and the diagonal matrix on the left hand side of (24).

Next, we discuss the evaluation of aja_{j}, the derivatives of the fractional power of sinc function at 0. This function is a composite function and its derivatives are given by Faà di Bruno’s formula. However, it is not convenient for use because of the involvement of combinatorics. Instead, one can simply find the corresponding Maclaurin series coefficients. They may be efficiently recovered from the inverse transform of fractional power of the fast Fourier transform of the Maclaurin series coefficients of the sinc function given by

1(2k)!d2kdx2ksinc(x2)\displaystyle\frac{1}{\left(2k\right)!}\frac{\mathrm{d}^{2k}}{\mathrm{d}x^{2k}}\operatorname{\mathrm{sinc}}\left(\frac{x}{2}\right) |x=0=(1)k(2k+1)!122k,\displaystyle|_{x=0}=\frac{\left(-1\right)^{k}}{\left(2k+1\right)!}\frac{1}{2^{2k}}, (43)

while the odd cases are all 0. To prove this, one can simply apply Cauchy residue theorem again similar to (19) for the infinite case. And because the Maclaurin series converges about the unit circle for the sinc function, the result is valid. Then to prove that this is also true for the finite case, one just has to realise that the fractional number can be split up into an integer numerator pp and an integer denominator qq. Then there exists some transform coefficients such that their qq power gives the original transform coefficients because the integer power of transform coefficients is just convolution of the coefficients with themselves an integer number of times. Therefore, the same coefficients from the infinite case can be recovered using only the finite transform coefficients. Since we would already have the transform of those coefficients, (25) can be solved by directly applying the inverse transform to the reciprocal of the transform coefficients. All in all, let aˇn\check{a}_{n} be the 2n2n-th Maclaurin series coefficients of the sinc function given by (43), bnb_{n} in (25) are given by

a~m\displaystyle\tilde{a}_{m} =n=0Nhaˇneiωmn,\displaystyle=\sum_{n=0}^{N_{h}}\check{a}_{n}\mathrm{e}^{-\mathrm{i}\omega_{m}n}, (44)
bn\displaystyle b_{n} =12N+1m=02Nha~mαeiωmn=12N+1(a~0α+2m=1N{a~mαeiωmn}),\displaystyle=\frac{1}{2N+1}\sum_{m=0}^{2N_{h}}\tilde{a}_{m}^{\alpha}\mathrm{e}^{\mathrm{i}\omega_{m}n}=\frac{1}{2N+1}\left(\tilde{a}_{0}^{\alpha}+2\sum_{m=1}^{N}\Re\left\{\tilde{a}_{m}^{\alpha}\mathrm{e}^{\mathrm{i}\omega_{m}n}\right\}\right), (45)

where ωm=2πm2N+1\omega_{m}=\frac{2\pi m}{2N+1}.

To summarise, we provide the pseudo code to obtain the stencil for higher order central difference scheme for the Riesz derivative here. Algorithm 1 describes the procedure to invert the transpose of the matrix product on the left hand side of (24). The columns are scaled to better balance the scaling of the sinc Maclaurin series coefficients. Algorithm 2 outputs the right half of the stencil for NxN_{x} number of elements between 0 and 1. There are Nx1N_{x}-1 number of elements in the output because the derivatives at the end points are not needed in dynamic problems as the boundary condition is set there. Let 𝐤\mathbf{k} be the vector returned from Algorithm 2, a (Nx2)×(Nx2)\left(N_{x}-2\right)\times\left(N_{x}-2\right) Toeplitz matrix 𝐃\mathbf{D} can be constructed from 𝐤\mathbf{k} with {𝐃}n,m=\left\{\mathbf{D}\right\}_{n,m}={𝐤}|n+1m|\left\{\mathbf{k}\right\}_{\left|n+1-m\right|} to approximate the derivative as

αf(x)|x|α|x=nNx1m=1Nx2{𝐃}n1,m1f(xm)+{𝐤}nf(0)+{𝐤}Nx1nf(1)for n=1,,Nx2.\frac{\partial^{\alpha}f\left(x\right)}{\partial\left|x\right|^{\alpha}}|_{x=\frac{n}{N_{x}-1}}\sim\sum_{m=1}^{N_{x}-2}\left\{\mathbf{D}\right\}_{n-1,m-1}f\left(x_{m}\right)+\left\{\mathbf{k}\right\}_{n}f\left(0\right)+\left\{\mathbf{k}\right\}_{N_{x}-1-n}f\left(1\right)\ \text{for }n=1,\ldots,N_{x}-2. (46)

If adaptive grid size is required, one can save the last 2Nh2N_{h} elements of 𝐤\mathbf{k} before the convolution as well as the filter coefficients of 𝐠\mathbf{g} for the generation of elements in higher indices. The generation of the coefficients can then be resumed, after scaling the coefficients for the larger stencil, from line 14 of Algorithm 2.

1Input: NN
2 Initialise matrix {𝐕}n,m0\left\{\mathbf{V}\right\}_{n,m}\leftarrow 0 for n=0N,m=0n=0\,\ldots\,N,\ m\,\ldots\,=0, and then {𝐕}0,01\left\{\mathbf{V}\right\}_{0,0}\leftarrow 1
3 for k=1,,Nk=1,\,\ldots,\,N
4      {𝐕}n,k((2n){𝐕}n1,k1+(k1){𝐕}n,k1)/(2k(2k1))\left\{\mathbf{V}\right\}_{n,k}\leftarrow\left(\left(2n\right)\cdot\left\{\mathbf{V}\right\}_{n-1,k-1}+\left(k-1\right)\cdot\left\{\mathbf{V}\right\}_{n,k-1}\right)/\left(2k\left(2k-1\right)\right) for n=1kn=1\,\ldots\,k
5       {𝐕}n,k{𝐕}n,kk(k1)2k(2k1){𝐕}n,k1\left\{\mathbf{V}\right\}_{n,k}\leftarrow\left\{\mathbf{V}\right\}_{n,k}-\frac{k\left(k-1\right)}{2k\left(2k-1\right)}\cdot\left\{\mathbf{V}\right\}_{n,k-1} for n=1k1n=1\,\ldots\,k-1
6       {𝐕}k,m2k(km)(k+m){𝐕}k1,m\left\{\mathbf{V}\right\}_{k,m}\leftarrow-\frac{2k}{\left(k-m\right)\left(k+m\right)}\cdot\left\{\mathbf{V}\right\}_{k-1,m} for m=0k1m=0\,\ldots\,k-1
7       {𝐕}n,m(k2{𝐕}n,m(2n){𝐕}n1,m)/((km)(k+m))\left\{\mathbf{V}\right\}_{n,m}\leftarrow\left(k^{2}\cdot\left\{\mathbf{V}\right\}_{n,m}-\left(2n\right)\cdot\left\{\mathbf{V}\right\}_{n-1,m}\right)/\left(\left(k-m\right)\left(k+m\right)\right) for n=1k1n=1\,\ldots\,k-1, m=0k1m=0\,\ldots\,k-1
8
9endfor
10 return 𝐕\mathbf{V}
Algorithm 1 Inversion of integer symmetric Vandermonde matrix with each row nn scaled by k=1nk(k12)\prod_{k=1}^{n}k\left(k-\frac{1}{2}\right)
1 Input: α,Nh,Nx\alpha,N_{h},N_{x}
2 Evaluate aˇn\check{a}_{n} for n=0,,Nhn=0,\,\ldots,\,N_{h} using (43)
3 𝐚~\tilde{\mathbf{a}}\leftarrowFFT(𝐚ˇ)\left(\check{\mathbf{a}}\right)
4 𝐛IFFT(𝐚~α)\mathbf{b}\leftarrow\text{IFFT}\left(\tilde{\mathbf{a}}^{\alpha}\right), superscript for element-wise power
5 S1S\leftarrow-1
6 for n=0Nhn=0\,\ldots\,N_{h}
7      {𝐛}nS{𝐛}n1\left\{\mathbf{b}\right\}_{n}\leftarrow S\cdot\left\{\mathbf{b}\right\}_{n-1}
8       S4SS\leftarrow-4S
9
10endfor
11 𝐕\mathbf{V}\leftarrow Algorithm 1(Nh)\left(N_{h}\right)
12 𝐠𝐕𝐛\mathbf{g}\leftarrow\mathbf{V}\mathbf{b}
13 Initialize {𝐤}0=(Nx1)αΓ(α+1)Γ(α2+1)2\left\{\mathbf{k}\right\}_{0}=\left(N_{x}-1\right)^{\alpha}\frac{\Gamma\left(\alpha+1\right)}{\Gamma\left(\frac{\alpha}{2}+1\right)^{2}}
14 for k=1,,Nx+Nh2k=1,\,\ldots,\,N_{x}+N_{h}-2
15      {𝐤}kα2k+1α2+k{𝐤}k1\left\{\mathbf{k}\right\}_{k}\leftarrow-\frac{\frac{\alpha}{2}-k+1}{\frac{\alpha}{2}+k}\left\{\mathbf{k}\right\}_{k-1}
16
17endfor
18 𝐤𝐤mirrored𝐠\mathbf{k}\leftarrow\mathbf{k}\ast_{\text{mirrored}}\mathbf{g}, ‘mirrored’ implies the convolution has 𝐤\mathbf{k} and 𝐠\mathbf{g} mirrored to the left about position 0 as negative index elements
19 return 𝐤\mathbf{k} for elements at index 0,,Nx20,\,\ldots,\,N_{x}-2
Algorithm 2 Riesz derivative higher order central difference stencil

3 Theory

First, we study the error convergence. We follow a similar approach given in [5, 6]. A lemma for the smoothness requirement of the function is given before the error analysis. Because of this lemma, which can be considered an extension to the Riemann-Lebesgue lemma, the order requirement for derivative continuity is lower than the ones given in [5, 6].

Lemma 1.

There exists a constant C0C_{0} such that if a function f:f:\mathbb{R}\rightarrow\mathbb{R} is Cn\mathrm{C}^{n},

|{f}(ω)||ω|(n+2)C0.\left|\mathcal{F}\left\{f\right\}\left(\omega\right)\right|\leq\left|\omega\right|^{-\left(n+2\right)}C_{0}. (47)
Proof.

Since the kk-th derivative of ff is L1L^{1} integrable for k=1,,n+1k=1,\,\ldots,\,n+1, according to Riemann–Lebesgue lemma, {f(k)}(ω)0\mathcal{F}\left\{f^{\left(k\right)}\right\}\left(\omega\right)\rightarrow 0 as ω\omega\rightarrow\infty. Applying integration by parts to each Fourier transform of the derivatives of ff as

{f(k)}(ω)\displaystyle\mathcal{F}\left\{f^{\left(k\right)}\right\}\left(\omega\right) =f(k)(x)eiωxdx\displaystyle=\int_{\mathbb{R}}f^{\left(k\right)}\left(x\right)\mathrm{e}^{-\mathrm{i}\omega x}\,\mathrm{d}x
=1iω(f(k+1)(x)eiωxdxlima[f(k)(x)eiωx]aa).\displaystyle=\frac{1}{\mathrm{i}\omega}\left(\int_{\mathbb{R}}f^{\left(k+1\right)}\left(x\right)\mathrm{e}^{-\mathrm{i}\omega x}\,\mathrm{d}x-\lim_{a\rightarrow\infty}\left[f^{\left(k\right)}\left(x\right)\mathrm{e}^{-\mathrm{i}\omega x}\right]_{-a}^{a}\right). (48)

Clearly the boundary part vanishes for k=1,,n+1k=1,\,\ldots,\,n+1, so we have

|{f}(ω)|=|ω|(n+1)|{f(n+1)}(ω)|.\left|\mathcal{F}\left\{f\right\}\left(\omega\right)\right|=\left|\omega\right|^{-\left(n+1\right)}\left|\mathcal{F}\left\{f^{\left(n+1\right)}\right\}\left(\omega\right)\right|. (49)

However, if we integrate by parts once more, because f(n+1)f^{\left(n+1\right)} is not continuous, the integral is broken up into a finite set of continuous regions as

{f(n)}(ω)=1(iω)2(mbmbm+1f(n+2)(x)eiωxdx[f(n+1)(x)eiωx]bmbm+1),\mathcal{F}\left\{f^{\left(n\right)}\right\}\left(\omega\right)=\frac{1}{\left(\mathrm{i}\omega\right)^{2}}\left(\sum_{m}\int_{b_{m}}^{b_{m+1}}f^{\left(n+2\right)}\left(x\right)\mathrm{e}^{-\mathrm{i}\omega x}\,\mathrm{d}x-\left[f^{\left(n+1\right)}\left(x\right)\mathrm{e}^{-\mathrm{i}\omega x}\right]_{b_{m}}^{b_{m+1}}\right), (50)

where each of the boundary parts does not vanish. One can see that f(n+1)f^{\left(n+1\right)} is finite from the fact that f(n)f^{\left(n\right)} is continuous at every point. Moreover, each part of the integrals of f(n+2)f^{\left(n+2\right)} is also finite because all those parts are L1L^{1} integrals. As a result, such a constant C0C_{0} exists. ∎

Theorem 1.

Let {Df}n\left\{\mathrm{D}f\right\}_{n} be the approximation of Riesz derivative given by (46) at 0<xn<10<x_{n}<1. There exists a constant C0C_{0} such that if a function ff is CN\mathrm{C}^{N}, for 0<α<20<\alpha<2, the error of approximation is bounded as

|en|=|{Df}nαf|x|α|x=xn|C0hN.\left|e_{n}\right|=\left|\left\{\mathrm{D}f\right\}_{n}-\frac{\partial^{\alpha}f}{\partial\left|x\right|^{\alpha}}|_{x=x_{n}}\right|\leq C_{0}h^{N}. (51)
Proof.

Let f~\tilde{f} be the Fourier transform of ff. According to (21), the frequency response of Df\mathrm{D}f near the origin is given by

{nδ(xxn){Df}n}(ω)=|ω|α(1+O((ωh)N))f~.\mathcal{F}\left\{\sum_{n}\delta\left(x-x_{n}\right)\left\{\mathrm{D}f\right\}_{n}\right\}\left(\omega\right)=-\left|\omega\right|^{\alpha}\left(1+O\left(\left(\omega h\right)^{N}\right)\right)\tilde{f}. (52)

Then there exists a constant C1C_{1} such that the error of approximation in frequency domain

e~(ω)={Dfαf|x|α}(ω)\tilde{e}\left(\omega\right)=\mathcal{F}\left\{\mathrm{D}f-\frac{\partial^{\alpha}f}{\partial\left|x\right|^{\alpha}}\right\}\left(\omega\right) (53)

satisfies

|e~(ω)|\displaystyle\left|\tilde{e}\left(\omega\right)\right| C1|ω|α+NhN|f~(ω)|.\displaystyle\leq C_{1}\left|\omega\right|^{\alpha+N}h^{N}\left|\tilde{f}\left(\omega\right)\right|. (54)

Now we can apply Lemma 1, and plug (47) into (54) such that the right hand side now uniformly decays with respect to |ω|\left|\omega\right|. Inverse transform leads to (51). ∎

In Theorem 1, we have limited α\alpha to be a positive number smaller than 2, which is typical for physics problems, but it can also be generalised to higher order. The smoothness requirement means that this method does require 0 boundary condition. Because the algorithm and analysis is based on the expansion around the origin, the convergence order drops off as we approach Nyquist frequency. For demonstration, we plot the frequency response of the central difference operator divided by the response of Riesz operator given by sincα(x2)G~α(x)\operatorname{\mathrm{sinc}}^{\alpha}\left(\frac{x}{2}\right)\tilde{G}_{\alpha}\left(x\right) and the rate of convergence with respect to frequency given by

r(x)=log21sincα(x2)G~α(x)1sincα(x4)G~α(x2),r\left(x\right)=\log_{2}\frac{1-\operatorname{\mathrm{sinc}}^{\alpha}\left(\frac{x}{2}\right)\tilde{G}_{\alpha}\left(x\right)}{1-\operatorname{\mathrm{sinc}}^{\alpha}\left(\frac{x}{4}\right)\tilde{G}_{\alpha}\left(\frac{x}{2}\right)}, (55)

where G~α\tilde{G}_{\alpha} is given by (22), for x[0,π]x\in\left[0,\pi\right] and α=1.3\alpha=1.3. The plots are similar for various α\alpha, therefore we only plot them for a single value. We see in Figure 1(a) that the flatness of the curve around the origin is improved with increasing NN. From Figure 1(b), we observe that the higher the convergence order parameter NN is, the greater the drop in the convergence order with respect to the normalized angular frequency. This decrease will limit the convergence rate depending on the bandwidth of the function. Despite this, a higher NN still results in a greater convergence order as long as the bandwidth of the function is less than 2 times the Nyquist frequency.

Because the Vandermonde matrix can be decomposed into lower and upper triangular matrices without any zero elements in the diagonal entries [10], solutions exist for any NN. However, because the matrix system (25) does not give us control over derivatives beyond NN-th order, there is no guarantee that this method ensures that there is no overshoot in the resulting response and the response is always ‘flatter’ than the result of a lower order one. How close the bound (54) is depends on the constant C1C_{1}, related to the coefficient of (ωh)N\left(\omega h\right)^{N} in the expansion (52), which the formulation does not have explicit control. Nonetheless, numerical results show that there is improvement on the frequency response up to N=46N=46 without extra constraints imposed on the magnitude of higher order Maclaurin series coefficients. The proposed algorithm does become unstable at higher NN using double precision floating point.

The Riesz derivative is a negative definite operator, so 𝐠\mathbf{g} has to be positive definite for the correct results. We do not provide a proof for this but numerical results suggest |𝐠0|>n>0|𝐠n|\left|\mathbf{g}_{0}\right|>\sum_{n>0}\left|\mathbf{g}_{n}\right|, which is sufficient for the filter to be positive definite because of Gershgorin circle theorem. Therefore, positivity can be checked with minimal overhead during application of the method. Moreover, because the filter response is symmetric about the point π\pi, which also means that point is an extremum, and also because the filter should be increasing from the origin to cancel out the downward slope of the sinc function, we should expect that the filter has a response greater than 1 with a maximum at π\pi. As for the magnitude of this maximum, which leads to the upper bound of the eigenvalue of the Toeplitz matrix in (46), if we assume that 0<|sinc(x2)|αG~α(x)10<\left|\operatorname{\mathrm{sinc}}\left(\frac{x}{2}\right)\right|^{\alpha}\tilde{G}_{\alpha}\left(x\right)\leq 1 for all NN then the upper bound for the maxima is πα\pi^{\alpha} and so the magnitude of the eigenvalue of the Toeplitz matrix is bounded by (πNx)α\left(\pi N_{x}\right)^{\alpha}.

4 Numerical Experiment

To verify the error analysis, we apply the derivative operator to

u(x)={xq(1x)q,for x[0,1],0,otherwise.u\left(x\right)=\begin{cases}x^{q}\left(1-x\right)^{q},&\text{for }x\in\left[0,1\right],\\ 0,&\text{otherwise}.\end{cases} (56)

This function belongs to Cq1\mathrm{C}^{q-1} and its Riesz derivative is given in closed-form as

dαu(x)d|x|α=Cαn=0q(qn)Γ(q+n+1)Γ(q+n+1α)(xq+nα+(1x)q+nα).\frac{\mathrm{d}^{\alpha}u\left(x\right)}{\mathrm{d}\left|x\right|^{\alpha}}=C_{\alpha}\sum_{n=0}^{q}\binom{q}{n}\frac{\Gamma\left(q+n+1\right)}{\Gamma\left(q+n+1-\alpha\right)}\left(x^{q+n-\alpha}+\left(1-x\right)^{q+n-\alpha}\right). (57)

We define the error and convergence rate respectively as

EiN\displaystyle E_{i}^{N} =j=1Nmin1|{𝐮~N}NiNminjdαu(x)d|x|α|x=jNmin|,\displaystyle=\sum_{j=1}^{N_{\min}-1}\left|\left\{\mathbf{\tilde{u}}_{N}\right\}_{\frac{N_{i}}{N_{\min}}j}-\frac{\mathrm{d}^{\alpha}u\left(x\right)}{\mathrm{d}\left|x\right|^{\alpha}}|_{x=\frac{j}{N_{\min}}}\right|, (58)
RiN\displaystyle R_{i}^{N} =log2EiNEi+1N,\displaystyle=\log_{2}\frac{E_{i}^{N}}{E_{i+1}^{N}}, (59)

where 𝐮~N\mathbf{\tilde{u}}_{N} is the NN order accurate approximation of the Riesz derivative using (46), Nmin=11,N_{\min}=11,and Ni=2i(Ni1)+1N_{i}=2^{i}\left(N_{i}-1\right)+1 is the number of nodes. The results are tabulated in Tables 1-4. From the results for q=6q=6, we see that the approximation converges at a rate close to O(hN)O\left(h^{N}\right) for all tested NN even though the smoothness requirement is not always met. This can be explained by the bandwidth of the polynomial being within the Nyquist frequency and tapers sufficiently fast. This allows the error from within the Nyquist frequency region to dominate and so the error reduction is as expected. We do see that with smaller NiN_{i}, the convergence order is more suppressed with more of the main lobe of frequency response of uu being in the higher end region. For q=10q=10, we see that the results converge similarly with RiNR_{i}^{N} being even more limited at higher NN and lower NiN_{i}. This is because at higher qq, uu oscillates more and so the frequency response has a higher bandwidth. It appears that the error converges to a small value that is independent of the filter order and number of nodes but only dependent on α\alpha and qq. Therefore, it is unlikely that this convergence barrier is a result of the error from the region in higher frequencies than Nyquist frequency either because the operator response in those areas is lifted by the filter we introduced, which would translate into lower error for higher NN. We also cannot attribute this to the numerical errors in evaluating the stencil or the filter since the error would be multiplied by NiαN_{i}^{\alpha}. It is improbable that this error barrier will become a significant factor in application but it may be investigated in future work if problems arise.

Because of the unexpected convergence for the previous problem, we look at a problem with discontinunity at the boundary. This time we let u(x)=cos(2πfx)u\left(x\right)=\cos\left(2\pi fx\right), f=11f=11 with compact support in x[0,1]x\in\left[0,1\right]. The solution is given in closed-form as

dαcos(2πfx)d|x|α\displaystyle\frac{\mathrm{d}^{\alpha}\cos\left(2\pi fx\right)}{\mathrm{d}\left|x\right|^{\alpha}} =Cα(Dxα0f(x)+0Dxαf(x)|x=1x),\displaystyle=C_{\alpha}\left({}_{0}D_{x}^{\alpha}f\left(x\right)+_{0}D_{x}^{\alpha}f\left(x\right)|_{x=1-x}\right), (60)
Dxα0f(x){}_{0}D_{x}^{\alpha}f\left(x\right) =xα22απ((α2)(α1)1F~2(1;3α2,4α2;(πfx)2)\displaystyle=\frac{x^{-\alpha}}{2^{2-\alpha}\sqrt{\pi}}\left((\alpha-2)(\alpha-1)\,_{1}\tilde{F}_{2}\left(1;\frac{3-\alpha}{2},\frac{4-\alpha}{2};-\left(\pi fx\right)^{2}\right)\right.
+(2πfx)2(α52)1F~2(2;5α2,6α2;(πfx)2)+8(πfx)14F~2(3;7α2,8α2;(πfx)2)).\displaystyle\left.+\left(2\pi fx\right)^{2}(\alpha-\frac{5}{2})\,_{1}\tilde{F}_{2}\left(2;\frac{5-\alpha}{2},\frac{6-\alpha}{2};-\left(\pi fx\right)^{2}\right)+8\left(\pi fx\right)^{4}\,_{1}\tilde{F}_{2}\left(3;\frac{7-\alpha}{2},\frac{8-\alpha}{2};-\left(\pi fx\right)^{2}\right)\right). (61)

The numerical results are given in Table 5 and 6. The transform of the truncated cosine function is given by the sum of two sinc functions centered at ω=±2πf\omega=\pm 2\pi f. This means that when Ni<2fN_{i}<2f, the main lobes are beyond Nyquist frequency. Therefore, we can see that the convergence is much more limited at lower NiN_{i}. Convergence rate recovers once the main lobe goes within the Nyquist frequency region. However, due to the decay rate of the sinc function being at ω1\omega^{-1}, convergence is limited to O(Nx1)O\left(N_{x}^{-1}\right). We can conclude that for non-zero boundary condition, polynomial type approximation is more suitable as seen in [11, 12].

5 Conclusion

An efficient method to develop stencil from central difference for Riesz difference for even higher order of convergence than 4 is presented. Numerical results verify the greater convergence rates as predicted in Section 3. The error analysis is performed in the frequency domain, and because of the sinc function, Nyquist rate applies to the convergence of the method. There is an extra cost of O(N2)O\left(N^{2}\right) operations for the evaluation of the filter and O(NNx)O\left(NN_{x}\right) operations for the convolution. However, the higher convergence speed at O(NxN)O\left(N_{x}^{-N}\right) will significantly reduce the amount of nodal points required to achieve a particular error tolerance as demonstrated in numerical experiments such that the overhead will end up saving computational time and memory resources.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

The work described in this paper was fully supported by a grant from the City University of Hong Kong (Project No. 7004827).

Tables

ii\NN α=0.2\alpha=0.2 α=1.8\alpha=1.8
4 6 8 10 4 6 8 10
0 8.492e-07 2.28e-07 8.90e-08 4.697e-08 0.0006732 0.0002419 0.0001197 7.57e-05
1 5.639e-08 4.50e-09 5.939e-10 8.798e-11 5.104e-05 6.092e-06 9.398e-07 1.009e-07
2 3.573e-09 7.417e-11 2.269e-12 7.651e-14 3.323e-06 9.783e-08 1.983e-09 2.946e-10
3 2.225e-10 1.176e-12 1.088e-14 2.908e-15 2.046e-07 1.532e-09 8.309e-12 2.108e-13
4 1.351e-11 2.053e-14 2.966e-15 2.958e-15 1.147e-08 2.36e-11 1.428e-13 1.199e-13
5 7.416e-13 3.007e-15 2.959e-15 2.959e-15 3.772e-10 2.333e-13 1.388e-13 1.366e-13
Table 1: EdkE_{d}^{k} for q=6q=6
ii\NN α=0.2\alpha=0.2 α=1.8\alpha=1.8
4 6 8 10 4 6 8 10
0 3.91 5.66 7.23 9.06 3.72 5.31 6.99 9.55
1 3.98 5.92 8.03 10.2 3.94 5.96 8.89 8.42
2 4.01 5.98 7.70 4.72 4.02 6 7.90 10.4
3 4.04 5.84 1.88 -0.0247 4.16 6.02 5.86 0.814
4 4.19 2.77 0.00356 -0.000493 4.93 6.66 0.0403 -0.188
Table 2: RdkR_{d}^{k} for q=6q=6
ii\NN α=0.2\alpha=0.2 α=1.8\alpha=1.8
4 6 8 10 4 6 8 10
0 6.193e-09 2.707e-09 1.502e-09 9.186e-10 7.223e-06 3.866e-06 2.309e-06 1.488e-06
1 4.250e-10 5.900e-11 1.100e-11 2.509e-12 6.054e-07 1.070e-07 2.327e-08 6.374e-09
2 2.721e-11 9.958e-13 5.871e-14 2.793e-14 4.073e-08 1.962e-09 1.233e-10 1.042e-11
3 1.731e-12 4.026e-14 2.686e-14 2.674e-14 2.562e-09 3.119e-11 1.589e-12 1.414e-12
4 1.322e-13 2.695e-14 2.674e-14 2.674e-14 1.518e-10 1.719e-12 1.418e-12 1.419e-12
5 3.302e-14 2.674e-14 2.674e-14 2.674e-14 7.425e-12 1.418e-12 1.419e-12 1.419e-12
Table 3: EdkE_{d}^{k} for q=10q=10
ii\NN α=0.2\alpha=0.2 α=1.8\alpha=1.8
4 6 8 10 4 6 8 10
0 3.87 5.52 7.09 8.52 3.58 5.18 6.63 7.87
1 3.97 5.89 7.55 6.49 3.89 5.77 7.56 9.26
2 3.97 4.63 1.13 0.0626 3.99 5.97 6.28 2.88
3 3.71 0.579 0.00646 -3.55e-05 4.08 4.18 0.164 -0.00437
4 2.00 0.0110 -3.90e-06 -6.71e-08 4.35 0.278 -0.000345 -0.000128
Table 4: RdkR_{d}^{k} for q=10q=10
ii\NN α=1.2\alpha=1.2 α=1.8\alpha=1.8
4 6 8 10 4 6 8 10
0 831.9 832.2 832.4 832.6 11060 11060 11070 11070
1 345.2 305.3 281.9 266.4 5915 5314 4946 4695
2 38.98 15.37 7.111 3.835 757.6 296.4 127.5 58.70
3 3.380 1.014 0.9050 0.9107 58.04 7.690 2.094 1.514
4 0.5387 0.4642 0.4651 0.4651 4.367 0.8183 0.7831 0.7837
5 0.2328 0.2348 0.2348 0.2348 0.5690 0.3968 0.3971 0.3971
Table 5: EdkE_{d}^{k} for cos(2πfx)\cos\left(2\pi fx\right), f=11f=11
ii\NN α=1.2\alpha=1.2 α=1.8\alpha=1.8
4 6 8 10 4 6 8 10
0 1.27 1.45 1.56 1.64 0.903 1.06 1.16 1.24
1 3.15 4.31 5.31 6.12 2.96 4.16 5.28 6.32
2 3.53 3.92 2.97 2.07 3.71 5.27 5.93 5.28
3 2.65 1.13 0.960 0.969 3.73 3.23 1.42 0.950
4 1.21 0.983 0.986 0.986 2.94 1.04 0.980 0.981
Table 6: RdkR_{d}^{k} for cos(2πfx)\cos\left(2\pi fx\right), f=11f=11

Figures

Refer to caption
(a) Relative frequency response of the central difference operator against that of Riesz derivative
Refer to caption
(b) Spectral convergence rate given by (55)
Figure 1: Frequency response and spectral convergence rate of higher order fractional central difference method

References