
Application of the Adams-Bashforth-Moulton Method to the Numerical Study of Linear Fractional Oscillator Models

Parovik Roman [email protected] Institute of Cosmophysical Research and Radio Wave Propagation, Far Eastern Branch of the Russian Academy of Sciences, Russia.
Abstract

The paper presents a numerical analysis of a class of mathematical models of linear fractional oscillators given by the Cauchy problem for a differential equation with derivatives of fractional orders in the sense of Gerasimov-Caputo. A method based on an explicit nonlocal finite-difference scheme (ENFDS) and the Adams-Bashforth-Moulton (ABM) method are considered as tools for the numerical analysis. An analysis of the errors of the methods is carried out; it is shown that the ABM method is more accurate and converges to the exact solution faster than the ENFDS method.

I Introduction

Models of oscillatory systems (oscillators) are used in various fields of knowledge, from mechanics to economics and biology [1, 2, 3]. From the point of view of mathematics, these models are usually described by ordinary differential equations of the second order together with the corresponding initial conditions, i.e. a Cauchy problem is posed [1]. It should be noted that such mathematical models cannot take into account properties of the medium such as heredity (memory effects). This effect is characterized by the fact that the oscillating medium “remembers” the impact on it for a long time.

The first model of a hereditary oscillator was proposed by the Italian mathematician V. Volterra [4]. He suggested taking heredity into account in the linear oscillator model by means of an integro-differential equation with a difference kernel, which he later called the heredity, or memory, function. It should be noted that V. Volterra derived the total energy conservation law for this generalized oscillator, in whose formula an additional term appeared that is responsible for the dissipation of its energy. This important result was confirmed in subsequent works on the topic.

If we choose a power-law memory function, then, using the mathematical apparatus of fractional calculus, we can pass to other model equations that contain derivatives of fractional orders. In this case, the orders of the fractional derivatives, as shown by the results of [5, 6], are responsible for the intensity of energy dissipation and are related to the Q-factor of the oscillator. Oscillators described in this way are usually called fractional.

Research methods for mathematical models of fractional oscillators can be divided into exact and numerical. Exact methods include, for example, integral transformations [7] and decomposition methods [8]; numerical methods include the theory of finite-difference schemes [9] and variational-iterative methods [8].

In this paper, we carry out a numerical analysis of mathematical models of fractional linear oscillators (FLO) using elements of the theory of finite-difference schemes. As the methods of numerical analysis, we choose a method based on an explicit nonlocal finite-difference scheme studied in the author’s work [10] and the fractional ABM method investigated in [11, 12, 13]. The errors of the methods are analyzed using Runge’s rule.

II Some information from the theory of fractional calculus

Here we give the main definitions from the theory of fractional calculus; its aspects can be studied in more detail in the books [14, 15, 16].

Definition 1. The fractional Riemann-Liouville integral of order $\alpha$:

I_{0t}^{\alpha}x\left(\tau\right)=\frac{1}{\Gamma\left(\alpha\right)}\int\limits_{0}^{t}{\frac{x\left(\tau\right)d\tau}{\left(t-\tau\right)^{1-\alpha}}},\quad\alpha>0,\;t>0,   (1)

here $\Gamma\left(\cdot\right)$ is the gamma function.

The operator (1) has the following properties:

I_{0t}^{0}x\left(\tau\right)=x\left(t\right),\quad I_{0t}^{\alpha}I_{0t}^{\beta}x\left(\tau\right)=I_{0t}^{\alpha+\beta}x\left(\tau\right),\quad I_{0t}^{\alpha}I_{0t}^{\beta}x\left(\tau\right)=I_{0t}^{\beta}I_{0t}^{\alpha}x\left(\tau\right).

Definition 2. The fractional Gerasimov-Caputo derivative of order $\alpha$ has the form:

\partial_{0t}^{\alpha}x\left(\tau\right)=\left\{\begin{array}{ll}\dfrac{1}{\Gamma\left(m-\alpha\right)}\int\limits_{0}^{t}{\dfrac{x^{(m)}\left(\tau\right)d\tau}{\left(t-\tau\right)^{\alpha+1-m}}}, & 0\leq m-1<\alpha<m,\\[2mm] \dfrac{d^{m}x\left(t\right)}{dt^{m}}, & \alpha=m\in\mathbb{N}.\end{array}\right.   (2)

The operator (2) has the following properties [14]:

\partial_{0t}^{\alpha}I_{0t}^{\alpha}x\left(\tau\right)=x\left(t\right),\quad I_{0t}^{\alpha}\partial_{0t}^{\alpha}x\left(\tau\right)=x\left(t\right)-\sum\limits_{k=0}^{m-1}{\frac{x^{\left(k\right)}\left(0\right)t^{k}}{k!}},\quad t>0.
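As a quick illustration of the second property (not from the original text, but easy to verify): for $x\left(t\right)=t$ and $0<\alpha<1$ we have $\partial_{0t}^{\alpha}t=\dfrac{t^{1-\alpha}}{\Gamma\left(2-\alpha\right)}$, and applying (1) with the power rule $I_{0t}^{\alpha}t^{\nu}=\dfrac{\Gamma\left(\nu+1\right)}{\Gamma\left(\nu+1+\alpha\right)}t^{\nu+\alpha}$ gives

I_{0t}^{\alpha}\partial_{0t}^{\alpha}t=\frac{1}{\Gamma\left(2-\alpha\right)}\cdot\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(2\right)}\,t=t=x\left(t\right)-x\left(0\right),

in agreement with the sum on the right-hand side, which here reduces to the single term $x\left(0\right)=0$.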

III Statement of the problem

Consider the following Cauchy problem:

\partial_{0t}^{\beta}x\left(\tau\right)+\lambda\partial_{0t}^{\gamma}x\left(\tau\right)+\omega\left(t\right)x\left(t\right)=f\left(t\right),\quad x\left(0\right)=\alpha_{1},\;\dot{x}\left(0\right)=\alpha_{2},   (3)

where $x\left(t\right)\in C^{2}\left[0,T\right]$ is the solution (displacement) function; $t\in\left[0,T\right]$ is the time coordinate; $T>0$ is a constant, the simulation time; $\omega\left(t\right)$ is the function responsible for the frequency of free oscillations, which determines the type of FLO; $\lambda$ is the friction coefficient; $f\left(t\right)$ is the function responsible for the external influence; $\alpha_{1},\alpha_{2}$ are given constants defining the initial conditions; the fractional differentiation operators are understood in the Gerasimov-Caputo sense (2) with orders $1<\beta<2$ and $0<\gamma<1$.

Remark 1. The Cauchy problem (3) describes a wide class of FLO and in the case $\beta=2$, $\gamma=1$ reduces to the class of ordinary linear oscillators [1].

Definition 3. The Cauchy problem (3) will be called a fractional linear oscillator (FLO) model.

IV Methods of solution

Consider two numerical methods for solving the Cauchy problem (3): the ENFDS and the ABM method. The ABM method combines the explicit Adams-Bashforth method with the implicit Adams-Moulton method.

Explicit nonlocal finite-difference scheme. Let $x\left(t\right)\in C^{3}\left[0,T\right]$ to ensure the required smoothness. Divide the time interval $\left[0,T\right]$ into $N$ equal parts with a constant step $\tau=T/N$. The solution function $x\left(t\right)$ passes to the grid function $x\left(t_{k}\right)=x_{k}$, $k=1,\ldots,N-1$.

The Gerasimov-Caputo operators in the equation (3) are approximated as follows. For the second operator from (3):

\partial_{0t}^{\gamma}x\left(\tau\right)=\frac{1}{\Gamma\left(1-\gamma\right)}\int\limits_{0}^{t}{\frac{\dot{x}\left(\tau\right)d\tau}{\left(t-\tau\right)^{\gamma}}}\approx\frac{1}{\Gamma\left(1-\gamma\right)}\sum\limits_{j=0}^{k}{\int\limits_{j\tau}^{\left(j+1\right)\tau}{\frac{\dot{x}\left(\tau\right)d\tau}{\left(t_{k+1}-\tau\right)^{\gamma}}}}=
=\frac{1}{\Gamma\left(1-\gamma\right)}\sum\limits_{j=0}^{k}{\frac{x\left(t_{j+1}\right)-x\left(t_{j}\right)}{\tau}\int\limits_{j\tau}^{\left(j+1\right)\tau}{\frac{d\tau}{\left(t_{k+1}-\tau\right)^{\gamma}}}}=   (4)
=\frac{1}{\Gamma\left(1-\gamma\right)}\sum\limits_{j=0}^{k}{\frac{x\left(t_{j+1}\right)-x\left(t_{j}\right)}{\tau}\int\limits_{\left(k-j\right)\tau}^{\left(k-j+1\right)\tau}{\frac{d\eta}{\eta^{\gamma}}}}=\frac{1}{\Gamma\left(1-\gamma\right)}\sum\limits_{j=0}^{k}{\frac{x\left(t_{k-j+1}\right)-x\left(t_{k-j}\right)}{\tau}\int\limits_{j\tau}^{\left(j+1\right)\tau}{\frac{d\eta}{\eta^{\gamma}}}}=
=\frac{\tau^{-\gamma}}{\Gamma\left(2-\gamma\right)}\sum\limits_{j=0}^{k}{\left(\left(j+1\right)^{1-\gamma}-j^{1-\gamma}\right)\left(x\left(t_{k-j+1}\right)-x\left(t_{k-j}\right)\right)}.

Therefore, multiplying by the coefficient $\lambda$ from equation (3), we have according to (4):

\lambda\,\partial_{0t}^{\gamma}x\left(\tau\right)\approx B\sum\limits_{j=0}^{k-1}{b_{j}\left(x_{k-j+1}-x_{k-j}\right)},\quad b_{j}=\left(j+1\right)^{1-\gamma}-j^{1-\gamma},\quad B=\frac{\lambda\tau^{-\gamma}}{\Gamma\left(2-\gamma\right)},   (5)

Similarly, for the first operator from (3) we will have:

\partial_{0t}^{\beta}x\left(\tau\right)\approx A\sum\limits_{j=0}^{k-1}{a_{j}\left(x_{k-j+1}-2x_{k-j}+x_{k-j-1}\right)},\quad a_{j}=\left(j+1\right)^{2-\beta}-j^{2-\beta},\quad A=\frac{\tau^{-\beta}}{\Gamma\left(3-\beta\right)}.   (6)

Substituting the approximations (5) and (6) in the original equation (3), we arrive at the following discrete Cauchy problem:

A\sum\limits_{j=0}^{k-1}{a_{j}\left(x_{k-j+1}-2x_{k-j}+x_{k-j-1}\right)}+B\sum\limits_{j=0}^{k-1}{b_{j}\left(x_{k-j+1}-x_{k-j}\right)}+\omega_{k}x_{k}=f_{k},   (7)
x_{0}=\alpha_{1},\quad x_{1}=\alpha_{1}+\tau\alpha_{2}.

For the discrete Cauchy problem (7), the following lemma holds.

Lemma 1. The coefficients of the discrete Cauchy problem (7) have the following properties:

1)\;\sum\limits_{j=0}^{k-1}{a_{j}}=k^{2-\beta},\;\sum\limits_{j=0}^{k-1}{b_{j}}=k^{1-\gamma};\quad 2)\;1=a_{0}>a_{1}>\ldots>0,\;1=b_{0}>b_{1}>\ldots>0;\quad 3)\;A\geq 0,\;B\geq 0.   (8)

Proof. The first property follows from the definition:

\sum\limits_{j=0}^{k-1}{a_{j}}=\sum\limits_{j=0}^{k-1}{\left[\left(j+1\right)^{2-\beta}-j^{2-\beta}\right]}=1-0+2^{2-\beta}-1+3^{2-\beta}-2^{2-\beta}+\ldots+k^{2-\beta}-\left(k-1\right)^{2-\beta}=k^{2-\beta},
\sum\limits_{j=0}^{k-1}{b_{j}}=\sum\limits_{j=0}^{k-1}{\left[\left(j+1\right)^{1-\gamma}-j^{1-\gamma}\right]}=1-0+2^{1-\gamma}-1+3^{1-\gamma}-2^{1-\gamma}+\ldots+k^{1-\gamma}-\left(k-1\right)^{1-\gamma}=k^{1-\gamma}.

We prove the second property as follows. Let us introduce the functions $\phi\left(z\right)=\left(z+1\right)^{2-\beta}-z^{2-\beta}$ and $\eta\left(z\right)=\left(z+1\right)^{1-\gamma}-z^{1-\gamma}$, where $z>0$. These functions are monotonically decreasing. Indeed, taking the derivatives with respect to $z$, we get:

\phi^{\prime}\left(z\right)=\left(2-\beta\right)\left[\left(z+1\right)^{1-\beta}-z^{1-\beta}\right]<0,\quad \eta^{\prime}\left(z\right)=\left(1-\gamma\right)\left[\left(z+1\right)^{-\gamma}-z^{-\gamma}\right]<0,

since $1-\beta<0$ and $-\gamma<0$ imply $\left(z+1\right)^{1-\beta}<z^{1-\beta}$ and $\left(z+1\right)^{-\gamma}<z^{-\gamma}$. Hence $a_{j}=\phi\left(j\right)$ and $b_{j}=\eta\left(j\right)$ strictly decrease in $j$, with $a_{0}=b_{0}=1$.

The third property follows from the definition of the gamma function: $\Gamma\left(z\right)>0$ for $z>0$. Since $\tau>0$, $\lambda>0$, and the arguments $2-\gamma$ and $3-\beta$ are positive, we conclude that $A\geq 0$, $B\geq 0$. $\square$

Let us now investigate the order of approximation of the fractional operators $\partial_{0t}^{\beta}x\left(\tau\right)$ and $\partial_{0t}^{\gamma}x\left(\tau\right)$. Let $\overline{\partial}_{0t}^{\beta}x\left(\tau\right)$ and $\overline{\partial}_{0t}^{\gamma}x\left(\tau\right)$ be the approximating operators. Then we have the following lemma.

Lemma 2. The approximations $\overline{\partial}_{0t}^{\beta}x\left(\tau\right)$ and $\overline{\partial}_{0t}^{\gamma}x\left(\tau\right)$ of the Gerasimov-Caputo operators (2) $\partial_{0t}^{\beta}x\left(\tau\right)$ and $\partial_{0t}^{\gamma}x\left(\tau\right)$ satisfy the following estimates:

\partial_{0t}^{\beta}x\left(\tau\right)=\overline{\partial}_{0t}^{\beta}x\left(\tau\right)+O\left(\tau^{4-\beta}\right),\quad \partial_{0t}^{\gamma}x\left(\tau\right)=\overline{\partial}_{0t}^{\gamma}x\left(\tau\right)+O\left(\tau^{2-\gamma}\right),   (9)

where $O\left(\cdot\right)$ is the Landau symbol.

Proof. Using the first property (8) of Lemma 1 and the relations (5) and (6), we obtain

\overline{\partial}_{0t}^{\beta}x\left(\tau\right)=\frac{\tau^{2-\beta}}{\Gamma\left(3-\beta\right)}\sum\limits_{j=0}^{k-1}{a_{j}\left[\ddot{x}\left(t-j\tau\right)+O\left(\tau^{2}\right)\right]}=\frac{\tau^{2-\beta}}{\Gamma\left(3-\beta\right)}\sum\limits_{j=0}^{k-1}{a_{j}\ddot{x}\left(t-j\tau\right)}+\frac{k^{2-\beta}O\left(\tau^{4-\beta}\right)}{\Gamma\left(3-\beta\right)}=
=\frac{\tau^{2-\beta}}{\Gamma\left(3-\beta\right)}\sum\limits_{j=0}^{k-1}{a_{j}\ddot{x}\left(t-j\tau\right)}+O\left(\tau^{4-\beta}\right).
\partial_{0t}^{\beta}x\left(\tau\right)=\frac{\tau^{2-\beta}}{\Gamma\left(3-\beta\right)}\sum\limits_{j=0}^{k-1}{a_{j}\ddot{x}\left(t-\xi_{j}\right)},\quad\xi_{j}\in\left[j\tau,\left(j+1\right)\tau\right].
\left|\overline{\partial}_{0t}^{\beta}x\left(\tau\right)-\partial_{0t}^{\beta}x\left(\tau\right)\right|=\left|\frac{\tau^{2-\beta}}{\Gamma\left(3-\beta\right)}\sum\limits_{j=0}^{k-1}{a_{j}\left[\ddot{x}\left(t-j\tau\right)-\ddot{x}\left(t-\xi_{j}\right)\right]}+O\left(\tau^{4-\beta}\right)\right|=
=\left|\frac{\tau^{2-\beta}}{\Gamma\left(3-\beta\right)}\sum\limits_{j=0}^{k-1}{a_{j}O\left(\tau^{2}\right)}+O\left(\tau^{4-\beta}\right)\right|=\left|\frac{k^{2-\beta}O\left(\tau^{4-\beta}\right)}{\Gamma\left(3-\beta\right)}+O\left(\tau^{4-\beta}\right)\right|=O\left(\tau^{4-\beta}\right).

Likewise, we can show the second estimate (9). The lemma is proved. \square

Remark 2. In the case when $\beta=2$ and $\gamma=1$ in relations (9), we obtain approximations of the second and first derivatives with the second and first orders of approximation, respectively.

Remark 3. Note that at the internal nodes of the computational grid, taking into account Lemma 2, the explicit nonlocal finite-difference scheme (7) approximates the differential Cauchy problem with order $2-\gamma$; however, the overall order of approximation of scheme (7) is the first due to the approximation of the initial conditions.

We rewrite the discrete Cauchy problem (7) as follows:

\left\{\begin{array}{l}x_{0}=\alpha_{1},\\ x_{1}=\alpha_{1}+\tau\alpha_{2},\\ x_{k+1}=\dfrac{f_{k}+\left(2A+B-\omega_{k}\right)x_{k}-Ax_{k-1}}{A+B}-\dfrac{A}{A+B}\sum\limits_{j=1}^{k-1}{a_{j}\left(x_{k-j+1}-2x_{k-j}+x_{k-j-1}\right)}-\dfrac{B}{A+B}\sum\limits_{j=1}^{k-1}{b_{j}\left(x_{k-j+1}-x_{k-j}\right)},\quad k=1,\ldots,N-1,\end{array}\right.   (10)

or in matrix form:

X_{k+1}=MX_{k}+F_{k},   (11)
X_{k+1}=\left(x_{1},x_{2},\ldots,x_{k}\right)^{T},\quad X_{k}=\left(x_{0},x_{1},\ldots,x_{k-1}\right)^{T},
F_{k}=\left(f_{0},\frac{f_{1}}{A+\lambda_{1}B},\ldots,\frac{f_{k-1}}{A+\lambda_{k-1}B}\right)^{T},\quad f_{0}=\tau\alpha_{2},

where the Hessenberg matrix $M$ in (11) has the form:

m_{ij}=\left\{\begin{array}{l}0,\;j\geq i+1,\\ \dfrac{A\left(2-a_{1}\right)-B\lambda_{i-1}b_{1}}{A+\lambda_{i-1}B},\;j=i=3,\ldots,N-1,\\ \dfrac{-A\left(a_{i-j+1}-2a_{i-j}+a_{i-j-1}\right)-B\lambda_{i-1}\left(b_{i-j+1}-b_{i-j-1}\right)}{A+\lambda_{i-1}B},\;j\leq i-1,\end{array}\right.   (12)
m_{1,1}=1,\quad m_{2,2}=\dfrac{2A}{A+\lambda_{1}B},\quad m_{i,1}=\dfrac{B\lambda_{i-1}b_{i-2}-Aa_{i-2}}{A+\lambda_{i-1}B},\;i=2,\ldots,N-1,
m_{i,2}=\dfrac{A\left(2a_{i-2}-a_{i-3}\right)+\lambda_{i-1}Bb_{i-3}}{A+\lambda_{i-1}B},\;i=3,\ldots,N-1.

The following theorems are true.

Theorem 1 [10]. The nonlocal explicit finite-difference scheme (10) converges with the first order if the following condition is satisfied:

\tau\leq\tau_{0}=\min\left(1,\left(\dfrac{\Gamma\left(2-\gamma\right)}{\lambda\Gamma\left(3-\beta\right)}\right)^{\frac{1}{\beta-\gamma}}\right).   (13)

Let $X_{k},Y_{k}$ be two different solutions of the matrix equation (11) with the initial conditions $X_{0},Y_{0}$. Then the scheme stability theorem is valid.

Theorem 2 [10]. The nonlocal explicit finite-difference scheme (10) is conditionally stable if condition (13) is satisfied, and the estimate $\left|Y_{k}-X_{k}\right|\leq C\left|Y_{0}-X_{0}\right|$ holds for any $k$, where $C>0$ is a constant independent of the step $\tau$.

The proof of Theorems 1 and 2 is based on the results of Lemmas 1 and 2.
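To make the computation concrete, here is a minimal Python sketch of the recursion (10); the function and variable names are illustrative (not from the paper), and the coefficients are taken in the forms $A=\tau^{-\beta}/\Gamma\left(3-\beta\right)$ and $B=\lambda\tau^{-\gamma}/\Gamma\left(2-\gamma\right)$ from (5) and (6).

```python
import math

import numpy as np


def enfds(beta, gamma, lam, omega, f, alpha1, alpha2, T, N):
    """Sketch of the explicit nonlocal finite-difference scheme (10).

    omega(t) and f(t) are callables; 1 < beta < 2, 0 < gamma < 1.
    All names are illustrative, not taken from the paper.
    """
    tau = T / N
    t = np.linspace(0.0, T, N + 1)
    A = tau ** (-beta) / math.gamma(3.0 - beta)          # coefficient A from (6)
    B = lam * tau ** (-gamma) / math.gamma(2.0 - gamma)  # coefficient B from (5)

    j = np.arange(N + 1, dtype=float)
    a = (j + 1.0) ** (2.0 - beta) - j ** (2.0 - beta)    # weights a_j
    b = (j + 1.0) ** (1.0 - gamma) - j ** (1.0 - gamma)  # weights b_j

    x = np.zeros(N + 1)
    x[0] = alpha1                 # x_0 = alpha_1
    x[1] = alpha1 + tau * alpha2  # x_1 = alpha_1 + tau * alpha_2

    for k in range(1, N):
        # memory sums over j = 1..k-1 (empty for k = 1)
        s_a = sum(a[m] * (x[k - m + 1] - 2.0 * x[k - m] + x[k - m - 1])
                  for m in range(1, k))
        s_b = sum(b[m] * (x[k - m + 1] - x[k - m]) for m in range(1, k))
        x[k + 1] = (f(t[k]) + (2.0 * A + B - omega(t[k])) * x[k]
                    - A * x[k - 1] - A * s_a - B * s_b) / (A + B)
    return t, x


# Illustrative call for Example 1 below:
# t, x = enfds(1.8, 0.9, 0.1, lambda s: 1.0,
#              lambda s: s**3 + 6*s**1.2/math.gamma(2.2) + 0.6*s**2.1/math.gamma(3.1),
#              0.0, 0.0, 1.0, 20)
```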

Adams-Bashforth-Moulton method. The ABM method is a predictor-corrector type numerical method for solving differential equations. It has been studied and discussed in detail in [11, 12, 13]. Let us generalize this method to the solution of the Cauchy problem (3). Taking into account Definitions 1 and 2, as well as the corresponding properties of the operators of fractional integration and differentiation, we write it in the form of a system:

\left\{\begin{array}{l}\partial_{0t}^{\sigma_{1}}x\left(\tau\right)=y\left(t\right),\\ \partial_{0t}^{\sigma_{2}}y\left(\tau\right)=f\left(t\right)-\lambda y\left(t\right)-\omega\left(t\right)x\left(t\right),\\ x\left(0\right)=\alpha_{1},\;y\left(0\right)=\alpha_{2},\end{array}\right.   (14)

where $\sigma_{1}=\gamma$, $\sigma_{2}=\beta-\gamma$.

Let $0<\sigma_{2}\leq 1$, i.e. $\left\{\beta\right\}<\left\{\gamma\right\}$, where $\left\{\cdot\right\}$ is the fractional part of a number.

Remark 4. Note that if in system (14) $\left\{\beta\right\}>\left\{\gamma\right\}$, i.e. $1<\sigma_{2}<2$, then it can always be transformed by adding one more equation.

On a uniform grid, we introduce the functions $x_{n+1}^{p},y_{n+1}^{p}$, $n=0,\ldots,N-1$, which are determined by the Adams-Bashforth formula (predictor):

\left\{\begin{array}{l}x_{n+1}^{p}=x_{0}+\dfrac{\tau^{\sigma_{1}}}{\Gamma\left(\sigma_{1}+1\right)}\sum\limits_{j=0}^{n}{\theta_{j,n+1}^{1}y_{j}},\\ y_{n+1}^{p}=y_{0}+\dfrac{\tau^{\sigma_{2}}}{\Gamma\left(\sigma_{2}+1\right)}\sum\limits_{j=0}^{n}{\theta_{j,n+1}^{2}\left(-\lambda y_{j}-\omega_{j}x_{j}+f_{j}\right)},\\ \theta_{j,n+1}^{i}=\left(n-j+1\right)^{\sigma_{i}}-\left(n-j\right)^{\sigma_{i}},\;i=1,2.\end{array}\right.   (15)

Then, using the Adams-Moulton formula for the corrector, we get:

\left\{\begin{array}{l}x_{n+1}=x_{0}+\dfrac{\tau^{\sigma_{1}}}{\Gamma\left(\sigma_{1}+2\right)}\left(y_{n+1}^{p}+\sum\limits_{j=0}^{n}{\rho_{j,n+1}^{1}y_{j}}\right),\\ y_{n+1}=y_{0}+\dfrac{\tau^{\sigma_{2}}}{\Gamma\left(\sigma_{2}+2\right)}\left(-\lambda y_{n+1}^{p}-\omega_{n+1}x_{n+1}^{p}+f_{n+1}+\sum\limits_{j=0}^{n}{\rho_{j,n+1}^{2}\left(-\lambda y_{j}-\omega_{j}x_{j}+f_{j}\right)}\right),\end{array}\right.   (16)

where the weight factors in (16) are determined by the formula:

\rho_{j,n+1}^{i}=\left\{\begin{array}{l}n^{\sigma_{i}+1}-\left(n-\sigma_{i}\right)\left(n+1\right)^{\sigma_{i}},\;j=0,\\ \left(n-j+2\right)^{\sigma_{i}+1}+\left(n-j\right)^{\sigma_{i}+1}-2\left(n-j+1\right)^{\sigma_{i}+1},\;1\leq j\leq n,\\ 1,\;j=n+1,\end{array}\right.\quad i=1,2.

Theorem 3 [12]. If $\partial_{0t}^{\sigma_{i}}x_{i}\left(\tau\right)\in C^{2}\left[0,T\right]$ $\left(x_{1}=x\left(t\right),\;x_{2}=y\left(t\right),\;i=1,2\right)$, then

\mathop{\max}\limits_{1\leq j\leq n}\left|x_{i}\left(t_{j}\right)-x_{i,j}\right|=O\left(\tau^{1+\mathop{\min}\limits_{i}\sigma_{i}}\right).   (17)
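A minimal Python sketch of the predictor-corrector (15)-(16) for system (14) is given below; it assumes $0<\sigma_{2}=\beta-\gamma\leq 1$ (see Remark 4), and all names are illustrative, not from the paper.

```python
import math

import numpy as np


def abm_flo(beta, gamma, lam, omega, f, alpha1, alpha2, T, N):
    """Sketch of the fractional ABM method (15)-(16) for the system (14)."""
    tau = T / N
    t = np.linspace(0.0, T, N + 1)
    s1, s2 = gamma, beta - gamma                 # sigma_1, sigma_2
    x = np.zeros(N + 1)
    y = np.zeros(N + 1)
    x[0], y[0] = alpha1, alpha2

    def rhs(k, xk, yk):
        # right-hand side of the second equation of (14)
        return f(t[k]) - lam * yk - omega(t[k]) * xk

    for n in range(N):
        j = np.arange(n + 1)
        # Adams-Bashforth weights theta_{j,n+1}^i from (15)
        th1 = (n - j + 1.0) ** s1 - (n - j) ** s1
        th2 = (n - j + 1.0) ** s2 - (n - j) ** s2
        g = np.array([rhs(m, x[m], y[m]) for m in range(n + 1)])

        # predictor (15)
        xp = x[0] + tau ** s1 / math.gamma(s1 + 1.0) * np.dot(th1, y[:n + 1])
        yp = y[0] + tau ** s2 / math.gamma(s2 + 1.0) * np.dot(th2, g)

        # Adams-Moulton weights rho_{j,n+1}^i for 0 <= j <= n
        def rho(s):
            r = ((n - j + 2.0) ** (s + 1.0) + (n - j) ** (s + 1.0)
                 - 2.0 * (n - j + 1.0) ** (s + 1.0))
            r[0] = n ** (s + 1.0) - (n - s) * (n + 1.0) ** s   # j = 0 case
            return r

        # corrector (16); the predictor values enter with weight 1 (j = n + 1)
        x[n + 1] = x[0] + tau ** s1 / math.gamma(s1 + 2.0) * (
            yp + np.dot(rho(s1), y[:n + 1]))
        y[n + 1] = y[0] + tau ** s2 / math.gamma(s2 + 2.0) * (
            f(t[n + 1]) - lam * yp - omega(t[n + 1]) * xp + np.dot(rho(s2), g))
    return t, x, y
```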

V Computational Accuracy Analysis

Let us examine how the computational accuracy of the methods behaves. To do this, we will use the double recalculation method (Runge’s rule) to estimate the error using the formula:

\xi=\frac{\mathop{\max}\limits_{i}\left|x_{i}-x_{2i}\right|}{2^{\mu}-1},   (18)

where $\mu$ is the order of approximation of the numerical method and $x_{2i}$ is the numerical solution at the step $\tau/2$. The order of computational accuracy $p$ is determined from the formula:

p=\log_{\frac{\tau_{1}}{\tau_{2}}}\frac{\xi_{1}}{\xi_{2}},   (19)

where $\tau_{1}$ and $\tau_{2}=\tau_{1}/2$ are the grid steps, and $\xi_{1},\xi_{2}$ are the errors at the steps $\tau_{1}$ and $\tau_{2}$, respectively.
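A short sketch of how (18) and (19) can be evaluated; it assumes the coarse solution has $N+1$ nodes and the refined one $2N+1$, so that every second fine-grid node coincides with a coarse-grid node (names are illustrative).

```python
import numpy as np


def runge_error(x_coarse, x_fine, mu):
    """Double-recalculation (Runge) error estimate (18) for a method of order mu."""
    return np.max(np.abs(x_coarse - x_fine[::2])) / (2.0 ** mu - 1.0)


def computational_order(xi1, xi2):
    """Observed order of computational accuracy p from (19) with tau_2 = tau_1 / 2."""
    return np.log(xi1 / xi2) / np.log(2.0)
```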

Example 1. Consider the Cauchy problem (3) with homogeneous initial conditions, choosing the following functions and parameter values: $\omega\left(t\right)=\omega_{0}^{\beta}$, $f\left(t\right)=\omega_{0}^{\beta}t^{3}+\dfrac{6t^{3-\beta}}{\Gamma\left(4-\beta\right)}+\dfrac{6\lambda t^{3-\gamma}}{\Gamma\left(4-\gamma\right)}$, $\lambda=0.1$, $\beta=1.8$, $\gamma=0.9$, $\omega_{0}=1$, $t\in\left[0,1\right]$:

\partial_{0t}^{1.8}x\left(\tau\right)+0.1\,\partial_{0t}^{0.9}x\left(\tau\right)+x\left(t\right)=t^{3}+\frac{6t^{1.2}}{\Gamma\left(2.2\right)}+\frac{0.6t^{2.1}}{\Gamma\left(3.1\right)},\quad x\left(0\right)=\dot{x}\left(0\right)=0.   (20)

The solution to the Cauchy problem (20), as is easy to verify, is the function:

x\left(t\right)=t^{3}.   (21)
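As a check (this derivation is not in the original text), using the rule $\partial_{0t}^{\beta}t^{p}=\dfrac{\Gamma\left(p+1\right)}{\Gamma\left(p+1-\beta\right)}t^{p-\beta}$ for the Gerasimov-Caputo derivative of a power function with $p>m-1$, we get

\partial_{0t}^{1.8}t^{3}=\frac{\Gamma\left(4\right)}{\Gamma\left(2.2\right)}t^{1.2}=\frac{6t^{1.2}}{\Gamma\left(2.2\right)},\quad 0.1\,\partial_{0t}^{0.9}t^{3}=\frac{0.1\,\Gamma\left(4\right)}{\Gamma\left(3.1\right)}t^{2.1}=\frac{0.6t^{2.1}}{\Gamma\left(3.1\right)},

so the left-hand side of (20) evaluated at $x\left(t\right)=t^{3}$ coincides with its right-hand side, and the homogeneous initial conditions are satisfied as well.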

Remark 5. The values of the parameters were chosen in such a way that the conditions of Theorems 1 and 2 are satisfied.
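Indeed, for $\lambda=0.1$, $\beta=1.8$, $\gamma=0.9$ the restriction (13) gives (using $\Gamma\left(1.1\right)\approx 0.9514$ and $\Gamma\left(1.2\right)\approx 0.9182$)

\tau_{0}=\min\left(1,\left(\frac{\Gamma\left(1.1\right)}{0.1\,\Gamma\left(1.2\right)}\right)^{\frac{1}{0.9}}\right)\approx\min\left(1,\;10.36^{1.11}\right)\approx\min\left(1,\;13.4\right)=1,

so every step $\tau=1/N$ used in the tables below satisfies $\tau\leq\tau_{0}$.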

Due to the fact that the exact solution (21) of the Cauchy problem (20) is known, we can calculate the errors of numerical methods from the following relations:

\xi_{F}^{EX}=\mathop{\max}\limits_{j}\left|x\left(t_{j}\right)-x_{j}^{F}\right|,\quad \xi_{PC}^{EX}=\mathop{\max}\limits_{j}\left|x\left(t_{j}\right)-x_{j}^{PC}\right|,   (22)

where $\xi_{F}^{EX},\xi_{PC}^{EX}$ are the errors of the ENFDS and ABM methods, and $p_{F}$, $p_{PC}$ are the orders of computational accuracy, calculated by formula (19).

Table 1: Error analysis of the numerical methods for $\beta=1.8$, $\gamma=0.9$, taking into account the exact solution (21)
$N$    $\tau$    $\xi_{F}^{EX}$    $\xi_{PC}^{EX}$    $p_{F}$    $p_{PC}$
10 1/10 0.0561528 0.0039560 - -
20 1/20 0.0340440 0.0009317 0.721956135 2.0861050
40 1/40 0.0195531 0.0002362 0.800003094 1.9798565
80 1/80 0.0108341 0.0000628 0.851815427 1.9100427
160 1/160 0.0058603 0.0000171 0.886538801 1.8748556
320 1/320 0.0031176 0.0000047 0.910518254 1.8577979
640 1/640 0.0016380 0.0000014 0.92851899 1.7975624

Table 1 shows that the errors of both numerical methods decrease as the number of nodes of the computational grid increases. It can be seen that the ABM method converges faster than the ENFDS method. Based on (9) and Remark 3, the ENFDS scheme has the first order of approximation. According to relation (17), for Example 1 the order of the ABM method is $1+\min\left(\sigma_{1},\sigma_{2}\right)=1+\min\left(0.9,0.9\right)=1.9$.

From Table 1 we also see that, as the number of nodes of the computational grid increases, the order of computational accuracy $p_{F}$ grows and tends to unity, while $p_{PC}$ almost immediately reaches a value close to 1.9. On the one hand, this confirms the validity of Theorems 1 and 3; on the other hand, it demonstrates the faster convergence and higher accuracy of the ABM method in comparison with the ENFDS method.

As a comparison with the results from Table 1, we calculate the errors and the orders of computational accuracy of the numerical methods using the Runge rule (Table 2).

Table 2: Error analysis of the numerical methods for $\beta=1.8$, $\gamma=0.9$ using the double recalculation method (18)
$N$    $\tau$    $\xi_{F}$    $\xi_{PC}$    $p_{F}$    $p_{PC}$
10 1/10 0.025634 0.0011076 - -
20 1/20 0.015479 0.0002546 0.727709917 2.120946173
40 1/40 0.008987 0.0000635 0.784362481 2.004652815
80 1/80 0.005045 0.0000167 0.832949245 1.923159876
160 1/160 0.002761 0.0000045 0.86953863 1.881091919
320 1/320 0.001484 0.0000013 0.895926947 1.860912083
640 1/640 0.000786 0.0000004 0.916645745 1.78891439

Table 2 shows that the double recalculation method based on the Runge rule is in good agreement with the results from Table 1.

In the classical case ($\beta=2$, $\gamma=1$), the ABM method has the second order of approximation, while the ENFDS still has the first order. A numerical error analysis for Example 1 in this case is given in Table 3.

Table 3: Error analysis of the numerical methods for $\beta=2$, $\gamma=1$ by the double recalculation method (18)
$N$    $\tau$    $\xi_{F}$    $\xi_{PC}$    $p_{F}$    $p_{PC}$
10 1/10 0.00758 0.001049 - -
20 1/20 0.00253 0.000232 1.585196 2.179457
40 1/40 0.00093 0.000054 1.445025 2.093444
80 1/80 0.00038 0.000013 1.296494 2.046768
160 1/160 0.00017 0.000003 1.177333 2.023954
320 1/320 0.00008 0.0000008007 1.100742 2.012096
640 1/640 0.00003739 0.0000001994 1.058277 2.005656

Table 3 shows that the orders of computational accuracy tend to the orders of approximation, 1 and 2, of the corresponding numerical methods.

Figure 1: Numerical results using the Adams-Bashforth-Moulton (ABM) and Explicit Nonlocal Finite Difference Scheme (ENFDS) methods compared to the exact solution (ES) for Example 1 at $N=20$

Figure 1 illustrates the effectiveness of the ABM method in comparison with the ENFDS. It can be seen that, due to its faster convergence and higher accuracy, the ABM method gives the better result.

However, the ABM method does not always work better than explicit methods like ENFDS. Let us show this with the following example.

Example 2. Consider an analogue of the harmonic oscillator (AHO). To do this, in the Cauchy problem (3) we choose:

\omega\left(t\right)=\omega_{0}^{\beta},\quad f\left(t\right)=\frac{\mu t^{\delta}}{\Gamma\left(1+\delta\right)},\quad\lambda=0.

Then we come to the following problem:

\partial_{0t}^{\beta}x\left(\tau\right)+\omega_{0}^{\beta}x\left(t\right)=\frac{\mu t^{\delta}}{\Gamma\left(1+\delta\right)},\quad x\left(0\right)=\alpha_{1},\;\dot{x}\left(0\right)=\alpha_{2}.   (23)

The solution to the Cauchy problem (23) is the function:

x\left(t\right)=\alpha_{1}E_{\beta,1}\left(-\left(\omega_{0}t\right)^{\beta}\right)+\alpha_{2}tE_{\beta,2}\left(-\left(\omega_{0}t\right)^{\beta}\right)+\mu t^{\beta+\delta}E_{\beta,\beta+\delta+1}\left(-\left(\omega_{0}t\right)^{\beta}\right),\quad\delta>-\beta,   (24)

where $E_{\alpha,\beta}\left(z\right)=\sum\limits_{k=0}^{\infty}{\dfrac{z^{k}}{\Gamma\left(\alpha k+\beta\right)}}$ is the two-parameter Mittag-Leffler function [14].
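To reproduce the exact solution (24) numerically, the Mittag-Leffler function can be tabulated by a straightforward truncated series. The sketch below is only intended for the moderate arguments arising here ($\left|\left(\omega_{0}t\right)^{\beta}\right|\leq 4$ for the parameters used below); the tolerance, the term limit, and all names are illustrative choices, not from the paper.

```python
import math


def mittag_leffler(alpha, beta, z, tol=1e-15, max_terms=100):
    """Truncated series for the two-parameter Mittag-Leffler function E_{alpha,beta}(z)."""
    s = 0.0
    for k in range(max_terms):
        arg = alpha * k + beta
        if arg > 170.0:  # keep math.gamma from overflowing in this plain sketch
            break
        term = z ** k / math.gamma(arg)
        s += term
        if k > 1 and abs(term) < tol:
            break
    return s


def exact_solution_aho(t, beta, omega0, mu, delta, alpha1, alpha2):
    """Exact solution (24) of the Cauchy problem (23)."""
    z = -(omega0 * t) ** beta
    return (alpha1 * mittag_leffler(beta, 1.0, z)
            + alpha2 * t * mittag_leffler(beta, 2.0, z)
            + mu * t ** (beta + delta) * mittag_leffler(beta, beta + delta + 1.0, z))
```

For the parameters of Example 2 one would evaluate, for instance, exact_solution_aho(t, 1.8, 2.0, 0.5, 0.3, 1.0, 1.0) at the grid nodes.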

Let us consider the numerical ABM and ENFDS methods for solving the Cauchy problem (23) and compare them with the exact solution (24). First, consider the usual harmonic oscillator ($\beta=2$).

The parameter values in problem (23) are chosen as follows: $\delta=0.3$, $\mu=0.5$, $\omega_{0}=2$, $\alpha_{1}=\alpha_{2}=1$, $t\in\left[0,1\right]$.

Figure 2: Numerical results using the Adams-Bashforth-Moulton (ABM) and Explicit Nonlocal Finite Difference Scheme (ENFDS) methods versus the exact solution (ES) for Example 2 at $N=20$

We see in Figure 2 that the ABM method performs in much the same way as the ENFDS method. The orders of computational accuracy for $\beta=2$ are shown in Table 4. For the fractional AHO case ($\beta=1.8$), the error analysis is given in Table 5.

Table 4: Error analysis of the numerical methods for Example 2 ($\beta=2$)
$N$    $\tau$    $\xi_{F}^{EX}$    $\xi_{PC}^{EX}$    $p_{F}$    $p_{PC}$
10 1/10 0.0962543446 0.0974594278 - -
20 1/20 0.0483529972 0.0450644425 0.993246342 1.112812209
40 1/40 0.0243287977 0.0216596566 0.990940293 1.056979173
80 1/80 0.0122257234 0.0106471450 0.992745193 1.024543742
160 1/160 0.0061384979 0.0052867956 0.993962258 1.010001173
320 1/320 0.0030783418 0.0026378633 0.995732241 1.003023747
640 1/640 0.0015461838 0.0013190516 0.993441601 0.9998688
Table 5: Error analysis of the numerical methods for Example 2 ($\beta=1.8$)
$N$    $\tau$    $\xi_{F}^{EX}$    $\xi_{PC}^{EX}$    $p_{F}$    $p_{PC}$
10 1/10 0.1220802888 0.0833919341 - -
20 1/20 0.0632933442 0.0376498054 0.947704578 1.147265441
40 1/40 0.0326304379 0.0179210448 0.955835448 1.070987659
80 1/80 0.0167613673 0.0087578726 0.961078508 1.033002381
160 1/160 0.0085801061 0.0043373867 0.966072448 1.013754391
320 1/320 0.0043785606 0.0021615271 0.970538809 1.004775148
640 1/640 0.0022275871 0.0010803221 0.974974836 1.000589405

We see that the orders of computational accuracy of both methods tend to unity, i.e. for this example the accuracy of the ABM method is similar to that of the ENFDS method, although its errors remain somewhat smaller.

VI Conclusion

In this work, a numerical analysis of the Cauchy problem for a model class of fractional linear oscillators was carried out. The ENFDS and ABM methods were considered as numerical methods. It was shown on test examples that the ABM method converges faster and, in most cases, is more accurate than the ENFDS method.

The study of FLO with variable memory is also of interest. In this case, derivatives of fractional variable orders appear in the model equation of the Cauchy problem (3). Some aspects of the numerical analysis of such a generalized Cauchy problem (3) were considered by the author in the work [17].

References

  • Andronov, Vitt, and Khaikin [2013] A. A. Andronov, A. A. Vitt,  and S. E. Khaikin, Theory of Oscillators: Adiwes International Series in Physics, Vol. 4 (Elsevier, 2013) p. 848.
  • Danylchuk, Kibalnyk, and Serdiuk [2019] H. Danylchuk, L. Kibalnyk,  and O. Serdiuk, “Study of critical phenomena in economic systems using a model of damped oscillations,” SHS Web Conf. 65, 06008 (2019), DOI:10.1051/shsconf/20196506008.
  • Li and Yang [2018] Z. Li and Q. Yang, “Systems and synthetic biology approaches in understanding biological oscillators,” Quantitative Biology 6, 1–14 (2018), DOI:10.1007/s40484-017-0120-7.
  • Volterra [1912] V. Volterra, “Sur les equations integro-differentielles et leurs applications,” Acta Mathematica 35, 295–356 (1912).
  • Pskhu and Rekhviashvili [2018] A. V. Pskhu and S. S. Rekhviashvili, “Analysis of forced oscillations of a fractional oscillator,” Tech. Phys. Lett. 44, 1218–1221 (2018), DOI:10.1134/S1063785019010164.
  • Parovik [2019] R. I. Parovik, “Amplitude-frequency and phase-frequency performances of forced oscillations of a nonlinear fractional oscillator,” Tech. Phys. Lett. 45, 660–663 (2019), DOI:10.1134/S1063785019070095.
  • Khalouta and Kadem [2019] A. Khalouta and A. A. Kadem, “New method to solve fractional differential equations: Inverse fractional shehu transform method,” Applications & Applied Mathematics 14, 926–941 (2019).
  • Momani and Odibat [2007] S. Momani and Z. Odibat, “Numerical comparison of methods for solving linear differential equations of fractional order,” Chaos, Solitons & Fractals 31, 1248–1255 (2007), DOI:10.1016/j.chaos.2005.10.068.
  • Diethelm et al. [2005] K. Diethelm, N. J. Ford, A. D. Freed,  and Y. Luchko, “Algorithms for the fractional calculus: a selection of numerical methods,” Computer methods in applied mechanics and engineering 194, 743–773 (2005), DOI:10.1016/j.cma.2004.06.006.
  • Parovik [2018] R. I. Parovik, “Mathematical model of a wide class memory oscillators,” Bulletin of the South Ural State University. Series Mathematical modeling and programming 11, 108–122 (2018), DOI:10.14529/mmp180209.
  • Garrappa [2018] R. Garrappa, “Numerical solution of fractional differential equations: A survey and a software tutorial,” Mathematics 6 (2018), DOI:10.3390/math6020016.
  • Yang and Liu [2006] C. Yang and F. Liu, “A computationally effective predictor-corrector method for simulating fractional-order dynamical control system,” ANZIAM J. 47, 168–184 (2006), DOI:10.21914/anziamj.v47i0.1037.
  • Diethelm, Ford, and Freed [2002] K. Diethelm, N. J. Ford,  and A. D. Freed, “A predictor-corrector approach for the numerical solution of fractional differential equations,” Nonlinear Dyn. 29, 3–22 (2002), DOI:10.1023/A:1016592219341.
  • Kilbas, Srivastava, and Trujillo [2006] A. A. Kilbas, H. M. Srivastava,  and J. J. Trujillo, Theory and Applications of Fractional Differential Equations (Elsevier, 2006) p. 523.
  • Oldham and Spanier [1974] K. B. Oldham and J. Spanier, The fractional calculus. Theory and applications of differentiation and integration to arbitrary order (Academic Press, 1974) p. 240.
  • Miller and Ross [1993] K. S. Miller and B. Ross, An introduction to the fractional calculus and fractional differential equations (A Wiley-Interscience publication, 1993) p. 384.
  • Parovik [2016] R. I. Parovik, “Explicit finite-difference scheme for the numerical solution of the model equation of nonlinear hereditary oscillator with variable-order fractional derivatives,” Archives of Control Sciences 26, 429–435 (2016), DOI:10.1515/acsc-2016-0023.