
Optimization of Adams-type difference formulas in the Hilbert space $W_2^{(2,1)}(0,1)$

Kh.M. Shadimetov1,2, R.S. Karimov1,3,∗

1V.I. Romanovskiy Institute of Mathematics, Uzbekistan Academy of Sciences, Tashkent, Uzbekistan
2Department of Informatics and computer graphics, Tashkent State Transport University, Tashkent, Uzbekistan
3Department of Mathematics and natural sciences, Bukhara Institute of Natural Resources Management, Bukhara, Uzbekistan


Abstract. In this paper we consider the problem of constructing new optimal explicit and implicit Adams-type difference formulas for finding an approximate solution to the Cauchy problem for an ordinary differential equation in a Hilbert space. Minimizing the norm of the error functional of the difference formula with respect to the coefficients, we obtain a system of linear algebraic equations for the coefficients of the difference formulas. This system is reduced to a system of equations in convolution form and is solved completely using a discrete analogue of the differential operator $d^2/dx^2-1$. We present an algorithm for constructing optimal explicit and implicit difference formulas in a concrete Hilbert space. In addition, numerical experiments comparing the Euler method with the optimal explicit and implicit difference formulas are given. The experiments show that the optimal formulas give a good approximation compared with the Euler method.

Keywords. Hilbert space; initial-value problem; multistep method; the error functional; optimal explicit difference formula; optimal implicit difference formula.

Corresponding author. E-mail addresses: kholmatshadimetov@mail.ru (Kh.M. Shadimetov), roziq.s.karimov@gmail.com (R.S. Karimov). Received July 11, 2023; Accepted July 11, 2023.

1. Introduction

It is known that the solutions of many practical problems lead to differential equations or systems of differential equations. Although differential equations have very many applications, only a small number of them can be solved exactly in terms of elementary functions and their combinations. Even when an analytic solution can be obtained, it may be inconvenient to use because of its complexity. If it is very difficult or impossible to find an analytic solution of a differential equation, one can instead look for an approximate solution.

In the present paper we consider the problem of the approximate solution of the first-order ordinary differential equation

y'=f(x,y),\quad x\in[0,1], \qquad (1.1)

with the initial condition

y(0)=y_{0}. \qquad (1.2)

We assume that $f(x,y)$ is a suitable function and that the differential equation (1.1) with the initial condition (1.2) has a unique solution on the interval $[0,1]$.

For the approximate solution of problem (1.1)-(1.2) we divide the interval $[0,1]$ into $N$ pieces of length $h=\frac{1}{N}$ and find approximate values $y_{n}$ of the function $y(x)$ for $n=0,1,\ldots,N$ at the nodes $x_{n}=nh$.

A classic method for the approximate solution of the initial-value problem (1.1)-(1.2) is the Euler method. In this method the approximate value $y_{n+1}$ of the solution at the node $x_{n+1}$ is computed from the approximate value $y_{n}$ at the node $x_{n}$:

y_{n+1}=y_{n}+hy_{n}', \qquad (1.3)

where $y_{n}'=f(x_{n},y_{n})$, so that $y_{n+1}$ is a linear combination of the values of the unknown function $y(x)$ and its first-order derivative at the node $x_{n}$.
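For reference, here is a minimal Python sketch of the Euler method (1.3); the test problem $y'=-y$, $y(0)=1$ with exact solution $e^{-x}$ is our illustrative choice and is not taken from the paper.

```python
import math

def euler(f, y0, N):
    """Euler method (1.3) on [0, 1] with step h = 1/N."""
    h = 1.0 / N
    ys = [y0]
    for n in range(N):
        ys.append(ys[-1] + h * f(n * h, ys[-1]))
    return ys

# Illustrative test problem: y' = -y, y(0) = 1, exact solution e^{-x}.
approx = euler(lambda x, y: -y, 1.0, 10)
print(abs(approx[-1] - math.exp(-1.0)))  # Euler error at x = 1
```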

It is well known that there are many methods for solving the initial-value problem for the ordinary differential equation (1.1). For example, the initial-value problem can be solved using the Euler, Runge-Kutta, Adams-Bashforth and Adams-Moulton formulas of various orders [1]. In [2], Ahmad Fadly Nurullah Rasedee et al. discussed the order and step-size strategies of a variable order step-size algorithm; the stability and convergence estimates of the method are also established there. In the work [3] by Adekoya Odunayo M. and Z.O. Ogunwobi, it was shown that the Adams-Bashforth-Moulton method is better than the Milne-Simpson method for solving a second-order differential equation. Some studies have raised the question of whether Nordsieck's technique for changing the step size in the Adams-Bashforth method is equivalent to the explicit continuous Adams-Bashforth method; in [4], N.S. Hoang and R.B. Sidje provided a complete proof that the two approaches are indeed equivalent. The works [5] and [6] showed the potential superiority of semi-explicit and semi-implicit methods over conventional linear multistep algorithms.

However, it is very important to choose the right formula among these for solving the initial-value problem, and it is not always possible to do this. Moreover, in this work, in contrast to the above-mentioned works, exact estimates of the error of the formula are obtained.

Our aim in this paper is to construct new difference formulas that are exact for $e^{-x}$ and optimal in the Hilbert space $W_2^{(2,1)}(0,1)$. These formulas can be used to solve certain classes of problems with high accuracy.

The rest of the work is organized as follows. In Section 2 an algorithm for constructing an optimal explicit difference formula in this space is given, and it is used to obtain analytic expressions for the optimal coefficients of the explicit difference formula. In Section 3 the same algorithm is used to obtain analytic expressions for the optimal coefficients of the implicit difference formula. In Sections 4 and 5 exact formulas are given for the square of the norm of the error functionals of the explicit and implicit difference formulas, respectively. Numerical experiments are presented at the end of the work.

2. Optimal explicit difference formulas of Adams-Bashforth type in the Hilbert space $W_2^{(2,1)}(0,1)$

We consider a difference formula of the following form for the approximate solution of the problem (1.1)-(1.2) [7, 8]

\sum_{\beta=0}^{k}C[\beta]\varphi[\beta]-h\sum_{\beta=0}^{k-1}C_{1}[\beta]\varphi'[\beta]\cong 0, \qquad (2.1)

where $h=\frac{1}{N}$, $N$ is a natural number, $C[\beta]$ and $C_{1}[\beta]$ are the coefficients, and the functions $\varphi$ belong to the Hilbert space $W_2^{(2,1)}(0,1)$. The space $W_2^{(2,1)}(0,1)$ is defined as follows:

W_{2}^{(2,1)}(0,1)=\{\varphi:[0,1]\to\mathbb{R}\mid \varphi' \text{ is absolutely continuous, } \varphi''\in L_{2}(0,1)\},

equipped with the norm [9, 10]

\|\varphi|W_{2}^{(2,1)}\|=\Big\{\int\limits_{0}^{1}\Big(\varphi''(x)+\varphi'(x)\Big)^{2}dx\Big\}^{1/2}. \qquad (2.2)
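As a small worked example of the norm (2.2) (our own illustration, not part of the paper), the following sympy snippet evaluates $\|\varphi|W_2^{(2,1)}\|$ for the test function $\varphi(x)=x^{2}$.

```python
import sympy as sp

x = sp.symbols('x')
phi = x**2  # illustrative test function from W_2^{(2,1)}(0,1)

# Norm (2.2): ( \int_0^1 (phi''(x) + phi'(x))^2 dx )^{1/2}
norm_sq = sp.integrate((sp.diff(phi, x, 2) + sp.diff(phi, x))**2, (x, 0, 1))
print(norm_sq, sp.sqrt(norm_sq))  # 28/3 and its square root
```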

The following difference between the sums given in the formula (2.1) is called the error of the formula (2.1) [11]

(\ell,\varphi)=\sum\limits_{\beta=0}^{k}C[\beta]\varphi(h\beta)-h\sum\limits_{\beta=0}^{k-1}C_{1}[\beta]\varphi'(h\beta).

To this error corresponds the error functional [12]

\ell(x)=\sum\limits_{\beta=0}^{k}C[\beta]\delta(x-h\beta)+h\sum\limits_{\beta=0}^{k-1}C_{1}[\beta]\delta'(x-h\beta), \qquad (2.3)

where $\delta(x)$ is Dirac's delta-function. We note that $(\ell,\varphi)$ is the value of the error functional $\ell$ at a function $\varphi$ and it is defined as [13, 14]

(\ell,\varphi)=\int\limits_{-\infty}^{\infty}\ell(x)\varphi(x)\,dx.

It should also be noted that, since the error functional $\ell$ is defined on the space $W_2^{(2,1)}(0,1)$, it satisfies the conditions

(\ell,1)=0,\qquad (\ell,e^{-x})=0.

These give us the following equations with respect to coefficients $C[\beta]$ and $C_{1}[\beta]$:

\sum\limits_{\beta=0}^{k}C[\beta]=0, \qquad (2.4)

\sum\limits_{\beta=0}^{k}C[\beta]e^{-h\beta}+h\sum\limits_{\beta=0}^{k-1}C_{1}[\beta]e^{-h\beta}=0. \qquad (2.5)

Based on the Cauchy-Schwarz inequality, for the absolute value of the error of the formula (2.1) we have the estimate

|(\ell,\varphi)|\leq\|\varphi|W_{2}^{(2,1)}\|\cdot\|\ell|W_{2}^{(2,1)*}\|.

Hence, the absolute error of the difference formula (2.1) in the space $W_2^{(2,1)}$ is estimated by the norm of the error functional $\ell$ on the conjugate space $W_2^{(2,1)*}$. From this we get the following [15].

Problem 1. Calculate the norm $\|\ell|W_{2}^{(2,1)*}\|$ of the error functional $\ell$.

From the formula (2.3) one can see that the norm $\|\ell|W_{2}^{(2,1)*}\|$ depends on the coefficients $C[\beta]$ and $C_{1}[\beta]$.

Problem 2. Find coefficients $C_{1}[\beta]=\mathring{C}_{1}[\beta]$ that satisfy the equality

\|\mathring{\ell}|W_{2}^{(2,1)*}\|=\inf_{C_{1}[\beta]}\sup_{\|\varphi|W_{2}^{(2,1)}\|\neq 0}\frac{|(\ell,\varphi)|}{\|\varphi|W_{2}^{(2,1)}\|}.

In this case $\mathring{C}_{1}[\beta]$ are called the optimal coefficients and the corresponding difference formula (2.1) is called the optimal difference formula.

A function $\psi_{\ell}$ satisfying the following equation is called the extremal function of the difference formula (2.1) [13]:

(\ell,\psi_{\ell})=\|\ell|W_{2}^{(2,1)*}\|\cdot\|\psi_{\ell}|W_{2}^{(2,1)}\|. \qquad (2.6)

Since the space $W_2^{(2,1)}(0,1)$ is a Hilbert space, by the Riesz theorem on the general form of a linear continuous functional on a Hilbert space there exists a function $\psi_{\ell}$ (which is the extremal function) that satisfies the equation [16, 17]

(\ell,\varphi)=\langle\varphi,\psi_{\ell}\rangle_{W_{2}^{(2,1)}} \qquad (2.7)

and the equality $\|\ell|W_{2}^{(2,1)*}\|=\|\psi_{\ell}|W_{2}^{(2,1)}\|$ holds. Here $\langle\varphi,\psi_{\ell}\rangle_{W_{2}^{(2,1)}}$ is the inner product in the space $W_2^{(2,1)}(0,1)$, defined, in accordance with the norm (2.2), as [18]

\langle\varphi,\psi\rangle_{W_{2}^{(2,1)}}=\int\limits_{0}^{1}\big(\varphi''(x)+\varphi'(x)\big)\big(\psi''(x)+\psi'(x)\big)\,dx.

Theorem 2.1.

The solution of equation (2.7) has the form

\psi_{\ell}(x)=\ell(x)*G_{2}(x)+de^{-x}+p_{0} \qquad (2.8)

and it is an extremal function for the difference formula (2.1), where $G_{2}(x)=\frac{\operatorname{sgn}(x)}{2}\Big(\frac{e^{x}-e^{-x}}{2}-x\Big)$, and $d$ and $p_{0}$ are real numbers.

According to the above-mentioned Riesz theorem, the following equalities hold:

\|\ell|W_{2}^{(2,1)*}\|^{2}=(\ell,\psi_{\ell})=\|\ell|W_{2}^{(2,1)*}\|\cdot\|\psi_{\ell}|W_{2}^{(2,1)}\|.

By direct calculation from the last equality for the norm of the error functional for the difference formula (2.1) we have the following result [18].

Theorem 2.2.

For the norm of the error functional of the difference formula (2.1) we have the following expression

\|\ell|W_{2}^{(2,1)*}\|^{2}=\sum_{\gamma=0}^{k}\sum_{\beta=0}^{k}C[\gamma]C[\beta]G_{2}(h\gamma-h\beta)-2h\sum_{\gamma=0}^{k-1}C_{1}[\gamma]\sum_{\beta=0}^{k}C[\beta]G_{2}'(h\gamma-h\beta)-h^{2}\sum_{\gamma=0}^{k-1}\sum_{\beta=0}^{k-1}C_{1}[\gamma]C_{1}[\beta]G_{2}''(h\gamma-h\beta), \qquad (2.9)

where $G_{2}'(x)=\frac{\operatorname{sgn}(x)}{2}\Big(\frac{e^{x}+e^{-x}}{2}-1\Big)$ and $G_{2}''(x)=\frac{\operatorname{sgn}(x)}{2}\cdot\frac{e^{x}-e^{-x}}{2}$.
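The expressions for $G_{2}'$ and $G_{2}''$, as well as the fact that $G_{2}$ solves the homogeneous equation away from zero, can be checked symbolically for $x>0$ (where $\operatorname{sgn}(x)=1$); a short sympy sketch of such a check (our own):

```python
import sympy as sp

x = sp.symbols('x', positive=True)  # on x > 0 we have sgn(x) = 1

G2   = sp.Rational(1, 2) * ((sp.exp(x) - sp.exp(-x)) / 2 - x)
G2p  = sp.Rational(1, 2) * ((sp.exp(x) + sp.exp(-x)) / 2 - 1)
G2pp = sp.Rational(1, 2) * (sp.exp(x) - sp.exp(-x)) / 2

print(sp.simplify(sp.diff(G2, x) - G2p))    # 0: G2' is indeed the derivative of G2
print(sp.simplify(sp.diff(G2p, x) - G2pp))  # 0: G2'' is indeed the derivative of G2'
print(sp.simplify(sp.diff(G2, x, 4) - sp.diff(G2, x, 2)))  # 0: G2^{(4)} - G2'' = 0 for x > 0
```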

It is known that stability in the Dahlquist sense, just like strong stability, is determined only by the coefficients $C[\beta]$, $\beta=\overline{0,k}$. For this reason, our search for the optimal formula is related only to finding $C_{1}[\beta]$. Therefore, in this section we consider difference formulas of the Adams-Bashforth type, i.e. $C[k]=-C[k-1]=1$ and $C[k-i]=0$, $i=\overline{2,k}$ [19, 20]. Then it is easy to check that these coefficients satisfy the condition (2.4).

In this work we find the minimum of the norm (2.9) with respect to the coefficients $C_{1}[\beta]$ under the condition (2.5) in the space $W_2^{(2,1)}(0,1)$ [21]. Using the Lagrange method of undetermined multipliers, we get the following system of linear equations with respect to the coefficients $C_{1}[\beta]$:

h\sum_{\gamma=0}^{k-1}C_{1}[\gamma]G_{2}''(h\beta-h\gamma)+de^{-h\beta}=-\sum_{\gamma=0}^{k}C[\gamma]G_{2}'(h\beta-h\gamma), \qquad (2.10)

\beta=\overline{0,k-1},

h\sum_{\gamma=0}^{k-1}C_{1}[\gamma]e^{-h\gamma}=-\sum_{\gamma=0}^{k}C[\gamma]e^{-h\gamma}. \qquad (2.11)

It is easy to prove that the solution of this system gives the minimum value to the expression (2.9) under the condition (2.5). Here $d$ is an unknown constant and $\mathring{C}_{1}[\beta]$ are the optimal coefficients. Taking into account that $C[k]=1$, $C[k-1]=-1$, $C[k-i]=0$, $i=\overline{2,k}$, the system (2.10)-(2.11) is reduced to the form

h\sum_{\gamma=0}^{k-1}\mathring{C}_{1}[\gamma]G_{2}''(h\beta-h\gamma)+de^{-h\beta}=f[\beta],\qquad \beta=\overline{0,k-1}, \qquad (2.12)
h\sum_{\gamma=0}^{k-1}\mathring{C}_{1}[\gamma]e^{-h\gamma}=g, \qquad (2.13)

where

f[\beta]=\frac{1-e^{h}}{4}\left(e^{h\beta-hk}-e^{-h\beta+hk-h}\right), \qquad (2.14)
g=e^{-hk+h}-e^{-hk}. \qquad (2.15)

Assuming that $C_{1}[\beta]=0$ for $\beta<0$ and $\beta>k-1$, we rewrite the system (2.12)-(2.13) in the convolution form

\left\{\begin{array}{l} h\mathring{C}_{1}[\beta]*G_{2}''(h\beta)+de^{-h\beta}=f[\beta]\ \ \text{for}\ \beta=\overline{0,k-1},\\[4pt] h\sum_{\gamma=0}^{k-1}\mathring{C}_{1}[\gamma]e^{-h\gamma}=g. \end{array}\right. \qquad (2.16)

We denote the left-hand side of the first equation of the system (2.16) by $U_{exp}[\beta]$:

U_{exp}[\beta]=h\mathring{C}_{1}[\beta]*G_{2}''(h\beta)+de^{-h\beta}. \qquad (2.17)

(2.12) implies that

U_{exp}[\beta]=f[\beta]\ \ \text{for}\ \beta=\overline{0,k-1}. \qquad (2.18)

Now calculating the convolution we have

U_{exp}[\beta]=h\mathring{C}_{1}[\beta]*G_{2}''(h\beta)+de^{-h\beta}=h\sum_{\gamma=0}^{k-1}\mathring{C}_{1}[\gamma]G_{2}''(h\beta-h\gamma)+de^{-h\beta}.

For $\beta<0$ we get

U_{exp}[\beta]=h\sum_{\gamma=0}^{k-1}\mathring{C}_{1}[\gamma]\frac{\operatorname{sgn}(h\beta-h\gamma)}{2}\left(\frac{e^{h\beta-h\gamma}-e^{-h\beta+h\gamma}}{2}\right)+de^{-h\beta}
=-\frac{e^{h\beta}}{4}h\sum_{\gamma=0}^{k-1}\mathring{C}_{1}[\gamma]e^{-h\gamma}+\frac{e^{-h\beta}}{4}h\sum_{\gamma=0}^{k-1}\mathring{C}_{1}[\gamma]e^{h\gamma}+de^{-h\beta}=-\frac{e^{h\beta}}{4}g+e^{-h\beta}(d+b),

where we denote $b=\frac{h}{4}\sum_{\gamma=0}^{k-1}\mathring{C}_{1}[\gamma]e^{h\gamma}$ and use the second equation of (2.16).

For $\beta>k-1$

U_{exp}[\beta]=\frac{e^{h\beta}}{4}g+e^{-h\beta}(d-b).

Setting $d^{+}=d+b$ and $d^{-}=d-b$, the function $U_{exp}[\beta]$ becomes

U_{exp}[\beta]=\left\{\begin{array}{ll}\frac{e^{h\beta}}{4}g+e^{-h\beta}d^{-}&\text{for}\ \beta>k-1,\\[2pt] f[\beta]&\text{for}\ \beta=\overline{0,k-1},\\[2pt] -\frac{e^{h\beta}}{4}g+e^{-h\beta}d^{+}&\text{for}\ \beta<0.\end{array}\right. \qquad (2.19)

To find the unknowns $d^{+}$ and $d^{-}$ we use the discrete analogue $D_{1}[\beta]$ of the differential operator $d^{2}/dx^{2}-1$, which is given below [22]:

D_{1}[\beta]=\frac{1}{1-e^{2h}}\left\{\begin{array}{ll}-2e^{h}&\text{for}\ |\beta|=1,\\[2pt] 2(1+e^{2h})&\text{for}\ \beta=0,\\[2pt] 0&\text{for}\ |\beta|\geq 2.\end{array}\right. \qquad (2.20)
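A quick numerical sanity check (our own, in Python; the step $h=0.1$ is an arbitrary sample value) illustrates the two properties of $D_{1}$ that are used below: it annihilates $e^{-h\beta}$, and its convolution with $G_{2}''(h\beta)$ gives the discrete delta function.

```python
import math

h = 0.1  # sample step, our choice

def D1(beta):
    """Discrete operator (2.20)."""
    if abs(beta) == 1:
        return -2 * math.exp(h) / (1 - math.exp(2 * h))
    if beta == 0:
        return 2 * (1 + math.exp(2 * h)) / (1 - math.exp(2 * h))
    return 0.0

def G2pp(x):
    """G2''(x) = sgn(x)/2 * (e^x - e^{-x})/2."""
    return math.copysign(1.0, x) * (math.exp(x) - math.exp(-x)) / 4 if x != 0 else 0.0

for beta in range(-3, 4):
    conv_exp  = sum(D1(g) * math.exp(-h * (beta - g)) for g in (-1, 0, 1))
    conv_G2pp = sum(D1(g) * G2pp(h * (beta - g)) for g in (-1, 0, 1))
    print(beta, round(conv_exp, 12), round(conv_G2pp, 12))
# conv_exp is 0 for every beta; conv_G2pp equals 1 at beta = 0 and 0 otherwise (up to rounding).
```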

The unknowns $d^{+}$ and $d^{-}$ are determined from the conditions

\mathring{C}_{1}[\beta]=h^{-1}D_{1}[\beta]*U_{exp}[\beta]=0\ \ \text{for}\ \beta<0\ \text{and}\ \beta>k-1. \qquad (2.21)

Calculate the convolution

h^{-1}D_{1}[\beta]*U_{exp}[\beta]
=h^{-1}\sum_{\gamma=1}^{\infty}D_{1}[\beta+\gamma]U_{exp}[-\gamma]+h^{-1}\sum_{\gamma=0}^{k-1}D_{1}[\beta-\gamma]U_{exp}[\gamma]
+h^{-1}\sum_{\gamma=1}^{\infty}D_{1}[\beta-k-\gamma+1]U_{exp}[k+\gamma-1].

From (2.21), taking $\beta=-1$ and $\beta=k$, we have

\left\{\begin{array}{l} h^{-1}D_{1}[0]U_{exp}[-1]+h^{-1}D_{1}[1]U_{exp}[-2]+h^{-1}D_{1}[-1]U_{exp}[0]=0,\\[2pt] h^{-1}D_{1}[0]U_{exp}[k]+h^{-1}D_{1}[1]U_{exp}[k-1]+h^{-1}D_{1}[-1]U_{exp}[k+1]=0.\end{array}\right.

Hence, using (2.19) and (2.20), we get

\left\{\begin{array}{l} 2(1+e^{2h})\left[-\tfrac{1}{4}e^{-h}g+e^{h}d^{+}\right]-2e^{h}\left[-\tfrac{1}{4}e^{-2h}g+e^{2h}d^{+}\right]-2e^{h}f[0]=0,\\[4pt] 2(1+e^{2h})\left[\tfrac{1}{4}e^{hk}g+e^{-hk}d^{-}\right]-2e^{h}\left[\tfrac{1}{4}e^{hk+h}g+e^{-hk-h}d^{-}\right]-2e^{h}f[k-1]=0.\end{array}\right.

From the first equation $d^{+}$ is equal to the following:

d^{+}=\frac{e^{hk}-e^{hk-h}}{4}.

From the second equation $d^{-}$ is equal to the following:

d^{-}=\frac{e^{hk}-3e^{hk-h}+2e^{hk-2h}}{4}.

Therefore,

d=\frac{d^{+}+d^{-}}{2}=\frac{e^{hk}-2e^{hk-h}+e^{hk-2h}}{4}\qquad \text{and}\qquad b=\frac{d^{+}-d^{-}}{2}=\frac{e^{hk-h}-e^{hk-2h}}{4}.
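The obtained values of $d^{+}$ and $d^{-}$ can be checked by substituting them back into the system above; the following sympy sketch (our own check, with sample values $h=1/10$, $k=5$) evaluates the residuals of the two equations.

```python
import sympy as sp

h, k = sp.symbols('h k', positive=True)

g     = sp.exp(-h*k + h) - sp.exp(-h*k)                              # (2.15)
f0    = (1 - sp.exp(h)) / 4 * (sp.exp(-h*k) - sp.exp(h*k - h))       # f[0] from (2.14)
f_km1 = (1 - sp.exp(h)) / 4 * (sp.exp(-h) - 1)                       # f[k-1] from (2.14)
d_p   = (sp.exp(h*k) - sp.exp(h*k - h)) / 4                          # claimed d^+
d_m   = (sp.exp(h*k) - 3*sp.exp(h*k - h) + 2*sp.exp(h*k - 2*h)) / 4  # claimed d^-

eq1 = (2*(1 + sp.exp(2*h))*(-g*sp.exp(-h)/4 + sp.exp(h)*d_p)
       - 2*sp.exp(h)*(-g*sp.exp(-2*h)/4 + sp.exp(2*h)*d_p) - 2*sp.exp(h)*f0)
eq2 = (2*(1 + sp.exp(2*h))*(g*sp.exp(h*k)/4 + sp.exp(-h*k)*d_m)
       - 2*sp.exp(h)*(g*sp.exp(h*k + h)/4 + sp.exp(-h*k - h)*d_m) - 2*sp.exp(h)*f_km1)

vals = {h: sp.Rational(1, 10), k: 5}
print(eq1.subs(vals).evalf(), eq2.subs(vals).evalf())  # both residuals are zero up to rounding
```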

Now we calculate the optimal coefficients $\mathring{C}_{1}[\beta]$:

\mathring{C}_{1}[\beta]=h^{-1}D_{1}[\beta]*U_{exp}[\beta]=h^{-1}\sum_{\gamma=-\infty}^{\infty}D_{1}[\beta-\gamma]U_{exp}[\gamma],\qquad \beta=\overline{0,k-1}.

Let $\beta=k-1$, then

\mathring{C}_{1}[k-1]=h^{-1}\sum_{\gamma=-\infty}^{\infty}D_{1}[k-1-\gamma]U_{exp}[\gamma]
=h^{-1}\left\{D_{1}[0]U_{exp}[k-1]+D_{1}[1]U_{exp}[k-2]+D_{1}[-1]U_{exp}[k]\right\}
=\frac{h^{-1}}{1-e^{2h}}\left\{1-e^{-h}+e^{h}-e^{2h}\right\}=\frac{e^{h}-1}{he^{h}},

thus $\mathring{C}_{1}[k-1]=\frac{e^{h}-1}{he^{h}}$ for $\beta=k-1$.

Now we compute $\mathring{C}_{1}[0]$:

\mathring{C}_{1}[0]=h^{-1}\sum_{\gamma=-\infty}^{\infty}D_{1}[-\gamma]U_{exp}[\gamma]
=h^{-1}\left\{D_{1}[0]U_{exp}[0]+D_{1}[1]U_{exp}[-1]+D_{1}[-1]U_{exp}[1]\right\}
=\frac{h^{-1}}{2(1-e^{2h})}\big\{(1+e^{2h})(e^{-hk}-e^{hk-h}-e^{-hk+h}+e^{hk})
-e^{h}(-e^{-hk}+e^{-hk-h}+e^{hk+h}-e^{hk})\big\}
-\frac{h^{-1}}{2(1-e^{2h})}\left\{e^{h}(e^{-hk+h}-e^{hk-2h}-e^{-hk+2h}+e^{hk-h})\right\}=\frac{h^{-1}}{2(1-e^{2h})}\cdot 0=0,

hence $\mathring{C}_{1}[0]=0$ for $\beta=0$.

Now we calculate $\mathring{C}_{1}[\beta]$ for $\beta=\overline{1,k-2}$:

\mathring{C}_{1}[\beta]=h^{-1}\sum_{\gamma=-\infty}^{\infty}D_{1}[\beta-\gamma]U_{exp}[\gamma]
=h^{-1}\left\{D_{1}[0]U_{exp}[\beta]+D_{1}[1]U_{exp}[\beta-1]+D_{1}[-1]U_{exp}[\beta+1]\right\}
=\frac{h^{-1}}{2(1-e^{2h})}\left\{(1+e^{2h})(1-e^{h})(e^{-hk+h\beta}-e^{hk-h\beta-h})\right\}
-\frac{h^{-1}}{2(1-e^{2h})}\left\{e^{h}(1-e^{h})(e^{-hk+h\beta-h}-e^{hk-h\beta})\right\}
-\frac{h^{-1}}{2(1-e^{2h})}\left\{e^{h}(1-e^{h})(e^{-hk+h\beta+h}-e^{hk-h\beta-2h})\right\}=\frac{h^{-1}}{2(1-e^{2h})}\cdot 0=0,

thereby $\mathring{C}_{1}[\beta]=0$ for $\beta=\overline{1,k-2}$.

Finally, we have proved the following theorem.

Theorem 2.3.

In the Hilbert space $W_2^{(2,1)}(0,1)$ there is a unique optimal explicit difference formula of the Adams-Bashforth type whose coefficients are determined by the following expressions:

C[\beta]=\left\{\begin{array}{ll}1&\text{for}\ \beta=k,\\ -1&\text{for}\ \beta=k-1,\\ 0&\text{for}\ \beta=\overline{0,k-2},\end{array}\right. \qquad (2.22)
\mathring{C}_{1}[\beta]=\left\{\begin{array}{ll}\frac{e^{h}-1}{he^{h}}&\text{for}\ \beta=k-1,\\[2pt] 0&\text{for}\ \beta=\overline{0,k-2}.\end{array}\right. \qquad (2.23)

Thus, the optimal explicit difference formula in $W_2^{(2,1)}(0,1)$ has the form

\varphi_{n+k}=\varphi_{n+k-1}+\frac{e^{h}-1}{e^{h}}\varphi'_{n+k-1}, \qquad (2.24)

where $n=0,1,\ldots,N-k$, $k\geq 1$.
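A minimal Python sketch of the optimal explicit formula (2.24) applied to the initial-value problem (1.1)-(1.2). Since only the coefficients at the last two nodes are nonzero, (2.24) is effectively a one-step recursion. The test equation $y'=-y$, $y(0)=1$ is our illustrative choice; it demonstrates the exactness of the formula for $e^{-x}$.

```python
import math

def optimal_explicit(f, y0, N):
    """Optimal explicit formula (2.24): y_{n+1} = y_n + (e^h - 1)/e^h * f(x_n, y_n)."""
    h = 1.0 / N
    c = (math.exp(h) - 1.0) / math.exp(h)  # h * C1[k-1] from Theorem 2.3
    ys = [y0]
    for n in range(N):
        ys.append(ys[-1] + c * f(n * h, ys[-1]))
    return ys

# Illustrative test problem: y' = -y, y(0) = 1, exact solution e^{-x}.
ys = optimal_explicit(lambda x, y: -y, 1.0, 10)
print(abs(ys[-1] - math.exp(-1.0)))  # of the order of machine precision: (2.24) is exact for e^{-x}
```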

3. Optimal implicit difference formulas of Adams-Moulton type in the Hilbert space $W_2^{(2,1)}(0,1)$

Consider an implicit difference formula of the form

\sum_{\beta=0}^{k}C[\beta]\varphi[\beta]-h\sum_{\beta=0}^{k}C_{1}[\beta]\varphi'[\beta]\cong 0 \qquad (3.1)

with the error functional

\ell(x)=\sum_{\beta=0}^{k}C[\beta]\delta(x-h\beta)+h\sum_{\beta=0}^{k}C_{1}[\beta]\delta'(x-h\beta) \qquad (3.2)

in the space $W_2^{(2,1)}(0,1)$.
In this section we also consider the case $C[k]=-C[k-1]=1$ and $C[k-i]=0$, $i=\overline{2,k}$, i.e. a formula of Adams-Moulton type. Minimizing the norm of the error functional (3.2) of an implicit difference formula of the form (3.1) with respect to the coefficients $C_{1}[\beta]$, $\beta=\overline{0,k}$, in the space $W_2^{(2,1)}(0,1)$, we obtain the system of linear algebraic equations

\left\{\begin{array}{l} h\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]G_{2}''(h\beta-h\gamma)+de^{-h\beta}=f[\beta],\qquad \beta=\overline{0,k},\\[4pt] h\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]e^{-h\gamma}=g.\end{array}\right.

Here $\mathring{C}_{1}[\beta]$, $\beta=\overline{0,k}$, are the unknown coefficients of the implicit difference formula (3.1), $d$ is an unknown constant,

f[\beta]=G_{2}'(h\beta-hk+h)-G_{2}'(h\beta-hk)
=\left\{\begin{array}{ll}\frac{1}{4}\left(1-e^{h}\right)\left(e^{-hk+h\beta}-e^{hk-h\beta-h}\right),&\beta=\overline{0,k-1},\\[2pt] \frac{1}{4}\left(e^{h}+e^{-h}-2\right),&\beta=k,\end{array}\right. \qquad (3.3)
g=e^{-hk+h}-e^{-hk}. \qquad (3.4)

Assuming, in general, that

\mathring{C}_{1}[\beta]=0\ \ \text{for}\ \beta<0\ \text{and}\ \beta>k, \qquad (3.5)

we rewrite the system in the convolution form

\left\{\begin{array}{l} h\mathring{C}_{1}[\beta]*G_{2}''(h\beta)+de^{-h\beta}=f[\beta],\qquad \beta=\overline{0,k},\\[4pt] h\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]e^{-h\gamma}=g.\end{array}\right.

Denote $U_{imp}[\beta]=h\mathring{C}_{1}[\beta]*G_{2}''(h\beta)+de^{-h\beta}$. Then

U_{imp}[\beta]=f[\beta]\ \ \text{for}\ \beta=\overline{0,k}. \qquad (3.6)

Now we find $U_{imp}[\beta]$ for $\beta<0$ and $\beta>k$. Let $\beta<0$, then

U_{imp}[\beta]=h\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]\frac{\operatorname{sgn}(h\beta-h\gamma)}{2}\left(\frac{e^{h\beta-h\gamma}-e^{-h\beta+h\gamma}}{2}\right)+de^{-h\beta}
=-\frac{e^{h\beta}}{4}h\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]e^{-h\gamma}+\frac{e^{-h\beta}}{4}h\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]e^{h\gamma}+de^{-h\beta}.

Here $d^{+}$ is defined by the equality

d^{+}=\frac{h}{4}\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]e^{h\gamma}+d, \qquad (3.7)

so that, by the second equation of the system, $U_{imp}[\beta]=-\frac{e^{h\beta}}{4}g+e^{-h\beta}d^{+}$ for $\beta<0$.

Similarly, for $\beta>k$ we have

U_{imp}[\beta]=h\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]\frac{\operatorname{sgn}(h\beta-h\gamma)}{2}\left(\frac{e^{h\beta-h\gamma}-e^{-h\beta+h\gamma}}{2}\right)+de^{-h\beta}
=\frac{e^{h\beta}}{4}h\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]e^{-h\gamma}-\frac{e^{-h\beta}}{4}h\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]e^{h\gamma}+de^{-h\beta}.

Here $d^{-}$ is defined by the equality

d^{-}=-\frac{h}{4}\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]e^{h\gamma}+d, \qquad (3.8)

so that $U_{imp}[\beta]=\frac{e^{h\beta}}{4}g+e^{-h\beta}d^{-}$ for $\beta>k$.

(3.7) and (3.8) immediately imply that

d=\frac{d^{+}+d^{-}}{2}. \qquad (3.9)

So $U_{imp}[\beta]$ for any $\beta\in\mathbb{Z}$ is defined by the formula

U_{imp}[\beta]=\left\{\begin{array}{ll}\frac{e^{h\beta}}{4}g+e^{-h\beta}d^{-}&\text{for}\ \beta>k,\\[2pt] f[\beta]&\text{for}\ \beta=\overline{0,k},\\[2pt] -\frac{e^{h\beta}}{4}g+e^{-h\beta}d^{+}&\text{for}\ \beta<0.\end{array}\right. \qquad (3.10)

Applying the operator (2.20) to $U_{imp}[\beta]$, we get

\mathring{C}_{1}[\beta]=h^{-1}D_{1}[\beta]*U_{imp}[\beta],\qquad \beta\in\mathbb{Z}. \qquad (3.11)

Assuming that $\mathring{C}_{1}[\beta]=0$ for $\beta<0$ and $\beta>k$, we get a system of linear equations for finding the unknowns $d^{+}$ and $d^{-}$ in the formula (3.10). Indeed, calculating the convolution, we have

h^{-1}D_{1}[\beta]*U_{imp}[\beta]=h^{-1}\sum_{\gamma=-\infty}^{\infty}D_{1}[\beta-\gamma]U_{imp}[\gamma]
=h^{-1}\sum_{\gamma=-\infty}^{-1}D_{1}[\beta-\gamma]U_{imp}[\gamma]+h^{-1}\sum_{\gamma=0}^{k}D_{1}[\beta-\gamma]U_{imp}[\gamma]+h^{-1}\sum_{\gamma=k+1}^{\infty}D_{1}[\beta-\gamma]U_{imp}[\gamma]
=h^{-1}\sum_{\gamma=1}^{\infty}D_{1}[\beta+\gamma]U_{imp}[-\gamma]+h^{-1}\sum_{\gamma=0}^{k}D_{1}[\beta-\gamma]U_{imp}[\gamma]
+h^{-1}\sum_{\gamma=1}^{\infty}D_{1}[\beta-k-\gamma]U_{imp}[k+\gamma]. \qquad (3.12)

Equating the expression (3.12) to zero for $\beta=-1$ and $\beta=k+1$ and using the formulas (3.10) and (2.20), we get

\left\{\begin{array}{l} h^{-1}D_{1}[0]U_{imp}[-1]+h^{-1}D_{1}[1]U_{imp}[-2]+h^{-1}D_{1}[-1]U_{imp}[0]=0,\\[2pt] h^{-1}D_{1}[0]U_{imp}[k+1]+h^{-1}D_{1}[1]U_{imp}[k]+h^{-1}D_{1}[-1]U_{imp}[k+2]=0,\end{array}\right.

or

\left\{\begin{array}{l} 2(1+e^{2h})\left[-\tfrac{1}{4}e^{-h}g+e^{h}d^{+}\right]-2e^{h}\left[-\tfrac{1}{4}e^{-2h}g+e^{2h}d^{+}\right]-2e^{h}f[0]=0,\\[4pt] 2(1+e^{2h})\left[\tfrac{1}{4}e^{hk+h}g+e^{-hk-h}d^{-}\right]-2e^{h}\left[\tfrac{1}{4}e^{hk+2h}g+e^{-hk-2h}d^{-}\right]-2e^{h}f[k]=0.\end{array}\right.

By virtue of the formulas (3.3) and (3.4), finally, we find

d^{+}=\frac{1}{4}\left(e^{hk}-e^{hk-h}\right), \qquad (3.13)
d^{-}=\frac{1}{4}\left(e^{hk-h}-e^{hk}\right). \qquad (3.14)

Then from (3.9) we find that $d=0$.

As a result, using (3.13) and (3.14), we rewrite $U_{imp}[\beta]$ as follows:

U_{imp}[\beta]=\left\{\begin{array}{ll}\frac{e^{h\beta}}{4}g+\frac{e^{-h\beta}}{4}\left(e^{hk-h}-e^{hk}\right)&\text{for}\ \beta>k,\\[2pt] f[\beta]&\text{for}\ \beta=\overline{0,k},\\[2pt] -\frac{e^{h\beta}}{4}g+\frac{e^{-h\beta}}{4}\left(e^{hk}-e^{hk-h}\right)&\text{for}\ \beta<0.\end{array}\right. \qquad (3.15)

Now we turn to calculating the optimal coefficients $\mathring{C}_{1}[\beta]$, $\beta=\overline{0,k}$, of the implicit difference formula according to the formula (3.11):

\mathring{C}_{1}[k]=h^{-1}\sum_{\gamma=-\infty}^{\infty}D_{1}[k-\gamma]U_{imp}[\gamma]
=h^{-1}\left\{D_{1}[0]U_{imp}[k]+D_{1}[1]U_{imp}[k-1]+D_{1}[-1]U_{imp}[k+1]\right\}
=\frac{h^{-1}}{2\left(1-e^{2h}\right)}\left(-2e^{2h}+4e^{h}-2\right)=\frac{e^{h}-1}{h\left(e^{h}+1\right)}.

So $\mathring{C}_{1}[k]=\frac{e^{h}-1}{h\left(e^{h}+1\right)}$.

Calculate the next optimal coefficient

\mathring{C}_{1}[k-1]=h^{-1}\sum_{\gamma=-\infty}^{\infty}D_{1}[k-1-\gamma]U_{imp}[\gamma]
=h^{-1}\left\{D_{1}[0]U_{imp}[k-1]+D_{1}[1]U_{imp}[k-2]+D_{1}[-1]U_{imp}[k]\right\}
=\frac{h^{-1}}{2\left(1-e^{2h}\right)}\left(-2e^{2h}+4e^{h}-2\right)=\frac{e^{h}-1}{h\left(e^{h}+1\right)}.

Thus $\mathring{C}_{1}[k-1]=\frac{e^{h}-1}{h\left(e^{h}+1\right)}$.

Next we compute $\mathring{C}_{1}[\beta]$ for $\beta=\overline{1,k-2}$:

\mathring{C}_{1}[\beta]=h^{-1}\sum_{\gamma=-\infty}^{\infty}D_{1}[\beta-\gamma]U_{imp}[\gamma]
=h^{-1}\left\{D_{1}[0]U_{imp}[\beta]+D_{1}[1]U_{imp}[\beta-1]+D_{1}[-1]U_{imp}[\beta+1]\right\}
=\frac{h^{-1}}{2\left(1-e^{2h}\right)}\left\{\left(1+e^{2h}\right)\left(e^{-hk+h\beta}-e^{hk-h\beta-h}-e^{-hk+h\beta+h}+e^{hk-h\beta}\right)\right\}
-\frac{h^{-1}}{2\left(1-e^{2h}\right)}\left\{e^{h}\left(e^{-hk+h\beta-h}-e^{hk-h\beta}-e^{-hk+h\beta}+e^{hk-h\beta+h}\right)\right\}
-\frac{h^{-1}}{2\left(1-e^{2h}\right)}\left\{e^{h}\left(e^{-hk+h\beta+h}-e^{hk-h\beta-2h}-e^{-hk+h\beta+2h}+e^{hk-h\beta-h}\right)\right\}
=\frac{h^{-1}}{2\left(1-e^{2h}\right)}\cdot 0=0.

Thereby $\mathring{C}_{1}[\beta]=0$ for $\beta=\overline{1,k-2}$.

Finally, we calculate $\mathring{C}_{1}[0]$:

\mathring{C}_{1}[0]=h^{-1}\sum_{\gamma=-\infty}^{\infty}D_{1}[-\gamma]U_{imp}[\gamma]
=h^{-1}\left\{D_{1}[0]U_{imp}[0]+D_{1}[1]U_{imp}[-1]+D_{1}[-1]U_{imp}[1]\right\}
=\frac{h^{-1}}{2\left(1-e^{2h}\right)}\left\{\left(1+e^{2h}\right)\left(e^{-hk}-e^{hk-h}-e^{-hk+h}+e^{hk}\right)\right\}
-\frac{h^{-1}}{2\left(1-e^{2h}\right)}\left\{e^{h}\left(-e^{-hk}+e^{-hk-h}+e^{hk+h}-e^{hk}\right)\right\}
-\frac{h^{-1}}{2\left(1-e^{2h}\right)}\left\{e^{h}\left(e^{-hk+h}-e^{hk-2h}-e^{-hk+2h}+e^{hk-h}\right)\right\}=\frac{h^{-1}}{2\left(1-e^{2h}\right)}\cdot 0=0,

hence $\mathring{C}_{1}[0]=0$.

Finally, we have proved the following.

Theorem 3.1.

In the Hilbert space $W_2^{(2,1)}(0,1)$ there exists a unique optimal implicit difference formula of Adams-Moulton type whose coefficients are determined by the formulas

C[\beta]=\left\{\begin{array}{ll}1&\text{for}\ \beta=k,\\ -1&\text{for}\ \beta=k-1,\\ 0&\text{for}\ \beta=\overline{0,k-2},\end{array}\right. \qquad (3.16)
\mathring{C}_{1}[\beta]=\left\{\begin{array}{ll}\frac{e^{h}-1}{h\left(e^{h}+1\right)}&\text{for}\ \beta=k,\\[2pt] \frac{e^{h}-1}{h\left(e^{h}+1\right)}&\text{for}\ \beta=k-1,\\[2pt] 0&\text{for}\ \beta=\overline{0,k-2}.\end{array}\right. \qquad (3.17)

Consequently, the optimal implicit difference formula in $W_2^{(2,1)}(0,1)$ has the form

\varphi_{n+k}=\varphi_{n+k-1}+\frac{e^{h}-1}{e^{h}+1}\left(\varphi'_{n+k}+\varphi'_{n+k-1}\right), \qquad (3.18)

where $n=0,1,\ldots,N-k$, $k\geq 1$.
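A minimal Python sketch of the optimal implicit formula (3.18). The paper does not prescribe how the implicit equation is to be resolved at each step; in the sketch below it is resolved by a few fixed-point iterations started from the explicit predictor (2.24), which is our own choice. The test problem $y'=-y+x+1$, $y(0)=1$ with exact solution $y=x+e^{-x}$ is also only an illustration.

```python
import math

def optimal_implicit(f, y0, N, iterations=3):
    """Optimal implicit formula (3.18):
    y_{n+1} = y_n + (e^h - 1)/(e^h + 1) * (f(x_{n+1}, y_{n+1}) + f(x_n, y_n))."""
    h = 1.0 / N
    c_expl = (math.exp(h) - 1.0) / math.exp(h)          # predictor weight, from (2.24)
    c_impl = (math.exp(h) - 1.0) / (math.exp(h) + 1.0)  # corrector weight, from (3.18)
    ys = [y0]
    for n in range(N):
        x0, x1, yn = n * h, (n + 1) * h, ys[-1]
        y1 = yn + c_expl * f(x0, yn)                    # predictor step
        for _ in range(iterations):                     # corrector (fixed-point) iterations
            y1 = yn + c_impl * (f(x1, y1) + f(x0, yn))
        ys.append(y1)
    return ys

# Illustrative test problem: y' = -y + x + 1, y(0) = 1, exact solution y = x + e^{-x}.
ys = optimal_implicit(lambda x, y: -y + x + 1, 1.0, 10)
print(abs(ys[-1] - (1.0 + math.exp(-1.0))))  # error at x = 1
```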

4. Norm of the error functional of the optimal explicit difference formula

The square of the norm of the error functional of an explicit difference formula of Adams-Bashforth type is expressed by the equality

\left\|\ell|W_{2}^{(2,1)*}(0,1)\right\|^{2}=\sum_{\gamma=0}^{k}\sum_{\beta=0}^{k}C[\gamma]C[\beta]G_{2}[\gamma-\beta]-2h\sum_{\gamma=0}^{k-1}C_{1}[\gamma]\sum_{\beta=0}^{k}C[\beta]G_{2}'[\gamma-\beta]-h^{2}\sum_{\gamma=0}^{k-1}\sum_{\beta=0}^{k-1}C_{1}[\gamma]C_{1}[\beta]G_{2}''[\gamma-\beta]. \qquad (4.1)

In this section we calculate the squared norm (4.1) in the space $W_2^{(2,1)}(0,1)$. For this we use the coefficients $C[\beta]$ and $\mathring{C}_{1}[\beta]$ given in the formulas (2.22) and (2.23); here and below we write $G_{2}[\beta]:=G_{2}(h\beta)$ and similarly for $G_{2}'$ and $G_{2}''$.

Then we calculate (4.1) in sequence as follows.

\left\|\mathring{\ell}|W_{2}^{(2,1)*}(0,1)\right\|^{2}=\sum_{\gamma=0}^{k}C[\gamma]\left\{G_{2}[\gamma-k]-G_{2}[\gamma-k+1]\right\}
-2h\sum_{\gamma=0}^{k-1}\mathring{C}_{1}[\gamma]\left\{G_{2}'[\gamma-k]-G_{2}'[\gamma-k+1]\right\}
-h^{2}\,\frac{e^{h}-1}{he^{h}}\sum_{\gamma=0}^{k-1}\mathring{C}_{1}[\gamma]\,G_{2}''[\gamma-k+1]
=G_{2}[0]-G_{2}[1]-G_{2}[-1]+G_{2}[0]-\frac{2(e^{h}-1)}{e^{h}}\left\{G_{2}'[-1]-G_{2}'[0]\right\}-\frac{(e^{h}-1)^{2}}{e^{2h}}G_{2}''[0]
=-2G_{2}[1]+\frac{2(e^{h}-1)}{e^{h}}G_{2}'[1]=\frac{2(e^{h}-1)}{e^{h}}\cdot\frac{\operatorname{sgn}(h)}{2}\left(\frac{e^{h}+e^{-h}}{2}-1\right)
-2\cdot\frac{\operatorname{sgn}(h)}{2}\left(\frac{e^{h}-e^{-h}}{2}-h\right)=\frac{e^{h}-1}{e^{h}}\cdot\frac{e^{2h}-2e^{h}+1}{2e^{h}}-\frac{e^{2h}-1}{2e^{h}}+h
=h-\frac{\left(e^{h}-1\right)\left(3e^{h}-1\right)}{2e^{2h}}.

As a result, we get the following outcome.

Theorem 4.1.

The square of the norm of the error functional of the optimal explicit difference formula of the form (2.1) in the Hilbert space $W_2^{(2,1)}(0,1)$ is expressed by the formula

\left\|\mathring{\ell}|W_{2}^{(2,1)*}(0,1)\right\|^{2}=h-\frac{\left(e^{h}-1\right)\left(3e^{h}-1\right)}{2e^{2h}}.
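As an independent check (our own, in Python, with sample values $h=0.1$ and $k=5$), one can evaluate the double sums in (4.1) directly with the coefficients (2.22)-(2.23) and compare the result with the closed-form expression of Theorem 4.1.

```python
import math

def G2(x):   return math.copysign(1.0, x) / 2 * ((math.exp(x) - math.exp(-x)) / 2 - x) if x else 0.0
def G2p(x):  return math.copysign(1.0, x) / 2 * ((math.exp(x) + math.exp(-x)) / 2 - 1) if x else 0.0
def G2pp(x): return math.copysign(1.0, x) / 2 * (math.exp(x) - math.exp(-x)) / 2 if x else 0.0

h, k = 0.1, 5                                            # sample values, our choice
C  = {k: 1.0, k - 1: -1.0}                               # nonzero C[beta] from (2.22)
C1 = {k - 1: (math.exp(h) - 1.0) / (h * math.exp(h))}    # nonzero C1[beta] from (2.23)

norm_sq = (sum(C[a] * C[b] * G2(h * (a - b)) for a in C for b in C)
           - 2 * h * sum(C1[a] * C[b] * G2p(h * (a - b)) for a in C1 for b in C)
           - h**2 * sum(C1[a] * C1[b] * G2pp(h * (a - b)) for a in C1 for b in C1))

closed_form = h - (math.exp(h) - 1) * (3 * math.exp(h) - 1) / (2 * math.exp(2 * h))
print(norm_sq, closed_form)  # the two values coincide up to rounding
```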

5. Norm of the error functional of the implicit optimal difference formula

In this case, the square of the norm of the error functional of an implicit Adams-Moulton type difference formula of the form (3.1) is expressed by the equality

\left\|\ell|W_{2}^{(2,1)*}(0,1)\right\|^{2}=\sum_{\gamma=0}^{k}\sum_{\beta=0}^{k}C[\gamma]C[\beta]G_{2}[\gamma-\beta]-2h\sum_{\gamma=0}^{k}C_{1}[\gamma]\sum_{\beta=0}^{k}C[\beta]G_{2}'[\gamma-\beta]-h^{2}\sum_{\gamma=0}^{k}\sum_{\beta=0}^{k}C_{1}[\gamma]C_{1}[\beta]G_{2}''[\gamma-\beta]. \qquad (5.1)

Here we use the optimal coefficients of the implicit difference formula of the form (3.1) given in the formulas (3.16) and (3.17).
Then we calculate (5.1) as follows:

\left\|\mathring{\ell}|W_{2}^{(2,1)*}(0,1)\right\|^{2}=\sum_{\gamma=0}^{k}C[\gamma]\left\{G_{2}[\gamma-k]-G_{2}[\gamma-k+1]\right\}
-2h\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]\left\{G_{2}'[\gamma-k]-G_{2}'[\gamma-k+1]\right\}
-h^{2}\,\frac{e^{h}-1}{h\left(e^{h}+1\right)}\sum_{\gamma=0}^{k}\mathring{C}_{1}[\gamma]\left\{G_{2}''[\gamma-k]+G_{2}''[\gamma-k+1]\right\}
=G_{2}[0]-G_{2}[1]-G_{2}[-1]+G_{2}[0]-\frac{2h(e^{h}-1)}{h\left(e^{h}+1\right)}\left\{G_{2}'[0]-G_{2}'[1]+G_{2}'[-1]-G_{2}'[0]\right\}
-\frac{h^{2}\left(e^{h}-1\right)^{2}}{h^{2}\left(e^{h}+1\right)^{2}}\left\{G_{2}''[0]+G_{2}''[1]+G_{2}''[-1]+G_{2}''[0]\right\}
=-2G_{2}[1]+\frac{4(e^{h}-1)}{e^{h}+1}G_{2}'[1]-\frac{2\left(e^{h}-1\right)^{2}}{\left(e^{h}+1\right)^{2}}G_{2}''[1]
=\frac{4(e^{h}-1)}{e^{h}+1}\cdot\frac{\operatorname{sgn}(h)}{2}\left(\frac{e^{h}+e^{-h}}{2}-1\right)-2\cdot\frac{\operatorname{sgn}(h)}{2}\left(\frac{e^{h}-e^{-h}}{2}-h\right)
-\frac{2\left(e^{h}-1\right)^{2}}{\left(e^{h}+1\right)^{2}}\cdot\frac{\operatorname{sgn}(h)}{2}\cdot\frac{e^{h}-e^{-h}}{2}
=h-\frac{e^{2h}-1}{2e^{h}}+\frac{2\left(e^{h}-1\right)^{2}\left(e^{h}-1\right)}{2e^{h}\left(e^{h}+1\right)}-\frac{\left(e^{h}-1\right)^{2}\left(e^{h}-1\right)}{2e^{h}\left(e^{h}+1\right)}
=h+\frac{e^{h}-1}{2e^{h}}\left(\frac{\left(e^{h}-1\right)^{2}}{e^{h}+1}-e^{h}-1\right)=h-\frac{2\left(e^{h}-1\right)}{e^{h}+1}.

Consequently, we get the following result.

Theorem 5.1.

Among all implicit difference formulas of the form (3.1) in the Hilbert space $W_2^{(2,1)}(0,1)$ there is a unique optimal implicit difference formula, the square of the norm of whose error functional is determined by the equality

\left\|\mathring{\ell}|W_{2}^{(2,1)*}(0,1)\right\|^{2}=h-\frac{2\left(e^{h}-1\right)}{e^{h}+1}.
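For comparison (again our own illustration), the two closed-form quantities of Theorems 4.1 and 5.1 can be tabulated for several step sizes; the implicit value is several times smaller, which agrees with the observation in the conclusion that the optimal implicit formula is more accurate than the optimal explicit one.

```python
import math

def norm_sq_explicit(h):
    return h - (math.exp(h) - 1) * (3 * math.exp(h) - 1) / (2 * math.exp(2 * h))  # Theorem 4.1

def norm_sq_implicit(h):
    return h - 2 * (math.exp(h) - 1) / (math.exp(h) + 1)                           # Theorem 5.1

for N in (10, 20, 40, 80):
    h = 1.0 / N
    print(f"h = {h:.4f}  explicit: {norm_sq_explicit(h):.3e}  implicit: {norm_sq_implicit(h):.3e}")
# Both quantities decay roughly like h^3; the implicit one is several times smaller.
```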

6. Numerical results

In this section we give some numerical results in the form of tables and graphs of the solutions and the errors of our optimal explicit difference formula (2.24) and optimal implicit difference formula (3.18), with the coefficients given in Theorem 2.3 and Theorem 3.1, respectively.

We show the results of the constructed formulas on some examples in the form of tables and graphs; the results presented in a table are then shown in the corresponding graph. The examples are taken from the book by Burden R.L. et al. [1].
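The following self-contained Python sketch reproduces the spirit of such an experiment. The test problem $y'=-2y$, $y(0)=1$ with exact solution $e^{-2x}$ is our own illustrative choice and is not claimed to be one of the examples shown in the figures; the implicit equation of (3.18) is resolved by a predictor-corrector iteration, which is also our choice.

```python
import math

f, exact = (lambda x, y: -2 * y), (lambda x: math.exp(-2 * x))  # illustrative test problem
N = 10
h = 1.0 / N
a_exp = (math.exp(h) - 1) / math.exp(h)        # weight of the optimal explicit formula (2.24)
a_imp = (math.exp(h) - 1) / (math.exp(h) + 1)  # weight of the optimal implicit formula (3.18)

y_eu = y_ex = y_im = 1.0
for n in range(N):
    x0, x1 = n * h, (n + 1) * h
    y_eu = y_eu + h * f(x0, y_eu)              # Euler method (1.3)
    y_ex = y_ex + a_exp * f(x0, y_ex)          # optimal explicit formula (2.24)
    y_pr = y_im + a_exp * f(x0, y_im)          # predictor for the implicit step
    for _ in range(3):                         # corrector iterations for (3.18)
        y_pr = y_im + a_imp * (f(x1, y_pr) + f(x0, y_im))
    y_im = y_pr

print("errors at x = 1:",
      abs(y_eu - exact(1.0)), abs(y_ex - exact(1.0)), abs(y_im - exact(1.0)))
# For this decaying test problem the optimal formulas give smaller errors than the Euler method,
# and the implicit formula is the most accurate of the three.
```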

Figure 1. The table shows the approximate and exact solutions of the given example, together with the differences between the exact and approximate solutions.
Figure 2.
Figure 3.
Figure 4.
Figure 5. The table shows the approximate and exact solutions of the given example, together with the differences between the exact and approximate solutions.
Figure 6. Corresponding to the table given in Figure 5: on the left, graphs of the approximate and exact solutions; on the right, graphs of the differences between the exact and approximate solutions.

In accordance with the tables shown in Figures 1, 3 and 5, the left-hand sides of Figures 2, 4 and 6 show graphs of the approximate and exact solutions, and the right-hand sides of Figures 2, 4 and 6 show graphs of the difference between the exact and approximate solutions. As can be seen from the results presented above, in a certain sense the optimal explicit formula gives better results than the classical Euler formula.

Figure 7. The table shows the approximate and exact solutions of the given example, together with the differences between the exact and approximate solutions.
Figure 8. Corresponding to the table given in Figure 7: on the left, graphs of the approximate and exact solutions; on the right, graphs of the differences between the exact and approximate solutions.

In accordance with the table shown in Figure 7, the left-hand side of Figure 8 shows graphs of the approximate and exact solutions, and the right-hand side of Figure 8 shows graphs of the difference between the exact and approximate solutions. As can be seen from the result presented above, in a certain sense the optimal implicit difference formula gives better results than the classical Euler formula.

7. Conclusion

In this paper new Adams-type optimal difference formulas are constructed and exact expressions for their errors are obtained. Moreover, we have shown that the results obtained by the optimal explicit difference formula constructed in the Hilbert space $W_2^{(2,1)}(0,1)$ are better than the results obtained by the Euler formula. In addition, the optimal implicit formula is more accurate than the optimal explicit formula, and the effectiveness of the new optimal difference formulas was shown by the numerical results.

References

  • [1] Burden R.L., Faires D.J., Burden A.M. Numerical Analysis. - Boston, MA : Cengage Learning, 2016, 896 p.
  • [2] Ahmad Fadly Nurullah Rasedee, Mohammad Hasan Abdul Sathar, Siti Raihana Hamzah, Norizarina Ishak, Tze Jin Wong, Lee Feng Koo and Siti Nur Iqmal Ibrahim. Two-point block variable order step size multistep method for solving higher order ordinary differential equations directly. Journal of King Saud University - Science, vol.33, 2021, 101376, https://doi.org/10.1016/j.jksus.2021.101376
  • [3] Adekoya Odunayo M. and Z.O. Ogunwobi. Comparison of Adams-Bashforth-Moulton Method and Milne-Simpson Method on Second Order Ordinary Differential Equation. Turkish Journal of Analysis and Number Theory, vol.9, no.1, 2021, pp. 1-8, doi:10.12691/tjant-9-1-1.
  • [4] N.S. Hoang, R.B. Sidje. On the equivalence of the continuous Adams-Bashforth method and Nordsieck's technique for changing the step size. Applied Mathematics Letters, 2013, 26, pp. 725-728.
  • [5] Loïc Beuken, Olivier Cheffert, Aleksandra Tutueva, Denis Butusov and Vincent Legat. Numerical Stability and Performance of Semi-Explicit and Semi-Implicit Predictor-Corrector Methods. Mathematics, 2022, 10(12), https://doi.org/10.3390/math10122015
  • [6] Aleksandra Tutueva and Denis Butusov. Stability Analysis and Optimization of Semi-Explicit Predictor-Corrector Methods. Mathematics, 2021, 9, 2463. https://doi.org/10.3390/math9192463
  • [7] Babuška I., Sobolev S.L. Optimization of numerical methods. - Apl. Mat., 1965, 10, 9-170.
  • [8] Babuška I., Vitasek E., Prager M. Numerical processes for solution of differential equations. - Mir, Moscow, 1969, 369 p.
  • [9] Shadimetov Kh.M., Hayotov A.R. Optimal quadrature formulas in the sense of Sard in $W_{2}^{(m,m-1)}$ space. Calcolo, Springer, 2014, V.51, pp. 211-243.
  • [10] Shadimetov Kh.M., Hayotov A.R. Construction of interpolation splines minimizing semi-norm in $W_{2}^{(m,m-1)}$ space. BIT Numer Math, Springer, 2013, V.53, pp. 545-563.
  • [11] Shadimetov Kh.M. Functional statement of the problem of optimal difference formulas. Uzbek mathematical Journal, Tashkent, 2015, no.4, pp.179-183.
  • [12] Shadimetov Kh.M., Mirzakabilov R.N. The problem on construction of difference formulas. Problems of Computational and Applied Mathematics, - Tashkent, 2018, no.5(17). pp. 95-101.
  • [13] Sobolev S.L. Introduction to the theory of cubature formulas. - Nauka, Moscow, 1974, 808 p.
  • [14] Sobolev S.L., Vaskevich V.L. Cubature formulas. - Novosibirsk, 1996, 484 p.
  • [15] Shadimetov Kh.M., Mirzakabilov R.N. On a construction method of optimal difference formulas. AIP Conference Proceedings, 2365, 020032, 2021.
  • [16] Akhmedov D.M., Hayotov A.R., Shadimetov Kh.M. Optimal quadrature formulas with derivatives for Cauchy type singular integrals. Applied Mathematics and Computation, Elsevier, 2018, V.317, pp. 150-159.
  • [17] Boltaev N.D., Hayotov A.R., Shadimetov Kh.M. Construction of Optimal Quadrature Formula for Numerical Calculation of Fourier Coefficients in Sobolev space $L_{2}^{(1)}$. American Journal of Numerical Analysis, 2016, v.4, no.1, pp. 1-7.
  • [18] Hayotov A.R., Karimov R.S. Optimal difference formula in the Hilbert space $W_{2}^{(2,1)}(0,1)$. Problems of Computational and Applied Mathematics, - Tashkent, 5(35), 129-136, (2021).
  • [19] Dahlquist G. Convergence and stability in the numerical integration of ordinary differential equations. - Math. Scand., 1956, v.4, pp. 33-52.
  • [20] Dahlquist G. Stability and error bounds in the numerical integration of ordinary differential equations. - Trans. Roy. Inst. Technol. Stockholm, 1959.
  • [21] Shadimetov Kh. M., Mirzakabilov R. N. Optimal Difference Formulas in the Sobolev Space. - Contemporary Mathematics. Fundamental Directions, 2022, Vol.68, No.1, 167-177.
  • [22] Shadimetov Kh.M., Hayotov A.R. Construction of a discrete analogue of a differential operator. Uzbek Mathematical Journal, - Tashkent, 2004, no. 2, pp. 85-95.