
A method to find approximate solutions of first order systems of non-linear ordinary differential equations.

J.J. Alvarez-Sánchez1, M. Gadella2, L.P. Lara3,4

1 Escuela de Informática de Segovia, Universidad de Valladolid, Segovia, Spain
2 Departamento de Física Teórica, Atómica y Optica and IMUVA,
Universidad de Valladolid, 47011 Valladolid, Spain
3 Instituto de Física Rosario, CONICET-UNR,
Bv. 27 de Febrero, S2000EKF Rosario, Santa Fe, Argentina.
4 Departamento de Sistemas, Universidad del Centro Educativo
Latinoamericano, Av. Pellegrini 1332, S2000 Rosario, Argentina
Abstract

We develop a one step matrix method to obtain approximate solutions of first order non-linear systems and of non-linear ordinary differential equations reducible to first order systems. We construct a sequence of such solutions that converges to the exact solution. We apply the method to several well known examples and check its precision, in terms of local error, against the error produced by other methods. The advantage of the method over other widely used ones lies in the great simplicity of its implementation.

1 Introduction

This paper aims to be a contribution to the methods for finding approximate solutions of nonlinear first order equations (or systems) with given initial values, of the form

{\mathbf{y}}^{\prime}(t)={\mathbf{f}}({\mathbf{y}}(t))\,,\qquad{\mathbf{y}}(t_{0})={\mathbf{y}}_{0}\,, (1)

where the prime represents the first derivative with respect to the variable $t$. As many higher order ordinary differential equations, either linear or non-linear, may be written as systems of the form (1), our method for obtaining approximate solutions also applies to this kind of equation.

Our approach is based on a generalization of the one-step matrix method developed some time ago by Demidovich and other authors [1, 2, 3], valid for systems of the form $\mathbf{y}^{\prime}(t)=A(t)\mathbf{y}(t)$. A clear presentation of this method appears in the textbook by Farkas [4], and it is interesting to compare it with some related procedures, see for instance [6, 7]. It is quite important to remark that, while the Demidovich matrix method applies to linear equations (with variable coefficients), our matrix method is a generalization to non-linear systems. The advantage that our method may have in comparison with other one step methods lies in its great algorithmic simplicity. In addition, our solutions reach a reasonable level of accuracy in a few steps, which is workable on a desktop computer. The method is easily programmable and is very suitable for use with the package Mathematica. It may also be seen as an alternative to Runge-Kutta and Taylor methods, due precisely to its simplicity and precision. The objective of the present study is to obtain approximate solutions of all kinds of systems described by non-linear differential equations (although we may also include linear systems with variable coefficients), including those arising in physics.

In the derivation of the present approach, our motivation was rooted in the practice of operational numerical calculus. Nevertheless, we mainly focus on the mathematical analysis of our method rather than on a detailed analysis of the algorithm or of CPU times. As with the one step Taylor polynomial method, ours shows great conceptual simplicity. As a consequence, our proposal for obtaining approximate solutions of non-linear systems can be implemented very easily.

The method for obtaining the approximate solutions is introduced in Section 2. It yields a sequence of approximate solutions defined on a given interval of the real line. This sequence converges uniformly to the exact solution on that interval, as is proven in Section 3, where we also discuss a question of order. It is important to remark that we do not impose any periodicity conditions, so that our results are valid both for periodic and for non-periodic solutions.

We have applied our method to various two and three dimensional examples. It is also necessary to test the applicability and accuracy of the method on widely used equations and systems. To this end, we have used the van der Pol [5], Duffing [8] and Lorenz [9] equations, a pseudo diffusive equation depending on a parameter studied in the standard literature [10], an epidemic equation and the predator-prey Lotka-Volterra equation [11, 12]. We have compared the precision of our solutions with the exact solution whenever the latter is known. If not, the comparison is based on Runge-Kutta solutions (for a modern presentation, see [13, 14, 15]) and on the widely used Taylor method.

The analysis of a variety of examples suggests that our method may be more precise for two dimensional systems than for higher dimensional ones, although this is not always the case: we give two examples of three dimensional systems (the Lorenz and epidemic equations) for which our method gives different precisions. It looks particularly efficient in the case of the pseudo diffusive equation mentioned earlier, at least when compared with the standard perturbative method studied in [16].

We usually obtain precisions in between those obtained with the third and fourth order Taylor methods (although the precision also depends on the length of the subintervals into which the integration interval is divided), which is reasonable when using a desktop computer. We close this article with concluding remarks and a conjecture on the Liénard equation, which is motivated by some of our results and confirmed through numerical experiments.

2 A matrix method.

We begin with an initial value problem as given in (1). While the function $\mathbf{y}(t)$ is an $\mathbb{R}^{n}$-valued function of the real variable $t$, of class $C^{1}$ on the neighbourhood $|t-t_{0}|\leq a$, $\mathbf{f}(-)$ is an $\mathbb{R}^{n}$-valued function with variable in $\mathbb{R}^{n}$, which is continuous on $D\equiv\{\mathbf{y}\in\mathbb{R}^{n}\,/\,||\mathbf{y}-\mathbf{y}_{0}||\leq d\}$ and satisfies a Lipschitz condition with respect to $\mathbf{y}$. Needless to say, $a,d$ are positive constants.

In relation with (1), we shall use the denominations “equation” and “system” interchangeably. In any case, it is well known that the initial value problem (1) has a unique solution on the interval $|t-t_{0}|\leq\inf(a,d/M)$, with $M:=\sup_{D}||\mathbf{f}||$.

Our objective is to introduce a generalization of a method of solution of (1) proposed in [4]. This generalization is based on an iterative procedure of numerical integration for equations of the type $\mathbf{f}(\mathbf{y})=A(\mathbf{y})\cdot\mathbf{y}$, where $A(-)$ is an $n\times n$ matrix. With this choice for the function $\mathbf{f}(-)$, the differential equation in (1) has the form

\mathbf{y}^{\prime}(t)=A(\mathbf{y}(t))\cdot\mathbf{y}(t)\,. (2)

We assume that the entries of $A(\mathbf{y})$ are continuous on $D$. Let us define a uniform partition of the interval $[0,t_{N}]$ into subintervals $I_{k}\equiv[t_{k},t_{k+1}]$, where $t_{k}=hk$ with $k=0,1,2,\dots,N$, $N$ natural, $h=t_{N}/N$ and $t_{N}<a$. We have chosen this form of the interval for simplicity; needless to say, if the original interval is different, it may always be transformed into $[0,t_{N}]$ by a translation. Also, the equal spacing of all subintervals is not strictly necessary, although it simplifies our notation. Conventionally, we call the points $\{t_{k}\}$ nodes.

We proceed as follows: on each interval $I_{k}$, we approximate Equation (2) by

\mathbf{y}^{\prime}_{N,k}(t)=A(\mathbf{y}^{*}_{N,k})\cdot\mathbf{y}_{N,k}(t)\,. (3)

At each node $t_{k}$, we impose $\mathbf{y}_{N,k}(t_{k})=\mathbf{y}_{N,k-1}(t_{k})$, while $\mathbf{y}^{*}_{N,k}\in\mathbb{R}^{n}$ is a constant to be determined. This gives the segmentary solution, which has the form $\mathbf{y}_{N}(t)\equiv\{\mathbf{y}_{N,k}(t)\,;\,k=0,1,\dots,N-1\}$. It satisfies

\mathbf{y}^{\prime}_{N}(t)=A(\mathbf{y}^{*}_{N})\cdot\mathbf{y}_{N}(t)\,, (4)

where $\mathbf{y}^{*}_{N}$ coincides with $\mathbf{y}^{*}_{N,k}$ on the $k$-th interval.

These segmentary solutions give a sequence of functions $\{\mathbf{y}_{N}(t)\}$, $t\in[0,t_{N}]$, that are approximations to the solution of (2). Here, we give a method to obtain each of the $\mathbf{y}_{N}(t)$ and, in the next sections, we discuss the properties of the sequence.

Then, we proceed to an iterative integration of (4) as follows: take the first interval $I_{0}$ and fix an initial value $\mathbf{y}(t_{0})$. This initial value gives the solution $\mathbf{y}_{N,0}(t)$ on $I_{0}$. Thus, we have the value $\mathbf{y}_{N,0}(t_{1})=\mathbf{y}_{N,1}(t_{1})$, which serves as the initial value for the solution on the interval $I_{1}$. Then, we repeat the procedure in an obvious manner for $I_{2}$ and so on. For each interval $I_{k}$, the matrix $A(\mathbf{y}^{*}_{N,k})$ appearing in (3) is a constant matrix and, therefore, (3) is a system with constant coefficients. Consequently, the solution of (3) has the form

\mathbf{y}_{N,k}(t)=\exp\{A(\mathbf{y}^{*}_{N,k})(t-t_{k})\}\cdot\mathbf{y}_{N,k}(t_{k})\,,\qquad k=0,1,2,\dots,N-1\,, (5)

where we have used the notation $\mathbf{y}_{N,0}(t_{0})=\mathbf{y}(t_{0})$. We determine the vectors $\mathbf{y}^{*}_{N,k}$ through the following expression:

\mathbf{y}^{*}_{N,k}=\mathbf{y}_{N,k}\left(t_{k}+\frac{h}{2}\right)=\exp\left\{A(\mathbf{y}_{N,k}(t_{k}))\,\frac{h}{2}\right\}\cdot\mathbf{y}_{N,k}(t_{k})\,, (6)

where $k=0,1,2,\dots,N-1$.

Then, the approximate solution $\mathbf{y}_{N}(t)$, at $t\in I_{k}$ and on each of the intervals $I_{k}$, is given by:

\mathbf{y}_{N}(t)=\exp\left\{A(\mathbf{y}^{*}_{N,k})(t-t_{k})\right\}\cdot\prod_{j=0}^{k-1}\exp\left\{A(\mathbf{y}^{*}_{N,j})\,h\right\}\cdot\mathbf{y}_{0}\,. (7)
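In practice, each subinterval requires only two matrix exponentials: one for the midpoint value (6) and one to advance the node value (5). A minimal sketch in Python (our own naming, not the code of Appendix B; SciPy's expm stands in for the exponentials, and A(y) is the user-supplied matrix of (2)) could read:

```python
# Minimal sketch of the one-step matrix method of this Section.
import numpy as np
from scipy.linalg import expm

def matrix_step(A, yk, h):
    """Advance one subinterval I_k: midpoint value (6), then solution (5)."""
    y_star = expm(A(yk) * (h / 2)) @ yk   # y*_{N,k}, equation (6)
    return expm(A(y_star) * h) @ yk       # y_{N,k}(t_{k+1}), equation (5)

def solve(A, y0, t_final, N):
    """Segmentary solution y_N at the nodes t_k = k h; composing all the
    steps reproduces the product of exponentials in (7)."""
    h = t_final / N
    ys = [np.asarray(y0, dtype=float)]
    for _ in range(N):
        ys.append(matrix_step(A, ys[-1], h))
    return np.linspace(0.0, t_final, N + 1), np.array(ys)
```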

The determination of the exponential of a matrix may often be rather complicated for large matrices. In that case, we may use the Putzer spectral formula. This establishes that if $A$ is a constant matrix of order $n\times n$ with eigenvalues $\{\lambda_{k}\}_{k=1}^{n}$, then its exponential satisfies the following expression:

\exp\{A\,t\}=\sum_{k=1}^{n}r_{k}(t)\,P_{k-1}\,, (8)

with

P_{0}\equiv I\,,\qquad P_{k}=\prod_{j=1}^{k}(A-\lambda_{j}I)\,,\qquad k=1,2,\dots,n-1\,, (9)

where $I$ is the $n\times n$ identity matrix and the coefficients $r_{k}(t)$ are to be determined through the following first order system of differential equations:

r^{\prime}_{1}(t)=\lambda_{1}\,r_{1}(t)\,,\quad r_{1}(0)=1\,;
r^{\prime}_{k}(t)=\lambda_{k}\,r_{k}(t)+r_{k-1}(t)\,,\quad r_{k}(0)=0\,, (10)

for $k=2,3,\dots,n$.

For simplicity, let us consider the particular case in which the $A(\mathbf{y}^{*}_{N,j})$ are matrices of order $2\times 2$. Each of these matrices has two eigenvalues, $\lambda_{1,j}$ and $\lambda_{2,j}$, which may be either different or equal. Let us first assume that $\lambda_{1,j}\neq\lambda_{2,j}$. Then,

\exp\left\{A(\mathbf{y}^{*}_{N,j})\,h\right\}=\frac{1}{\lambda_{1,j}-\lambda_{2,j}}\left\{(A(\mathbf{y}^{*}_{N,j})-\lambda_{2,j}\,I)\exp\{\lambda_{1,j}\,h\}-(A(\mathbf{y}^{*}_{N,j})-\lambda_{1,j}\,I)\exp\{\lambda_{2,j}\,h\}\right\}\,. (11)

On the other hand, when $\lambda_{1,j}=\lambda_{2,j}=\lambda_{j}$, we have for the exponential

\exp\left\{A(\mathbf{y}^{*}_{N,j})\,h\right\}=\exp\{\lambda_{j}\,h\}\,\{I+h(A(\mathbf{y}^{*}_{N,j})-\lambda_{j}\,I)\}\,. (12)
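In the $2\times 2$ case, formulas (11) and (12) thus replace the generic exponential. A hedged sketch, computing the eigenvalues numerically (our own helper, usable instead of expm in the stepper above):

```python
# Sketch of the 2 x 2 exponential exp(A h) via the Putzer formulas (11)-(12).
import numpy as np

def expm_2x2(A, h, tol=1e-12):
    l1, l2 = np.linalg.eigvals(A)
    I = np.eye(2)
    if abs(l1 - l2) > tol:            # distinct eigenvalues, equation (11)
        E = ((A - l2 * I) * np.exp(l1 * h)
             - (A - l1 * I) * np.exp(l2 * h)) / (l1 - l2)
    else:                             # coincident eigenvalues, equation (12)
        lam = (l1 + l2) / 2
        E = np.exp(lam * h) * (I + h * (A - lam * I))
    # for a real A with a complex-conjugate pair, E is real up to round-off
    return np.real_if_close(E)
```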

This concludes the description of the method. In the sequel, we shall show the convergence of the sequence $\{\mathbf{y}_{N}(t)\}$ of approximate segmentary solutions and shall evaluate the precision of the method.

3 On the convergence of approximate solutions

In the previous Section, we obtained a set of approximate solutions of the initial value problem on a compact interval of the real line. The question now is: assuming we have obtained a sequence of solutions by the previous method, does this sequence converge to the exact solution in any reasonable sense as the length of the subintervals, here called $h$, becomes arbitrarily small? Investigating this is the goal of the present Section. Let us go back to (4) and rewrite it as

\mathbf{y}^{\prime}_{N}(t)=A(\mathbf{y}_{N})\cdot\mathbf{y}_{N}(t)+\eta_{N}\,, (13)

so that,

\eta_{N}(t)=(A(\mathbf{y}^{*}_{N})-A(\mathbf{y}_{N}))\cdot\mathbf{y}_{N}(t)\,. (14)

Let us add and subtract $A(\mathbf{y}^{*}_{N})\cdot\mathbf{y}^{*}_{N}$ on the right hand side of (14). Then, let us take the supremum norm on the interval $[t-t_{0},t+t_{0}]$ and use the triangle inequality of the norm, so as to obtain the following inequality:

||\eta_{N}||\leq||A(\mathbf{y}^{*}_{N})\cdot\mathbf{y}^{*}_{N}-A(\mathbf{y}_{N})\cdot\mathbf{y}_{N}||+||A(\mathbf{y}^{*}_{N})||\,||\mathbf{y}^{*}_{N}-\mathbf{y}_{N}||\,. (15)

Then, we apply in (15) the Lipschitz condition with respect to the variable $\mathbf{y}$, with constant $K>0$. It follows that

||\eta_{N}||\leq(K+||A(\mathbf{y}_{N}^{*})||)\,||\mathbf{y}_{N}^{*}-\mathbf{y}_{N}||\,. (16)

On each of the intervals $I_{k}$, let us expand $\mathbf{y}_{N}(t)$ in Taylor series around $t_{k}$. We obtain the following inequality:

||\mathbf{y}_{N}^{*}-\mathbf{y}_{N}(t)||\leq\frac{h}{2}\,||\mathbf{y}^{\prime}_{N}(t_{k})||\leq\frac{h}{2}\,\max_{t_{0}\leq t\leq t_{N}}||\mathbf{y}^{\prime}_{N}(t)||\,. (17)

Since $\mathbf{y}^{\prime}_{N}(t)$ is continuous on the interval $t_{0}\leq t\leq t_{N}$, the maximum on the right hand side of (17) exists. Furthermore, $A(\mathbf{y})$ is continuous with respect to $\mathbf{y}$ on the neighbourhood $||\mathbf{y}-\mathbf{y}_{0}||\leq d$, so that there exists a constant $C>0$ such that $||A(\mathbf{y})||\leq C$. Taking norms in (4), we have that

||\mathbf{y}^{\prime}_{N}||\leq||A(\mathbf{y}^{*}_{N})||\,||\mathbf{y}_{N}||\leq C\,||\mathbf{y}_{N}||\,. (18)

Equation (7) implies that

\max_{t_{0}\leq t\leq t_{N}}||\mathbf{y}_{N}(t)||\leq C^{\prime}\exp\{C(t_{N}-t_{0})\}\,, (19)

where $C^{\prime}>0$ is a constant. From (18)-(19), we see that $||\mathbf{y}^{\prime}_{N}(t)||$ is bounded for all $t$ in the interval $[t_{0},t_{N}]$. Consequently, from (16) and (18)-(19), we have that

||\eta_{N}||\leq\frac{h}{2}\,(K+C)\,C^{\prime}\,\exp\{C(t_{N}-t_{0})\}=S\,h\,, (20)

where the meaning of the constant S>0S>0 is obvious.

Next, let us integrate (13) on the interval $[t_{0},t]$. Since for every value of $N$ we use the same initial value $\mathbf{y}(t_{0})$, we have

\mathbf{y}_{N}(t)=\mathbf{y}(t_{0})+\int_{t_{0}}^{t}A(\mathbf{y}_{N}(s))\cdot\mathbf{y}_{N}(s)\,ds+\int_{t_{0}}^{t}\eta_{N}(s)\,ds\,. (21)

From (21), we have that

||\mathbf{y}_{N+M}(t)-\mathbf{y}_{N}(t)||\leq\int_{t_{0}}^{t}||A(\mathbf{y}_{N+M}(s))\cdot\mathbf{y}_{N+M}(s)-A(\mathbf{y}_{N}(s))\cdot\mathbf{y}_{N}(s)||\,ds+\delta_{N}(t)\,, (22)

with

\delta_{N}(t)=\int_{t_{0}}^{t}||\eta_{N+M}(s)-\eta_{N}(s)||\,ds\leq 2Sh(t_{N}-t_{0})\,. (23)

Using the Lipschitz condition in (22), we obtain

||\mathbf{y}_{N+M}(t)-\mathbf{y}_{N}(t)||\leq K\int_{t_{0}}^{t}||\mathbf{y}_{N+M}(s)-\mathbf{y}_{N}(s)||\,ds+2Sh(t_{N}-t_{0})\,. (24)

At this point, we use the Gronwall lemma, which states the following:

Lemma (Gronwall).- Let $f(t):I\longmapsto\mathbb{R}$ be a function, integrable on the compact real interval $I$, such that there exist two positive constants $A$ and $B$ with

0\leq f(t)\leq A+B\int_{t_{0}}^{t}f(s)\,ds\,,\qquad t_{0}\in I\,, (25)

for all $t\in I$. Then,

f(t)\leq A\,e^{B(t-t_{0})}\,. (26)

\blacksquare


Then, we use the Gronwall lemma with

f(t)\equiv||\mathbf{y}_{N+M}(t)-\mathbf{y}_{N}(t)||\,,\quad A\equiv Mh:=2S(t_{N}-t_{0})h\,,\quad B\equiv K\,, (27)

to conclude that

||\mathbf{y}_{N+M}(t)-\mathbf{y}_{N}(t)||\leq Mh\,e^{K(t-t_{0})}\leq\left[M\,e^{K(t_{N}-t_{0})}\right]h=K^{\prime}\,h\,, (28)

with $K^{\prime}$ a positive constant. Therefore,

||\mathbf{y}_{N+M}(t)-\mathbf{y}_{N}(t)||\longmapsto 0\,, (29)

as $h\longmapsto 0$. Since the space $C^{0}[t_{0},t_{N}]$ is complete (note that $t_{N}$ is fixed and $N$ just denotes the number of intervals in the partition or, equivalently, the length of $h$), (29) implies the existence of a continuous function $\mathbf{z}(t):[t_{0},t_{N}]\longmapsto\mathbb{R}^{n}$ such that

\mathbf{z}(t):=\lim_{N\to\infty}\mathbf{y}_{N}(t)\,, (30)

uniformly.

Now, we claim that $\mathbf{z}(t)$ is differentiable on $(t_{0},t_{N})$. Furthermore, $\mathbf{z}(t)$ is a solution of the differential equation (2).

The proof goes as follows: The Lipschitz condition applied to our situation implies that

||A(\mathbf{y}_{N+M}(t))\cdot\mathbf{y}_{N+M}(t)-A(\mathbf{y}_{N}(t))\cdot\mathbf{y}_{N}(t)||\leq K\,||\mathbf{y}_{N+M}(t)-\mathbf{y}_{N}(t)||\,, (31)

so that $A(\mathbf{y}_{N}(t))\cdot\mathbf{y}_{N}(t)$ converges uniformly to $A(\mathbf{z}(t))\cdot\mathbf{z}(t)$. In addition, after (20), we have that

||\eta_{N}(t)||\leq Sh\leq S(t_{N}-t_{0})\,. (32)

Recall that $t_{N}$ is fixed for whatever value of $N$. Then, taking limits in (21), we have

\mathbf{z}(t)=\mathbf{y}(t_{0})+\lim_{N\to\infty}\int_{t_{0}}^{t}A(\mathbf{y}_{N}(s))\cdot\mathbf{y}_{N}(s)\,ds+\lim_{N\to\infty}\int_{t_{0}}^{t}\eta_{N}(s)\,ds
=\mathbf{y}(t_{0})+\int_{t_{0}}^{t}\left[\lim_{N\to\infty}A(\mathbf{y}_{N}(s))\cdot\mathbf{y}_{N}(s)\right]ds+\int_{t_{0}}^{t}\lim_{N\to\infty}[\eta_{N}(s)]\,ds\,. (33)

In the first integral, we may interchange the limit and the integral due to the uniform convergence of the sequence under the integral sign to its limit. In the second integral, we have used the Lebesgue dominated convergence theorem, which can be applied here due to (32). Since obviously $\lim_{N\to\infty}[\eta_{N}(s)]=0$, we finally conclude that

\mathbf{z}(t)=\mathbf{y}(t_{0})+\int_{t_{0}}^{t}A(\mathbf{z}(s))\cdot\mathbf{z}(s)\,ds\,. (34)

From (34), we conclude the following:

1.- The function $\mathbf{z}(t)$ is differentiable on the considered interval.

2.- The function $\mathbf{z}(t)$ is the solution of equation (2) with initial value $\mathbf{z}(t_{0})=\mathbf{y}(t_{0})$.


3.1 A question of order

The expansions into Taylor series, on a neighbourhood of $t_{k}$, of the solutions of equations (2) and (3) have the following forms, respectively:

\mathbf{y}(t_{k+1})=\mathbf{y}(t_{k})+A(\mathbf{y}(t_{k}))\,\mathbf{y}(t_{k})\,h+\frac{1}{2}(A^{2}(\mathbf{y}(t_{k}))\cdot\mathbf{y}(t_{k}))h^{2}+\frac{1}{2}\left[\frac{d}{dt}A(\mathbf{y}(t_{k}))\right]\cdot\mathbf{y}(t_{k})\,h^{2}+O(h^{3})\,. (35)

Taking into account that

\frac{d}{dt}\,A(\mathbf{y}(t))=\sum_{j=1}^{n}\frac{\partial}{\partial y_{j}}\,A(\mathbf{y}(t))\,\frac{d}{dt}\,y_{j}(t)\,, (36)

where $y_{j}$ is the $j$-th component of $\mathbf{y}$, equation (35) becomes:

\mathbf{y}(t_{k+1})=\mathbf{y}(t_{k})+A(\mathbf{y}(t_{k}))\,\mathbf{y}(t_{k})\,h+\frac{1}{2}(A^{2}(\mathbf{y}(t_{k}))\cdot\mathbf{y}(t_{k}))h^{2}+\frac{1}{2}\left(\sum_{j=1}^{n}\frac{\partial}{\partial y_{j}}\,A(\mathbf{y}(t_{k}))\,\frac{d}{dt}\,y_{j}(t_{k})\right)\cdot\mathbf{y}(t_{k})\,h^{2}+o(h^{3})\,. (37)

On the $k$-th interval, equation (4) takes the form

\mathbf{y}_{N}(t_{k+1})=\mathbf{y}_{N}(t_{k})+A(\mathbf{y}^{*}_{N,k})\cdot\mathbf{y}_{N}(t_{k})\,h+\frac{1}{2}\,A^{2}(\mathbf{y}^{*}_{N,k})\cdot\mathbf{y}_{N}(t_{k})\,h^{2}+o(h^{3})\,. (38)

Then, we may proceed to expand in Taylor series the matrix $A(\mathbf{y}^{*}_{N,k})$ on a neighbourhood of $\mathbf{y}_{N}(t_{k})$. Taking into account (5) and after some simple calculations, we obtain:

A(\mathbf{y}^{*}_{N,k})=A(\mathbf{y}_{N}(t_{k}))+\sum_{j=1}^{n}\frac{\partial}{\partial y_{j}}\,A(\mathbf{y}_{N}(t_{k}))\,(y_{N,j}(t_{k}+h/2)-y_{N,j}(t_{k}))\,, (39)

where $y_{N,j}(t)$ is the $j$-th component of $\mathbf{y}_{N}(t)$. A first order expansion of the last factor on the right hand side of (39) gives

y_{N,j}(t_{k}+h/2)-y_{N,j}(t_{k})=\frac{1}{2}\,\frac{d}{dt}\,y_{N,j}(t_{k})\,h+o(h^{2})\,, (40)

so that using (40) in (39), we have

A(\mathbf{y}^{*}_{N,k})=A(\mathbf{y}_{N}(t_{k}))+\frac{h}{2}\sum_{j=1}^{n}\frac{\partial}{\partial y_{j}}\,A(\mathbf{y}_{N}(t_{k}))\,\frac{d}{dt}\,y_{N,j}(t_{k})\,. (41)

Then, we replace (41) into (38) and perform some simple manipulations, taking into account that, up to second order in $h$:

\frac{1}{2}\,A^{2}(\mathbf{y}^{*}_{N,k})\cdot\mathbf{y}_{N}(t_{k})\,h^{2}\approx\frac{1}{2}\,A^{2}(\mathbf{y}_{N}(t_{k}))\cdot\mathbf{y}_{N}(t_{k})\,h^{2}\,, (42)

we finally obtain that

\mathbf{y}_{N}(t_{k+1})=\mathbf{y}_{N}(t_{k})+A(\mathbf{y}_{N}(t_{k}))\cdot\mathbf{y}_{N}(t_{k})\,h+\frac{h^{2}}{2}\left(\sum_{j=1}^{n}\frac{\partial}{\partial y_{j}}A(\mathbf{y}_{N}(t_{k}))\,\frac{d}{dt}\,y_{N,j}(t_{k})\right)\cdot\mathbf{y}_{N}(t_{k})+\frac{1}{2}\,A^{2}(\mathbf{y}_{N}(t_{k}))\cdot\mathbf{y}_{N}(t_{k})\,h^{2}+o(h^{3})\,. (43)

Comparing (43) with (37), we see that the terms up to second order in $h$ are correctly reproduced, so that the local error of the approximation is of third order in $h$.

3.1.1 Going beyond second order

In (43), we have found the solution up to second order in $h$. Could we obtain better precision at third or higher order, while keeping the simplicity of the method? First of all, our construction is based on equation (3), which is no longer valid if we require an approximation of order higher than two. To fix ideas, let us take $\mathbf{y}(t)=(y_{1}(t),y_{2}(t))$ two dimensional for simplicity, a choice which does not affect our argument. Let us expand $A(\mathbf{y}(t))$ around $\mathbf{y}^{*}=(y_{1}^{*},y_{2}^{*})$ (where we have omitted the sub-indices $N,k$ for simplicity) and take one more term in the Taylor expansion. The result is

A(\mathbf{y}(t))=A(\mathbf{y}^{*})+\frac{\partial}{\partial y_{1}}\,A(\mathbf{y}^{*})(y_{1}(t)-y_{1}^{*})+\frac{\partial}{\partial y_{2}}\,A(\mathbf{y}^{*})(y_{2}(t)-y_{2}^{*})\,. (44)

Then, instead of the approximation (3), we have the following, where again we have suppressed the indices $N$ and $k$:

\mathbf{y}^{\prime}(t)=\left[A(\mathbf{y}^{*})+\frac{\partial}{\partial y_{1}}\,A(\mathbf{y}^{*})(y_{1}(t)-y_{1}^{*})+\frac{\partial}{\partial y_{2}}\,A(\mathbf{y}^{*})(y_{2}(t)-y_{2}^{*})\right]\cdot\mathbf{y}(t)\,. (45)

The solution to be determined is just $\mathbf{y}(t)=(y_{1}(t),y_{2}(t))$, which is now part of the Ansatz (45). One possibility to resolve this circularity is to proceed with the following expansion on each of the intervals $I_{k}$:

y_{1}(t)=y_{1}(t_{k})+y^{\prime}_{1}(t_{k})\,(t-t_{k})+\dots\,, (46)

and the same for $y_{2}(t)$. If we use these manipulations in (3), we obtain an equation of the type:

\mathbf{y}^{\prime}_{N,k}(t)=G_{N,k}(t)\cdot\mathbf{y}_{N,k}(t)\,, (47)

with

G_{N,k}(t)=A(\mathbf{y}^{*}_{N,k})+\frac{\partial}{\partial y_{1}}\,A(\mathbf{y}^{*}_{N,k})(y_{1}(t)-y_{1}^{*})+\frac{\partial}{\partial y_{2}}\,A(\mathbf{y}^{*}_{N,k})(y_{2}(t)-y_{2}^{*})+\dots\,. (48)

Obviously, the system (47) is non-autonomous. We would have to use a new approximation of $G_{N,k}(t)$ by a constant on the interval $I_{k}$ and start over again.

As we see, advancing just one order in accuracy destroys the simplicity of the method, which is one of its most interesting added values. Therefore, we cannot consider going to higher orders an advantage: it may slightly improve the precision, at the price of destroying the efficiency and simplicity of the method.

3.2 Some examples

  • The van der Pol equation

    The van der Pol equation

    y^{\prime\prime}(t)+\mu(y^{2}(t)-1)y^{\prime}(t)+y(t)=0 (49)

    is a particular case of the Liénard equation,

    y^{\prime\prime}(t)+f(y)\,y^{\prime}(t)+g(y)=0\,, (50)

    which will be discussed in Appendix A. In the van der Pol equation, we obviously have $f(y)=\mu(y^{2}-1)$ and $g(y)\equiv y$. This equation can easily be written in the matrix form (2), with $y_{1}(t)\equiv y(t)$ and $y_{2}(t)\equiv y^{\prime}(t)$, as

    \left(\begin{array}{c}y^{\prime}_{1}(t)\\ y^{\prime}_{2}(t)\end{array}\right)=\left(\begin{array}{cc}0&1\\ -1&\mu(1-y_{1}^{2})\end{array}\right)\left(\begin{array}{c}y_{1}(t)\\ y_{2}(t)\end{array}\right)\,. (51)

    Our goal is to compare the precision of our method with a reference solution. No explicit solutions of the van der Pol equation (49) are known, so we use the eighth order Runge-Kutta solution, $y_{rk}(t)$, as reference solution (alternatively, one may use an eighth order Taylor solution, which has comparable precision). We compare the precision of our method with that of the solutions obtained by a third or fourth order Taylor method. As a measure of the error, we use

    e_{h}:=\frac{1}{N}\sum_{j=0}^{N-1}\left(y_{rk}(t_{j})-y(t_{j})\right)^{2}\,, (52)

    where $y(t)$ is the solution obtained using our method or any other, such as the Taylor method. Then, we need to fix values of the parameters and initial conditions. In Table 1 below, we compare the errors produced by our method with those given by the third and fourth order Taylor methods, for different values of the interval width $h$, with $T=20$, $\mu=1/2$ and the initial conditions $y(0)=0$ and $y^{\prime}(0)=2$.


    h          Matrix         Taylor 3rd     Taylor 4th
    10^{-4}    2.63 10^{-11}  8.60 10^{-7}   6.40 10^{-7}
    10^{-3}    2.08 10^{-11}  8.30 10^{-9}   6.45 10^{-9}
    10^{-2}    1.46 10^{-8}   9.11 10^{-9}   6.60 10^{-11}
    10^{-1}    2.04 10^{-5}   1.17 10^{-4}   1.67 10^{-7}
    2 10^{-1}  2.30 10^{-4}   2.23 10^{-3}   7.94 10^{-6}
    5 10^{-1}  8.10 10^{-3}   1.49 10^{-1}   2.75 10^{-3}

    TABLE 1.- Values of the error $e_{h}$ for our matrix method and for the Taylor methods of orders three and four, for different values of $h$, for the van der Pol equation.


    It is clear that our matrix method has a precision in between those of the third and fourth order Taylor methods. These results are just an example of the results obtained in the multiple numerical experiments we have performed, all showing essentially the same behaviour. However, Table 1, as well as other numerical experiments, shows an important tendency: the smaller $h$, the better our precision as compared with that of the Taylor method. The reason is clear: the smaller $h$, the larger the number of operations needed to obtain the approximate solution, and our matrix method requires fewer operations than the Taylor method, so our relative precision improves as $h$ gets smaller.

    In Appendix B, we give our source code for applying the method to the present case; a schematic sketch is given below.
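    As an illustration (not the code of Appendix B), the matrix of (51) plugs directly into the generic stepper sketched in Section 2, with a high-accuracy SciPy integration standing in for the Runge-Kutta reference in the error measure (52); all names here are our own choices.

    ```python
    # Hypothetical driver for the van der Pol system (51); it assumes the
    # functions matrix_step / solve from the sketch in Section 2.
    import numpy as np
    from scipy.integrate import solve_ivp

    mu = 0.5

    def A_vdp(y):
        """Matrix of equation (51) at the state y = (y1, y2)."""
        return np.array([[0.0, 1.0],
                         [-1.0, mu * (1.0 - y[0] ** 2)]])

    T, h = 20.0, 1e-2
    N = int(T / h)
    t, y_mat = solve(A_vdp, [0.0, 2.0], T, N)           # y(0)=0, y'(0)=2

    ref = solve_ivp(lambda t, y: A_vdp(y) @ y, (0.0, T), [0.0, 2.0],
                    t_eval=t, rtol=1e-12, atol=1e-12)   # reference solution
    e_h = np.mean((ref.y[0, :-1] - y_mat[:-1, 0]) ** 2)  # error measure (52)
    ```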

  • The Duffing equation

    This is another second order non-linear equation, which has the following form:

    y^{\prime\prime}(t)+y(t)+y^{3}(t)=0\,, (53)

    where the term in the first derivative, $y^{\prime}(t)$, has been omitted. For the Duffing equation, there exist explicit solutions in terms of the Jacobi elliptic functions. For instance, given the initial conditions $y(0)=1$ and $y^{\prime}(0)=0$, we have the following solution:

    y_{e}(t)=-i\sqrt{1+k}\,{\rm sn}\,(u;m)\,, (54)

    where ${\rm sn}\,(u;m)$ denotes the elliptic sine. The arguments in (54) denote the following:

    u=\frac{1}{\sqrt{2}}\,(t^{2}(1-k)+2t(c_{2}-kc_{1})+(1-k)c_{2}^{2})\,,\qquad m=\frac{1+k}{1-k}\,,\qquad k=\sqrt{1+2c_{1}}\,. (55)

    The values of the constants included in (55) are $c_{1}=1.5$ and $c_{2}=1.1920055$. The Duffing equation may be written in matrix form, writing again $y_{1}(t)\equiv y(t)$ and $y_{2}(t)\equiv y^{\prime}(t)$, as

    \left(\begin{array}{c}y^{\prime}_{1}(t)\\ y^{\prime}_{2}(t)\end{array}\right)=\left(\begin{array}{cc}0&1\\ -(1+y_{1}^{2})&0\end{array}\right)\left(\begin{array}{c}y_{1}(t)\\ y_{2}(t)\end{array}\right)\,. (56)

    We define the error $e_{h}$ as in (52), replacing $y_{rk}(t)$ by the exact solution $y_{e}(t)$, which does exist in the present case; $y(t)$ is the solution whose precision we want to compare with the exact one, namely the solution obtained by our matrix method or the third and fourth order Taylor solutions. The errors produced by each method are given in Table 2 below.


    h          Matrix         Taylor 3rd     Taylor 4th
    10^{-4}    1.69 10^{-9}   5.60 10^{-5}   4.13 10^{-5}
    10^{-3}    1.03 10^{-10}  5.06 10^{-7}   4.24 10^{-7}
    10^{-2}    9.61 10^{-9}   2.47 10^{-8}   4.24 10^{-9}
    10^{-1}    1.16 10^{-5}   4.92 10^{-4}   4.12 10^{-8}
    2 10^{-1}  1.18 10^{-4}   6.78 10^{-3}   7.56 10^{-6}
    5 10^{-1}  82.00 10^{-3}  1.04 10^{-1}   7.73 10^{-3}

    TABLE 2.- Values of the error $e_{h}$ for our matrix method and for the Taylor methods of orders three and four, for different values of $h$, for the Duffing equation.


    We see that the results are quite similar to those obtained for the van der Pol equation. As in that case, we have performed numerical experiments that confirm these results.

  • The Lorenz equation

    The Lorenz equation, a model for the study of chaotic systems, was introduced in the study of atmospheric behaviour [9]. It arises in many problems of physics where chaoticity is present [17, 18, 19, 20]. The Lorenz equation is usually written in matrix form and is three dimensional:

    \left(\begin{array}{c}y^{\prime}_{1}(t)\\ y^{\prime}_{2}(t)\\ y^{\prime}_{3}(t)\end{array}\right)=\left(\begin{array}{ccc}-a&a&0\\ b-y_{3}(t)&-1&0\\ y_{2}(t)&0&-c\end{array}\right)\left(\begin{array}{c}y_{1}(t)\\ y_{2}(t)\\ y_{3}(t)\end{array}\right)\,, (57)

    $a$, $b$ and $c$ being positive constants.

    This system is very sensitive to the particular choice of the parameters and of the initial values, as numerical experiments show. Based on these experiments, we have chosen the following values for the parameters: $a=10$, $b=9.996$ and $c=8/3$, for which the fixed point $(y_{1},y_{2},y_{3})=(4.88808,4.88808,8.996)$ is an attractor. We proceed as in the previous cases and compare the errors (52) resulting from the use of the Taylor methods and of this matrix method, using as reference the solution obtained with the eighth order Runge-Kutta method. We use $T=10$ and the solution with initial values $(y_{1}(0),y_{2}(0),y_{3}(0))=(1,0,1)$. Table 3 gives the errors in terms of the interval width $h$.


    h          Matrix       Taylor 2nd    Taylor 3rd    Taylor 4th
    10^{-2}    5.2 10^{-6}  6.73 10^{-5}  3.6 10^{-8}   3.1 10^{-10}
    10^{-1}    5.3 10^{-2}  7.1 10^{-1}   1.8 10^{-1}   5.2 10^{-2}
    2 10^{-1}  3.7 10^{-1}  error         error         error

    TABLE 3.- Values of the error $e_{h}$ in terms of $h$ for $T=10$, $y_{1}(0)=1$, $y_{2}(0)=0$ and $y_{3}(0)=1$.


    The word “error” in the last row of Table 3 means that, when the interval width is of the order of 0.2 and we use the Taylor method, the error becomes uncontrollable. We also see that the precision obtained with the matrix method improves over that of the Taylor method as the length of the subintervals grows. A sketch of the corresponding setup is given below.
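    As a sketch (our own naming, reusing the generic solve stepper of Section 2), the Lorenz matrix of (57) is state dependent through $y_{2}$ and $y_{3}$:

    ```python
    # Hypothetical setup for the Lorenz system in the matrix form (57).
    import numpy as np

    a, b, c = 10.0, 9.996, 8.0 / 3.0

    def A_lorenz(y):
        """Matrix of (57) at the state y = (y1, y2, y3)."""
        return np.array([[-a,        a,    0.0],
                         [b - y[2], -1.0,  0.0],
                         [y[1],      0.0,  -c]])

    # T = 10, h = 10^-2, initial values (1, 0, 1) as in Table 3.
    t, y = solve(A_lorenz, [1.0, 0.0, 1.0], 10.0, 1000)
    ```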

  • Neutral damping equation.

    This equation has been discussed in the literature [10, 16] and is

    x^{\prime\prime}(t)+\varepsilon\,(x^{\prime}(t))^{2}+x(t)=0\,, (58)

    where the prime means derivative with respect to the variable $t$ and $\varepsilon$ is a real parameter. As in previous cases, let us define $y(t):=x^{\prime}(t)$, so that (58) can be written in the following form:

    (x+\varepsilon y^{2})\,dx+y\,dy=0\,. (59)

    This equation is integrable with integrating factor:

    \mu(x,y)\equiv e^{2\varepsilon x}\,. (60)

    It is readily shown that equation (59) then admits the following first integral:

    f(x,y)=\left[\frac{1}{2}\,y^{2}+\frac{1}{4\varepsilon^{2}}\,(2\varepsilon x-1)\right]e^{2\varepsilon x}\,. (61)

    If we write (58) in the standard matrix form as

    \left(\begin{array}{c}x^{\prime}\\ y^{\prime}\end{array}\right)=\left(\begin{array}{cc}0&1\\ -1&-\varepsilon y\end{array}\right)\left(\begin{array}{c}x\\ y\end{array}\right)\,, (62)

    we readily observe that the only fixed point is the origin, which is also the point at which the minimum of the first integral (61) lies. All orbits are periodic around the origin.

    In Figure 1, we have depicted the periodic solutions in phase space. Note that, although equation (58) may look dissipative, it is not, as Figure 1 shows.

    Now, let us repeat the comparison between the errors given by our matrix method and those given by the Taylor method at low orders. As always, we use (52) as the definition of the error and the eighth order numerical Runge-Kutta solution as the reference solution. We obtained the results in Table 4, where $h$ is, as always, the distance between two consecutive nodes:


    h          Matrix        Taylor 3rd    Taylor 4th
    10^{-4}    2.2 10^{-14}  4.2 10^{-14}  3.2 10^{-11}
    10^{-3}    1.9 10^{-14}  4.1 10^{-11}  3.1 10^{-11}
    10^{-2}    9.0 10^{-14}  4.6 10^{-13}  4.0 10^{-12}
    10^{-1}    1.6 10^{-9}   2.7 10^{-7}   8.2 10^{-11}
    2 10^{-1}  2.6 10^{-8}   8.6 10^{-6}   1.0 10^{-9}
    5 10^{-1}  1.1 10^{-6}   7.5 10^{-4}   6.1 10^{-6}

    TABLE 4.- Values of the error in terms of $h$ for $\varepsilon=0.1$, $T=2\pi$ (see (52)) and the initial values $x(0)=1$, $x^{\prime}(0)=0$.


    We observe that the precision of the matrix method is much higher than that of the third and fourth order Taylor methods for a distance between nodes $h\leq 0.01$. Moreover, we underline that our one step matrix method is much simpler to program than the Taylor method, as the reader may easily verify with any of these examples.

    There is another method, based on perturbation theory, to obtain approximate solutions of (58); it receives the name of Lindstedt-Poincaré method and is described in [16]. It consists of a series in powers of $\varepsilon$ of the form:

    x(t)=x_{0}(t)+\varepsilon\,x_{1}(t)+\varepsilon^{2}\,x_{2}(t)+\dots\,. (63)

    The coefficients $x_{i}(t)$ may be obtained iteratively, once the initial values have been fixed. For instance, for $x(0)=1$ and $x^{\prime}(0)=0$, we obtain:

    x_{0}(t)=\cos\omega t\,,\qquad x_{1}(t)=\frac{1}{6}\left(-3+4\cos\omega t-\cos 2\omega t\right)\,,
    x_{2}(t)=\frac{1}{3}\left(-2+\frac{61}{24}\,\cos\omega t-\frac{2}{3}\,\cos 2\omega t+\frac{1}{8}\,\cos 3\omega t\right)\,,
    \omega=1-\frac{1}{6}\,\varepsilon+o(\varepsilon^{3})\,. (64)

    We may also evaluate the error for this perturbative method, which is independent of any division of the integration interval into subintervals. This error is $1.1\,10^{-4}$, which is clearly larger than those obtained using any of the numerical methods considered.

    We finish the present example by proposing yet another approach to an approximate solution. By either method, matrix or Taylor, we obtain at each of the nodes $\{t_{k}\}$ a value, $x_{k}$, of the approximate solution. Let us interpolate on each interval by means of cubic splines, so that we obtain a segmentary approximation by cubic polynomials (in [22], we had already proposed segmentary cubic solutions and studied their properties, although there they were not necessarily cubic splines). Let us call the cubic interpolating solution $s(t)$; then the error of this solution on an interval $T$, with respect to the exact solution, is given by (recall that cubic splines have continuous first and second derivatives at the nodes)

    e=\frac{1}{T}\int_{0}^{T}\left(s^{\prime\prime}(t)+\varepsilon\,[s^{\prime}(t)]^{2}+s(t)\right)^{2}dt\,. (65)

    The form of the error for the Taylor method is also given by (65). The resulting errors appear in the following table (Table 5):


    h          Spline        Taylor 2nd    Taylor 3rd    Taylor 4th
    10^{-4}    7.2 10^{-16}  4.7 10^{-16}  6.7 10^{-16}  6.5 10^{-16}
    10^{-3}    1.8 10^{-14}  1.8 10^{-14}  1.8 10^{-14}  1.8 10^{-14}
    10^{-2}    1.9 10^{-10}  1.9 10^{-11}  1.1 10^{-10}  1.1 10^{-10}
    10^{-1}    4.6 10^{-6}   6.4 10^{-6}   4.6 10^{-6}   4.6 10^{-6}
    2 10^{-1}  6.1 10^{-5}   9.8 10^{-5}   6.0 10^{-5}   6.0 10^{-5}
    5 10^{-1}  4.8 10^{-3}   1.2 10^{-3}   4.6 10^{-3}   4.9 10^{-3}

    TABLE 5.- Comparison between the errors of the cubic spline and Taylor methods.
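    A minimal sketch of how the residual (65) can be evaluated, assuming node values $x_{k}$ produced by any of the methods above (SciPy's CubicSpline and a trapezoidal quadrature are our own choices):

    ```python
    # Sketch: cubic spline through the nodes and residual error (65).
    import numpy as np
    from scipy.interpolate import CubicSpline

    def spline_residual(t_nodes, x_nodes, eps, n_quad=2000):
        s = CubicSpline(t_nodes, x_nodes)            # s, s', s'' continuous
        t = np.linspace(t_nodes[0], t_nodes[-1], n_quad)
        r = s(t, 2) + eps * s(t, 1) ** 2 + s(t)      # s'' + eps (s')^2 + s
        return np.trapz(r ** 2, t) / (t_nodes[-1] - t_nodes[0])
    ```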


    Finally, for the perturbative method the error obtained is $7.4\,10^{-5}$. Concerning the conservation of the constant of motion, we measure its dispersion by means of the following parameter, $e_{f}$, defined as

    e_{f}:=\frac{1}{T}\int_{0}^{T}\left(f(x_{0},y_{0})-f(x,y)\right)^{2}dt\,, (66)

    where $f(x,y)$ is calculated using the different approximations: Taylor, matrix and the approximate analytic solution obtained by the perturbative Lindstedt-Poincaré method mentioned earlier. The point $(x_{0},y_{0})$ gives the chosen initial conditions. In the latter case, we have obtained $e_{f}=1.1\,10^{-4}$; in all the others, we always got $e_{f}<10^{-8}$.

    Figure 1: Periodic orbits around the origin for the neutral damping equation. The horizontal coordinate represents the values of $x$, while the vertical coordinate gives the values of $y=x^{\prime}$. Note that these periodic orbits are represented in phase space.
  • An epidemic equation

    A model for an epidemic was proposed as early as 1927 [23, 24, 25]. If $x(t)$, $y(t)$ and $z(t)$ are the numbers, at a certain time $t$, of healthy, sick and dead persons, respectively, in some society, the model assumes that these functions satisfy the following non-linear system:

    \dot{x}(t)=-ax(t)\,y(t)\,,
    \dot{y}(t)=ax(t)\,y(t)-by(t)\,,
    \dot{z}(t)=by(t)\,, (67)

    where $a$ and $b$ are positive constants and the dot means derivative with respect to time $t$. The model assumes infection of healthy persons by sick persons; the latter die after some time. Note that, in the studied time interval, the sole cause of population dynamics is the epidemic. For obvious reasons, we consider only positive solutions. The fixed points of (67) have the form $(\alpha,0,\beta)$ with $\alpha,\beta>0$.

    Let us consider the following vector field, also called the flux of (67),

    F(t):=(-ax(t)\,y(t)\,,\;ax(t)\,y(t)-by(t)\,,\;by(t))\,. (68)

    Note that $F_{x}<0$. For $x(t)<b/a$, we have $F_{y}<0$, while for $x>b/a$, it results that $F_{y}>0$. Thus, the fixed point is unstable if $\alpha>b/a$. This is a necessary condition for the existence of the epidemic. If $\alpha<b/a$, then the fixed points are asymptotically stable, as depicted in Figure 2, where we have chosen $a=0.0005$ and $b=0.1$. Different curves in Figure 2 represent different initial conditions.

    Figure 2: Asymptotic stability of fixed points with $a=0.0005$ and $b=0.1$ in (67). The horizontal axis represents the scaled number of healthy people, while the vertical axis gives the scaled number of sick people. The arrow indicates the direction of time. Different curves are obtained using different initial conditions. Observe the presence of a maximum at a point which is independent of the initial conditions.

    Let us write (67) in matrix form as

    \dot{X}(t)=A(x,y,z)\cdot X(t)\,, (69)

    where,

    X(t):=\left(\begin{array}{c}x(t)\\ y(t)\\ z(t)\end{array}\right)\,,\qquad A(x,y,z):=\left(\begin{array}{ccc}0&-ax&0\\ 0&ax-b&0\\ 0&b&0\end{array}\right)\,. (70)

    In Figure 3, we show the curve giving the total number of infected people as a function of time. After a maximum, the number of sick people decays quickly. We have used the parameter values $a=0.0005$ and $b=0.1$ and the initial values $x(0)=300>b/a$, $y(0)=20$, $z(0)=0$.

    Figure 3: Number of infected people as a function of time. Observe the existence of a maximum, after which the curve decreases steeply.

    The error for the method is obtained using (65) again. In Table 6 below, we compare the errors, for given values of $h$, of our matrix method with those of the second and third order Taylor methods.


    h       Matrix Method  Taylor 2nd    Taylor 3rd
    0.01    2.2 10^{-11}   4.0 10^{-11}  1.9 10^{-11}
    0.1     2.0 10^{-8}    7.3 10^{-8}   2.0 10^{-11}
    1.0     2.1 10^{-4}    7.2 10^{-4}   1.7 10^{-7}
    2.0     3.6 10^{-3}    1.1 10^{-2}   1.2 10^{-5}

    TABLE 6.- Comparison between the errors of the matrix method and the Taylor method for the epidemic model.


    We see that the level of error is similar, although our method keeps the advantage of needing much less arithmetic than any Taylor method.

  • Lotka-Volterra equation.

    Models in population dynamics, such as predator-prey competition, were independently developed by the American biologist A.J. Lotka [11] and the Italian mathematician V. Volterra [12]; see [4, 26, 27, 28] for modern references on the Lotka-Volterra equation. Its most general form is

    \dot{x}_{1}=x_{1}(\varepsilon_{1}-a_{11}\,x_{1}-a_{12}\,x_{2})\,,
    \dot{x}_{2}=x_{2}(\varepsilon_{2}-a_{21}\,x_{1}-a_{22}\,x_{2})\,, (71)

    where $x_{1}$ and $x_{2}$ are functions of time $t$, and $\varepsilon_{i}$, $a_{ij}$, $i,j=1,2$, are constants. Following [26], we consider here a simpler, yet still non-linear, version of (71):

    \dot{x}=a\,x-b\,xy\,,
    \dot{y}=d\,xy-c\,y\,, (72)

    with the initial conditions $x(t_{0})=x_{0}$, $y(t_{0})=y_{0}$.

    We may consider the solutions $x=x(t)$, $y=y(t)$ of (72) as the equations determining a parametric curve in the $x$-$y$ plane. Then, by elimination of $t$ and integration, we obtain:

    C(x,y)=x^{c}y^{a}\,\exp\{-(dx+by)\}\,. (73)

    Note that $C(x,y)$ is constant on each solution curve (it is a constant of motion), and its value on each solution is determined by the initial conditions. Equations (72) have two fixed points, $(0,0)$ and $(c/d,a/b)$. For $x>0$ and $y>0$, all solutions are periodic.

    To apply the method proposed in the present article to this system, let us write it in matrix form as

    \dot{X}=A(x,y)\,X\,, (74)

    where,

    X(t)\equiv\left(\begin{array}{c}x(t)\\ y(t)\end{array}\right)\,,\qquad A\equiv\left(\begin{array}{cc}a-by&0\\ 0&dx-c\end{array}\right)\,. (75)

    In order to estimate the precision of the method, we need to choose the initial values as well as the parameters $a$, $b$, $c$ and $d$. We have used several choices, obtaining similar results in all of them. To show a table comparing the precision of our method with others, let us choose the parameter values $a=1.2$, $b=0.6$, $c=0.8$ and $d=0.3$. As the initial values, let us choose $x_{0}=y_{0}=3$. In addition, we have to fix an integration time; in our case we took $T=20$, which accounts for approximately three periods. A sketch of this setup is given below.
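    A hedged sketch of this setup (our own naming, reusing the solve stepper of Section 2), with the drift of the constant of motion (73) as an independent quality check:

    ```python
    # Hypothetical setup for the Lotka-Volterra system (74)-(75).
    import numpy as np

    a, b, c, d = 1.2, 0.6, 0.8, 0.3

    def A_lv(Y):
        """Diagonal matrix of equation (75) at the state Y = (x, y)."""
        return np.diag([a - b * Y[1], d * Y[0] - c])

    def C(x, y):
        """Constant of motion (73); its drift measures solution quality."""
        return x ** c * y ** a * np.exp(-(d * x + b * y))

    t, Y = solve(A_lv, [3.0, 3.0], 20.0, 2000)       # T = 20, h = 10^-2
    drift = np.max(np.abs(C(Y[:, 0], Y[:, 1]) - C(3.0, 3.0)))
    ```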

    The error produced by our matrix method is determined by formula (65) above. We compare this error with those of the second and third order Taylor methods, referred to the numerical solution obtained by a fourth order Runge-Kutta. These errors are shown in Table 7.


    h       Matrix Method  Taylor 2nd    Taylor 3rd
    0.01    4.1 10^{-8}    3.5 10^{-8}   8.2 10^{-13}
    0.1     3.5 10^{-8}    5.4 10^{-8}   2.3 10^{-9}
    0.2     2.6 10^{-3}    5.4 10^{-3}   1.0 10^{-5}

    TABLE 7.- Comparison between the errors of the matrix method and the Taylor method for the Lotka-Volterra equation.


    We see that the precision of our method is equivalent to that of a second order Taylor method, with much less arithmetic.

  • On the possibility of extending the method to PDEs: Burgers' equation.

    Can we extend the preceding discussion to partial differential equations admitting separation of variables? One possible example is the convection-diffusion equation in one spatial dimension [29]:

    \frac{\partial}{\partial t}u(x,t)=\frac{\partial}{\partial x}\left[D(u)\,\frac{\partial}{\partial x}\,u(x,t)\right]+Q(x,u)\,\frac{\partial}{\partial x}\,u(x,t)+P(x,u)\,. (76)

    Separation of variables for (76) is discussed in [29]. In general, our method is not applicable here, since the equations resulting after separation of variables are not of the form (1). Nevertheless, another point of view is possible. Assume we want to obtain approximate solutions of an equation of type (76) on the interval $[0,X]$ under the conditions $u(0,t)=0$, $u(X,t)=0$ and $u(x,0)=h(x)$, where $h(x)$ is a given smooth function and $u(0,0)=u(X,0)=0$. On the interval $[0,X]$, we define a uniform partition of width $h:=X/n$ and nodes $x_{k}=kh$, $k=0,1,\dots,n$.

    One may propose a discretization of the solution of the form $u_{k}(t):=u(x_{k},t)$, $k=0,1,\dots,n$. Then, the second derivative in (76) may be approximated using finite differences [30]:

    \frac{\partial^{2}}{\partial x^{2}}\,u(x_{k},t)=\frac{u(x_{k-1},t)-2u(x_{k},t)+u(x_{k+1},t)}{h^{2}}\,, (77)

    while for the first spatial derivative, we have

    \frac{\partial}{\partial x}\,u(x_{k},t)=\frac{u(x_{k+1},t)-u(x_{k-1},t)}{2h}\,, (78)

    with $k=1,2,\dots,n-1$, so that equation (76) takes the form

    \frac{d}{dt}\,U(t)=F(U(t))\,, (79)

    where $F$ maps $\mathbb{R}^{n-1}$ into $\mathbb{R}^{n-1}$ and $U(t)=(u_{1}(t),u_{2}(t),\dots,u_{n-1}(t))$ with $u_{k}(t):=u(x_{k},t)$, $k=1,2,\dots,n-1$, and initial values $U(0)=(u(x_{1},0),u(x_{2},0),\dots,u(x_{n-1},0))$ given by $u(x,0)=h(x)$. If it were possible to write equation (79) in the form:

    \frac{d}{dt}\,U(t)=A(U(t))\cdot U(t)\,, (80)

    then we would be able to apply our method to find an approximate solution of (80) on a given finite interval. This property is not fulfilled by the general convection-diffusion equation (76). However, it is satisfied by a particular member of this class of parabolic equations: the non-linear Burgers diffusion equation [31, 32]:

    \frac{\partial}{\partial t}\,u(x,t)=\frac{\partial^{2}}{\partial x^{2}}\,u(x,t)-u(x,t)\,\frac{\partial}{\partial x}\,u(x,t)\,. (81)

    We choose $[0,1]$ as the integration interval for the space variable, so that $X=1$ as above. For the time variable, we use $t\in[0,1]$. This choice is made just for simplicity, as an example on which to implement our numerical calculations. We use the finite difference method, where the second and first derivatives are replaced as in (77) and (78), respectively.

    Using (77) and (78) in (81), we have for each of the nodes the following recurrence relation:

    \frac{d}{dt}\,u(x_{k},t)=\frac{1}{h^{2}}\left(\left(1-\frac{h}{2}\,u(x_{k},t)\right)u(x_{k-1},t)-2u(x_{k},t)+\left(1+\frac{h}{2}\,u(x_{k},t)\right)u(x_{k+1},t)\right)\,, (82)

    for $k=1,2,\dots,n-1$. Writing expressions (82) in matrix form, we obtain a matrix equation of the form (80), and hence one suitable for applying our method; a sketch of this construction follows.
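    A sketch of this construction (our own naming; the tridiagonal matrix encodes (82), and the initial profile is an illustrative choice, not fixed by the text):

    ```python
    # Spatial discretization (82) of Burgers' equation (81) in the matrix
    # form (80), integrated in time with the solve stepper of Section 2.
    import numpy as np

    def A_burgers(U, hx):
        """Tridiagonal matrix acting on U = (u_1, ..., u_{n-1}), per (82)."""
        n1 = len(U)
        A = np.zeros((n1, n1))
        for k in range(n1):
            A[k, k] = -2.0 / hx ** 2
            if k > 0:
                A[k, k - 1] = (1.0 - hx * U[k] / 2.0) / hx ** 2
            if k < n1 - 1:
                A[k, k + 1] = (1.0 + hx * U[k] / 2.0) / hx ** 2
        return A   # boundary values u_0 = u_n = 0 are built in

    n, m = 5, 10                              # nodes as in the text
    hx = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]    # interior nodes x_1 ... x_{n-1}
    U0 = np.sin(np.pi * x)                    # sample initial h(x), our choice
    t, U = solve(lambda U: A_burgers(U, hx), U0, 1.0, m)
    ```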

    In our numerical realization, we have used a small number of nodes to begin with, say $n=5$. Then, we integrate in the time variable, on the interval $t\in[0,1]$, using $h_{t}:=1/m$, where $m$ is a given integer, as the distance between time nodes. Then, we compare our solution with the solution of (81) given by the NDSolve routine of the Mathematica software, which we denote as $v(x,t)$.

    We can then estimate the error of our solution as compared with $v(x,t)$. This is given by

    {\rm error}:=\frac{1}{m}\sum_{j=0}^{m}\sum_{k=1}^{n-1}(u(x_{k},t_{j})-v(x_{k},t_{j}))^{2}\,, (83)

    where $t_{j}:=jh_{t}$ for $j=0,1,2,\dots,m$. To estimate the error, we have to give values to $m$. For $m=10$, the error is $1.07\,10^{-4}$. A similar result is obtained with $m=20$ or even higher, so that $m=10$ already gives a reasonable approximation. The solution for $t=1$ is nearly zero, which one may have expected taking into account that equation (81) describes a dissipative model. Thus, our approximate solution may be considered satisfactory also in this case.

    A few more words on the comparison of our solution with the solution using the Euler method, performed through NDSolve. First of all, we use the explicit Euler method, for which the local error is $o(h_{t}^{2})$, through the option “Method $\to$ “ExplicitEuler”, “StartingStepSize” $\to$ $1/100$”, since for $h_{t}>0.01$ the result is unstable. Once the spatial discretization is done, we integrate (82) with respect to time. Instability often appears when one uses an explicit method of spatial and time discretization for parabolic PDEs [33]; it originates in the errors due to the arithmetic calculations, which are amplified by the time integration. Then, let us consider $n=5$ spatial nodes, $0\leq t\leq 1$ and $m=100$, so that the time integration step becomes $h_{t}=0.01$. Then, the error (83) is $3.40\,10^{-4}$.

    We may improve the time integration using Euler mid-point integration. In this case, one uses the option “Method $\to$ “ExplicitMidpoint”, “StartingStepSize” $\to$ $1/10$”. Choosing $m=10$, so that $h_{t}=0.1$, we obtain an error of $1.76\,10^{-4}$. This error is of the order of that given by our method. Recall that the local error of the mid-point Euler method is of the order of $o(h_{t}^{3})$.

    In addition, we may compare the precision of our method and of both Euler methods mentioned above with the errors obtained using the third and fourth order Adams methods (see Chapter III in [34]), which for $h_{t}=0.1$ are respectively given by $1.76\,10^{-4}$ and $1.74\,10^{-4}$. These errors have always been obtained using (83), where $u(x,t)$ is our solution and $v(x,t)$ is the solution given by either the Euler, mid-point Euler or Adams method. Nevertheless, the Adams method is a multi-step method while ours is a one step method, which means that ours is more easily programmable.

4 Concluding remarks

In the present paper, we have generalized a one step integration method developed in the seventies of the last century by Demidovich and some other authors [1, 2, 3]. While Demidovich and others restricted themselves to the search for approximate solutions of linear systems of first order differential equations, albeit with variable coefficients, we propose a way to extend their ideas to non-linear systems. The solutions we obtain have a degree of precision similar to those proposed in [1, 2, 3]. With our method, we obtain a sequence of approximate solutions, on a finite interval, of a system of ordinary differential equations, and we have proven that this sequence converges uniformly to the exact solution. In order to obtain each approximate solution, we divide the integration interval into subintervals of length $h$; the sequence of approximate solutions is obtained through successive refinements of $h$. For each approximate solution, characterized by a value of $h$, we have determined the local error up to $o(h^{3})$.

It is certainly true that our one-step matrix method does not improve the precision obtained with the fourth order Runge-Kutta method or other equivalent procedures. However, the great advantage of the proposed method over any other is that its construction algorithm, based on the exponential of a matrix as described in Section 2, is much simpler than those of its competitors. This simplicity is inherited from its predecessor, the Demidovich method [1, 2, 3]. This paper is, in a sense, a continuation of previous research by the authors in the same field [35, 36].

We have added some examples of the application of the method, in which we have performed numerous numerical tests of its precision, which lies between those of the second and third order Taylor methods. We have used the software Mathematica to implement these numerical tests.

Acknowledgements

We acknowledge partial financial support from the Spanish MINECO, grant MTM2014-57129-C2-1-P, and the Junta de Castilla y León, grants VA137G18, BU229P18 and VA057U16. We are grateful to Prof. L.M. Nieto (Valladolid) for some useful suggestions.

5 Appendix A: A conjecture relative to the Liénard equation

As is well known, the Liénard equation has the following form:

y^{\prime\prime}(t)+f(y)\,y^{\prime}(t)+g(y)=0\,. (84)

Let us assume that the function g(y) is the product of some function, which we also call g(y) for simplicity, and y, so that equation (84) takes the form:

y^{\prime\prime}(t)+f(y)\,y^{\prime}(t)+g(y)\,y(t)=0\,. (85)

This second order equation may be easily transformed into a two dimensional first order system (with y(t)=y(t), z(t)=y^{\prime}(t)):

\left(\begin{array}{c}y^{\prime}\\ z^{\prime}\end{array}\right)=\left(\begin{array}{cc}0&1\\ -g(y)&-f(y)\end{array}\right)\left(\begin{array}{c}y\\ z\end{array}\right)=A\left(\begin{array}{c}y\\ z\end{array}\right)\,, (86)

where the meaning of the matrix AA is obvious. Its eigenvalues are

\lambda_{\pm}(y)=-\frac{1}{2}\left(f(y)\pm\sqrt{f^{2}(y)-4g(y)}\,\right)\,. (87)
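As a simple check, the eigenvalues (87) may be recovered symbolically in Mathematica; the output is shown up to the ordering chosen by the system.

(* Symbolic check of (87): eigenvalues of the matrix A in (86). *)
A = {{0, 1}, {-g[y], -f[y]}};
Eigenvalues[A]
(* {(1/2)(-f[y] - Sqrt[f[y]^2 - 4 g[y]]),
    (1/2)(-f[y] + Sqrt[f[y]^2 - 4 g[y]])} *)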

The conjecture is the following: a sufficient condition for the solutions of (85) to be bounded for t>0 is that the following three properties hold simultaneously: i.) the discriminant in (87) is negative, i.e., f^{2}(y)-4g(y)<0; ii.) the function f(y) is non-negative, f(y)\geq 0; and iii.) the function g(y) is positive and smaller than one, 0<g(y)<1.

This conjecture is based on the form of equations (2) and (12) and we have tested it in several numerical experiments, along the lines of the sketch below.
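The following is a minimal sketch of one such experiment; the particular choice of f and g satisfying i.)-iii.), as well as the initial data, are ours and serve only as an illustration.

(* One numerical test of the conjecture. With f(y) = 1/(1 + y^2) and *)
(* g(y) = 1/2 we have f >= 0, 0 < g < 1 and f^2 - 4g < 0 everywhere. *)
f[y_] := 1/(1 + y^2); g[y_] := 1/2;
sol = First@NDSolve[{y''[t] + f[y[t]] y'[t] + g[y[t]] y[t] == 0,
     y[0] == 2, y'[0] == 0}, y, {t, 0, 100}];
Plot[Evaluate[y[t] /. sol], {t, 0, 100}]  (* the solution stays bounded *)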

Two more comments in relation to the Liénard equation:

1.- The vector field associated with the equation is given by J\equiv(z,-g(y)\,y-f(y)\,z), whose divergence is {\rm div}\,(J)=-f(y)\leq 0 whenever f(y)\geq 0. Therefore, if f(y) is non-negative, the divergence is never positive, so that the origin (0,0) is an attractor. We conjecture that this attractor is also global.

2.- If we take f(y)\equiv 0, then (85) represents a Hamiltonian flow with Hamiltonian given by

H(y,z)=\frac{1}{2}\,z^{2}+V(y)\,, (88)

with

V(y)=\int_{0}^{y}u\,g(u)\,du\,. (89)

Under the assumption g(y)>0, the derivative V^{\prime}(y)=y\,g(y) is positive for y>0 and negative for y<0. Thus, the only critical point is y^{*}=0; at this point V^{\prime}(0)=0 and V^{\prime\prime}(0)=g(0)>0, so that V has a strict minimum at the origin, the level curves of H are closed and all the orbits are periodic.
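The sketch below illustrates this behaviour for one particular choice of g (ours, for illustration only): the numerical orbit closes up and the Hamiltonian (88)-(89) remains constant along it.

(* Hamiltonian case f(y) = 0 with the illustrative choice g(y) = 1/(1 + y^2). *)
g[y_] := 1/(1 + y^2);
V[y_] := Log[1 + y^2]/2;  (* closed form of the integral (89) for this g *)
sol = First@NDSolve[{y''[t] + g[y[t]] y[t] == 0,
     y[0] == 1, y'[0] == 0}, y, {t, 0, 50}];
ham[t_] := (1/2) (y'[t] /. sol)^2 + V[y[t] /. sol];  (* numerically constant *)
ParametricPlot[Evaluate[{y[t], y'[t]} /. sol], {t, 0, 50}]  (* closed orbit *)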

6 Appendix B: Source code to use our method in the van der Pol equation

Figure 4: Source code used when applying the method to our example using the van der Pol equation.

References

  • [1] Demidovich B. Lectures on the Mathematical Theory of Stability. Nauka, Moscow, 1967.
  • [2] Kotsis D. The approximation of the characteristic multipliers of periodic differential equations. Alkalmaz. Mat. Lapok. 1977; 2: 269-276.
  • [3] Hsu C.S. On approximating a general linear periodic system. J. Math. Anal. Appl. 1974; 45: 234-251.
  • [4] Farkas M. Periodic Motions. Springer, New York, 1994.
  • [5] van der Pol B. On relaxation-oscillations, The London, Edinburgh and Dublin Phil. Mag. & J. of Sci. 1927; 2 (7): 978-992.
  • [6] Nepomuceno E.P., Peixoto M.L.C., Martins S.A.M., Rodrigues Junior H.M., Perc M., Inconsistencies in numerical simulations of dynamical systems using interval arithmetic, Int. J. Bifur. Chaos 2018; 28 (04): 1850055.
  • [7] Nepomuceno E.P., Guedes P.F.S., Barbosa A.M., Perc M., Repnik R., Soft computing simulations of chaotic systems, Int. J. Bifur. Chaos 2019; 29 (08): 1950112.
  • [8] Duffing G. Forced oscillations with variable natural frequency and their technical relevance (German), Heft 41/42, Braunschweig: Vieweg; 1918, 1-134.
  • [9] Lorenz E.N., Deterministic non-periodic flow, J. Atmos. Sci. 1963; 20 (2): 130-141.
  • [10] Hale J., Kocak H., Dynamics and Bifurcations. Springer-Verlag; New York, 1991.
  • [11] Lotka A.J. Elements of Physical Biology. Williams and Wilkins; Philadelphia, USA, 1925.
  • [12] Volterra V. Variations and fluctuations of the number of individuals in animal species living together, in Animal Ecology, R.N. Chapman, Ed., McGraw-Hill, New York, 1931.
  • [13] Beléndez A., Gimeno E., Fernández E., Méndez D.I., Alvarez M.L. Accurate approximate solutions to non-linear oscillators in which the restoring force is inversely proportional to the dependent variable. Phys. Scr. 2008; 77: 065004.
  • [14] Egger H., Schmidt K., Shashkov V. Multistep and Runge-Kutta convolution quadrature methods for coupled dynamical systems, to be published in J. Comp. Appl. Math.
  • [15] Kalogiratou Z., Monovasilis Th., Psihoyios G., Simos T.E., Runge-Kutta type methods with special properties for the numerical integration of ordinary differential equations. Phys. Rep. 2014; 536: 75-146.
  • [16] Mickens R. Oscillations in Planar Dynamic Systems. World Scientific, London, 1996.
  • [17] Haken H. Analogy between higher instabilities in fluids and lasers. Phys. Lett. A 1975; 53: 77-78.
  • [18] Gorman M., Widmann P.J., Robbins K.A., Nonlinear dynamics of a convection loop: A quantitative comparison of experiment with theory. Physica D. 1986; 19 (2): 255-267.
  • [19] Coumo K.M., Oppenheim A.V., Circuit implementation of synchronized chaos with applications to communications. Phys. Rev. Lett. 1993; 71 (1): 65-68.
  • [20] Mishra A., Sanghi S. A study of the asymmetric Malkus waterwheel: The biased Lorenz equations. Chaos 2006; 16 (1): 013114.
  • [21] Barrio R. Performance of the Taylor series method for ODEs/DAEs. Appl. Math. Comp. 2005; 163: 525-545.
  • [22] Lara L.P., Gadella M. An approximation to solutions of linear ODE by cubic interpolation. Comp. Math. Appl. 2008; 56: 1488-1495.
  • [23] Kermack W.O., McKendrick A.G. A contribution to the Mathematical Theory of Epidemics. Proceedings of the Royal Society A 1927; 115 (772): 700-721.
  • [24] Strogatz S. Non-linear Dynamics and Chaos. Addison-Wesley, New York, 1994.
  • [25] Torralba-Rodríguez O., Conde-Gutiérrez R.A., Hernández-Javier A.L. Modeling and prediction of COVID-19 in México applying mathematical and computational models. Chaos, Solitons and Fractals 2020; 138: 1009946.
  • [26] Chicone C. Ordinary Differential Equations with Applications. Springer, New York, 1999.
  • [27] Murray J.D. Mathematical Biology I: An Introduction. Springer, Berlin and New York, 2003.
  • [28] Llibre J., Valls C., Global analytic first integrals for the real planar Lotka-Volterra system. J. Math. Phys. 2007; 48 (3): 033507.
  • [29] Huabing Jia, Wei Xu, Xiaoshan Zhao, Zhanguo Li, Separation of variables and exact solutions to nonlinear diffusion equations with x-dependent convection and absorption. J. Math. Anal. Appl. 2008; 339: 982-995.
  • [30] Kincaid D, Cheney W, Numerical analysis, Mathematics of Scientific Computing, AMS, Providence, Rhode Island, USA, Third Edition 2002.
  • [31] Burgers J.M., A Mathematical Model Illustrating the Theory of Turbulence. Academic Press, New York, 1948.
  • [32] Biazar J., Aminikhah H., Exact numerical solutions for non-linear Burger’s equation by VIM. Math. Comp. Mod. 2009; 49: 1394-1400.
  • [33] Carnahan B., Luther H.A., Wilkes J.O., Applied Numerical Methods. Wiley, New York, 1969.
  • [34] Hairer E., Nørsett S.P., Wanner G., Solving Ordinary Differential equations I: Nonstiff Problems. Springer, Berlin, Heidelberg, 1993.
  • [35] Gadella M., Lara L.P. On the determination of approximate periodic solutions of some non-linear ODE. Appl. Math. Comp. 2012; 18 (10): 6038-6044.
  • [36] Gadella M., Lara L.P., Pronko G.P. Iterative solution of some nonlinear differential equations. Appl. Math. Comp. 2011; 217 (22): 9480-9487.