Forecasting causal dynamics with universal reservoirs
Abstract
An iterated multistep forecasting scheme based on recurrent neural networks (RNN) is proposed for the time series generated by causal chains with infinite memory. This forecasting strategy contains, as a particular case, the iterative prediction strategies for dynamical systems that are customary in reservoir computing. Readily computable error bounds are obtained as a function of the forecasting horizon, functional and dynamical features of the specific RNN used, and the approximation error committed by it. The framework in the paper circumvents difficult-to-verify embedding hypotheses that appear in previous references in the literature and applies to new situations like the finite-dimensional observations of functional differential equations or the deterministic parts of stochastic processes to which standard embedding techniques do not necessarily apply.
Key Words: forecasting, dynamical system, ergodicity, nonuniform hyperbolicity, Oseledets’ theorem, recurrent neural network, reservoir computing, state-space system, echo state network, ESN, generalized synchronization, causal chain with infinite memory, universality, universal approximation.
1 Introduction
Recurrent neural networks (RNN) and in particular reservoir computing (RC) [Luko 09, Tana 19] have gained much visibility in recent years due to their remarkable success in the learning of the chaotic attractors of complex nonlinear infinite dimensional dynamical systems [Jaeg 04, Path 17, Path 18, Lu 18, Peco 20, Hart 20, Grig 21a, Arco 22] and in a great variety of empirical classification and forecasting applications [Wyff 10, Bute 13, Grig 14]. Even though much research has been conducted in recent years that theoretically explains this good performance, most of the work has concentrated on the estimation [Gono 20a] and approximation [Grig 18b, Grig 18a, Gono 20b, Gono 21, Cuch 22, Gono 23a] properties of input/output systems, and not on the way in which reservoir computing has traditionally been used as an RNN to forecast the observations of a dynamical system.
This paper aims to quantify how the universal approximation properties of a family of recurrent networks in the sense of input/output systems impact their approximation properties in the realm of the modeling and forecasting of the observations of autonomous dynamical systems and, more generally, of causal chains with infinite memory (CCIM) [Dede 07, Douk 08, Alqu 12]. More specifically, we shall quantify the forecasting performance of an RNN that has been trained to mimic the observations of a dynamical system or a causal chain to a certain accuracy as an input/output system (where the output is a time-shifted version of the system observations in the input). This result will be obtained with minimal requirements on the state-space map. The upshot of our main result is that the short-term forecasting (or path-continuation) error committed by an RNN that has been trained to uniformly approximate the dynamics of the observations up to a certain accuracy is bounded above by the product of that approximation accuracy with an exponential factor controlled by the top Lyapunov exponent of the autonomous dynamical system associated with the RNN that has been trained using the observations.
The two most popular forecasting schemes for causal dynamics are the so-called iterated multistep and direct forecasting strategies. There is a consensus that direct forecasting methods are more robust to model misspecification than those based on iterated multistep schemes such as the RNN-based ones [Marc 06, Ghys 09, McEl 15]. This statement can be rigorously proved in many contexts, especially when estimation errors can be neglected. In our work, we focus on the iterated multistep approach since we are interested not only in minimizing short-term forecasting errors but ultimately in using the trained system as a proxy for the underlying data-generating process.
The causal chains that we forecast in this paper contain, as particular cases, two important dynamical elements. First, given a dynamical system on a -dimensional manifold and a map that provides lower dimensional observations of it, Takens’ Embedding Theorem [Take 81] proves that time series of -observations of form generically (that is, for an open and dense set of observation maps in the topology) a causal chain (actually of finite memory). Second, causal chains also appear as the deterministic parts of many stochastic processes after using the Wold [Broc 06, Theorem 5.7.1] or the Doob [Foll 02] decompositions.
Observations-based modeling and forecasting in the dynamical systems context is a venerable topic that has traditionally been tackled using attractor reconstruction techniques based on Takens’ Embedding Theorem [Take 81, Star 99, Huke 06, Huke 15] and embedology [Saue 91]. More recently, new links have been established between traditional RNN-based forecasting schemes and dynamical systems learnability with classical notions like attractor embeddings and generalized synchronizations. The reader is referred to [Hart 20, Lu 20, Manj 20, Verz 20, Verz 21, Hart 21, Grig 21a, Grig 23, Manj 21, Manj 22] and references therein for recent work in this direction.
A particularly interesting recent work in the context of dynamical systems is [Berr 23], which introduces a very general embedding framework that contains, as a particular case, most of the schemes in the references that we cited above. Moreover, it provides error estimates for the iterative multistep and direct forecasting strategies in that general framework, which are somewhat related to the results that we present in this paper. In that respect, we conclude this introduction by emphasizing some features of our approach that make it useful and, at the same time, differentiate it from the references above:
• We focus exclusively on the modeling and forecasting of the observations of a given dynamical system and not of the full dynamical system itself. This choice allows us to circumvent the need for embedding and general synchronization properties of the state-space system used for the learning and the dynamical system to be forecasted. This fact is particularly important because it is difficult to establish that a given state-space system driven by prescribed observations of a dynamical system has embedding properties, a hypothesis that appears in most references like [Berr 23]. The embedding property has been proved for certain linear Takens-like systems [Grig 23] but remains to be shown for many standard universal RC and RNN paradigms like echo state networks [Jaeg 04] or state-affine systems [Grig 18b], even though the synchronization maps that they induce have been proved to be typically differentiable [Grig 21a].
• Notwithstanding what has been said above, when given access to the full states of a dynamical system, our work also provides bounds for forecasting the dynamical system itself via two strategies, namely (i) seeing the identity map on the dynamical system as an observation map, and (ii) the causal embedding learning scheme introduced in [Manj 21]. The latter scheme uses a state map to learn a dynamical system from inputs consisting of the dynamical system’s left-infinite trajectories. When the reservoir map satisfies a condition called state-input invertibility (see [Manj 21] or Section 3.2 later on for details), the scheme is a particular case of our setup.
• Unlike the bounds for the iterated multistep scheme in [Berr 23], the ones that we propose in this work are explicit (can be derived from constants introduced in the proofs) and are valid within a finite time horizon and not just asymptotically.
• Our treatment covers not only observations of dynamical systems but also extends to general causal chains with infinite memory. We can hence handle forecasting problems that go beyond the situations usually treated with embedding techniques. These could be the solutions of infinite dimensional or functional differential equations (see example in Section 4) or the deterministic parts of stochastic processes.
• The short-term forecasting bounds, or at least the rate at which the error grows with the forecasting horizon, are readily computable in terms of the approximation error and analytic and dynamical features of the RNN used for forecasting. The dependence on the approximation error makes it possible to use bounds developed in previous works (see [Gono 23a, Gono 23b]) that estimate it as a function of relevant architecture parameters in many popular reservoir families.
2 Preliminaries
This section describes the dynamic processes that are the object of our research and the forecasting scheme we will use for them. Additionally, we spell out some classical results on ergodic theory that will be central in our developments.
The data generating process (DGP) and its approximants.
Our results apply to the discrete-time evolution of dynamical variables in a -dimensional Euclidean space whose behavior is determined by a causal chain with infinite memory (CCIM). More specifically, we are interested in the dynamical properties of infinite sequences , where , for all , for which there exists a functional such that
and where the symbol denotes the semi-infinite sequence . These dynamics are the deterministic counterparts of various structures with a widespread presence in time series analysis. See for instance [Dede 07, Douk 08, Alqu 12] and references therein.
The CCIM determined by the functional has a canonical causal and time-invariant filter associated determined by the equality
(2.1)
A classical problem in control and systems theory is the realization of filters like (2.1) as the unique solution (when it exists) of a state-space transformation
(2.2)
(2.3) |
where and are a state and a readout map, respectively, is, in this case, the state space and, typically, is much bigger than . Some classical references in connection to this problem are [Kalm 59b, Kalm 59a, Kalm 60, Kalm 62, Baum 66, Kalm 10] or, more recently, [Sand 78, Sont 79, Dang 84, Nare 90, Matt 94, Lind 15, Grig 21b, Orte 24b]. Even though the realization problem does not always have a solution, it has been shown that if the domain of the functional is restricted to a space of uniformly bounded sequences of the type for some and, additionally, it has a continuity condition called the fading memory property [Boyd 85, Grig 18b], then various families of state-space transformations of the type (2.2)-(2.3) can be used to approximate it uniformly. This is the case for linear systems with polynomial or neural network readouts [Boyd 85, Grig 18b, Gono 20b], state-affine systems with linear readouts [Grig 18b], or echo state networks [Grig 18a, Gono 20b, Gono 23a, Gono 21]. In other words, any of these families exhibit universality properties in the fading memory category. This allows us to conclude that given a fading memory functional and , there exists a state-space system in that family to which a functional can be associated and such that
For future reference, a condition that ensures that a state-space system has an associated functional is the so-called echo state property (ESP) [Jaeg 10]: the state equation has the ESP with respect to inputs in when for any there exists a unique such that (2.2) holds for any . In such a situation, has an associated filter that is uniquely determined by the relations . The filter induces the functional given by . The functional corresponding to the state-space system is given by . The ESP has important dynamical implications [Grig 19, Manj 20, Manj 22]. In practice, the state space is often taken to be compact. In this case, if is a continuous map and a contraction on its first entry, the ESP is ensured. Indeed, for such a state map, it can be shown that with inputs from there is a compact subset such that . Stochastic generalizations of some of these statements can be found in [Manj 23, Orte 24a].
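The contraction argument can be illustrated numerically. The following minimal sketch (all matrix sizes and scalings are illustrative and not taken from the paper) drives an ESN-type state map that is contracting in its state argument with the same input sequence from two different initializations; the resulting states become indistinguishable, which is the echo state property at work.

```python
import numpy as np

# Minimal sketch (illustrative sizes and scalings, not taken from the paper):
# an ESN-type state map x_t = tanh(A x_{t-1} + C z_t) is a contraction in its
# state argument whenever ||A||_2 < 1, so the ESP holds for bounded inputs.
rng = np.random.default_rng(0)
n, d = 50, 1                          # reservoir and input dimensions
A = rng.normal(size=(n, n))
A *= 0.9 / np.linalg.norm(A, 2)       # enforce spectral norm (Lipschitz constant in x) < 1
C = rng.normal(size=(n, d))

def state_map(x, z):
    """One step of a state equation of the form (2.2)."""
    return np.tanh(A @ x + C @ z)

# Drive two arbitrary initializations with the same bounded input sequence:
# the corresponding states converge, illustrating the echo state property.
z_seq = rng.uniform(-1, 1, size=(200, d))
x1, x2 = rng.normal(size=n), rng.normal(size=n)
for z in z_seq:
    x1, x2 = state_map(x1, z), state_map(x2, z)
print("state gap after 200 steps:", np.linalg.norm(x1 - x2))  # essentially zero
```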
Causal chains with infinite memory and dynamical systems observations.
A particularly important family of CCIMs can be generated using the observations of some dynamical systems. Indeed, let be a manifold and let be a continuous map that determines a dynamical system on it via the evolution recursions
using a given point as initial condition. The data available for the forecasting of a dynamical system is given in the form of observations that are usually encoded as the values of a continuous map , that is, they are given by the sequences . The problem that we tackle in this work is the forecasting or continuation of those sequences using state-space transformations of the type (2.2)-(2.3) that have been learned/trained using exclusively temporal traces of -observations.
We shall now show that the observations of a large class of dynamical systems are CCIMs. More specifically, we shall work in the context of invertible dynamical systems for which an invertible generalized synchronization map can be found. We briefly recall the meaning of these terms. First, we say that the dynamical system on is invertible when it belongs either to the set of homeomorphisms (continuous invertible maps with continuous inverse) of a topological space or to the set of diffeomorphisms of a finite-dimensional manifold . The invertibility of allows us, given an observation map , to define the -delay map as . Second, given a state-space map , consider the drive-response system associated to the -observations of and determined by the recursions:
(2.4) |
We say that a generalized synchronization (GS) [Rulk 95, Peco 97, Ott 02, Bocc 02, Erog 17] occurs in this configuration when there exists a map such that for any , , as in (2.4), it holds that
(2.5) |
that is, the time evolution of the dynamical system in phase space (not just its observations) drives the response in (2.4). We emphasize that the definition (2.5) presupposes that the recursions (2.4) have a (unique) solution, that is, that there exists a sequence such that (2.4) holds true. When that existence property holds and, additionally, the solution sequence is unique, we say that has the -Echo State Property (ESP). The existence, continuity, and differentiability of GSs have been established, for instance, in [Star 99, Grig 21a] for a rich class of systems that exhibit the so-called fading memory property and that are generated by locally state-contracting maps . In such a framework, the state equation (2.4) uniquely determines a filter that satisfies the recursion , for all , in terms of which the GS can be written as
(2.6) |
The following result, introduced in [Grig 21a, Grig 24], shows that the observations of a dynamical system in the presence of an injective GS form a CCIM. More explicitly, we shall be assuming that the GS is an embedding, which means that is an injective immersion ( map with injective tangent map) and, additionally, the manifold topology in induced by coincides with the relative topology inherited from the standard topology in .
Proposition 2.1
Let be an invertible dynamical system on the manifold and let be an observation map. Let be a state map that determines a generalized synchronization when driven by the -observations of as in (2.4). Suppose that is an embedding. Then:
(i) The set is an embedded submanifold of the reservoir space .
(ii) There exists a differentiable observation map that extracts the corresponding observations of the dynamical system out of the reservoir states. That is, with the notation introduced in (2.4):
(2.7)
(iii) The maps and determine a differentiable dynamical system given by
(2.8)
which is -conjugate to by , that is,
(2.9)
(iv) The sequences of -observations , , for a fixed , are causal chains with infinite memory.
The proof of this result can be found in [Grig 24] and is an elementary consequence of the fact that is an embedding (see, for instance, [Abra 88] for details). The map in (ii) is (notice that it can be constructed since, by hypothesis, is invertible). Regarding (iv), this is a consequence of (2.7), (2.5), and (2.6):
where the dependence of the newly defined map exclusively on is a consequence of the causality of the filter .
As we show in the next corollary, the last part of this proposition, together with Takens’ Embedding Theorem (or, more generally, the main result in [Grig 23]), implies that generic observations of a large class of dynamical systems form CCIMs.
Corollary 2.2
Let be a compact manifold of dimension and let be a twice-differentiable diffeomorphism that satisfies the following two properties:
(i) has only finitely many periodic points with periods less than or equal to .
(ii) If is any periodic point of with period , then the eigenvalues of the linear map are distinct.
Then for any generic scalar observation function , the sequences of -observations are causal chains with infinite memory.
Proof. Recall first that, under the hypotheses of the corollary, Takens’ Theorem [Take 81, Huke 06] guarantees that a -truncated version of the -delay map given by is a continuously differentiable embedding. This map is, in turn, the GS corresponding to the linear state map , with the lower shift matrix in dimension and . The claim then follows from part (iv) of Proposition 2.1.
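The linear state map in this proof is easy to realize concretely. The short sketch below (the number of delays and the observation sequence are illustrative placeholders) iterates the state equation with the lower shift matrix as connectivity and the first canonical basis vector as input mask, and checks that the resulting state is exactly the vector of delayed observations, that is, the truncated delay map acting as the GS.

```python
import numpy as np

# Illustrative sketch of the linear state map used in this proof: with A the
# lower shift matrix and C the first canonical basis vector, the state driven
# by scalar observations is the vector of the last q observations, i.e. the
# generalized synchronization is the (truncated) delay map.  The number of
# delays q and the observation sequence below are placeholders.
q = 5
A = np.eye(q, k=-1)                    # lower shift matrix
C = np.eye(q)[:, 0]                    # input mask e_1

y = np.sin(0.3 * np.arange(200))       # any scalar observation sequence
x = np.zeros(q)
for t in range(len(y)):
    x = A @ x + C * y[t]               # linear state equation driven by the observations

print(np.allclose(x, y[-1:-q - 1:-1]))  # True: x = (y_T, y_{T-1}, ..., y_{T-q+1})
```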
Ergodic theorems for dynamical systems.
An important tool in the study of dynamical systems is the concept of ergodicity, which allows us to translate statements about space averages to averages along trajectories of the system, thus relating global analytic properties of a map to the dynamics it induces. Of particular relevance to our work is the study of nonuniform hyperbolicity and the multiplicative ergodic theorem of Oseledets [Osel 68, Ruel 79, Barr 07]. We briefly introduce the theory as applied to our context, keeping to the details and results necessary for our work. Our overview is based largely on [Barr 07, Chapter 1 and Chapter 3].
Let be an -dimensional submanifold of and let be a dynamical system acting on . A concept that allows us to quantify the rate of divergence of two points initially close together is that of the Lyapunov exponent. For a point the forward Lyapunov exponent along a vector in the tangent space of at the point is defined as
(2.10) |
where is the tangent map of at the point . Similarly, we define the backward Lyapunov exponent
(2.11) |
It can be shown that, for any given point , the forward Lyapunov exponent satisfies the following three properties:
(i) for all and all ;
(ii) for all and ;
(iii) where we make use of the convention .
As a result, takes on finitely many values for some . These are related to the differing rates of convergence/divergence along different tangent directions. The properties (i)–(iii) also allow us to construct a filtration of vector subspaces in the tangent space using the ordered Lyapunov exponents to single out linearly independent tangent directions up to a given Lyapunov value. The filtration is where we define
(2.12) |
for , that is, is the subspace of the tangent space consisting of all directions with asymptotic rate of divergence at most . The number of tangent directions having a given rate of divergence is for . The value is referred to as the multiplicity of the Lyapunov exponent .
In like manner, for the backward Lyapunov exponent takes on finitely many values for some and has a filtration defined by
(2.13) |
for In a similar fashion to the forward Lyapunov exponent, the multiplicity of the backward Lyapunov exponent is defined as for
While the forward Lyapunov exponent captures the growth of tangent vectors in forward time, the backward exponent measures this growth in backward time. An obstacle in the definitions (2.10) and (2.11) is the lim sup, which makes the growth of individual terms in the sequences intractable. To deal with this, we relate the forward and backward Lyapunov exponents and their corresponding filtrations. This relation is expressed in the notion of coherence. The filtrations and are said to be coherent if
(i)
(ii) there is a decomposition such that
(2.14)
(iii) uniformly over
The subspaces are called the Oseledets subspaces and the decomposition is called the Oseledets splitting. The number of distinct Lyapunov exponents and the Oseledets subspaces are invariant along a trajectory of the dynamical system in the sense that and for . The Oseledets subspaces can be constructed by taking for . Expressing them in this way is instructive: the set filters out those tangent directions which, in forward time, have greater Lyapunov exponent than , while the set filters out those that, in backward time, have greater Lyapunov exponent than . Thus the tangent directions left in are in some sense pure, in that their Lyapunov exponents in forward and backward time are the same.
We focus on a certain regular class of points whose properties allow us to establish a link between the Lyapunov exponent and the rate of divergence of points in a nearby neighborhood using the theory of nonuniform hyperbolicity. In particular, for these points the lim sups in (2.10) and (2.11) become limits and they possess a splitting as in (2.14), allowing us to decompose the tangent space into subspaces where the rate of expansion/contraction under the action of the dynamical system is uniform. A point is said to be forward regular if
Likewise, is said to be backward regular if
In both cases, the expression on the right is the sum of the distinct values of the Lyapunov exponent counted with multiplicity. Intuitively, this says that a small volume of space around the point grows at an exponential rate controlled by the sum of the Lyapunov exponents. Finally, we are ready to define the set of regular points: a point is said to be Lyapunov-Perron regular, or simply regular, if it is both forward and backward regular and the filtrations and are coherent. This brings us to Oseledets’ Multiplicative Ergodic Theorem [Barr 07, Theorem 3.4.3].
Theorem 2.3 (Oseledets’ Multiplicative Ergodic Theorem)
Let be a dynamical system acting on the -dimensional submanifold of . Let be a Borel probability measure on , invariant with respect to , and suppose and where . Then the set of regular points is invariant under and has full -measure.
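To make the content of this theorem concrete, the following sketch (purely illustrative; the Hénon map is not one of the systems studied later in the paper) numerically approximates the Lyapunov exponents of a standard two-dimensional chaotic map. Tangent maps along a trajectory are accumulated with QR re-orthogonalization, and the averaged logarithms of the diagonal of R approximate the exponents whose existence Oseledets’ theorem guarantees for almost every point.

```python
import numpy as np

# Numerical illustration of the multiplicative ergodic theorem (not part of
# the paper's experiments): the Lyapunov exponents of the Henon map,
# approximated by accumulating tangent maps along a trajectory with QR
# re-orthogonalization.
a, b = 1.4, 0.3

def f(p):
    x, y = p
    return np.array([1.0 - a * x**2 + y, b * x])

def jac(p):
    x, _ = p
    return np.array([[-2.0 * a * x, 1.0], [b, 0.0]])

p = np.array([0.1, 0.1])
Q = np.eye(2)
log_r = np.zeros(2)
n_steps = 100_000
for _ in range(n_steps):
    Q, R = np.linalg.qr(jac(p) @ Q)     # re-orthogonalize the evolved tangent frame
    log_r += np.log(np.abs(np.diag(R))) # accumulate the per-step stretching rates
    p = f(p)

print(log_r / n_steps)  # approximately (0.42, -1.62)
```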
While above we gave the standard definition for a regular point, we require a slightly stronger consequence of regularity found in [Barr 07, Theorem 1.3.11]. The result involves a certain type of basis which is conveniently adapted to the associated filtration at the point.
Definition 2.4
Consider an -dimensional vector space and a filtration A basis of is said to be normal with respect to this filtration if for every there is a basis for in the set
The result we need is in fact an equivalent condition for forward regularity of a point, quantifying how a small volume at a point expands along trajectories. It allows us to bound how the angles between the Oseledets subspaces in the Oseledets splitting evolve with time. For a set of linearly independent vectors, we denote by the volume of the -parallelepiped formed by these vectors. The following result can be found in [Barr 07, Theorem 1.3.11].
Theorem 2.5
Let be an -dimensional submanifold of and let be a dynamical system acting on . Consider a point , with associated filtration as defined above. Then is forward regular if and only if for any basis for , normal with respect to and for any
3 Forecasting causal chains with recurrent neural networks
We now study the iterated multistep forecasting scheme for causal chains using recurrent neural networks and prove our main result. The main theorem provides error bounds for the forecasting error as a function of the forecasting horizon and the dynamical properties of the RNN used in the prediction exercise.
3.1 The iterated multistep forecasting scheme
We formulate the forecasting problem in a simplified setup in which we are given a bi-infinite sequence which is a realization of a causal chain with infinite memory determined by a functional . This means that the entries of satisfy , for all . The iterated multistep forecasting of consists in producing predictions of the positive entries , of based on the knowledge of the semi-infinite sequence . The RNN-based iterated multistep forecasting scheme consists of first finding a state-space system with associated functional that uniformly -approximates , that is,
(3.1) |
Once such has been found, the predictions are iteratively produced in tandem with a sequence in the state space according to the scheme
(3.2)
(3.3) |
for , where we initialize and . If we denote by the concatenation of and , we can write our two sequences as
(3.4) |
We also let . While our aim is to estimate the forecasting error we do this by mapping the sequences and into the state-space variables via and studying how the error accumulates there.
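The scheme can be summarized by the following sketch. The names state_map and readout are hypothetical placeholders for a state-space system that is assumed to be already trained so that it satisfies (3.1); the washout-based initialization only approximates the state at time zero, since in practice just a finite history is available (see the discussion below).

```python
import numpy as np

# Schematic implementation of the iterated multistep scheme (3.2)-(3.3).
def iterated_forecast(state_map, readout, history, horizon, x_init):
    """history: known observations ..., z_{-1}, z_0; returns forecasts for t = 1, ..., horizon."""
    x = x_init
    for z in history[:-1]:            # washout with the known past ..., z_{-2}, z_{-1}
        x = state_map(x, z)
    z_hat = history[-1]               # initialize the feedback with z_0
    forecasts = []
    for _ in range(horizon):
        x = state_map(x, z_hat)       # (3.2): advance the state with the previous output
        z_hat = readout(x)            # (3.3): read out the next prediction
        forecasts.append(z_hat)
    return np.array(forecasts)
```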
The reason why we called this setup simplified at the beginning of this paragraph is that in practice, one does not have at one’s disposal a full semi-infinite history of the chain that one wants to forecast but only a finite sample, say . This fact carries in its wake two complementary sources of error that we briefly describe, together with references where they are specifically addressed:
(i) The estimation error: the universality results that we cited in the introduction guarantee the existence of uniform approximants that satisfy (3.1) in a variety of situations and for various families of state-space systems. Given a finite sample , the approximant in the family is in general estimated using empirical risk minimization with a prescribed loss. All estimation methods have an associated error that decreases with the sample size. Explicit bounds for this error can be found in [Gono 20a].
(ii) The state starting error: as can be seen in (3.4), the iterations that produce the forecasts are initialized using the state given by , which is uniquely determined if has the echo state property and the entire semi-infinite sequence is known. When only a finite history is available, the initial state can be determined only approximately, which leads to what we call the state starting error. If the state map is contracting in the state variable, then the corresponding filter has what is called the state forgetting property [Grig 19, Theorem 27], elsewhere known as the uniform attraction property (see [Manj 22]). This property implies that if we initialize the state equation at an arbitrary state sufficiently far in the past, then the resulting state at converges to as , regardless of the initialization chosen. This means that in the presence of this property, the state can be approximated by using a sufficiently long washout. In the dynamical systems context for which an embedding generalized synchronization exists, one can learn a map that allows what is called the cold starting of the RNN without necessarily using a washout (see [Grig 24] for a presentation of this methodology).
Bounds for the forecasting error.
We proceed to develop the necessary theory to estimate the error associated with the iterated multistep forecasting scheme as a function of the forecasting horizon. We do this by mapping the true and predicted sequences into the state space using the state-space system. Predictions in the state space come from the trajectory of the associated dynamical system. Using ergodic theory, we can track the accumulation of error.
Let be a Lipschitz constant for , that is, for any . According to the identities (3.4) we have
(3.5) |
Thus we can track the prediction error by bounding the error in the state space. Associated to the state-space system is the dynamical system determined by . According to the iterations (3.2) and (3.3), the sequence is a trajectory of the dynamical system , that is, for , with initial condition . Defining for , we have
(3.6) |
where is the error resulting from the Taylor approximation of . Let be a Lipschitz constant for the second argument of , that is, for any and any . Since , we can bound .
Bounds for the linearized error.
We begin our analysis by ignoring the Taylor approximation error and considering the linearized sequence
Using the ergodic Theorems 2.3 and 2.5 introduced in Section 2, we bound the growth of such a sequence in Theorem 3.1. The ergodic theorems tell us that, asymptotically, the convergence/divergence of trajectories is dominated by an exponential factor. While this growth estimate may deteriorate over time, it will only do so at a small exponential rate, called the leakage rate in [Berr 23]. Including the Taylor error leads to a quadratic term in the sequence of errors. Dealing with the error accumulation in this case is harder, but we bound it up to a finite time horizon later on in Theorem 3.11.
Theorem 3.1
Let be a dynamical system acting on the -dimensional submanifold of and let be a compact invariant subset of . Consider a point and its trajectory in forward time under the dynamical system , that is, for , with the initial condition . Along with this, consider a sequence such that for all and, moreover, the sequence is bounded by a constant . Construct the sequence according to the linear recursion
Then for any -invariant Borel probability measure on , there exists a -invariant set of full measure such that for every point , for any sufficiently small leakage constant , the corresponding sequence is bounded by
where is the largest Lyapunov exponent of , we write and is a constant depending on the initial point and the leakage constant.
Remark 3.2
We prove in fact that
(3.7) |
for constants and . Choosing the constant appropriately then gives us the bound in the theorem. In particular, for the first and third cases we may take and , respectively. In the second case, we note that is eventually overtaken by , so there is a constant such that for all . Taking yields the desired result.
Remark 3.3
In practice, we are concerned with the case when the largest Lyapunov exponent is positive, which is a necessary condition for chaos. In this case, trajectories of the dynamical system that start arbitrarily near to each other may diverge (sensitivity to initial conditions), and we wish to bound the rate at which this occurs. When the leading Lyapunov exponent is negative, trajectories that start near to one another converge, making this scenario trivial. While we deal with all possible values of the leading Lyapunov exponent in our theorem, for nonuniformly hyperbolic behavior to hold, we require that all the Lyapunov exponents be nonzero.
Remark 3.4
The leakage constant quantifies the rate at which the exponential estimate deteriorates. Typically we take . As noted in the theorem, the constant has a dependence on . Taking a smaller leakage constant will necessarily entail a larger constant .
Remark 3.5
We comment on the assumption of the existence of a compact invariant set . First of all, since the Borel probability measure is defined on , compactness guarantees that satisfies the integrability condition in Oseledets’ Theorem, the cornerstone of our proof. Secondly, by the Krylov-Bogolyubov theorem, the compactness of ensures that a -invariant Borel probability measure for exists. Indeed, we can go even further and guarantee the existence of an ergodic Borel probability measure for , since the set of invariant probability measures is convex in the space of measures on , and the ergodic measures are precisely the extreme points of this set. For an ergodic invariant probability measure, almost all points will have the same Lyapunov exponent. The question of the uniqueness of such measures is well-studied. Given an ergodic measure on , there is at most one ergodic invariant measure on , absolutely continuous with respect to . For reference texts on the topic of unique ergodicity see [Walt 00, Chapter 6] and [Laso 13, Chapter 4].
Remark 3.6
The theorem that we just formulated and its proof below correct a misstep in [Berr 23]. Indeed, in the proof, we decompose the error accumulation sequence into the Oseledets subspaces and track its growth separately in each space. A problem arises when we need to recombine the errors across the Oseledets subspaces, since the angles between Oseledets subspaces may change along a trajectory of the dynamical system, so that errors in individual Oseledets subspaces committed at different points along the trajectory do not add together in the same way. We deal with this by employing another property of (Lyapunov) regular trajectories, which bounds the rate at which angles between Oseledets subspaces change.
Proof of the theorem. Let be the set of regular points of the dynamical system in . As noted in Theorem 2.3, is -invariant and has full measure, for any invariant Borel probability measure. Take . Since is a trajectory of , we have the identity . Using the initial condition , we can write
(3.8) |
Since is regular, it has Lyapunov exponents and there exists a splitting such that
(i) for ,
(ii) for , and
(iii) uniformly over .
Thus, by (ii) above, for we have the splitting and we can write , where for . This allows us to define sequences in the th Oseledets subspace by
for so that , where for , . Now fix . By (iii) above we have that for any there is a such that for all and for all satisfying
and so
Taking a constant large enough, we have for all and for any
(3.9) |
Thus, for ,
(3.10) |
and
(3.11) |
Using the same expansion as in (3.1) for each of the sequences , we have
Let . Then we have
(3.12) |
It remains therefore to bound the terms by the terms This is an important step which was overlooked in [Berr 23]. At this point we pause our proof in order to develop the necessary theory to deal with this factor. While in any finite-dimensional space, for linearly independent directions , we can find a constant such that , see for instance [Krey 91, Lemma 2.4-1], in our case we have to deal with the fact that the angles between the Oseledets subspaces change along the trajectory, so we cannot hope to find a single which fulfills this identity at every point in time. In order to get around this we need Theorem 2.5, which tells us how volumes change along trajectories. We begin with a few preliminary observations that allow us to relate volumes and angles between spaces.
Definition 3.7
Let be two subspaces of an inner product space We define the angle between and as
Remark 3.8
By definition . Thus
Remark 3.9
For any ,
We have the following lemma.
Lemma 3.10
Let be an inner product space. Suppose are linearly independent vectors and define the filtrations and . So we have and . Further define the angles . Then
Proof of Lemma. For , we take two linearly independent vectors forming an angle . We denote the angle between the spaces and , as defined above, by . By Remark 3.9 we have
Now, for we have the identity . In particular, . And so, by Remark 3.8,
(3.13) |
We proceed by induction. Suppose the result is true for any set of linearly independent vectors. Consider a linearly independent set . Noting that we use (3.13) and the induction hypothesis to get
as desired.
We use Lemma 3.10 to bound in terms of . Let be any basis for , such that for each , we may pick a basis for from . Thus this basis is normal with respect to the filtration defined by (2.12). Let the filtrations and be defined as in Lemma 3.10, that is, and for . Furthermore, for each time step define for . Then we may pick a basis for each from . Analogously to Lemma 3.10, we define the filtrations and with angles
Now, by Theorem 2.5, . Let . Then there exists a constant such that
Similarly, by definition of the forward Lyapunov exponents, for each we have a constant such that for all
Thus
and so
Now, we may write for some constants . By construction, for each we may pick a basis from . Thus, writing , we can group the terms in this sum into their respective Oseledets subspaces. In particular, this implies . By Lemma 3.10, we have
We obtain the desired bound by returning to (3.1).
We let
The main result.
Theorem 3.1 deals only with a linearized version of the recursion (3.6). We now turn to bounding the actual state error . In the bounds above, we simplified the error accumulation by performing a Taylor approximation and ignoring the second-order error incurred. When we factor in the Taylor approximation errors, we can maintain the exponential bound, but only up to a certain finite time threshold. This is made explicit in the following theorem.
Theorem 3.11
Let be a causal chain with infinite memory and let be the functional associated to a state-space system that has the echo state property, where and , for a convex -dimensional submanifold of . Let be the associated dynamical system, that is, and suppose . Further, let be a compact -invariant subset of and define H the set of reachable states of the CCIM. We assume that
(i) there exists an open and convex subset of such that and the second order derivatives of are bounded on , and
(ii) is contained in a convex subset of .
Suppose that is Lipschitz on its second entry with constant , uniform with respect to the first entry, that is,
Assume also that the readout is Lipschitz with constant . Suppose that is a uniform -approximant of and hence it satisfies (3.1). Let be a realization of the chain , that is, for all , and let , , be the forecast of the positive terms of using and the iterative scheme introduced in (3.2) and (3.3). Let be the corresponding sequence in the state space produced by these recursions and let , a sequence in the state space parallel to . Then for any -invariant Borel probability measure on , there is a set of full measure such that, if , then for sufficiently small , there exists a constant and a time horizon such that
(3.14)
(3.15) |
for where is the largest Lyapunov exponent of under the dynamical system and as before we take
Remark 3.12
The assumptions (i) and (ii) above are omitted in [Berr 23]. However, they are necessary for two reasons: first of all, in the proof below, for a point and a left infinite sequence from the CCIM, we approximate a difference using Taylor’s theorem, which requires both endpoints to be in an open and convex subset of , whence the requirement that for a convex open set . Requiring that the second order derivatives of are bounded on allows us to bound the Taylor approximation from above. Secondly, in the proof we encounter the terms for a left infinite sequence from the CCIM. We require that be in the tangent space of , which is ensured by the assumption that is contained in a convex subset of .
Remark 3.13
We comment on the embedding requirement in the case when the causal chain consists of observations from a dynamical system. In the paper [Berr 23], the map is assumed to embed the underlying dynamical system in the state-space. This is necessary in order to guarantee the existence of a readout as in Proposition 2.1 that forms a conjugate dynamical system in the state-space. The readout is ultimately the map that is learnt and they work with the corresponding learnt dynamical system as an approximation of the dynamical system conjugate to the original. We circumvent the embedding condition by replacing it with the weaker requirement that the functional associated to the state-space system uniformly -approximates the functional of the causal chain. This is an important improvement as the question of determining when a given state-space system will determine a generalised synchronisation that is an embedding when driven by observations from a dynamical system remains open except for a certain class of linear state-space systems [Grig 23]. Furthermore, it enables us to extend the results to a much larger class of time series, namely that of CCIMs. CCIMs appear as the deterministic parts of many stochastic processes after using the Wold or Doob decompositions and also as the solutions of infinite dimensional or functional differential equations. While our bounds show an error scaling controlled by the top Lyapunov exponent of the learnt dynamical system, it is interesting to note that many of these applications may not possess an underlying system with Lyapunov exponents at all.
Remark 3.14
One of the contributions of this paper is that we deal explicitly and rigorously with the error from the Taylor approximation, in contrast to former efforts, where this error was absorbed by giving the error bounds in terms of a factor . We do this by controlling the error growth within a factor of the exponential term up to a finite time horizon. The expression we find for the forecasting horizon within which we can control the error is
where is a constant depending on the second order tangent map of and is a constant. When we expand and in terms of the other constants introduced in the proofs we obtain
It is difficult to make qualitative deductions on how the different variables influence the value of . To begin with, the constants and all depend on , so while decreasing may serve to increase the first logarithmic term, we don’t know how it will influence the second term. Indeed, we expect these constants to increase for a smaller leakage constant, so the influence on is unknown. A second observation is that all the constants have an implicit dependence on , since taking a smaller restricts the set of state-space systems from which we can select . Therefore, although at first glance decreasing may seem to lead to an increase in the predictable time horizon, this may not necessarily be true in practice.
Remark 3.15
Notwithstanding what has been said above, in the error bounds (3.14), (3.15), we do see an interplay between overfitting and approximation error. The top Lyapunov exponent is related to the regularity of the dynamical system generated by the state-space system, while the approximation error determines the set of state-space systems we can choose from. Thus decreasing the approximation error may lead to a highly irregular dynamical system and hence a large leading Lyapunov exponent. See [Berr 23, Remark 12] for further discussion on this.
Remark 3.16
In this setup, it is visible how there are three contributions to the error bound in (3.15), namely the approximation error of by , the Lipschitz constants and , which are related to the structure of the RNN, and the exponential factor which is exclusively related to the dynamical features of the map that determines. Much work has been done on the ability of state-space systems to reproduce the Lyapunov exponents of the dynamical system being learnt. Two factors come into play here: first, while it is known that under suitable conditions on the GS of a state-space system that embeds the dynamical system in a higher dimensional space, the Lyapunov spectrum of the original system is contained in that of the reconstructed system, a number of additional Lyapunov exponents arise, commonly referred to as spurious Lyapunov exponents, some of which may exceed the top Lyapunov exponent of the original dynamical system [Dech 96, Dech 00]; second is the question of continuity of Lyapunov exponents under perturbations of the generating dynamical system, which is discussed in detail in the survey [Vian 20]. Of particular interest, then, is the stability gap, namely the difference between the top Lyapunov exponent of the reconstructed system and that of the original system. An upper bound for the stability gap is given in [Berr 23, Theorem 3 (iii)]. In our case, however, we are concerned only with the Lyapunov exponents of the dynamical system associated to . Indeed, it is more useful for us to have bounds for the error in terms of the Lyapunov exponents of the learnt dynamical system, which can be calculated, rather than that of the dynamical system we are learning, which are unknown to us. Nevertheless, work on the stability gap is relevant: as discussed in [Berr 23, Theorem 3], the stability gap is always nonnegative, so when learning the observations from a dynamical system, we cannot hope to get a reconstructed dynamical system with lower Lyapunov exponent than that of the original. We note that, in general, CCIMs need not have Lyapunov exponents.
Remark 3.17
Let be a Lipschitz constant for the first entry of . It is easy to show that if and have bounded differentials and is the top Lyapunov exponent of the dynamical system , then the Lipschitz constants , and will satisfy
(3.16) |
and we can bound the prediction error as
(3.17) |
Now, if the Lipschitz constants can be chosen so that , then the forecasting error admits a horizon-uniform bound given by the inequality:
(3.18) |
Thus, in the case when , using Lipschitz constants may give a better bound than that established using the theory of Lyapunov exponents in (3.14) and (3.15), which grows exponentially. In the case of a positive maximum Lyapunov exponent, the bound (3.17) grows exponentially with factor . Calculations show that in practice this factor is typically much larger than the factor using the Lyapunov exponent. Thus ergodic theory provides us with a bound which far outperforms an approach using only tools from analysis.
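A toy comparison of the two per-step growth factors can make the orders of magnitude tangible. All numbers in the sketch below are hypothetical placeholders (neither the Lipschitz constants nor the exponents of the paper), and the analytic growth factor is taken, for the sake of illustration, to be a product of Lipschitz constants.

```python
import numpy as np

# Hypothetical per-step growth factors: a product of Lipschitz constants as a
# stand-in for the analytic bound, exp(lambda_max + delta) for the ergodic one.
L_F, L_h = 1.8, 2.5            # hypothetical Lipschitz constants
lam_max, delta = 0.9, 0.05     # hypothetical top Lyapunov exponent and leakage

t = np.arange(16)
analytic_factor = (L_F * L_h) ** t
ergodic_factor = np.exp((lam_max + delta) * t)
print(analytic_factor[-1] / ergodic_factor[-1])   # the analytic bound is orders of magnitude larger
```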
Remark 3.18
The Lipschitz constants that appear in the error bounds (3.14) and (3.15) are readily computable for most commonly used families of recurrent neural networks. Consider the echo state network (ESN) [Jaeg 04] which is defined by using a linear readout map , and a state map given by
(3.19) |
and where is the map obtained by componentwise application of a squashing function, that is, one that is non-decreasing and satisfies and . Moreover, we assume that . With this hypothesis, we note that and that . Consequently, by the mean value theorem we can choose in this case
We recall that for the ESN family, is a sufficient condition for the echo state and the fading memory properties to hold when the inputs are in (see [Grig 19]). With regard to the Lyapunov exponents of the dynamical system associated to the ESN, standard algorithms exist for computing them (see Section 4).
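For instance, under the assumptions of this remark (a 1-Lipschitz squashing function and a linear readout), operator norms of the ESN weight matrices already give usable Lipschitz constants. The sketch below uses randomly generated placeholder matrices purely for illustration.

```python
import numpy as np

# Sketch: Lipschitz constants for an ESN with state map x_t = sigma(A x_{t-1} + C z_t + b),
# a 1-Lipschitz squashing function (e.g. tanh) and a linear readout h(x) = W x + a.
# The matrices below are random placeholders, not a trained reservoir.
rng = np.random.default_rng(1)
n, d = 100, 3
A = rng.normal(size=(n, n)) * 0.05
C = rng.normal(size=(n, d))
W = rng.normal(size=(d, n)) * 0.1

L_x = np.linalg.norm(A, 2)    # Lipschitz constant of the state map in the state argument
L_z = np.linalg.norm(C, 2)    # Lipschitz constant of the state map in the input argument
L_h = np.linalg.norm(W, 2)    # Lipschitz constant of the linear readout
print(L_x, L_z, L_h, "ESP sufficient condition ||A||_2 < 1:", L_x < 1)
```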
Remark 3.19
Proof of the theorem. As in the proof of Theorem 3.1, we take to be the set of (Lyapunov-Perron) regular points in of the dynamical system . Let . As noted before, according to the iterations (3.2) and (3.3), the sequence is a trajectory of the dynamical system , that is, for , with initial condition . Defining for , we have
(3.20) |
where is the error resulting from the Taylor approximation of . Using the fact that we can write
(3.21) |
where is the sequence defined by the recursion for with as in Theorem 3.1. Since , using the Lipschitz condition on the second coordinate of , we can bound . Thus there exists a constant such that
By Taylor’s Theorem we also have where is a constant depending on the second order derivatives of . Using the same strategies as in the proof of Theorem 3.1, we can find a constant such that . Thus,
We can thus construct a sequence of upper bounds for by the recursion
We transform this sequence by dividing out the factor , that is, we form the sequence by . We have
(3.22) |
We can bound the sequence by an exponential growth factor for a finite time horizon. Consider equation (3.22). Suppose that for we have for some . Then
Thus we can maintain the bound as long as
To maximise the number of time steps for which this bound holds, we optimise . We find
for , so that the function has a maximum of . Thus, if for , then , as long as
that is, we can control the error bound as within the aforementioned number of time steps. Finally, by (3.5),
3.2 The causal embedding forecasting scheme
As an application of the previous results, we compare the causal embedding forecasting scheme proposed by Manjunath and de Clerq in [Manj 21] with the traditional approach used in reservoir computing and RNNs that we analyzed in the previous section. We start by deriving an upper bound for the prediction error corresponding to Theorem 3.11 for this scheme.
We start with a brief overview of the causal embedding scheme adapted to our setup. The causal embedding scheme consists of a state-space system that takes the states of a dynamical system as inputs and creates an embedding of the dynamical system in a higher dimensional space. In our setup we only have access to observations from a dynamical system, so we begin by embedding the original dynamical system in a proxy space. For this, we may use a state-space system known to provide an embedding, such as Takens’ delay embedding scheme. We then apply the causal embedding scheme to the proxy space. Another simplification in our setup is that we assume an invertible dynamical system from the start, a property that is treated only as a special case in [Manj 21].
Let be a dynamical system, with a -dimensional compact manifold and Let be an observation map and suppose is a state-space map whose GS is an embedding, so that the proxy space is a compact submanifold of By Proposition 2.1, there exists a readout satisfying (2.7) and we can form the conjugate dynamical system given by for We assume that .
Whereas in the typical RC setup we would learn the readout , in the causal embedding scheme, we concatenate a second state-space system onto the first. Let be a state-space map with GS We define the map by
Let the image of under be . This is the space of single-delay lags in the state variable of . We assume an additional property for the state-space map , namely that of state-input invertibility (SI-invertibility): is SI-invertible when the map is invertible for all SI-invertibility is not a very strict condition to impose, and can in practice easily be guaranteed. When the map is SI-invertible, from a single delay in the state space we can compute the corresponding input from the dynamical system . This allows us to construct a readout as
and we can form a dynamical system given by The map is an embedding and the dynamical system is conjugate to and hence to . The map maps a single-delay coordinate one step forward under input from the dynamical system. To see this more clearly, let be the filter corresponding to the state-space map . Then for a point , let be its trajectory under . For any we have
Essentially, what we have done is to enlarge the state-space map to a map given by for . Thus . In the causal embedding scheme, we learn the readout . Note that we do not require the GS to be an embedding but only that the map is SI-invertible and has the ESP. The reasoning behind adding a second state-space system onto the first is that the first might be highly irregular, making the readout hard to learn. By contrast, the state-space map may be chosen without requiring its GS to be an embedding, and hence the map , and in particular the readout , is expected to be more regular and thus easier to learn.
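SI-invertibility is indeed easy to arrange in practice. The sketch below (with illustrative random matrices, not a trained system) uses a tanh ESN-type state map whose input mask has full column rank; the input can then be recovered exactly from a single state transition, which is the kind of state-input invertibility used above.

```python
import numpy as np

# Sketch of SI-invertibility for a tanh ESN-type state map with an input mask
# C of full column rank: the input z can be recovered from a single state
# transition x -> x' = tanh(A x + C z).  Sizes and matrices are illustrative.
rng = np.random.default_rng(2)
n, d = 60, 3
A = rng.normal(size=(n, n)) * 0.05
C = rng.normal(size=(n, d))            # full column rank with probability one

def F2(x, z):
    return np.tanh(A @ x + C @ z)

x = rng.normal(size=n) * 0.1
z = rng.uniform(-1, 1, size=d)
x_next = F2(x, z)

# Invert the state transition with respect to the input.
z_rec = np.linalg.pinv(C) @ (np.arctanh(x_next) - A @ x)
print(np.allclose(z, z_rec))           # True
```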
For a point , let , where for all , be the corresponding sequence of observations. The state space maps and generate sequences and according to the iterations
(3.25) |
We approximate with the learnt map . In a similar manner to the iterated multistep forecasting scheme discussed previously, we take as input the sequence of past observations so that we only have access to the left-infinite sequences and . Initializing and , for we produce predictions , , and according to the scheme
(3.29) |
Corollary 3.20
Let be a dynamical system on a finite-dimensional compact manifold and an observation map and let the maps and the spaces and be as described above. In particular, the GS belonging to the state-space map is an embedding so that is a compact submanifold of and is conjugate to ; the state-space map is SI-invertible and possesses a GS , and is conjugate to and hence to . Suppose that is a convex submanifold of and that Further, let be a compact -invariant subset of and define the set of reachable states. We assume that
(i) there exists an open and convex subset of such that and the second order derivatives of are bounded on , and that
(ii) is contained in a convex subset of .
Suppose that is Lipschitz on with constant and is Lipschitz on its second entry with constant , uniform with respect to the first entry, that is,
Let be a learnt version of the map and let . Assume that is Lipschitz with constant . For a point , construct the sequences and as in (3.25), and the forecasts and as in (3.29). Then, for any -invariant Borel probability measure on , there is a -invariant set of full measure such that, if , the following holds: for small , there exists a constant and a time horizon such that
(3.30)
(3.31)
(3.32) |
for where is the largest Lyapunov exponent of under the dynamical system and as before we take
Proof. As noted above, the sequences and satisfy the state-space equations of , where is given by for . Similarly, the sequences and are generated by the state-space system . Thus, for and for . Now,
Thus (3.30) and (3.31) follow from Theorem 3.11. Finally we note that
from which (3.32) follows immediately.
4 Numerical illustration
In this section we illustrate the error bounds given in Theorem 3.11 for predicting observations coming from dynamical systems using reservoir systems. Our results show the applicability of the error bound to these systems in accurately predicting the rate of growth of the forecasting errors. Nevertheless, there are some limitations to the error bound, which become apparent in our numerical work. First of all, the dynamical systems we learn all have attractors with finite diameter, and hence, while the echo state network (ESN) that we use for the forecasting (see 3.19) typically reconstructs the attractor, the actual predictions deviate from the true values over time until the error saturates the attractor, being bounded by its diameter, and hence it settles at a constant value. After this, the error bound is no longer applicable. Second, the error bound given in (3.15) includes the multiplicative constant . The existence of this constant is guaranteed by the use of arguments involving Oseledets’ Theorem in the proofs; intuitively, the constant gives the system time to settle until the ergodic properties underlying this theorem take effect. However, we do not have an algorithm for calculating this constant. Instead, we estimate it by considering the log prediction errors and fitting the intercept of the log error bound to this. In the case of reservoir systems that do not learn the given system well, this constant may turn out to exceed the error at saturation, and the bound becomes redundant. While on first impression this may seem perplexing, this is a practical difficulty inherent to any learning paradigm. Rather than undermining the usefulness of the error bound given in this paper, it serves to show the inability of the given reservoir computer to track the dynamical system, in which case any error bounds are redundant as saturation is quickly reached. We emphasize that in the case of reservoir systems that demonstrate the ability to produce accurate long-term iterated multistep predictions, the gradient of the log error growth closely matches the predicted value given in (3.15). Nevertheless, the question naturally arises, in the context where we have to deal with observations from an unknown dynamical system, whether we can determine up to what time the error bounds are valid. Once again, as noted in Remark 3.14, while we are guaranteed the existence of a finite time horizon up to which the bound holds, in practice, the horizon is difficult to calculate. In the empirical exercises below, the bound is valid until saturation. If the reservoir system reconstructs the attractor of the dynamical system well, the saturation error can be estimated by running predictions on any two distinct trajectories of the dynamical system with close initial conditions. The time horizon up to which error estimates are both valid and useful can be calculated by checking when the predicted error bound crosses the saturation error. However, this still requires knowledge of the multiplicative constant . Thus, to get a more clear answer to this question, more work needs to be done on calculating this constant.
In our numerical illustrations, we consider the Lorenz and Rössler systems and train an ESN on an input time series consisting of the full states of the given system. In the case of the Lorenz system, we also consider inputs coming from an observation function on the dynamical system. This forms a causal chain as shown in Corollary 2.2. The ESNs we use have a state map of the form (3.19), that is,
(4.1) |
with an affine readout given by for some linear map and bias We take the nonlinear function to be tanh applied componentwise and randomly generate the connectivity matrix and input mask by selecting entries independently from a uniform distribution over the interval , with density and , respectively. The input shift is simply taken to be zero. Data for trajectories of the different systems is generated by integrating the given dynamical system over a discrete time step using the Runge-Kutta45 method. In each case, an input trajectory is fed into the randomly generated state map, generating a trajectory in the state space according to the scheme
x_t = \sigma\left(A x_{t-1} + C z_t + \zeta\right), \quad t \geq 1.    (4.2)
An initial washout of time steps allows the state-space trajectory, which is arbitrarily initialized, to converge to the values given by the generalized synchronization (GS) so that the two become indistinguishable. This is due to the so-called input-forgetting property [Grig 19]. The washout data is discarded and a ridge regression with a regularization constant is performed between the input points and the state-space points in order to fit the output parameters W and a. Predictions are then made by iterating (3.2) and (3.3), that is,
\widehat{x}_t = \sigma\left(A \widehat{x}_{t-1} + C \widehat{z}_{t-1} + \zeta\right),    (4.3)
\widehat{z}_t = W \widehat{x}_t + a,    (4.4)
for all subsequent time steps, where the recursion is initialized with the last state and input of the training phase. There are several hyperparameters that need to be tuned, namely the regularization constant, the scaling constant for the input mask, and the spectral radius of the connectivity matrix. Selecting these is a delicate task as, to our knowledge, no analytical method exists by which the prediction error as a function of these constants may be minimized. Moreover, the prediction error is not stable with respect to the choice of these constants and, especially for the regularization constant, small changes may lead to markedly different results. In order to tune our ESNs individually to each system, we resorted to a brute-force cross-validation that minimizes the mean squared prediction error over a grid of these parameters. This procedure is computationally expensive.
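For concreteness, the following Python sketch assembles the pipeline just described in the notation of (4.1)-(4.4). All function names, the zero state initialization, the particular random distributions, and the bias-augmented ridge solve are illustrative assumptions rather than the exact configuration used in our experiments; the hyperparameters (regularization constant, input scaling, spectral radius) would be selected by the grid-search cross-validation mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_neurons, n_inputs, spectral_radius, input_scaling, density):
    """Randomly generate the connectivity matrix A and input mask C of (4.1):
    uniform entries, sparsification of A, and rescaling of A to the target
    spectral radius; the input shift zeta is set to zero."""
    A = rng.uniform(-1.0, 1.0, (n_neurons, n_neurons))
    A *= rng.random((n_neurons, n_neurons)) < density            # sparsify A
    A *= spectral_radius / np.max(np.abs(np.linalg.eigvals(A)))  # set spectral radius
    C = rng.uniform(-input_scaling, input_scaling, (n_neurons, n_inputs))
    zeta = np.zeros(n_neurons)
    return A, C, zeta

def run_reservoir(A, C, zeta, inputs):
    """Drive the state equation x_t = tanh(A x_{t-1} + C z_t + zeta) of (4.2)
    with a given input trajectory, starting from an arbitrary (zero) state."""
    x = np.zeros(A.shape[0])
    states = []
    for z in inputs:
        x = np.tanh(A @ x + C @ z + zeta)
        states.append(x)
    return np.array(states)

def train_readout(states, targets, reg):
    """Ridge (Tikhonov) regression of the targets on the post-washout states,
    returning the readout matrix W and bias a of the affine readout."""
    X = np.hstack([states, np.ones((len(states), 1))])           # bias column
    Wb = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ targets)
    return Wb[:-1].T, Wb[-1]

def iterate_predictions(A, C, zeta, W, a, x_last, z_last, n_steps):
    """Closed-loop forecasting as in (4.3)-(4.4): the readout output is fed
    back as the next input, starting from the last training state and input."""
    x, z, preds = x_last, z_last, []
    for _ in range(n_steps):
        x = np.tanh(A @ x + C @ z + zeta)
        z = W @ x + a
        preds.append(z)
    return np.array(preds)
```

In a typical run, one would discard the washout portion of the output of run_reservoir, regress the remaining states on the inputs shifted one step ahead, and then call iterate_predictions from the final training state and readout output.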
The Lorenz system.
The Lorenz system is given by the three-dimensional first-order differential equation
\dot{x} = \sigma\,(y - x),    (4.5)
\dot{y} = x\,(\rho - z) - y,    (4.6)
\dot{z} = x y - \beta z,    (4.7)
where \sigma, \rho, and \beta are the system parameters.
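The input trajectories can be generated, for instance, with an off-the-shelf explicit Runge-Kutta 4(5) integrator as in the sketch below. The parameter values shown are the classical chaotic choices for the Lorenz system, and the sampling step and trajectory length are placeholders chosen for illustration, not necessarily those used in the experiments.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # classical chaotic parameter values, assumed here for illustration
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def sample_trajectory(u0, dt, n_steps):
    """Integrate the Lorenz equations with an explicit Runge-Kutta 4(5)
    scheme and return the states sampled every dt time units."""
    t_eval = np.arange(n_steps + 1) * dt
    sol = solve_ivp(lorenz, (0.0, t_eval[-1]), u0, method="RK45",
                    t_eval=t_eval, rtol=1e-9, atol=1e-12)
    return sol.y.T                        # shape (n_steps + 1, 3)

# example: a trajectory started near the attractor region
traj = sample_trajectory(u0=[1.0, 1.0, 1.0], dt=0.02, n_steps=10_000)
```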
To learn the Lorenz system, we chose an ESN whose connectivity matrix had a prescribed spectral radius and whose input mask was scaled by a fixed constant. Inputs were taken as the full states of the system, sampled at a fixed time step. A washout period was used and discarded; thereafter, using a Tikhonov regularization between the state-space trajectory and the desired output trajectory on the remaining time steps, the readout was trained and predictions were made for an additional number of time steps. The top Lyapunov exponent of the RC was calculated using Benettin's algorithm over the length of one trajectory, and the multiplicative constant in the error bound (3.14) was estimated by fitting the predicted error bound to the actual errors. Plotted in Figure 2 are the log prediction errors, with time expressed in Lyapunov times, for an RC trained over a collection of trajectories with initial conditions chosen from a Gaussian distribution, averaged at each point across the trajectories. The log errors grow approximately at a linear rate until saturation of the attractor is reached. The gradient of the linear growth is closely matched by the predicted error bound, whose gradient is given by the top Lyapunov exponent of the RC.
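The top Lyapunov exponent of the trained reservoir referred to above is that of the autonomous map obtained by feeding the readout back into the state equation. A minimal Benettin-type estimate of this exponent could look as follows; the perturbation size, the renormalization scheme, and the conversion to continuous-time units are illustrative assumptions rather than the exact procedure we used.

```python
import numpy as np

def closed_loop(x, A, C, zeta, W, a):
    """One step of the autonomous map obtained by feeding the affine readout
    back into the trained state equation; its top Lyapunov exponent is the
    rate that governs the error bound."""
    return np.tanh(A @ x + C @ (W @ x + a) + zeta)

def top_lyapunov(A, C, zeta, W, a, x0, n_steps, dt, eps=1e-8):
    """Benettin-type estimate: track a reference orbit and an eps-perturbed
    orbit, renormalize the separation at every step, and average the
    logarithmic growth rates; dividing by dt converts to continuous time."""
    rng = np.random.default_rng(1)
    x = np.array(x0, dtype=float)
    v = rng.standard_normal(len(x))
    v *= eps / np.linalg.norm(v)
    log_growth = 0.0
    for _ in range(n_steps):
        x_pert = closed_loop(x + v, A, C, zeta, W, a)
        x = closed_loop(x, A, C, zeta, W, a)
        d = x_pert - x
        dist = np.linalg.norm(d)
        log_growth += np.log(dist / eps)
        v = d * (eps / dist)              # renormalize the perturbation
    return log_growth / (n_steps * dt)
```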
The Lorenz system as a causal chain.
To illustrate our result for a causal chain, we took as observation function the first coordinate of the Lorenz system. By Corollary 2.2, sequences of observations through this function form a causal chain. To learn the Lorenz system using inputs only from the first coordinate, we chose an ESN whose connectivity matrix had a prescribed spectral radius and whose input mask was scaled by a fixed constant. Inputs were sampled at a fixed time step. A washout period was used and discarded; thereafter, using a Tikhonov regularization between the state-space trajectory and the desired output trajectory on the remaining time steps, the readout was trained and predictions were made for an additional number of time steps. The top Lyapunov exponent of the RC was calculated using Benettin's algorithm over the length of one trajectory, and the multiplicative constant in the error bound (3.14) was estimated by fitting the predicted error bound to the actual errors. We chose initial conditions from a Gaussian distribution and trained the readout of the RC over a single trajectory. In Figure 4, the log prediction errors averaged across the trajectories are plotted with time expressed in Lyapunov times. The errors grow approximately at a linear rate until saturation of the attractor is reached. The gradient of the linear growth is closely matched by the predicted error bound, whose gradient is given by the top Lyapunov exponent of the RC.
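For clarity, the causal-chain setup amounts to replacing the full three-dimensional input by a scalar observation series and training the ESN to forecast it one step ahead. A minimal sketch, assuming the trajectory array traj produced by the integration sketch above, is the following; the names are ours and the construction is purely illustrative.

```python
import numpy as np

def observe(state):
    """Observation function: keep only the first Lorenz coordinate."""
    return state[0]

# scalar observation series generated from a full-state trajectory
# (traj as produced by sample_trajectory in the integration sketch above)
obs = np.array([observe(u) for u in traj])

# one-step-ahead pairs for the causal chain: the ESN is driven by the
# observation obs[t] and its readout is trained to output obs[t + 1]
inputs = obs[:-1].reshape(-1, 1)
targets = obs[1:].reshape(-1, 1)
```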
The Rössler system.
The dynamics of the Rössler system is given by
\dot{x} = -y - z,    (4.8)
\dot{y} = x + a\,y,    (4.9)
\dot{z} = b + z\,(x - c),    (4.10)
where a, b, and c are the system parameters.
To learn the Rössler system, we chose an ESN whose connectivity matrix had a prescribed spectral radius and whose input mask was scaled by a fixed constant. Inputs were taken as the full states of the system, sampled at a fixed time step. A washout period was used and discarded; thereafter, using a Tikhonov regularization between the state-space trajectory and the desired output trajectory on the remaining time steps, the readout was trained and predictions were made for an additional number of time steps. The top Lyapunov exponent of the RC was calculated using Benettin's algorithm over the length of one trajectory, and the multiplicative constant in the error bound (3.14) was estimated by fitting the predicted error bound to the actual errors. We chose initial conditions from a Gaussian distribution and trained the readout of the RC over a single trajectory. In Figure 5, the log prediction errors averaged across the trajectories are plotted with time expressed in Lyapunov times. The errors grow approximately at a linear rate until saturation of the attractor is reached. The gradient of the linear growth is closely matched by the predicted error bound, whose gradient is given by the top Lyapunov exponent of the RC.
Discussion.
The numerical experiments illustrate how the error committed when using an ESN to predict three different systems grows at an exponential rate governed by the top Lyapunov exponent associated with the learned ESN. The interplay between the Lyapunov exponents of the dynamical system and the ensuing effect on forecasting accuracy can be clearly seen in the numerical examples illustrated above. In all three examples the Lyapunov exponents of the learned dynamical system associated with the trained RC are close to those of the underlying dynamical system being learned. The Lyapunov exponent of the Lorenz system is greater than that of the Rössler system, leading to a quicker saturation of the prediction error in comparison with the RC trained on the Rössler system, which exhibits the ability to maintain reasonably accurate predictions long into the future. Nevertheless, scaling the time axis in Lyapunov times shows that the prediction capabilities of ESNs on the two systems are comparable.
It is to be noted, however, that in cases where the RC does not produce accurate long-term predictions of the given system, the bound is not as visually apparent, or may indeed be redundant due to the swift saturation of the attractor. In that case the errors settle at a finite value, determined by the diameter and stationary measure of the attractor, before the bound comes into play. This can be seen in part in the comparison between predicting the Lorenz system with access to the full dynamical states and with access to only a single coordinate. The latter task is more challenging due to the restricted access to information about the system. Thus, while both RCs display similar Lyapunov exponents, the forecasting error in predicting the causal chain reaches saturation before that in predicting the entire system.
5 Conclusions
In this paper, we have studied the iterated multistep forecasting scheme based on recurrent neural networks (RNN) that are customarily used in reservoir computing for the path continuation of dynamical systems. More specifically, we have formulated and rigorously proved error bounds in terms of the forecasting horizon, functional and dynamical features of the specific RNN used, and the approximation error committed by it. The results that we present show, in particular, the interplay between the universal approximation properties of a family of recurrent networks in the sense of input/output systems and their properties in the realm of the modeling and the forecasting of the observations of autonomous dynamical systems and, more generally, of causal chains with infinite memory (CCIMs).
The framework in the paper circumvents difficult-to-verify embedding hypotheses that appear in previous references in the literature and applies to new situations like the finite-dimensional observations of functional differential equations or the deterministic parts of stochastic processes to which standard embedding techniques do not necessarily apply. This is an improvement on previous results, which were restricted to sequences consisting of observations of dynamical systems.
The error bounds in this paper show, in particular, that the growth of the forecasting error is exponential at a rate governed by the top Lyapunov exponent of the dynamical system associated with the trained RNN in the state space. This is insightful for several reasons: (i) it illuminates the role of state-space dynamics in the accuracy of long-term forecasting, (ii) in the case of observations coming from a dynamical system, it illustrates the relationship between the dynamics of the original system and that of the reconstructed dynamical system in the state space, and (iii) it provides a tool for comparing the forecasting accuracy of different state-space system learning techniques based on the top Lyapunov exponents of their associated state-space dynamical systems in a given learning problem.
As noted in the numerical section, the bounds given have a number of limitations. In particular, it is not clear how to compute the multiplicative constant needed to fit the error bound in practical prediction scenarios. Nevertheless, in the case of well-trained state-space systems used in prediction, the bound demonstrates a striking sharpness, accurately predicting the growth rate of the true forecasting error.
Future work will focus on stochastic extensions of these results in which iterative schemes of this type are used to forecast discrete-time stochastic processes, a context in which RNN universal approximation results have also been formulated.
Acknowledgments: The authors thank G. Manjunath for helpful discussions and remarks. JL is funded by a Graduate Scholarship from Nanyang Technological University. JPO acknowledges partial financial support from the School of Physical and Mathematical Sciences of Nanyang Technological University. LG and JPO thank Nanyang Technological University and the University of St. Gallen for their hospitality; some of the results in this paper were obtained during respective visits to these two institutions.
References
- [Abra 88] R. Abraham, J. E. Marsden, and T. S. Ratiu. Manifolds, Tensor Analysis, and Applications. Vol. 75, Applied Mathematical Sciences. Springer-Verlag, 1988.
- [Alqu 12] P. Alquier and O. Wintenberger. “Model selection for weakly dependent time series forecasting”. Bernoulli, Vol. 18, No. 3, pp. 883–913, 2012.
- [Arco 22] T. Arcomano, I. Szunyogh, A. Wikner, J. Pathak, B. R. Hunt, and E. Ott. “A hybrid approach to atmospheric modeling that combines machine learning with a physics-based numerical model”. Journal of Advances in Modeling Earth Systems, Vol. 14, No. 3, p. e2021MS002712, 2022.
- [Barr 07] L. Barreira and Y. B. Pesin. Nonuniform Hyperbolicity: Dynamics of Systems with Nonzero Lyapunov Exponents. Vol. 115, Cambridge University Press, Cambridge, 2007.
- [Baum 66] L. E. Baum and T. Petrie. “Statistical inference for probabilistic functions of finite state Markov chains”. The Annals of Mathematical Statistics, Vol. 37, No. 6, pp. 1554–1563, 1966.
- [Berr 23] T. Berry and S. Das. “Learning theory for dynamical systems”. SIAM Journal on Applied Dynamical Systems, Vol. 22, No. 3, pp. 2082–2122, 2023.
- [Bocc 02] S. Boccaletti, J. Kurths, G. Osipov, D. L. Valladares, and C. S. Zhou. “The synchronization of chaotic systems”. Physics Reports, Vol. 366, pp. 1–101, 2002.
- [Boyd 85] S. Boyd and L. Chua. “Fading memory and the problem of approximating nonlinear operators with Volterra series”. IEEE Transactions on Circuits and Systems, Vol. 32, No. 11, pp. 1150–1161, 1985.
- [Broc 06] P. J. Brockwell and R. A. Davis. Time Series: Theory and Methods. Springer-Verlag, 2006.
- [Bute 13] P. Buteneers, D. Verstraeten, B. V. Nieuwenhuyse, D. Stroobandt, R. Raedt, K. Vonck, P. Boon, and B. Schrauwen. “Real-time detection of epileptic seizures in animal models using reservoir computing”. Epilepsy Research, Vol. 103, No. 2, pp. 124–134, 2013.
- [Cuch 22] C. Cuchiero, L. Gonon, L. Grigoryeva, J. P. Ortega, and J. Teichmann. “Discrete-time signatures and randomness in reservoir computing”. IEEE Transactions on Neural Networks and Learning Systems, Vol. 33, No. 11, pp. 1–10, 2022.
- [Dang 84] H. Dang Van Mien and D. Normand-Cyrot. “Nonlinear state affine identification methods: applications to electrical power plants”. Automatica, Vol. 20, No. 2, pp. 175–188, 1984.
- [Dech 00] W. D. Dechert and R. Gençay. “Is the largest Lyapunov exponent preserved in embedded dynamics?”. Physics Letters A, Vol. 276, No. 1-4, pp. 59–64, 2000.
- [Dech 96] W. D. Dechert and R. Gençay. “The topological invariance of Lyapunov exponents in embedded dynamics”. Physica D: Nonlinear Phenomena, Vol. 90, No. 1-2, pp. 40–55, 1996.
- [Dede 07] J. Dedecker, P. Doukhan, G. Lang, J. R. León, S. Louhichi, and C. Prieur. Weak Dependence: With Examples and Applications. Springer Science+Business Media, 2007.
- [Douk 08] P. Doukhan and O. Wintenberger. “Weakly dependent chains with infinite memory”. Stochastic Processes and their Applications, Vol. 118, No. 11, pp. 1997–2013, 2008.
- [Erog 17] D. Eroglu, J. S. W. Lamb, and T. Pereira. “Synchronisation of chaos and its applications”. Contemporary Physics, Vol. 58, No. 3, pp. 207–243, 2017.
- [Foll 02] H. Föllmer and A. Schied. Stochastic Finance. Vol. 27 of de Gruyter Studies in Mathematics, Walter de Gruyter and Co., Berlin, 2002.
- [Ghys 09] E. Ghysels, R. Valkanov, and A. Rubia. “Multi-period forecasts of volatility: direct, iterated, and mixed-data approaches”. EFA 2009 Bergen Meetings Paper, Available at SSRN: https://ssrn.com/abstract=1344742 or http://dx.doi.org/10.2139/ssrn.1344742, 2009.
- [Gono 20a] L. Gonon, L. Grigoryeva, and J.-P. Ortega. “Risk bounds for reservoir computing”. Journal of Machine Learning Research, Vol. 21, No. 240, pp. 1–61, 2020.
- [Gono 20b] L. Gonon and J.-P. Ortega. “Reservoir computing universality with stochastic inputs”. IEEE Transactions on Neural Networks and Learning Systems, Vol. 31, No. 1, pp. 100–112, 2020.
- [Gono 21] L. Gonon and J.-P. Ortega. “Fading memory echo state networks are universal”. Neural Networks, Vol. 138, pp. 10–13, 2021.
- [Gono 23a] L. Gonon, L. Grigoryeva, and J.-P. Ortega. “Approximation error estimates for random neural networks and reservoir systems”. The Annals of Applied Probability, Vol. 33, No. 1, pp. 28–69, 2023.
- [Gono 23b] L. Gonon, L. Grigoryeva, and J. P. Ortega. “Infinite-dimensional reservoir computing”. arXiv:2304.00490, 2023.
- [Grig 14] L. Grigoryeva, J. Henriques, L. Larger, and J.-P. Ortega. “Stochastic time series forecasting using time-delay reservoir computers: performance and universality”. Neural Networks, Vol. 55, pp. 59–71, 2014.
- [Grig 18a] L. Grigoryeva and J.-P. Ortega. “Echo state networks are universal”. Neural Networks, Vol. 108, pp. 495–508, 2018.
- [Grig 18b] L. Grigoryeva and J.-P. Ortega. “Universal discrete-time reservoir computers with stochastic inputs and linear readouts using non-homogeneous state-affine systems”. Journal of Machine Learning Research, Vol. 19, No. 24, pp. 1–40, 2018.
- [Grig 19] L. Grigoryeva and J.-P. Ortega. “Differentiable reservoir computing”. Journal of Machine Learning Research, Vol. 20, No. 179, pp. 1–62, 2019.
- [Grig 21a] L. Grigoryeva, A. G. Hart, and J.-P. Ortega. “Chaos on compact manifolds: Differentiable synchronizations beyond the Takens theorem”. Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics, Vol. 103, p. 062204, 2021.
- [Grig 21b] L. Grigoryeva and J.-P. Ortega. “Dimension reduction in recurrent networks by canonicalization”. Journal of Geometric Mechanics, Vol. 13, No. 4, pp. 647–677, 2021.
- [Grig 23] L. Grigoryeva, A. G. Hart, and J.-P. Ortega. “Learning strange attractors with reservoir systems”. Nonlinearity, Vol. 36, pp. 4674–4708, 2023.
- [Grig 24] L. Grigoryeva, B. Hamzi, F. P. Kemeth, Y. Kevrekidis, G. Manjunath, J.-P. Ortega, and M. J. Steynberg. “Data-driven cold starting of good reservoirs”. arXiv:2403.10325, 2024.
- [Hart 20] A. G. Hart, J. L. Hook, and J. H. P. Dawes. “Embedding and approximation theorems for echo state networks”. Neural Networks, Vol. 128, pp. 234–247, 2020.
- [Hart 21] A. G. Hart, J. L. Hook, and J. H. P. Dawes. “Echo State Networks trained by Tikhonov least squares are L2(μ) approximators of ergodic dynamical systems”. Physica D: Nonlinear Phenomena, Vol. 421, p. 132882, 2021.
- [Huke 06] J. P. Huke. “Embedding nonlinear dynamical systems: a guide to Takens’ theorem”. Tech. Rep., Manchester Institute for Mathematical Sciences. The University of Manchester, 2006.
- [Huke 15] J. P. Huke and M. R. Muldoon. “Embedding and time series analysis”. Tech. Rep., Manchester Institute for Mathematical Sciences. The University of Manchester, 2015.
- [Jaeg 04] H. Jaeger and H. Haas. “Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication”. Science, Vol. 304, No. 5667, pp. 78–80, 2004.
- [Jaeg 10] H. Jaeger. “The ‘echo state’ approach to analysing and training recurrent neural networks with an erratum note”. Tech. Rep., German National Research Center for Information Technology, 2010.
- [Kalm 10] R. Kalman. “Lectures on Controllability and Observability”. In: Controllability and Observability, pp. 1–149, Springer Berlin Heidelberg, Berlin, Heidelberg, 2010.
- [Kalm 59a] R. E. Kalman and J. E. Bertram. “A unified approach to the theory of sampling systems”. Journal of the Franklin Institute, Vol. 267, No. 5, pp. 405–436, 1959.
- [Kalm 59b] R. E. Kalman and J. E. Bertram. “General synthesis procedure for computer control of single-loop and multiloop linear systems (An optimal sampling system)”. Transactions of the American Institute of Electrical Engineers, Part II: Applications and Industry, Vol. 77, No. 6, pp. 602–609, 1959.
- [Kalm 60] R. Kalman. “A new approach to linear filtering and prediction problems”. Trans. ASME, J. Basic Engineering, Vol. 82D, pp. 35–45, 1960.
- [Kalm 62] R. E. Kalman. “Canonical structure of linear dynamical systems”. Proceedings of National Academy Of Sciences USA, Vol. 48, No. 4, pp. 596–600, 1962.
- [Krey 91] E. Kreyszig. Introductory functional analysis with applications. Vol. 17, John Wiley & Sons, 1991.
- [Laso 13] A. Lasota and M. C. Mackey. Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics. Applied Mathematical Sciences, Springer New York, 2013.
- [Lind 15] A. Lindquist and G. Picci. Linear Stochastic Systems. Springer-Verlag, 2015.
- [Lu 18] Z. Lu, B. R. Hunt, and E. Ott. “Attractor reconstruction by machine learning”. Chaos, Vol. 28, No. 6, 2018.
- [Lu 20] Z. Lu and D. S. Bassett. “Invertible generalized synchronization: A putative mechanism for implicit learning in neural systems”. Chaos, Vol. 30, p. 063133, 2020.
- [Luko 09] M. Lukoševičius and H. Jaeger. “Reservoir computing approaches to recurrent neural network training”. Computer Science Review, Vol. 3, No. 3, pp. 127–149, 2009.
- [Manj 20] G. Manjunath. “Stability and memory-loss go hand-in-hand: three results in dynamics & computation”. Proceedings of the Royal Society London Ser. A Math. Phys. Eng. Sci., Vol. 476, No. 2242, pp. 1–25, 2020.
- [Manj 21] G. Manjunath and A. de Clercq. “Universal set of observables for the Koopman operator through causal embedding”. arXiv preprint arXiv:2105.10759, 2021.
- [Manj 22] G. Manjunath. “Embedding information onto a dynamical system”. Nonlinearity, Vol. 35, No. 3, p. 1131, 2022.
- [Manj 23] G. Manjunath and J.-P. Ortega. “Transport in reservoir computing”. Physica D: Nonlinear Phenomena, Vol. 449, p. 133744, 2023.
- [Marc 06] M. Marcellino, J. H. Stock, and M. W. Watson. “A comparison of direct and iterated multistep AR methods for forecasting macroeconomic time series”. Journal of Econometrics, Vol. 135, No. 1-2, pp. 499–526, 2006.
- [Matt 94] M. Matthews and G. Moschytz. “The identification of nonlinear discrete-time fading-memory systems using neural network models”. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, Vol. 41, No. 11, pp. 740–751, 1994.
- [McEl 15] T. McElroy. “When are direct multi-step and iterative forecasts identical?”. Journal of Forecasting, Vol. 34, No. 4, pp. 315–336, 2015.
- [Nare 90] K. Narendra and K. Parthasarathy. “Identification and control of dynamical systems using neural networks”. IEEE Transactions on Neural Networks, Vol. 1, No. 1, pp. 4–27, 1990.
- [Orte 24a] J.-P. Ortega and F. Rossmannek. “State-space systems as dynamic generative models”. Preprint, 2024.
- [Orte 24b] J.-P. Ortega and D. Yin. “Learnability of linear port-Hamiltonian systems”. Journal of Machine Learning Research, Vol. 25, pp. 1–56, 2024.
- [Osel 68] V. I. Oseledets. “A multiplicative ergodic theorem. Characteristic Ljapunov exponents of dynamical systems”. Trudy Moskovskogo Matematicheskogo Obshchestva, Vol. 19, pp. 179–210, 1968.
- [Ott 02] E. Ott. Chaos in Dynamical Systems. Cambridge University Press, second Ed., 2002.
- [Path 17] J. Pathak, Z. Lu, B. R. Hunt, M. Girvan, and E. Ott. “Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data”. Chaos, Vol. 27, No. 12, 2017.
- [Path 18] J. Pathak, B. Hunt, M. Girvan, Z. Lu, and E. Ott. “Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach”. Physical Review Letters, Vol. 120, No. 2, p. 24102, 2018.
- [Peco 20] L. Pecora and T. Carroll. “The dependence of reservoir computing on system parameters”. Bulletin of the American Physical Society, Vol. 65, 2020.
- [Peco 97] L. M. Pecora, T. L. Carroll, G. A. Johnson, D. J. Mar, and J. F. Heagy. “Fundamentals of synchronization in chaotic systems, concepts, and applications”. Chaos, Vol. 7, No. 4, pp. 520–543, 1997.
- [Ruel 79] D. Ruelle. “Ergodic theory of differentiable dynamical systems”. Publications Mathématiques de l’Institut des Hautes Études Scientifiques, Vol. 50, No. 1, pp. 27–58, 1979.
- [Rulk 95] N. F. Rulkov, M. M. Sushchik, L. S. Tsimring, and H. D. I. Abarbanel. “Generalized synchronization of chaos in directionally coupled chaotic systems”. Physical Review E, Vol. 51, No. 2, p. 980, 1995.
- [Sand 78] N. R. Sandell and K. I. Yared. “Maximum likelihood identification of state space models for linear dynamic systems”. Electronic Systems Laboratory, Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Vol. R-814, 1978.
- [Saue 91] T. Sauer, J. A. Yorke, and M. Casdagli. “Embedology”. Journal of Statistical Physics, Vol. 65, No. 3, pp. 579–616, 1991.
- [Sont 79] E. Sontag. “Realization theory of discrete-time nonlinear systems: Part I-The bounded case”. IEEE Transactions on Circuits and Systems, Vol. 26, No. 5, pp. 342–356, 1979.
- [Star 99] J. Stark. “Regularity of invariant graphs for forced systems”. Ergodic theory and dynamical systems, Vol. 19, No. 1, pp. 155–199, 1999.
- [Take 81] F. Takens. “Detecting strange attractors in turbulence”. pp. 366–381, Springer Berlin Heidelberg, 1981.
- [Tana 19] G. Tanaka, T. Yamane, J. B. Héroux, R. Nakane, N. Kanazawa, S. Takeda, H. Numata, D. Nakano, and A. Hirose. “Recent advances in physical reservoir computing: A review”. Neural Networks, Vol. 115, pp. 100–123, 2019.
- [Verz 20] P. Verzelli, C. Alippi, L. Livi, and P. Tino. “Input representation in recurrent neural networks dynamics”. 2020.
- [Verz 21] P. Verzelli, C. Alippi, and L. Livi. “Learn to synchronize, synchronize to learn”. Chaos: An Interdisciplinary Journal of Nonlinear Science, Vol. 31, p. 083119, 2021.
- [Vian 20] M. Viana. “(Dis) continuity of Lyapunov exponents”. Ergodic Theory and Dynamical Systems, Vol. 40, No. 3, pp. 577–611, 2020.
- [Walt 00] P. Walters. An Introduction to Ergodic Theory. Graduate Texts in Mathematics, Springer New York, 2000.
- [Wyff 10] F. Wyffels and B. Schrauwen. “A comparative study of Reservoir Computing strategies for monthly time series prediction”. Neurocomputing, Vol. 73, No. 10, pp. 1958–1964, 2010.