Real-time Forecasting of Time Series in Financial Markets Using Sequentially Trained Many-to-one LSTMs
Abstract
Financial markets are highly complex and volatile; thus, learning about such markets in order to make predictions is vital for issuing early alerts about crashes and subsequent recoveries. People have used learning tools from diverse fields such as financial mathematics and machine learning in the attempt to make trustworthy predictions on such markets. However, the accuracy of such techniques was not adequate until artificial neural network (ANN) frameworks were developed. Moreover, making accurate real-time predictions of financial time series is highly dependent on the ANN architecture in use and the procedure used to train it. Long short-term memory (LSTM) is a member of the recurrent neural network family that has been widely utilized for time series predictions. Specifically, we train two LSTMs with a known length, say $T$ time steps, of previous data and predict only one time step ahead. At each iteration, while one LSTM is employed to find the best number of epochs, the second LSTM is trained only for that best number of epochs to make predictions. We treat the current prediction as part of the training set for the next prediction and train the same LSTM. While classic ways of training result in more error as predictions are made further into the test period, our approach maintains superior accuracy, since training continues as it proceeds through the testing period. The forecasting accuracy of our approach is validated using three time series from each of three diverse financial markets: stock, cryptocurrency, and commodity. The results are compared with those of an extended Kalman filter, an autoregressive model, and an autoregressive integrated moving average model.
keywords:
Many-to-one LSTM, sequential training, real-time forecasting, time series, financial markets

1 Introduction
Financial markets refer broadly to any marketplace that enables the trading of securities, commodities, and other fungibles; financial security markets include the stock market, the cryptocurrency market, etc. [Bahadur et al., 2019]. Among the three markets considered here, namely stock, cryptocurrency, and commodity, the stock market is well known to the public while the other two are less so. A cryptocurrency market exchanges digital or virtual currencies between peers without the need for a third party such as a bank [Squarepants, 2022], whereas a commodity market trades raw materials such as gold and oil rather than manufactured products. These markets are both highly complex and volatile due to diverse economic, social, and political conditions [Qiu et al., 2020]. Learning such markets in order to make predictions is vital because it aids market analysts in issuing early alerts about crashes and subsequent recoveries, so that investors can either take better precautions against future crashes or gain more profit from future recoveries. Since it is unreliable and inefficient to rely only on a trader’s personal experience and intuition for the analysis and judgment of such markets, traders need smart trading recommendations derived from scientific research methods.
The classical methods of making predictions on time series data are mostly linear statistical approaches, such as the linear parametric autoregressive (AR), moving average (MA), and autoregressive integrated moving average (ARIMA) models [Zhao et al., 2018], which assume linear relationships between the current output and previous outputs. Thus, they often fail to capture non-linear relationships in the data and cannot cope with certain complex time series. Because financial time series are nonstationary, nonlinear, and contaminated with high noise [Bontempi et al., 2013], traditional statistical models have limitations in predicting financial time series with high precision. Purely data-driven approaches such as artificial neural networks (ANNs) are adopted to forecast nonlinear and nonstationary time series data with both high efficiency and better accuracy, and have become popular predictors due to their adaptive self-learning [Gajamannage et al., 2021].
Recurrent neural networks (RNNs) are powerful and robust types of ANNs that belong to the most promising algorithms in use because of their internal memory [Park et al., 2022]. This internal memory remembers its inputs and helps the RNN find solutions for a vast variety of problems [Ma & Principe, 2018]. An RNN is optimized with respect to its weights to fit the training data by adopting a technique called backpropagation, which requires the gradient of the RNN. However, the gradient of an RNN may vanish or explode during the optimization routine, which hampers the RNN's ability to learn long data sequences [Allen-Zhu et al., 2019]. As a solution to these two problems [Le & Zuidema, 2016], the LSTM architecture [Hochreiter & Schmidhuber, 1997], which is a special type of RNN, is often used. LSTMs are explicitly designed to learn long-term dependencies of time-dependent data by remembering information for long periods. LSTMs perform faithful learning in applications such as speech recognition [Tian et al., 2017, Kim et al., 2017] and text processing [Shih et al., 2018, Simistira et al., 2015]. Moreover, the LSTM is also suitable for complex data sequences, such as stock time series extracted from financial markets, because it has internal memory, is customizable, and is free from gradient-related issues.
We adopt a real-time iterative approach to train an LSTM that makes only one prediction at each iteration. For that, we train this LSTM with a known length, say $T$ time steps, of previous data while setting the loss function to be the mean square error between labels and predictions. The LSTM predicts only one time step ahead during the current iteration, and we treat that prediction as an observation for the next training dataset. We train the same LSTM over all the iterations, where the number of iterations is equal to the number of total predictions. This real-time LSTM model is capable of incorporating every new future observation of the time series into the ongoing training process to make predictions. Since we use a sequence of observed time series to predict only one time step ahead, the prediction accuracy increases significantly. Moreover, the previous observations along with the current prediction are used to predict the next time step, so the prediction error associated with the current prediction is further minimized as the scheme runs through the iterations. While classic ways of training result in more error as predictions are made further into the test period, our approach maintains superior accuracy, since training continues as it proceeds through the testing period.
This paper is structured in four sections, namely, introduction (Sec. 1), methods (Sec. 2), performance analysis (Sec. 3), and discussion (Sec. 4). In Sec. 2, first, we present the notion of real-time time series prediction. Then, we provide the mathematical formulation of the many-to-one LSTM architecture for sequential training. Finally, as the state-of-the-art time series prediction methods, we provide the formulation of one nonlinear statistical approach, the extended Kalman filter (EKF), and two linear statistical approaches, AR and ARIMA. Sec. 3 provides a detailed analysis of the performance of our LSTM architecture against that of EKF, AR, and ARIMA using three financial stocks (Apple, Microsoft, Google), three cryptocurrencies (Bitcoin, Ethereum, Cardano), and three commodities (gold, crude oil, natural gas). We present the conclusions along with a discussion in Sec. 4.
Notation | Description |
---|---|
$t$ | Index for time steps |
$T$ | Length of the training period |
$\ell$ | Forecasting length |
$E$ | Number of epochs |
| Number of stacked LSTMs |
$\mathcal{L}_{i,e}$ | Training loss at the $e$-th epoch of the $i$-th iteration |
$\sigma$ | Sigmoid function in LSTM |
$p$ | Order of the AR model |
$q$ | Number of past innovations in MA model |
| Relative root mean square error |
$x_t$ | The observation at the $t$-th time step |
$X_i$ | $i$-th input training window |
$\hat{x}_t$ | The prediction at the $t$-th time step |
$w_t$ | White Gaussian noise vector with zero mean in EKF |
$z_t$ | Observation vector at the $t$-th time step in EKF |
$v_t$ | White Gaussian noise vector with zero mean in EKF |
$\phi_1, \dots, \phi_p$ | Parameters of AR model |
$\epsilon_t$ | White Gaussian noise with zero mean in AR model |
$\theta_1, \dots, \theta_q$ | Parameters of MA model |
$\epsilon_{t-j}$ | $j$-th past innovation of MA model |
$\mu$ | Bias term in ARIMA |
$b_f, b_i, b_C, b_o$ | Bias vectors in LSTM |
$f$ | System dynamics in EKF |
$h$ | Measurement function in EKF |
$L^i$ | $i$-th level lag operator |
$\Delta^d x_t$ | $d$-th differenced time series |
$Q_t$, $R_t$, $P_t$ | Covariance matrices of $w_t$, $v_t$, and the state estimate, respectively, in EKF |
$F_t$, $H_t$ | Jacobian matrices of $f$, $h$, respectively, in EKF |
$W_f, W_i, W_C, W_o$ | Weight matrices in LSTM |
Abbreviations | Description |
---|---|
LSTM | Long Short-Term Memory |
KF | Kalman Filter |
EKF | Extended Kalman Filter |
AR | AutoRegressive |
MA | Moving Average |
ARMA | AutoRegressive Moving Average |
ARIMA | AutoRegressive Integrated Moving Average |
2 Methods
In this section, first, we provide technical details of the real-time time series prediction scheme. Then, we present the LSTM architecture that caters to real-time time series prediction, along with the LSTM's training and prediction procedures. Moreover, we apply this real-time prediction scheme to three other time series prediction methods, namely, EKF, AR, and ARIMA. These three methods serve as the state-of-the-art baselines against which we compare the performance of the LSTM.
2.1 Real-time time series prediction
We adopt a “sequential” approach to efficiently train time series models and predict the future. For a fixed-length input data sequence, the model is set to predict only one future time step per iteration, and the process runs until the required length of the prediction is reached. This real-time prediction approach is capable of incorporating every new data point of the time series into the ongoing training process to make predictions for the next time step. Let the currently observed time series be $x_1, \dots, x_T$ for some training length $T$, let the unobserved future portion of the time series be $x_{T+1}, \dots, x_{T+\ell}$ for some forecasting length $\ell$, and let the time series model be $\mathcal{M}$, see Fig. 1. For the first iteration, we train the time series forecasting model $\mathcal{M}$ with the window $X_1$, where $X_1 = (x_1, \dots, x_T)$. Then, we predict for the time step $T+1$, denoted by $\hat{x}_{T+1}$, as $\hat{x}_{T+1} = \mathcal{M}(X_1)$. In the second iteration, we train the same model with $X_2$, where $X_2 = (x_2, \dots, x_T, \hat{x}_{T+1})$, and predict for the time step $T+2$, denoted by $\hat{x}_{T+2}$, as $\hat{x}_{T+2} = \mathcal{M}(X_2)$. We keep repeating this process until predictions are made for all the time steps $T+1, \dots, T+\ell$.
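To make the scheme concrete, the following minimal Python sketch (our illustration, not the authors' code) implements the rolling one-step-ahead loop; the helper `fit_and_predict_one_step` is a hypothetical stand-in introduced only for this example.

```python
import numpy as np

def fit_and_predict_one_step(window: np.ndarray) -> float:
    """Hypothetical stand-in for any one-step-ahead forecaster
    (LSTM, EKF, AR, ARIMA, ...): here it naively repeats the last value."""
    return float(window[-1])

def rolling_forecast(series: np.ndarray, train_len: int, horizon: int) -> np.ndarray:
    """Sequentially predict `horizon` steps, one step per iteration.
    Each new prediction is appended to the history so the next window
    slides forward by one time step, as in Fig. 1."""
    history = list(series[:train_len])
    predictions = []
    for _ in range(horizon):
        window = np.asarray(history[-train_len:])   # fixed-length input window
        x_next = fit_and_predict_one_step(window)   # train/update and forecast one step
        predictions.append(x_next)
        history.append(x_next)                      # treat the prediction as the next data point
    return np.asarray(predictions)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prices = np.cumsum(rng.normal(size=300)) + 100.0  # synthetic price-like series
    print(rolling_forecast(prices, train_len=250, horizon=10))
```

In this sketch, swapping `fit_and_predict_one_step` for any of the models discussed below yields the corresponding real-time forecaster.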

2.2 Many-to-one LSTM architecture with sequential training
Since we make predictions only one time step ahead at a time for an input time series, the LSTM architecture implemented here is of the many-to-one type; see Fig. 2(a) for the stacked LSTM architecture. An LSTM consists of a series of nonlinear recurrent modules, shown in Fig. 2, where each module processes data related to one time step. The LSTM introduces a memory cell, a special type of hidden state that has the same shape as the hidden state, which is engineered to record additional information. Each recurrent module in an LSTM filters information through four hidden layers: three of them are gates, namely, the forget gate, the input gate, and the output gate, and the other is the cell state, which maintains and updates long-term memory, see Fig. 2(b).
The forget gate resets the content of the memory cell by deciding what information should be forgotten or retained. This gate produces a value between zero and one, where zero means completely forgetting the previous hidden state and one means completely retaining it. Information from the previous hidden state, i.e., $h_{t-1}$, and information from the current input, i.e., $x_t$, are passed through the sigmoid function, denoted as $\sigma$, according to

$f_t = \sigma\left(W_f [h_{t-1}, x_t] + b_f\right)$,  (1)
where $W_f$ and $b_f$ are the weight matrix and bias vector, respectively. The input gate, consisting of two components, decides what new information is to be stored in the cell state. The first component is a sigmoid layer that decides which values are to be updated based on the previous hidden state and the information from the current input such that

$i_t = \sigma\left(W_i [h_{t-1}, x_t] + b_i\right)$,  (2)
where $W_i$ and $b_i$ are the weight matrix and bias vector, respectively. The next component is a $\tanh$ layer that creates a vector of new candidate values, $\tilde{C}_t$, based on the previous hidden state and the information from the current input as

$\tilde{C}_t = \tanh\left(W_C [h_{t-1}, x_t] + b_C\right)$,  (3)
where $W_C$ and $b_C$ are the weight matrix and bias vector, respectively.

The cell state updates the LSTM’s memory with new long-term information. For that, first, it multiplies pointwise the old cell state $C_{t-1}$ by the forget gate output $f_t$, i.e., $f_t \odot C_{t-1}$, to ensure that the information retained from the old cell state is what is allowed by the forget gate. Then, we add the pointwise product $i_t \odot \tilde{C}_t$, i.e.,

$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$,  (4)
as the information from the current input state which the ANN has found relevant. The output gate determines the value of the next hidden state from the information in the current cell state, the current input state, and the previous hidden state. First, a sigmoid layer decides how much of the current input and the previous hidden state is going to be output. Then, the current cell state is passed through the $\tanh$ layer to scale the cell state value between -1 and 1. Thus, the output is

$o_t = \sigma\left(W_o [h_{t-1}, x_t] + b_o\right), \quad h_t = o_t \odot \tanh(C_t)$,  (5)

where $W_o$ and $b_o$ are the weight matrix and bias vector, respectively. Based upon $o_t$, the network decides which information from the current hidden state should be carried over to the next hidden state, where the next hidden state is used for prediction. To conclude, the forget gate determines which relevant information from the prior steps is needed, the input gate decides what relevant information can be added from the current cell state, and the output gate finalizes the input to the next hidden state.
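As a sanity check on Eqns. (1)-(5), the following NumPy sketch (our illustration, not the authors' implementation) runs one forward pass of a single LSTM cell; the weight shapes and random initialization are assumptions made only for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_forward(x_t, h_prev, c_prev, W, b):
    """One recurrent module, following Eqns. (1)-(5).
    W holds the weight matrices W_f, W_i, W_C, W_o acting on [h_{t-1}, x_t];
    b holds the corresponding bias vectors."""
    z = np.concatenate([h_prev, x_t])            # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])           # forget gate, Eqn. (1)
    i_t = sigmoid(W["i"] @ z + b["i"])           # input gate, Eqn. (2)
    c_tilde = np.tanh(W["C"] @ z + b["C"])       # candidate values, Eqn. (3)
    c_t = f_t * c_prev + i_t * c_tilde           # cell state update, Eqn. (4)
    o_t = sigmoid(W["o"] @ z + b["o"])           # output gate, Eqn. (5)
    h_t = o_t * np.tanh(c_t)                     # next hidden state
    return h_t, c_t

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_in, n_hid = 1, 8                           # univariate input, 8 hidden units (assumed sizes)
    W = {k: rng.normal(scale=0.1, size=(n_hid, n_hid + n_in)) for k in "fiCo"}
    b = {k: np.zeros(n_hid) for k in "fiCo"}
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    for x in [0.3, 0.1, -0.2]:                   # a short input sequence
        h, c = lstm_cell_forward(np.array([x]), h, c, W, b)
    print(h)
```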
2.2.1 Optimization of LSTM
Training an LSTM is the process of minimizing a relevant reconstruction error function, also called the loss function, with respect to the weight matrices and bias vectors of Eqns. (1), (2), (3), (4), and (5). Such a minimization is implemented in four steps: first, forward-propagate the input data through the ANN to get the output; second, calculate the loss between the forecasted output and the true output; third, calculate the derivatives of the loss function with respect to the LSTM's weights and bias vectors using backpropagation through time (BPTT) [Werbos, 1990]; and fourth, adjust the weights and bias vectors by a gradient descent method [Gruslys et al., 2016].
BPTT unrolls backward all the dependencies of the output on the weights of the ANN [Manneschi & Vasilaki, 2020], which is represented from left to right in Fig. 2(a). At each iteration, say the $i$-th, we train the LSTM with only one input-label instance, where the input is the window $X_i$ and the label is the observation that immediately follows it. Due to this process, at the $i$-th iteration, the ANN is trained with the $i$-th input-label instance and predicts for the $(T+i)$-th time step. Thus, we formulate the loss function at the $i$-th iteration of the LSTM as the relative mean square error,

$\mathcal{L}_i = \dfrac{\| y_i - \hat{y}_i \|^2_F}{\| y_i \|^2_F}$,  (6)

where $\|\cdot\|_F$ denotes the Frobenius norm, $y_i$ is the label, and $\hat{y}_i$ is the output of the LSTM for the input $X_i$. We use BPTT to compute the derivatives of Eqn. (6) with respect to the weights and bias vectors. We update the weights using the gradient descent-based method called Adaptive Moment Estimation (ADAM) [Kingma & Ba, 2015]. ADAM is an iterative optimization algorithm used in recent machine learning algorithms to minimize loss functions, employing averages of both the first and second moments of the gradients in its computations. It generally converges faster than standard gradient descent methods and saves memory by not accumulating the intermediate weights.
To assure better convergence of the loss function, we integrate epochs into the training process in a unique way that we explain here for the $i$-th iteration. If the loss function is non-convex or exhibits semi-convergence, choosing the best number of epochs is challenging. Fig. 3 illustrates the non-convex behavior of the loss function of an LSTM trained with the closing prices of the Apple stock. Here, we input a sequence of 1227 days of prices into the LSTM and generate the price for the 1228-th day, where the loss is computed as the relative mean square error between the predicted price and the observed price for the 1228-th day. We proceed with this single-day training for 60 epochs, as shown in Fig. 3. Since the loss varies non-convexly with respect to the epochs, we devised a unique way of training the LSTM through epochs. In particular, we maintain two LSTMs, one used to search for the best number of epochs and the other used to make predictions, that are trained through each iteration. We assume that the two LSTMs corresponding to the $(i-1)$-th iteration are given at the $i$-th iteration. For the $i$-th iteration, we train the first LSTM with the input $X_i$ and the corresponding label for a fixed number of epochs, say $E$. Here, we record the first LSTM's optimum weights and bias vectors corresponding to each of the $E$ epochs. We then reformulate the second LSTM with the weights and bias vectors corresponding to the least loss among the $E$ epochs. Finally, we redefine the first LSTM as this second LSTM and proceed to the $(i+1)$-th iteration. Algorithm 1 summarizes the training and prediction procedure of our sequentially trained many-to-one LSTM scheme.

Denotation: the two LSTMs and the training loss $\mathcal{L}_{i,e}$ at the $e$-th epoch of the $i$-th iteration, for $e = 1, \dots, E$.
Input: training time series $x_1, \dots, x_T$; forecast length $\ell$; maximum number of epochs $E$.
Output: time series forecast $\hat{x}_{T+1}, \dots, \hat{x}_{T+\ell}$; trained LSTM.
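A minimal Keras sketch in the spirit of Algorithm 1 follows; it is our paraphrase, not the authors' released code, and it assumes that restoring the weights of the best epoch via a checkpoint is an acceptable stand-in for maintaining the second LSTM. The network size, window length, and epoch count are illustrative only.

```python
import numpy as np
import tensorflow as tf

def build_lstm(window_len: int) -> tf.keras.Model:
    # Many-to-one LSTM: a window of length `window_len` in, one value out.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window_len, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def sequential_forecast(series, train_len=100, horizon=10, epochs=20):
    history, preds = list(series[:train_len]), []
    model = build_lstm(train_len - 1)
    for _ in range(horizon):
        window = np.asarray(history[-train_len:], dtype="float32")
        X = window[:-1].reshape(1, train_len - 1, 1)   # input: all but the last value
        y = window[-1:].reshape(1, 1)                  # label: the last observed value
        # Train for `epochs` epochs, then roll back to the epoch with the least loss,
        # mimicking the two-LSTM bookkeeping of Algorithm 1.
        ckpt = "best.weights.h5"
        cb = tf.keras.callbacks.ModelCheckpoint(ckpt, monitor="loss",
                                                save_best_only=True,
                                                save_weights_only=True)
        model.fit(X, y, epochs=epochs, verbose=0, callbacks=[cb])
        model.load_weights(ckpt)
        x_next = float(model.predict(window[1:].reshape(1, train_len - 1, 1),
                                     verbose=0)[0, 0])
        preds.append(x_next)
        history.append(x_next)   # feed the prediction back as the next data point
    return preds
```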
2.3 State-of-the-art methods
Here, we present three state-of-the-art time series prediction methods, namely, the extended Kalman filter (EKF), autoregression (AR), and the autoregressive integrated moving average (ARIMA) model, that we use to validate the performance of our LSTM scheme. We utilize the same sequential training as we did for the LSTMs to make real-time predictions on the same financial time series.
2.3.1 Extended Kalman Filter (EKF)
The EKF is a nonlinear version of the standard Kalman filter (KF), whose formulation is based on the linearization of both the state and observation equations. In an EKF, the state Jacobian and the measurement Jacobian replace the state transition matrix and the measurement matrix, respectively, of a linear KF [Valade et al., 2017]. This process essentially linearizes the nonlinear function around the current estimate. Linearization enables the propagation of both the state and the state covariance in an approximately linear format. Here, the extended Kalman filter is presented in three steps, namely, the dynamic process, the model forecast step, and the data assimilation step.
Dynamic Process
Here, we present both the state model and the observation model of a nonlinear dynamic process. The current state, $s_t$, is modeled as the sum of a nonlinear function of the previous state, $f(s_{t-1})$, and the noise, $w_{t-1}$, as

$s_t = f(s_{t-1}) + w_{t-1}$,  (7)

where $s_t \in \mathbb{R}^n$ and $f : \mathbb{R}^n \to \mathbb{R}^n$. Here, the random process $w_{t-1}$ is Gaussian white noise with zero mean and covariance matrix $Q_{t-1}$. The initial state $s_0$ is a random vector with known mean and covariance. The Jacobian of the predicted state with respect to the previous state, denoted as $F_t$, is obtained by partial derivatives as $F_t = \partial f / \partial s \big|_{s_{t-1}}$.
The current observation, $z_t$, is modeled as the sum of a nonlinear function of the current state, $h(s_t)$, and the noise, $v_t$, as

$z_t = h(s_t) + v_t$,  (8)

where $z_t \in \mathbb{R}^m$ and $h : \mathbb{R}^n \to \mathbb{R}^m$. Here, the random process $v_t$ is Gaussian white noise with zero mean and covariance matrix $R_t$. The Jacobian of the predicted observation with respect to the state, denoted as $H_t$, is obtained by partial derivatives as $H_t = \partial h / \partial s \big|_{s_t}$.
Model Forecast Step
The state Jacobian and the measurement Jacobian replace the linear KF's state transition matrix and measurement matrix, respectively [Valade et al., 2017]. Let the initial estimates of the state and the covariance be $\hat{s}_0$ and $P_0$, respectively. The state and the covariance matrix are propagated to the next step using

$\hat{s}^-_t = f(\hat{s}_{t-1})$  (9)

and

$P^-_t = F_{t-1} P_{t-1} F_{t-1}^\top + Q_{t-1}$,  (10)

respectively.
Data Assimilation Step
The predicted measurement at the $t$-th step is given by

$\hat{z}_t = h(\hat{s}^-_t)$.  (11)

We use the difference between the actual measurement and the predicted measurement to correct the state at the $t$-th step. To correct the state, the filter must compute the Kalman gain. First, the filter computes the measurement prediction covariance (innovation) as

$S_t = H_t P^-_t H_t^\top + R_t$.  (12)

Then, the filter computes the Kalman gain as

$K_t = P^-_t H_t^\top S_t^{-1}$.  (13)

The filter corrects the predicted estimate by using the observation. The estimate, after the correction using the observation $z_t$, is

$\hat{s}_t = \hat{s}^-_t + K_t \left(z_t - \hat{z}_t\right)$.  (14)
The corrected state is often called the a posteriori estimate of the state, because it is derived after including the observation.
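The predict-and-correct cycle of Eqns. (9)-(14) can be summarized in a few lines of NumPy; the scalar random-walk state model, the noise levels, and the covariance correction at the end are assumptions chosen only to keep the sketch self-contained.

```python
import numpy as np

def ekf_step(s_est, P, z, f, F_jac, h, H_jac, Q, R):
    """One EKF iteration: model forecast step followed by data assimilation."""
    # Model forecast step, Eqns. (9)-(10)
    s_pred = f(s_est)
    F = F_jac(s_est)
    P_pred = F @ P @ F.T + Q
    # Data assimilation step, Eqns. (11)-(14)
    z_pred = h(s_pred)                    # predicted measurement
    H = H_jac(s_pred)
    S = H @ P_pred @ H.T + R              # innovation covariance, Eqn. (12)
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain, Eqn. (13)
    s_new = s_pred + K @ (z - z_pred)     # corrected (a posteriori) state, Eqn. (14)
    P_new = (np.eye(len(s_new)) - K @ H) @ P_pred   # standard covariance correction
    return s_new, P_new

if __name__ == "__main__":
    # Assumed scalar random-walk model: s_t = s_{t-1} + w, z_t = s_t + v.
    f = lambda s: s
    h = lambda s: s
    F_jac = lambda s: np.eye(1)
    H_jac = lambda s: np.eye(1)
    Q, R = 1e-3 * np.eye(1), 1e-1 * np.eye(1)
    s, P = np.array([100.0]), np.eye(1)
    for z in [100.2, 100.4, 100.1]:       # a few observed prices
        s, P = ekf_step(s, P, np.array([z]), f, F_jac, h, H_jac, Q, R)
    print(s)
```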
2.3.2 Autoregression (AR) model
Many observed time series exhibit serial autocorrelation, i.e., a linear association between lagged observations. The AR model predicts the value at the current time step, $x_t$, based on a linear relationship with the $p$ most recent observations, $x_{t-1}, x_{t-2}, \dots, x_{t-p}$, where $p$ is known as the order of the model [Geurts et al., 1977]. Let $\phi_1, \dots, \phi_p$ be the coefficients; the order-$p$ AR model is given by

$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \dots + \phi_p x_{t-p} + \epsilon_t$,  (15)

where $\epsilon_t$ is uncorrelated noise with zero mean. Let the lag operator be $L$, so that $L^i x_t = x_{t-i}$. We define the degree-$p$ autoregressive lag operator polynomial as $\phi(L) = 1 - \phi_1 L - \dots - \phi_p L^p$. Thus, the AR model is given by

$\phi(L)\, x_t = \epsilon_t$.  (16)

The solution for the AR model is given by

$x_t = \phi(L)^{-1} \epsilon_t$.  (17)
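For illustration, the sketch below fits the AR($p$) coefficients of Eqn. (15) and produces a one-step-ahead forecast; it assumes no constant term and uses an ordinary least-squares fit, which is not necessarily the estimator used in the paper.

```python
import numpy as np

def fit_ar(series: np.ndarray, p: int) -> np.ndarray:
    """Estimate phi_1..phi_p of Eqn. (15) by least squares."""
    X = np.column_stack([series[p - j - 1:len(series) - j - 1] for j in range(p)])
    y = series[p:]
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    return phi

def ar_forecast_one_step(series: np.ndarray, phi: np.ndarray) -> float:
    """x_{t+1} = phi_1 x_t + ... + phi_p x_{t-p+1}."""
    p = len(phi)
    return float(phi @ series[-1:-p - 1:-1])   # last p values, most recent first

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = np.cumsum(rng.normal(size=500)) + 50.0   # synthetic price-like series
    phi = fit_ar(x, p=5)
    print(ar_forecast_one_step(x, phi))
```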
2.3.3 Autoregressive integrated moving average (ARIMA) model
The ARIMA model is made by combining a differenced version of the AR model with a moving average (MA) model. The MA model captures serial autocorrelation in a time series by expressing the conditional mean of $x_t$ as a function of past innovations, $\epsilon_{t-1}, \epsilon_{t-2}, \dots$. An MA model that depends on $q$ past innovations is called an MA model of order $q$, denoted by MA($q$). In general, the MA($q$) model can be represented by the formula

$x_t = \mu + \epsilon_t + \theta_1 \epsilon_{t-1} + \dots + \theta_q \epsilon_{t-q}$,  (18)

where the $\epsilon_t$'s are uncorrelated innovation processes with zero mean and $\mu$ is the unconditional mean of $x_t$ for all $t$.
For some observed time series, a higher-order AR or MA model is needed to capture the underlying process well. In this case, a combined ARMA model can sometimes be a parsimonious choice. An ARMA model expresses the conditional mean of $x_t$ as a function of both recent observations, $x_{t-1}, \dots, x_{t-p}$, and recent innovations, $\epsilon_{t-1}, \dots, \epsilon_{t-q}$. The ARMA model with AR degree $p$ and MA degree $q$ is denoted by ARMA($p$, $q$) and is given by

$x_t = \mu + \phi_1 x_{t-1} + \dots + \phi_p x_{t-p} + \epsilon_t + \theta_1 \epsilon_{t-1} + \dots + \theta_q \epsilon_{t-q}$  (19)

[Shumway & Stoffer, 2017].
The ARIMA process generates a nonstationary series that is integrated of order $d$, i.e., a nonstationary process that can be made stationary by taking $d$ differences. A series that can be modeled as a stationary ARMA($p$, $q$) process after being differenced $d$ times is denoted by ARIMA($p$, $d$, $q$), which is given by

$\Delta^d x_t = \mu + \phi_1 \Delta^d x_{t-1} + \dots + \phi_p \Delta^d x_{t-p} + \epsilon_t + \theta_1 \epsilon_{t-1} + \dots + \theta_q \epsilon_{t-q}$,  (20)

where $\Delta^d x_t$ denotes the $d$-th differenced time series, the $\epsilon_t$'s are uncorrelated innovation processes with zero mean, and $\mu$ is the unconditional mean of $\Delta^d x_t$ for all $t$ [Newbold, 1983]. With the lag operator $L$, the ARIMA model can be written as

$\phi(L)\,(1 - L)^d x_t = \mu + \theta(L)\, \epsilon_t$,  (21)

where $\phi(L) = 1 - \phi_1 L - \dots - \phi_p L^p$ and $\theta(L) = 1 + \theta_1 L + \dots + \theta_q L^q$. Thus, the solution for the ARIMA model is given by

$x_t = \left(\phi(L)\,(1 - L)^d\right)^{-1}\left(\mu + \theta(L)\, \epsilon_t\right)$.  (22)
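In practice, the same sequential one-step scheme can be applied with an off-the-shelf ARIMA implementation. The sketch below uses statsmodels, which is our choice and not necessarily the authors' toolchain, with the order (10, 0, 2) reported later for the Apple stock used purely as an illustration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def rolling_arima(series, train_len, horizon, order=(10, 0, 2)):
    """Refit an ARIMA(p, d, q) on a sliding window and forecast one step per iteration."""
    history = list(series[:train_len])
    preds = []
    for _ in range(horizon):
        window = history[-train_len:]
        fit = ARIMA(window, order=order).fit()
        x_next = float(fit.forecast(steps=1)[0])
        preds.append(x_next)
        history.append(x_next)   # feed the prediction back as the next data point
    return np.asarray(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    prices = np.cumsum(rng.normal(size=400)) + 80.0   # synthetic price-like series
    print(rolling_arima(prices, train_len=300, horizon=5))
```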
3 Performance Analysis
The performance analysis of the LSTM is conducted using nine financial time series obtained from three markets, namely, stocks, cryptocurrencies, and commodities. We chose Apple, Google, and Microsoft for stocks; Bitcoin, Ethereum, and Cardano for cryptocurrencies; and Gold, Oil, and Natural Gas for commodities. These diverse examples validate the broad applicability of LSTMs in analyzing and predicting financial time series.
We follow the procedure given in Fig. 1 to train the real-time many-to-one LSTM architecture given in Fig. 2. Setting the LSTM to run for a fixed number of epochs and then using that trained network to make predictions often does not yield the best training, and hence does not yield accurate predictions, since the loss function undergoes semi-convergence, as shown in Fig. 3. To avoid this issue, first, we train the LSTM for 100 epochs; second, we compute the best number of epochs, i.e., the one associated with the least loss; and finally, we train a new LSTM again for that many epochs. The parameter choices for the training length and prediction length are shown in Table 3.
Time series | Training length ($T$) | Prediction length ($\ell$) |
---|---|---|
Apple | 1228 | 30 |
Microsoft | 1228 | 30 |
Google | 1228 | 30 |
Bitcoin | 1064 | 30 |
Ethereum | 1064 | 30 |
Cardano | 1064 | 30 |
Oil | 8248 | 200 |
Natural gas | 5802 | 150 |
Gold | 816 | 30 |
Now, we incorporate the same one-day recursive prediction procedure of Fig. 1 into the other three state-of-the-art methods, namely, EKF, AR, and ARIMA, to predict the above financial time series. After a trial-and-error process, we found that the best orders $p$ of AR are 300, 400, and 400 for Apple, Microsoft, and Google, respectively; and the best ($p$, $d$, $q$)'s of ARIMA are (10, 0, 2), (10, 2, 1), and (0, 1, 1) for Apple, Microsoft, and Google, respectively. Then, the best $p$'s of AR were found to be 100, 100, and 300 for Bitcoin, Ethereum, and Cardano, respectively; and the best ($p$, $d$, $q$)'s of ARIMA were found to be (6, 0, 2), (6, 1, 1), and (8, 2, 1) for Bitcoin, Ethereum, and Cardano, respectively. Finally, the best $p$'s of AR were 200, 200, and 100 for Oil, Natural gas, and Gold, respectively; and the best ($p$, $d$, $q$)'s of ARIMA were (4, 1, 1), (10, 1, 2), and (8, 2, 0) for Oil, Natural gas, and Gold, respectively. Thus, we set the methods with the best parameter values and executed them with the corresponding time series.
We compute the mean of the relative absolute difference between the predicted and the observed time series over the prediction period using

$\mathcal{E} = \dfrac{1}{\ell} \sum_{t=T+1}^{T+\ell} \dfrac{\left| x_t - \hat{x}_t \right|}{\left| x_t \right|}$  (23)

as an error measure of the prediction, which we report in Table 4. Hereby, we observe that the order of the best to the worst overall prediction performance is LSTM, ARIMA, AR, and EKF.
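A minimal NumPy sketch of the error measure in Eqn. (23), assuming it is computed as the mean relative absolute difference described above, is:

```python
import numpy as np

def mean_relative_absolute_error(observed, predicted) -> float:
    """Eqn. (23): average of |x_t - xhat_t| / |x_t| over the prediction period."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(observed - predicted) / np.abs(observed)))

# Example: a forecast that is off by about 1% on average.
print(mean_relative_absolute_error([100, 102, 101], [99, 103, 102]))
```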
Time series | LSTM | EKF | AR | ARIMA |
---|---|---|---|---|
Apple | | | | |
Microsoft | | | | |
Google | | | | |
Bitcoin | | | | |
Ethereum | | | | |
Cardano | | | | |
Oil | | | | |
Natural gas | | | | |
Gold | | | | |
Fig. 4 shows the price predictions of the three stocks, Apple, Microsoft, and Google, using our real-time many-to-one LSTM, EKF, AR, and ARIMA. Since some of the predictions mimic the observed time series so closely that they overlap, we also compute the absolute difference between the observations and the predictions, see Figs. 4(c), 4(f), and 4(i). We observe that all four methods are capable of capturing the pattern of the time series, with the order of the best to the worst prediction performance being LSTM, ARIMA, AR, and EKF. Moreover, while LSTM and ARIMA perform similarly well, EKF and AR perform similarly poorly.

Fig. 5 shows the price predictions of the three cryptocurrencies, Bitcoin, Ethereum, and Cardano, using LSTM, EKF, AR, and ARIMA. Since some of the predictions mimic the observed time series so closely that they overlap, we also compute the absolute difference between the observations and the predictions, see Figs. 5(c), 5(f), and 5(i). We observe that LSTM, ARIMA, and AR are capable of capturing the pattern of the time series, in contrast to the weak prediction of EKF. The order of the best to the worst prediction performance is LSTM, AR, ARIMA, and EKF.

Fig. 6 shows the price predictions of the three commodities, Oil, Natural gas, and Gold, using LSTM, EKF, AR, and ARIMA. We compute the absolute difference between the observations and the predictions, see Figs. 6(c), 6(f), and 6(i), since some of the predictions are similar to the observations. We observe that mostly LSTM, ARIMA, and AR are capable of capturing the pattern of the time series. The order of the best to the worst prediction performance is LSTM, AR, ARIMA, and EKF.

The performance of this real-time many-to-one LSTM is highly influenced by the number of epochs for which it is executed. To check this assertion, we compute the prediction performance of the LSTM with respect to different numbers of epochs for Apple, Bitcoin, and Gold. The prediction performance is computed as the mean of the relative absolute difference, i.e., Eqn. (23), between the prediction and the observed time series. Since EKF, AR, and ARIMA are independent of epochs, we represent their errors as horizontal lines. We observe that the performance of the LSTM improves from worst to best as the number of epochs increases.

4 Discussion
The classical methods of modeling temporal chaotic systems are mostly linear models which assume linear relationships between a system's previous outputs for stationary time series. Thus, they often do not capture non-linear relationships in the data and cannot cope with certain non-stationary signals. Because financial time series are often nonstationary, nonlinear, and contaminated with noise [Bontempi et al., 2013], traditional statistical models encounter limitations in predicting them with high precision. In this paper, we have presented a real-time forecasting technique for financial markets using a sequentially trained many-to-one LSTM. We applied this technique to time series obtained from the stock, cryptocurrency, and commodity markets, and then compared its performance against three state-of-the-art methods, namely, EKF, AR, and ARIMA.
Here, we train a many-to-one LSTM with sequential data sampled using a moving window approach such that each succeeding window is shifted forward by one data instance from the preceding window. Such sequential window training plays an important role in time series prediction since it 1) helps generate more data from a given limited time series and thus enables thorough training of the ANN; 2) makes the data heterogeneous so that the overfitting issue of the ANN can be reduced; and 3) facilitates the learning of patterns of the data not only over the entire time series but also over short segments of sequential data. Sequential window training maximizes the performance of this LSTM, as it accelerates the LSTM's learning capability and increases the LSTM's robustness to new data.
The performance analysis of this study covers the LSTM applied to nine time series obtained from three financial markets: stocks (Apple, Microsoft, Google), cryptocurrencies (Bitcoin, Ethereum, Cardano), and commodities (gold, crude oil, natural gas). We observed that the LSTM performs substantially better than the other three methods for all nine datasets, with the performance of EKF being significantly weak. We have seen in Table 4 that, on average, LSTM performs 17 times better than EKF, 7 times better than AR, and 4 times better than ARIMA. The average prediction errors of LSTM are 0.05, 0.22, and 0.14 for stocks, cryptocurrencies, and commodities, respectively. The reason is that prediction on less volatile time series, such as those in the stock market, is easier, whereas prediction on highly volatile time series, such as those in the cryptocurrency market, is challenging.
In future work, we are planning to extend this sequentially trained many-to-one LSTM for use as a real-time fault detection technique in industrial production processes. This real-time fault detection scheme will be capable of producing an early alarm to signal a shift in the production process so that the quality control team can take necessary actions. Moreover, trajectories of collectively moving agents can be represented on a low-dimensional manifold that underlies a high-dimensional data cloud [Gajamannage et al., 2019, Gajamannage & Paffenroth, 2021, Gajamannage et al., 2015]. However, some segments of these trajectories are not tracked by multi-object tracking methods due to natural phenomena such as occlusions. Thus, we are planning to utilize our LSTM architecture to make predictions for the fragmented segments of the trajectories.
We empirically validated that our real-time LSTM outperforms EKF, AR, and ARIMA. In the future, we plan to compare the performance of our real-time LSTM with that of other well-known ANN-based methods such as Prophet, developed by Facebook [Taylor & Letham, 2018]; DeepAR, developed by Amazon [Salinas et al., 2020]; the Temporal Fusion Transformer, developed by Google [Lim et al., 2021]; and N-BEATS, developed by Element AI [Oreshkin et al., 2019]. Prophet was designed for automatic forecasting of univariate time series data. DeepAR is a probabilistic forecasting model based on recurrent neural networks. The Temporal Fusion Transformer is a novel attention-based architecture that combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. N-BEATS is a custom deep learning algorithm based on backward and forward residual links for univariate time series point forecasting.
We presented a nonlinear, real-time prediction technique for financial time series based on a many-to-one LSTM that is sequentially trained with windows of data. The sequential window training approach significantly improves the LSTM's learning ability while dramatically reducing its over-fitting issues. We empirically justified that our LSTM possesses superior performance even for highly volatile time series such as those of cryptocurrencies and commodities.
Acknowledgments
The authors would like to thank the Google Cloud Platform for granting Research Credit to access its GPU computing resources under project number 397744870419.
References
- Allen-Zhu et al. [2019] Allen-Zhu, Z., Li, Y., & Song, Z. (2019). On the convergence rate of training recurrent neural networks. In Advances in Neural Information Processing Systems (pp. 1310–1318). PMLR volume 32. arXiv:1810.12065.
- Bahadur et al. [2019] Bahadur, N., Paffenroth, R., & Gajamannage, K. (2019). Dimension Estimation of Equity Markets. In Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019 (pp. 5491--5498). Institute of Electrical and Electronics Engineers Inc. doi:10.1109/BigData47090.2019.9006343.
- Bontempi et al. [2013] Bontempi, G., Ben Taieb, S., & Le Borgne, Y. A. (2013). Machine learning strategies for time series forecasting. In Lecture Notes in Business Information Processing (pp. 62--77). Springer volume 138 LNBIP. doi:10.1007/978-3-642-36318-4_3.
- Gajamannage et al. [2015] Gajamannage, K., Butail, S., Porfiri, M., & Bollt, E. M. (2015). Identifying manifolds underlying group motion in Vicsek agents. European Physical Journal: Special Topics, 224, 3245--3256. doi:10.1140/epjst/e2015-50088-2.
- Gajamannage & Paffenroth [2021] Gajamannage, K., & Paffenroth, R. (2021). Bounded manifold completion. Pattern Recognition, 111, 107661. doi:https://doi.org/10.1016/j.patcog.2020.107661.
- Gajamannage et al. [2019] Gajamannage, K., Paffenroth, R., & Bollt, E. M. (2019). A nonlinear dimensionality reduction framework using smooth geodesics. Pattern Recognition, 87, 226--236. doi:10.1016/j.patcog.2018.10.020.
- Gajamannage et al. [2021] Gajamannage, K., Park, Y., Paffenroth, R., & Jayasumana, A. P. (2021). Reconstruction of Fragmented Trajectories of Collective Motion using Hadamard Deep Autoencoders. arXiv preprint arXiv:2110.10428. doi:10.48550/arxiv.2110.10428. arXiv:2110.10428.
- Geurts et al. [1977] Geurts, M., Box, G. E. P., & Jenkins, G. M. (1977). Time Series Analysis: Forecasting and Control. Journal of Marketing Research, 14, 269. doi:10.2307/3150485.
- Gruslys et al. [2016] Gruslys, A., Munos, R., Danihelka, I., Lanctot, M., & Graves, A. (2016). Memory-efficient backpropagation through time. Advances in Neural Information Processing Systems, 29, 4132--4140. arXiv:1606.03401.
- Hochreiter & Schmidhuber [1997] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9, 1735--1780. doi:10.1162/neco.1997.9.8.1735.
- Kim et al. [2017] Kim, J., El Khamy, M., & Lee, J. (2017). Residual LSTM: Design of a deep recurrent architecture for distant speech recognition. Proceedings of the Annual Conference of the International Speech Communication Association, 2017-Augus, 1591--1595. doi:10.21437/Interspeech.2017-477.
- Kingma & Ba [2015] Kingma, D. P., & Ba, J. L. (2015). Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings. doi:10.48550/arXiv.1412.6980. arXiv:1412.6980.
- Le & Zuidema [2016] Le, P., & Zuidema, W. (2016). Quantifying the Vanishing Gradient and Long Distance Dependency Problem in Recursive Neural Networks and Recursive LSTMs. arXiv preprint arXiv:1603.00423, (pp. 87--93). doi:10.18653/v1/w16-1610. arXiv:1603.00423.
- Lim et al. [2021] Lim, B., Arık, S., Loeff, N., & Pfister, T. (2021). Temporal Fusion Transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting, 37, 1748--1764. doi:10.1016/j.ijforecast.2021.03.012. arXiv:1912.09363.
- Ma & Principe [2018] Ma, Y., & Principe, J. (2018). Comparison of Static Neural Network with External Memory and RNNs for Deterministic Context Free Language Learning. In Proceedings of the International Joint Conference on Neural Networks (pp. 1--7). IEEE volume 2018-July. doi:10.1109/IJCNN.2018.8489240.
- Manneschi & Vasilaki [2020] Manneschi, L., & Vasilaki, E. (2020). An alternative to backpropagation through time. Nature Machine Intelligence, 2, 155--156. doi:10.1038/s42256-020-0162-9.
- Newbold [1983] Newbold, P. (1983). ARIMA model building and the time series analysis approach to forecasting. Journal of Forecasting, 2, 23--35. doi:10.1002/for.3980020104.
- Oreshkin et al. [2019] Oreshkin, B. N., Carpov, D., Chapados, N., & Bengio, Y. (2019). N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. arXiv preprint arXiv:1905.10437. URL: http://arxiv.org/abs/1905.10437. arXiv:1905.10437.
- Park et al. [2022] Park, Y., Gajamannage, K., Jayathilake, D. I., & Bollt, E. M. (2022). Recurrent Neural Networks for Dynamical Systems: Applications to Ordinary Differential Equations, Collective Motion, and Hydrological Modeling (pp. 1--15). doi:10.48550/arxiv.2202.07022.
- Qiu et al. [2020] Qiu, J., Wang, B., & Zhou, C. (2020). Forecasting stock prices with long-short term memory neural network based on attention mechanism. PLoS ONE, 15, e0227222. doi:10.1371/journal.pone.0227222.
- Salinas et al. [2020] Salinas, D., Flunkert, V., Gasthaus, J., & Januschowski, T. (2020). DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36, 1181--1191. doi:10.1016/j.ijforecast.2019.07.001. arXiv:1704.04110.
- Shih et al. [2018] Shih, C. H., Yan, B. C., Liu, S. H., & Chen, B. (2018). Investigating Siamese LSTM networks for text categorization. In Proceedings - 9th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2017 (pp. 641--646). IEEE volume 2018-Febru. doi:10.1109/APSIPA.2017.8282104.
- Shumway & Stoffer [2017] Shumway, R. H., & Stoffer, D. S. (2017). ARIMA Models. (pp. 75--163). Springer, Cham. doi:10.1007/978-3-319-52452-8_3.
- Simistira et al. [2015] Simistira, F., Ul-Hassan, A., Papavassiliou, V., Gatos, B., Katsouros, V., & Liwicki, M. (2015). Recognition of historical Greek polytonic scripts using LSTM networks. In Proceedings of the International Conference on Document Analysis and Recognition, ICDAR (pp. 766--770). IEEE volume 2015-Novem. doi:10.1109/ICDAR.2015.7333865.
- Squarepants [2022] Squarepants, S. (2022). Bitcoin: A Peer-to-Peer Electronic Cash System. SSRN Electronic Journal, (p. 21260). doi:10.2139/ssrn.3977007.
- Taylor & Letham [2018] Taylor, S. J., & Letham, B. (2018). Forecasting at Scale. American Statistician, 72, 37--45. doi:10.1080/00031305.2017.1380080.
- Tian et al. [2017] Tian, X., Zhang, J., Ma, Z., He, Y., Wei, J., Wu, P., Situ, W., Li, S., & Zhang, Y. (2017). Deep LSTM for large vocabulary continuous speech recognition. doi:10.48550/arXiv.1703.07090. arXiv:1703.07090.
- Valade et al. [2017] Valade, A., Acco, P., Grabolosa, P., & Fourniols, J. Y. (2017). A study about kalman filters applied to embedded sensors. Sensors (Switzerland), 17, 2810. doi:10.3390/s17122810.
- Werbos [1990] Werbos, P. J. (1990). Backpropagation Through Time: What It Does and How to Do It. Proceedings of the IEEE, 78, 1550--1560. doi:10.1109/5.58337.
- Zhao et al. [2018] Zhao, Y., Ge, L., Zhou, Y., Sun, Z., Zheng, E., Wang, X., Huang, Y., & Cheng, H. (2018). A new Seasonal Difference Space-Time Autoregressive Integrated Moving Average (SD-STARIMA) model and spatiotemporal trend prediction analysis for Hemorrhagic Fever with Renal Syndrome (HFRS). PLoS ONE, 13, e0207518. doi:10.1371/journal.pone.0207518.