On a novel training algorithm for sequence-to-sequence predictive recurrent networks
Abstract
Neural networks mapping sequences to sequences (seq2seq) have led to significant progress in machine translation and speech recognition. Their traditional architecture includes two recurrent networks (RNs) followed by a linear predictor. In this manuscript we analyze the corresponding training algorithm and show that the parameters of the RNs of a well trained predictive network are not independent of each other. Their dependence can be used to significantly improve the network effectiveness. The traditional seq2seq algorithms require short term memory of a size proportional to the predicted sequence length. This requirement is quite difficult to satisfy in a neuroscience context. We present a novel memoryless algorithm for seq2seq predictive networks and compare it to the traditional one in the context of time series prediction. We show that the new algorithm is more robust and makes predictions with higher accuracy than the traditional one.
1 Introduction
The majority of predictive networks based on recurrent networks (RNs) are designed to use a fixed or variable length input sequence to produce a single predicted element (all the input elements and the output element have the same structure). Such a system can be called a sequence-to-one predictive network. It includes a chain of RNs (this chain can degenerate into a single RN) followed by a predictor that converts the last inner state of the last RN of the chain into the predicted element. In order to predict a sequence of elements one has to employ special algorithms that use the trained network recursively by appending already predicted terms to the input sequence. In an "expanding window" (EW) algorithm the length of the input sequence increases, so that the network should be trained on inputs of variable length. To employ an input of fixed length one uses a "moving window" (MW) approach in which after each prediction round the input sequence is modified by appending the predicted element and dropping the first element of the current input. The recursive application of the sequence-to-one network for prediction of an element sequence requires access to a short term memory to store the input sequence, and this condition might be difficult to satisfy in a neuroscience context. To resolve this problem the author recently suggested a memoryless (ML) algorithm that was successfully applied to time series prediction [2, 3].
The sequence prediction problem can be considered from a different perspective: one can construct a network that takes an input sequence and directly produces an ordered sequence of predicted elements using a sequence to sequence (seq2seq) algorithm. This approach can also be viewed as a sequence-to-sequence extension of the sequence-to-one networks discussed above. Such seq2seq networks are considered to be an ideal tool for machine translation and speech recognition, where neither the input nor the output sequence length is fixed. A traditional architecture of seq2seq predictive networks has two RNs and a predictor [1]. The first RN maps the whole input sequence into a single inner state vector; this vector is repeatedly fed into the second RN, and each of its outputs is used by the predictor to generate the output sequence. In this approach the same output should also be retained as the current inner state of the second RN, to be updated at the next input of the replicated vector. This means that one has to maintain several copies of this vector as well as to reserve memory for the inner states of the second RN. Again it is not clear whether these conditions can be satisfied in a neuroscience context.
In this manuscript the author first considers the traditional seq2seq algorithm with two RNs and a predictor. It is shown that if the predictive network employing such an algorithm is well trained (i.e., the deviation of the predicted value sequence from the ground truth one is negligibly small), there exists a nontrivial functional equation relating the parameters of both RNs and the predictor. In other words, knowledge of the parameters of the first RN and the predictor determines the parameters of the second RN. This relation can be used to improve the prediction quality of the whole network.
The author also shows that there exists a natural extension of the ML approach reported in [2] that allows the design of a seq2seq ML algorithm. The numerical simulations show that this algorithm is robust and that its predictive quality is not worse, and in some cases even better, than that of the traditional one. At the same time it has a clear advantage from the point of view of its application in natural neural systems.
2 Traditional seq2seq RNN
The traditional seq2seq recurrent network architecture is comprised of two independent RNs and a linear predictor. The input sequence $X = \{x_1, \ldots, x_n\}$ of $d$-dimensional elements is fed into the first RN made of $m_1$ neurons, which generates the corresponding sequence of states $H = \{h_1, \ldots, h_n\}$. The elements of $H$ are $m_1$-dimensional vectors representing the inner states of the RN, computed using the recurrent relation
$$h_t = F(h_{t-1}, x_t), \qquad 1 \le t \le n, \tag{1}$$
which describes a simple rule: the current inner state $h_t$ of the RN depends on the previous inner state $h_{t-1}$ and the current input signal $x_t$. This rule corresponds to the assumption that the neural network does not store its states but just updates the current state with respect to the submitted input signal and the previous state. The final state $h_n$ is replicated $k$ times, producing the input sequence $\{h_n, \ldots, h_n\}$ that is fed into the second RN, whose $m_2$-dimensional inner states $g_j$ are determined by the relation
$$g_j = G(g_{j-1}, h_n), \qquad 1 \le j \le k, \tag{2}$$
where $g_0$ denotes the (fixed) initial inner state of the second RN. All inner states $g_j$ are linearly transformed by the predictor P to produce
$$y_j = P\, g_j, \qquad 1 \le j \le k, \tag{3}$$
a sequence of predicted $d$-dimensional values approximating the ground truth ones $x_{n+j}$. We assume that the predictive network is well trained, i.e., the deviations between $y_j$ and $x_{n+j}$ can be neglected. This sequence-to-sequence network is a generalization of the sequence-to-one predictive networks that employ only a single recurrent network and the predictor P. The described algorithm requires memory sufficient to hold the $k$ states $g_j$ in proper order until they are transformed into the predicted sequence of the $y_j$.
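For illustration, a minimal Keras sketch of this architecture is shown below. It is not the author's code: the element dimension $d$, output length $k$ and layer sizes $m_1$, $m_2$ are assumed values, and SimpleRNN cells stand in for whatever RN type is actually used.

```python
# A minimal Keras sketch of the traditional seq2seq predictive network:
# first RN -> replicated final state -> second RN -> linear predictor P.
# The sizes d, k, m1, m2 are illustrative assumptions, not values from the text.
from tensorflow import keras
from tensorflow.keras import layers

d = 1            # dimension of a sequence element
k = 8            # length of the predicted sequence
m1, m2 = 32, 32  # neurons in the first and second RN

model = keras.Sequential([
    layers.Input(shape=(None, d)),                # input sequence of arbitrary length n
    layers.SimpleRNN(m1),                         # first RN: returns only its final state h_n
    layers.RepeatVector(k),                       # replicate h_n k times
    layers.SimpleRNN(m2, return_sequences=True),  # second RN: inner states g_1, ..., g_k
    layers.TimeDistributed(layers.Dense(d)),      # linear predictor P applied to every g_j
])
model.compile(optimizer="adam", loss="mse")       # trained against the ground truth x_{n+1}, ..., x_{n+k}
```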
3 Dependence of the recurrent networks
Consider the first few prediction rounds of the expanding window algorithm. In what follows the round number is denoted by a superscript on the corresponding quantity.
Round 1. The input sequence is $X^1 = \{x_1, \ldots, x_n\}$. The first RN state sequence $\{h^1_1, \ldots, h^1_n\}$ is produced by (1). The second RN inner states $g^1_j = G(g^1_{j-1}, h^1_n)$, with $g^1_0 = g_0$, are computed by (2) and used further to generate
$$y^1_j = P\, g^1_j \approx x_{n+j}, \qquad 1 \le j \le k. \tag{4}$$
Round 2. The input sequence $X^2$ is produced by appending the first predicted element $y^1_1$ to the sequence $X^1$. Assuming that the added element in $X^2$ can be replaced by the ground truth value $x_{n+1}$ we have $X^2 = \{x_1, \ldots, x_{n+1}\}$. The last element $h^2_{n+1}$ of the first RN state sequence is replicated and used as input to the second RN, whose states $g^2_j = G(g^2_{j-1}, h^2_{n+1})$ (again starting from $g_0$) are used further to generate
$$y^2_j = P\, g^2_j \approx x_{n+1+j}, \qquad 1 \le j \le k. \tag{5}$$
Round 3. The input sequence $X^3$ is produced by appending the second predicted element $y^2_1$ to the sequence $X^2$ and we have $X^3 = \{x_1, \ldots, x_{n+2}\}$. The last element $h^3_{n+2}$ of the first RN state sequence is replicated and used as input to the second RN, whose states $g^3_j = G(g^3_{j-1}, h^3_{n+2})$ are used further to generate
$$y^3_j = P\, g^3_j \approx x_{n+2+j}, \qquad 1 \le j \le k. \tag{6}$$
Round $s$. The input sequence $X^s$ is produced by appending the predicted element $y^{s-1}_1$ to the sequence $X^{s-1}$ and we have $X^s = \{x_1, \ldots, x_{n+s-1}\}$. The last element $h^s_{n+s-1}$ of the first RN state sequence is replicated and used as input to the second RN, whose states $g^s_j = G(g^s_{j-1}, h^s_{n+s-1})$ are used further to generate
$$y^s_j = P\, g^s_j \approx x_{n+s-1+j}, \qquad 1 \le j \le k. \tag{7}$$
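As an aside, the expanding window rounds can be sketched in a few lines that reuse the Keras model from Section 2; this is only an illustration of the recursion (in the derivation above the appended element is assumed to coincide with the ground truth value).

```python
# A sketch of the expanding window (EW) prediction rounds, reusing the Keras
# model sketched in Section 2 (an assumption, not the author's code).
import numpy as np

def expanding_window_predict(model, x_seq, rounds):
    """x_seq has shape (n, d); each round feeds the whole current sequence and
    appends the first of the k newly predicted elements to it."""
    seq = np.asarray(x_seq, dtype=float)
    for _ in range(rounds):
        y = model.predict(seq[None, ...], verbose=0)[0]  # predicted sequence, shape (k, d)
        seq = np.concatenate([seq, y[:1]], axis=0)       # append the first predicted element
    return seq
```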
From (4) and (5) it follows that the element $x_{n+2}$ is predicted in both the first ($y^1_2$) and the second ($y^2_1$) prediction rounds. Compare the two values obtained for $x_{n+2}$. From (4) we obtain $y^1_2 = P\, g^1_2$ where $g^1_2 = G(g^1_1, h^1_n)$ and $g^1_1 = G(g_0, h^1_n)$, so that
$$y^1_2 = P\, G(G(g_0, h^1_n), h^1_n). \tag{8}$$
On the other hand (5) leads to $y^2_1 = P\, g^2_1$ where $g^2_1 = G(g_0, h^2_{n+1})$ and $h^2_{n+1} = F(h^1_n, x_{n+1})$, and we obtain
$$y^2_1 = P\, G(g_0, F(h^1_n, x_{n+1})). \tag{9}$$
Using $x_{n+1} \approx y^1_1 = P\, G(g_0, h^1_n)$ in the above relation we arrive at
$$y^2_1 = P\, G(g_0, F(h^1_n, P\, G(g_0, h^1_n))). \tag{10}$$
For the well trained predictive network the values $y^1_2$ in (8) and $y^2_1$ in (10) should be very close to each other and we assume them to be equal. As the predictor P performs the same linear transformation in both cases we conclude that its arguments must coincide and we arrive at
$$G(G(g_0, h^1_n), h^1_n) = G(g_0, F(h^1_n, P\, G(g_0, h^1_n))). \tag{11}$$
Repeating the same steps for the pair of predictions of $x_{n+3}$ made in the second ($y^2_2$) and third ($y^3_1$) rounds we find, similarly to (11),
$$G(G(g_0, h^2_{n+1}), h^2_{n+1}) = G(g_0, F(h^2_{n+1}, P\, G(g_0, h^2_{n+1}))). \tag{12}$$
By induction the same relation holds for the last first RN state $h^s_{n+s-1}$ of every prediction round. As the input sequences generating these inner states can be selected from a large number of samples, we conclude that the above relation must also be valid for every hidden vector $h$ produced by the first RN from any input sequence used for network training:
$$G(G(g_0, h), h) = G(g_0, F(h, P\, G(g_0, h))). \tag{13}$$
This implies that for the well trained seq2seq predictive network there exists a set of nontrivial relations (13). Given the function $F$ determining the first RN and the linear transformation P of the predictor, the relations (13) restrict and actually define the function $G$. In other words, the RNs are not independent: the functional equation (13) represents a condition on the parameters of the ideal predictive network and can be viewed as a tool for network improvement. This can be done as follows. First the network is trained using the standard backpropagation algorithm, which fixes the parameters of all three components of the network. Then the parameters of any two of the three components (preferably, the predictor and the first RN generating the hidden vectors $h$) are fixed and the parameters of the remaining RN are tuned to satisfy the relation (13) as well as possible; a sketch of such a tuning step is given below.
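The following NumPy sketch illustrates one possible realization of this tuning step under simplifying assumptions: both RNs are plain tanh cells with weights (W1, U1, b1) and (W2, U2, b2), the predictor is a matrix P, the second RN starts from a zero initial state $g_0$, and a crude random search stands in for the (yet unspecified) tuning algorithm.

```python
# A minimal NumPy sketch (not the author's method, which is left unspecified)
# of tuning the second RN to reduce the residual of relation (13).
import numpy as np

def F(h, x, W1, U1, b1):   # first RN cell (assumed tanh form)
    return np.tanh(W1 @ h + U1 @ x + b1)

def G(g, h, W2, U2, b2):   # second RN cell (assumed tanh form)
    return np.tanh(W2 @ g + U2 @ h + b2)

def residual(hs, params):
    """Mean violation of relation (13) over a batch of hidden vectors hs."""
    W1, U1, b1, W2, U2, b2, P = params
    g0 = np.zeros(W2.shape[0])
    err = 0.0
    for h in hs:
        g1 = G(g0, h, W2, U2, b2)                          # G(g0, h)
        lhs = G(g1, h, W2, U2, b2)                         # G(G(g0, h), h)
        rhs = G(g0, F(h, P @ g1, W1, U1, b1), W2, U2, b2)  # right-hand side of (13)
        err += np.linalg.norm(lhs - rhs)
    return err / len(hs)

def tune_second_rn(hs, params, steps=1000, sigma=1e-3, seed=0):
    """Crude random-search tuning of (W2, U2, b2) with F and P kept fixed."""
    rng = np.random.default_rng(seed)
    W1, U1, b1, W2, U2, b2, P = params
    best = residual(hs, params)
    for _ in range(steps):
        cand = [W2 + sigma * rng.standard_normal(W2.shape),
                U2 + sigma * rng.standard_normal(U2.shape),
                b2 + sigma * rng.standard_normal(b2.shape)]
        r = residual(hs, (W1, U1, b1, *cand, P))
        if r < best:
            best, (W2, U2, b2) = r, cand
    return (W1, U1, b1, W2, U2, b2, P), best
```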
4 Memoryless algorithm
The traditional seq2seq network architecture and the corresponding algorithm lead to specific memory requirements that can be easily implemented in silico but, in the author's opinion, are quite difficult to satisfy in natural neural networks.
First, one has to produce $k$ exact copies of the state $h_n$ and feed them one by one into the second RN. This can be done only if the existence time of the inner state $h_n$ is equal to or larger than the interval required to process the $k$ copies of this state through the second RN. Second, each inner state $g_j$ of the second RN should be used as an input to two independent processes: the nonlinear transformation (2) and the linear transformation (3) performed by the predictor P. This can be done by making a copy of $g_j$ before feeding it into the predictor.
On the other hand it is possible to simplify the network architecture and use the memoryless (ML) algorithm introduced recently [2, 3] for the sequence-to-one predictive networks. The essence of the method is that for a well trained RN, whose predictor output $P\, h_t$ approximates the next input element $x_{t+1}$, one can produce a sequence of predicted values using a simple relation for the nonlinear transformation of the single RN:
$$h_{t+1} = F(h_t, P\, h_t), \qquad t \ge n, \tag{14}$$
without constructing the new input sequences required by the EW or MW approach. Notice that in the ML algorithm the computation of each new predicted element $P\, h_t$ naturally leads to the state $h_{t+1}$ used for prediction of the next element, while no memory is required in this recursive process.
The relation (14) allows one to produce a sequence of $k$ predicted values $y_j = P\, h_{n+j-1}$, compare it to the sequence of the ground truth values $x_{n+j}$ and compute the training error (defined below) used in the backpropagation training algorithm.
After the network is trained to predict $k$ values it is easy to extend it to prediction of a sequence of $L \ge k$ elements by reusing (14) recursively, and one can define a prediction error
$$E_L = \frac{1}{L} \sum_{j=1}^{L} \left\| y_j - x_{n+j} \right\|, \tag{15}$$
where $\|\cdot\|$ denotes the Euclidean ($l_2$) norm of a vector. The training error is a particular case of (15) with $L = k$.
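Under the same simplifying assumptions as above (a single tanh RN cell with weights W, U, b and a linear predictor P, all taken from an already trained network), the ML recursion (14) and the error (15) can be sketched as follows.

```python
# A minimal NumPy sketch of the memoryless prediction (14) and the error (15).
# The tanh cell form and the weight names W, U, b, P are illustrative assumptions.
import numpy as np

def ml_predict(h, W, U, b, P, L):
    """Predict L future elements starting from the last inner state h; each
    prediction is fed straight back into the cell, so no input window is stored."""
    preds = []
    for _ in range(L):
        y = P @ h                        # predicted element approximating the next input
        h = np.tanh(W @ h + U @ y + b)   # relation (14): h_{t+1} = F(h_t, P h_t)
        preds.append(y)
    return np.array(preds)

def prediction_error(preds, truth):
    """Mean Euclidean deviation between predicted and ground truth elements, as in (15)."""
    return np.mean(np.linalg.norm(preds - truth, axis=-1))
```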
5 Numerical simulations
It is instructive to compare the two architectures of the seq2seq predictive networks described in the previous Sections. First we consider the traditional algorithm (Section 2) and then turn to the ML approach (Section 4).
5.1 Traditional seq2seq network
As the traditional networks employ two RNs with the numbers of neurons equal to $m_1$ and $m_2$, it is interesting to learn which ratio of $m_1$ to $m_2$, for a fixed total number of neurons $m_1 + m_2$, leads to the smallest error defined by (15). To address this problem we train networks to predict the time series of phase modulated 1D noisy signals: a sine wave and a trapezoid wave, each defined by its period, amplitude, offset and the amplitude of an additive white noise term. The phase modulation is implemented by a replacement of the wave argument governed by the amplitude of the phase modulation and a parameter that defines its periodicity.
The training set construction is performed as follows. For each of the two wave types we create a set of equidistant sample points with fixed values of the offset, the wave amplitude, the noise amplitude, the phase modulation parameters and the trapezoid shape parameters. From each set, pairs of input and output sequences are then generated: the input sequence contains $n$ consecutive values of the signal and the ground truth sequence contains the next $k$ values. For each type of signal 4000 training samples are produced and the two sets are merged into a single training set. The networks are trained with the Adam algorithm for a fixed number of epochs, with a portion of the data used as a validation set.
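For concreteness, one plausible construction of such a training set is sketched below. The waveform definitions, parameter values, window lengths and the trapezoid shape are illustrative assumptions; only the 4000 samples per signal type and the merging of the two sets follow the text.

```python
# A sketch of a phase modulated noisy sine / trapezoid training set.
# All numeric parameter values here are assumptions, not the paper's values.
import numpy as np

rng = np.random.default_rng(1)

def sine_wave(t, T=20.0, A=1.0, c=0.0, a_noise=0.05, A_p=0.5, T_p=200.0):
    """Phase modulated noisy sine wave."""
    phase = 2 * np.pi * t / T + A_p * np.sin(2 * np.pi * t / T_p)
    return c + A * np.sin(phase) + a_noise * rng.standard_normal(t.shape)

def trapezoid_wave(t, T=20.0, A=1.0, c=0.0, a_noise=0.05, A_p=0.5, T_p=200.0, slope=3.0):
    """Phase modulated noisy trapezoid wave built from a clipped triangle wave."""
    phase = 2 * np.pi * t / T + A_p * np.sin(2 * np.pi * t / T_p)
    triangle = (2 / np.pi) * np.arcsin(np.sin(phase))   # triangle wave in [-1, 1]
    return c + A * np.clip(slope * triangle, -1.0, 1.0) + a_noise * rng.standard_normal(t.shape)

def make_samples(wave, n=64, k=8, n_samples=4000, dt=1.0):
    """Slice one long realization into (input, ground truth) pairs of lengths n and k."""
    t = np.arange(n_samples + n + k) * dt
    series = wave(t)
    X = np.stack([series[i:i + n] for i in range(n_samples)])[..., None]
    Y = np.stack([series[i + n:i + n + k] for i in range(n_samples)])[..., None]
    return X, Y

X_sine, Y_sine = make_samples(sine_wave)
X_trap, Y_trap = make_samples(trapezoid_wave)
X_train = np.concatenate([X_sine, X_trap])   # merged training set, as in the text
Y_train = np.concatenate([Y_sine, Y_trap])
```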
The analysis of the simulation results is presented in Fig. 1. First we observe that the sine wave prediction quality (Fig. 1a) does not depend significantly on the total number of neurons. On the other hand, for the trapezoid wave (Fig. 1b) both the ratio and the total neuron number influence the training and prediction errors. We observe in this case that when the total number of neurons is small the prediction quality improves for larger ratios (solid curves). These trends are reproduced when one recursively repeats the prediction algorithm (dashed curves). When the total number of neurons is large the error grows on average with increasing ratio, with local minima and a maximum at intermediate ratios. Finally, in the intermediate case the minimal error is observed at moderate ratios.
[Figure 1: training and prediction errors of the traditional network as functions of the RN neuron ratio for different total neuron numbers; (a) sine wave, (b) trapezoid wave.]
Another important trend (Fig. 2) is that the error dependence on the number $m_1$ of neurons in the first RN is on average the same (with some local deviations) for different total numbers of neurons in the predictive network. We observe that for the sine wave the error does not change significantly for small $m_1$ and starts to increase for larger $m_1$. In the case of the trapezoid wave the error first decreases with growing $m_1$ and then starts to increase, although this behavior is nonmonotonic.
[Figure 2: error dependence on the number of neurons in the first RN for different total neuron numbers; (a) sine wave, (b) trapezoid wave.]
5.2 Memoryless seq2seq network
To compare the prediction quality of the traditional and the memoryless networks we construct a predictive network with a single basic RN and train it on the same data set that was used for the traditional one. We observe that the error estimates for the ML networks are consistently lower than those for the traditional ones (Fig. 3). At the same time the trends for the noisy sine and trapezoid waves are opposite: for the sine wave the ML algorithm reports smaller error at medium and large ratios (Fig. 3a), while for the trapezoid signal its error becomes significantly lower at small ratios (Fig. 3b).
[Figure 3: comparison of the errors of the traditional and ML networks; (a) sine wave, (b) trapezoid wave.]
We illustrate these observations in Fig. 4, showing the input sequence, its ground truth continuation and the predicted curves obtained by employing both algorithms.
[Figure 4: input sequence, ground truth continuation and curves predicted by the traditional and ML algorithms for the sine and trapezoid waves.]
We confirm that for large ratios the ML network predicts the sine wave better than the traditional one. On the other hand, the ML network predicts the trapezoid wave much better than the traditional one for smaller ratios, while for large ratios the predicted curves effectively coincide.
6 Discussion
In this manuscript the author considers the traditional architecture and training algorithm of a seq2seq predictive network that includes two RNs and a predictor. It appears that for this network the parameters of the second RN depend on those defining the first RN and the predictor. This dependence has the form of a functional vector equation satisfied for a very large number of vector arguments $h$. These vectors depend both on the parameters of the first RN and on the sample input sequence, i.e., on the time series to be predicted.
It is important to underline that the established functional equation corresponds to the ideally trained predictive network and cannot be satisfied exactly for all arguments. At the same time it can serve as a tool to improve the predictive power of the network in the following manner. First the traditional network is trained using standard algorithms. Then, with the parameters of the first RN and the predictor fixed, one tunes the parameters of the second RN using the arguments $h$ generated by feeding the input sequences from the training set into the first RN. The choice of the tuning algorithm will be discussed elsewhere.
The traditional seq2seq algorithm, which requires memory to preserve the replicated inner state, might be difficult to implement in a neuroscience context. To overcome this difficulty one can use an alternative memoryless (ML) algorithm that extends the algorithm proposed recently in [2, 3]. The network implementing this approach employs only a single RN and a predictor and is shown to successfully predict phase modulated noisy periodic signals. The comparison to the traditional seq2seq networks demonstrates that the ML network has lower error, i.e., higher prediction quality.
Acknowledgements
The author wishes to thank Jay Unruh for fruitful discussions.
References
- [1] I. Sutskever, O. Vinyals, Q.V. Le, Sequence to sequence learning with neural networks, 2014, arXiv:1409.3215v3 [cs.CL].
- [2] B. Rubinstein, A fast noise filtering algorithm for time series prediction using recurrent neural networks, 2020, arXiv:2007.08063v3 [cs.LG].
- [3] B. Rubinstein, A fast memoryless predictive algorithm in a chain of recurrent neural networks, 2020, arXiv:2010.02115v1 [math.DS].