Sig-Splines: universal approximation and convex calibration of time series generative models
Abstract.
We propose a novel generative model for multivariate discrete-time time series data. Drawing inspiration from the construction of neural spline flows, our algorithm incorporates linear transformations and the signature transform as a seamless substitution for traditional neural networks. This approach enables us not only to retain the universality property inherent in neural networks but also to introduce convexity in the model’s parameters.
1. Introduction
Constructing and approximating generative models that fit multidimensional time series data with high realism is a fundamental problem with applications in probabilistic prediction and forecasting [Rasul et al., 2020] and in synthetic data simulation in areas such as finance [Arribas et al., 2020, Buehler et al., 2020, 2022, Ni et al., 2020, 2021, Wiese et al., 2021, 2020]. The sequential nature of the data presents some unique challenges. It is essential that any generative model captures the temporal dynamics of the time series process; that is, at each time step it is not enough to simply reflect the marginal distribution, but instead we need to model the conditional density given the filtration, which represents all information available at the current time. Therefore, the model requires an efficient encoding of the history of the path as the conditioning variable.
In this paper, we present a novel approach to time series generative modelling that combines the use of path signatures with the framework of normalizing flow density estimation. Normalizing flows [Kobyzev et al., 2020, Papamakarios et al., 2019] are a class of probability density estimation models that express the data as the output of an invertible, differentiable transformation applied to samples from some base noise distribution.
A special instance of such models is the neural spline flow [Durkan et al., 2019a], which constructs a triangular chain of approximate conditional CDFs of the components of the data vector: neural networks output the widths and heights of a sequence of knots that are used to construct a monotonically increasing CDF for each dimension through spline interpolation, which can be linear, rational quadratic or cubic [Durkan et al., 2019a, b]. Such models may be estimated by finding neural network parameters that minimize the forward Kullback-Leibler divergence between the target density and the flow-based model density. Thus, training such models may face the known challenges of neural network training. In particular, learning the conditional density for time series data would in general require some recurrent structure in the neural network to account for the path history, unless specific Markovian assumptions were made.
To overcome these challenges, we replace the neural network in the neural spline flow with an alternative function approximator based on the signature transform. Central to rough path theory [Friz and Hairer, 2020, Lyons, 2014], the signature gives an efficient and parsimonious encoding of the information in the path history (i.e. the filtration). This encoding provides a feature map which exhibits a universal approximation property: any continuous function of the path can be approximated arbitrarily well by a linear combination of signature features. This makes signature-based methods highly computationally efficient when paired with convex objective functions.
By incorporating the signature transform, our algorithm, termed the signature spline flow, possesses two significant properties:
• Universality: We leverage the universal approximation theorem of path signatures, allowing our model to approximate the conditional density of any time series model with arbitrary precision.
• Convexity: By replacing the neural network with the signature transform, we demonstrate that our optimization problem becomes convex in its parameters. This means that gradient descent or convex optimization methods lead to a unique global minimum (except for potential linearly dependent terms within the signature).
2. Related work
The seminal paper of Goodfellow et al. on Generative Adversarial Networks (GANs) [Goodfellow et al., 2014] has led to a rapid expansion of the literature on generative modelling. While the algorithm may seem simple at first, different optimisation techniques have led to convergence towards equilibria that give convincing results for many modalities of data, including images [Brock et al., 2019], (financial) time series data [Wiese et al., 2020, Yoon et al., 2019], music [Engel et al., 2019] and text-to-image synthesis [Reed et al., 2016].
Normalizing flows are an alternative approach to constructing a generative model by creating expressive bijections that allow for tractable conditional densities. Naming the most impactful papers in roughly chronological order: NICE [Dinh et al., 2014] proposed additive coupling layers, and real-valued non-volume preserving (real NVP) transformations [Dinh et al., 2016] presented the chaining of affine coupling transforms for constructing expressive bijections, which were later shown to be universal [Teshima et al., 2020]. Subsequently, the idea of constructing a triangular map and leveraging the theorem of Bogachev et al. [Bogachev et al., 2005] gained popularity and was coined the autoregressive flow by various authors. Works presenting algorithms that leverage autoregressive flows include Papamakarios et al. [Papamakarios et al., 2017], Wehenkel and Louppe [Wehenkel and Louppe, 2019] and Durkan et al. [Durkan et al., 2019a, b]. While the invertibility property of normalizing flows may at first sight seem limiting, as a flow cannot construct densities on a manifold, this problem was addressed with Riemannian manifold flows [Gemici et al., 2016], where an injective decoder mapping is constructed to the high-dimensional space. An efficient algorithm to guarantee injectivity was introduced in [Brehmer and Cranmer, 2020].
Our paper centers around constructing a conditional parametrized generative density for time series data, where the condition is specified by the current filtration. Previous studies have explored the estimation and construction of such densities using real data.
Ni et al. [Ni et al., 2020] propose a conditional signature-based Wasserstein metric and estimate it through signatures and linear regression. The combination of GANs and autoencoders is utilized by [Yoon et al., 2019] to learn the conditional dynamics. Wiese et al. [Wiese and Murray, 2022, Wiese et al., 2021] employ neural spline flows as a one-point estimator for estimating the conditional law of observed spot and volatility processes.
Other approaches focus on estimating the unconditional law of stochastic processes using Sig-SDEs [Arribas et al., 2020], neural SDEs [Gierjatowicz et al., 2020], the Sig-Wasserstein metric [Ni et al., 2021], and temporal convolutional networks [Wiese et al., 2020]. Among these, the work of Dyer et al. [Dyer et al., 2021] is most closely related to ours, utilizing deep signature transforms [Kidger et al., 2019] to minimize the KL-divergence.
3. Signatures
When working with time series data in real-world applications we typically have a dataset which may have been sampled at discrete and potentially irregular intervals. Whilst many models take the view that the underlying process is a discrete-time process, the perspective in rough path theory is to model the data as discrete observations from an unknown continuous-time process. The trajectories of these continuous-time processes define paths in some path space. Rough path theory, and signatures in particular, give a powerful and computationally efficient way of working with data in the path space.
First introduced by Chen [Chen, 1957, 2001], the signature has been widely used in finance and machine learning. We provide here only a very brief introduction to the mathematical framework we will be using to define the signature, and refer the reader to e.g. [Chevyrev and Kormilitzin, 2016, Lyons et al., 2007] for a more thorough overview.
The signature of a vector-space-valued path takes values in the tensor algebra of that space. In this section, we assume for simplicity that the underlying space is the real-valued $d$-dimensional Euclidean vector space, i.e. $\mathbb{R}^d$.
We may now define the signature of a $d$-dimensional path as a sequence of iterated integrals, which lives in the tensor algebra.
Definition 0 (Signature of a path).
Let $X\colon[0,T]\to\mathbb{R}^d$ be continuous. The signature of $X$ evaluated at the word $w=(i_1,\dots,i_k)$, with letters $i_1,\dots,i_k\in\{1,\dots,d\}$, is defined as the iterated integral
$$S^{w}(X)=\int_{0<t_1<\dots<t_k<T}\mathrm{d}X^{i_1}_{t_1}\cdots\mathrm{d}X^{i_k}_{t_k}.$$
Furthermore, the signature of $X$ is defined as the collection of iterated integrals
$$S(X)=\big(S^{w}(X)\big)_{w},$$
where $w$ ranges over all finite words with letters in $\{1,\dots,d\}$.
Remark 1 (Signature of a sequence).
Let $x=(x_0,\dots,x_N)$ be a sequence in $\mathbb{R}^d$. Let $X$ be the continuous path that agrees with the sequence at the observation times and is linear on the intervals in between. For convenience, we write $S^{w}(x) := S^{w}(X)$ for the signature of the linearly embedded sequence evaluated at any word $w$.
The signature is a projection from the path space into a sequence of statistics of the path. Therefore, it can informally be thought of as playing the role of a basis on the path space. Each element in the signature encodes some information about the path, and some have clear interpretations, particularly in the context of financial time series: the first term represents the increment of the path over the time interval, often referred to as the drift of the process. Different path transformations, such as the lead-lag transformation [Chevyrev and Kormilitzin, 2016, page 20], can be applied before computing the signature to obtain statistics such as the realized volatility.
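For concreteness, the following minimal numpy sketch computes the truncated signature of a discrete sequence via its piecewise-linear embedding, combining the closed-form signature of each linear segment through Chen's identity. This is an illustration only, not the implementation used in our experiments; the helper names (e.g. `sequence_signature`) are introduced purely for exposition, and optimised implementations are available in dedicated packages such as iisignature or signatory.

```python
import numpy as np

def segment_signature(delta, depth):
    # Signature of a straight-line segment with increment `delta`:
    # level k equals delta^{(tensor k)} / k!, stored as a flat array.
    levels = [np.array([1.0])]                    # level 0 is the scalar 1
    for k in range(1, depth + 1):
        levels.append(np.outer(levels[-1], delta).ravel() / k)
    return levels

def chen_product(sig_a, sig_b, dim, depth):
    # Chen's identity: the signature of a concatenation of two paths is
    # the truncated tensor-algebra product of their signatures.
    out = []
    for k in range(depth + 1):
        level = np.zeros(dim ** k)
        for i in range(k + 1):
            level += np.outer(sig_a[i], sig_b[k - i]).ravel()
        out.append(level)
    return out

def sequence_signature(x, depth):
    # Truncated signature of the piecewise-linear embedding of a sequence
    # x of shape (length, dim), as in Remark 1; the constant level-0 term
    # is dropped from the returned feature vector.
    dim = x.shape[1]
    sig = [np.array([1.0])] + [np.zeros(dim ** k) for k in range(1, depth + 1)]
    for t in range(len(x) - 1):
        sig = chen_product(sig, segment_signature(x[t + 1] - x[t], depth), dim, depth)
    return np.concatenate(sig[1:])
```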
To state the universality property of signatures we have to introduce the concept of time augmentation. Consider the space of $\mathbb{R}^d$-valued paths with bounded variation, and let each path be anchored at the origin and augmented with time as an additional coordinate. We call the resulting set the space of time-augmented paths starting at zero. We have the following representation property of signatures.
Proposition 0 (Universality of signatures).
Let a real-valued continuous function be defined on continuous piecewise smooth time-augmented paths, let a compact set of such paths be given, and fix an accuracy. Then there exists an order of truncation and a linear functional on the truncated signature such that, for all paths in the compact set, the linear functional applied to the signature approximates the function to within the chosen accuracy.
This result shows that any continuous function on a compact set of paths can be approximated arbitrarily well simply by a linear combination of terms of the signature. For a proof, see [Király and Oberhauser, 2019]. It can be thought of as a universal approximation property akin to the oft-cited one for neural networks [Cybenko, 1989]. The primary difference is that neural networks typically require a very large number of parameters which must be optimized through gradient-based methods applied to a loss function that is non-convex in the network parameters, whereas the approximation via signatures amounts to a simple linear regression on the signature features. Hence, once the signature is calculated, function approximation becomes extremely computationally efficient and can be done via second-order methods. Furthermore, as we will see later, we may apply further convex transformations of the signature and retain an objective that is convex in the parameters.
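As a toy illustration of this property (a hypothetical example, not one of the experiments reported later), a continuous functional of a path can be fitted by ordinary least squares on truncated signature features computed with the `sequence_signature` helper sketched above; for a faithful use of the proposition, the paths would additionally be basepointed and time-augmented as described later, which we omit here for brevity.

```python
rng = np.random.default_rng(0)
paths = rng.standard_normal((500, 20, 2)).cumsum(axis=1)   # 500 toy 2-dimensional paths
target = paths[:, :, 0].max(axis=1)                        # a continuous functional of the path

feats = np.stack([sequence_signature(p, depth=3) for p in paths])
X = np.hstack([np.ones((len(feats), 1)), feats])           # constant plus signature terms
coef, *_ = np.linalg.lstsq(X, target, rcond=None)          # linear functional of the signature
```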
4. Linear neural spline flows
In this section, we revisit neural spline flows as prerequisites. Specifically, we explore their application to multivariate data in subsection 4.1 and further delve into their relevance in the context of multivariate time series data in subsection 4.2.
4.1. Multivariate data
Without loss of generality, let the data be a random variable taking values in the unit cube. (Any random variable can be mapped to the unit cube by applying the probability integral transform; moreover, a bijection applied to make a random variable unit-cube-valued does not impact the KL-divergence objective that is later derived.) Furthermore, denote the cumulative distribution function (CDF) of the first coordinate and, for each subsequent coordinate, the conditional CDF of that coordinate given the values of the preceding coordinates. This set of conditional CDFs completely defines the joint distribution of the random variable. For completeness we restate the inverse sampling theorem in multiple dimensions:
Theorem 1 (Inverse sampling theorem).
Let a random variable be uniformly distributed on the unit cube. Define, coordinate by coordinate, new random variables by applying the generalised inverses of the conditional CDFs above to the uniform coordinates, conditioning on the previously constructed coordinates.
Then the constructed random variable and the original random variable are equal in distribution.
Proof.
Various methods exist to approximate a set of conditional CDFs. In this paper, we are interested in a spline-based approximation
parametrised by model parameters and the conditioning vector. Crucially, the constructed spline has to satisfy the properties of a CDF: it has to be (1) monotonically increasing and (2) span from zero to one. In order to satisfy both requirements the following construction is used.
Let the number of knots used to construct the spline be fixed. Furthermore, denote the softmax transform, which maps a real vector to a vector of positive entries summing to one, and let a potentially non-linear function of the conditioning vector, which we shall coin the feature map, be given. We call the composition of the softmax transform with the feature map the increment function.
The parametrised spline with linear interpolation is then defined by placing the increments at the knots and interpolating their cumulative sums linearly between consecutive knots.
Note that, since the softmax transform yields positive increments summing to one, the constructed spline is monotonically increasing and spans from zero to one, and hence satisfies the properties of a CDF.
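A minimal sketch of this construction, assuming equally spaced knots on the unit interval (the knot widths could also be parametrised, which we omit here), is given below; the inverse is included since it is what the inverse sampling theorem uses to generate samples.

```python
import numpy as np

def linear_spline_cdf(u, logits):
    # Piecewise-linear CDF on [0, 1] built from K equally spaced knots: the
    # softmax of `logits` gives the probability mass of each bin, the knot
    # heights are the cumulative sums, hence F is monotone with F(0)=0, F(1)=1.
    K = len(logits)
    p = np.exp(logits - logits.max()); p /= p.sum()          # softmax -> bin increments
    heights = np.concatenate([[0.0], np.cumsum(p)])          # knot heights
    knots = np.linspace(0.0, 1.0, K + 1)
    k = np.clip(np.searchsorted(knots, u, side="right") - 1, 0, K - 1)
    return heights[k] + (u - knots[k]) * K * p[k]            # linear interpolation in bin k

def linear_spline_icdf(v, logits):
    # Inverse of the spline CDF; with v ~ Uniform(0, 1) this produces a sample
    # from the corresponding distribution (inverse sampling theorem).
    K = len(logits)
    p = np.exp(logits - logits.max()); p /= p.sum()
    heights = np.concatenate([[0.0], np.cumsum(p)])
    knots = np.linspace(0.0, 1.0, K + 1)
    k = np.clip(np.searchsorted(heights, v, side="right") - 1, 0, K - 1)
    return knots[k] + (v - heights[k]) / (K * p[k])
```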
Using the above definition of a linear spline CDF we can construct a linear spline flow which approximates the set of CDFs :
Definition 0 (Spline flow with linear interpolation).
For each coordinate, let a linear spline CDF as above be given. Then we call the function that applies these conditional spline CDFs coordinate by coordinate, each conditioned on the values of the preceding coordinates, a spline flow with linear interpolation.
Various options for the feature map exist. The most widespread choice in the machine learning literature is a neural network, resulting in a linear neural spline flow.
Definition 0 (Linear neural spline flow).
For each coordinate, let the feature map be given by a neural network. We call a spline flow taking neural networks as its feature maps a linear neural spline flow.
Remark 2 (Interpolation schemes).
Other interpolation schemes that ensure the constructed CDF is monotonically increasing exist (see for example [Durkan et al., 2019a]). For the sake of this paper, we will restrict ourselves to linear interpolation.
Utilizing normalizing flows, such as linear spline CDFs, offers the advantage of allowing for analytical evaluation of the likelihood function. The model can be expressed explicitly as an autoregressive flow [Papamakarios et al., 2019] by employing the chain rule of probability:
(1)
The density can be further expanded by using the definition of a parametrised linear CDF
(2)
(3)
where the indicator function of each bin equals one if the sample falls into that bin and zero otherwise. Due to the tractability of the conditional density, the calibration of the parameters can be performed by minimizing the Kullback-Leibler (KL) divergence. The KL-divergence is defined as the expectation, under the true density, of the difference of the log-densities
and for a neural spline flow with linear interpolation enjoys the explicit representation:
where the remaining term is a constant that is independent of the parameters.
In practice, the true density is generally unknown and the expectation in the KL-divergence derived above needs to be estimated via Monte Carlo (MC) from a finite sample. In this case, the MC approximation is given as
where we drop the constant term. The calibration problem is then defined as the minimization of the loss function
Updates of the parameters are performed via batch or stochastic gradient descent [Ruder, 2016], starting from some initial parameters,
where the step size, or learning rate, governs the magnitude of each update and the gradients of the neural network’s parameters are computed via the backpropagation algorithm [Rumelhart et al., 1986].
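For the linear spline with equally spaced bins, the Monte Carlo objective and its gradient in the (unconstrained) logits take a particularly simple form. The following sketch, an illustration under the equal-width-bin assumption used above rather than the exact training code, computes both for a single conditional density.

```python
def nll_and_grad(x, logits):
    # Monte Carlo estimate of the KL-divergence up to a constant: the negative
    # log-likelihood of samples x in [0, 1) under the piecewise-constant density
    # induced by the linear spline CDF, together with its gradient in the logits.
    K = len(logits)
    p = np.exp(logits - logits.max()); p /= p.sum()
    bins = np.minimum((x * K).astype(int), K - 1)      # bin index of each sample
    nll = -np.mean(np.log(K * p[bins]))                # the density equals K * p_k on bin k
    onehot = np.zeros((len(x), K)); onehot[np.arange(len(x)), bins] = 1.0
    grad = np.mean(p[None, :] - onehot, axis=0)        # d nll / d logits
    return nll, grad

# one full-batch gradient descent update with learning rate eta:
#   logits = logits - eta * nll_and_grad(samples, logits)[1]
```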
4.2. Discrete-time stochastic processes
In the case of time series data one is interested in approximating the transition dynamics, that is, the conditional density of the next observation given past states of the time series. Let a filtered probability space and a time series observed at discrete timestamps be given; for ease of notation we assume the timestamps to be regular, although they need not be. As in the previous section, we assume without loss of generality that the time series takes values in the unit cube. Assume further that the process is generative in the sense that the filtration is generated by the process itself.
Our objective is to approximate a conditional model density that minimizes an adapted version of the KL-divergence to the true conditional density
To accommodate the conditional information of past samples, i.e. the filtration, we need to generalise neural spline flows by including the condition, i.e. the past states of the time series.
Definition 0 (Discrete-time linear spline flow).
Fix a length of the time series and, for each time step and coordinate, let a non-linear feature map of the past states be given, together with the linear spline CDFs taking it as the feature map. We call the function defined as
(4)
a discrete-time linear spline flow.
Following the definition of a neural spline flow, the discrete-time linear neural spline flow is simply defined as a discrete-time linear spline flow where the feature maps are neural networks.
Remark 3 (Markovian dynamics).
If the time series is Markovian with lagged memory of a fixed order, a single conditional neural spline flow can be constructed to approximate the conditional law at any time. The objective then reduces to the “simpler” adapted KL-divergence
5. Signature spline flows
In this section, we introduce signature spline flows and adopt the notation from subsection 4.2. Before we proceed, we define a couple of helper augmentations which will be useful in defining the sig-spline.
Definition 0 (Mask augmentation).
The function defined for as
where is defined as
is called mask augmentation.
Thus, the mask augmentation removes any information beyond a given coordinate of a given term of the sequence. This is useful for defining a conditional density approximator for discrete-time data.
To obtain the universality property of signatures, the sequence has to start at zero and needs to be time-augmented. Both of the following definitions will be useful:
Definition 0 (Basepoint augmentation).
We call the function that prepends the origin to a sequence the basepoint augmentation.
Definition 0 (Time augmentation).
We call the function that augments each term of a sequence with its time stamp as an additional coordinate the time augmentation.
Last, we consider the composition of the basepoint, the time and the mask augmentation, where the mask augmentation is applied at a given coordinate.
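The following sketch illustrates one plausible implementation of these augmentations for a sequence stored as an array of shape (length, dim). The exact form of the mask augmentation is an assumption here, since only its intent (removing information beyond a given coordinate of a given term) is described above; the order of composition is likewise assumed.

```python
import numpy as np

def mask_augmentation(x, t, j):
    # Hypothetical mask: zero out everything beyond coordinate j of term t,
    # i.e. the later coordinates of term t and all subsequent terms.
    out = x.copy()
    out[t, j:] = 0.0
    out[t + 1:, :] = 0.0
    return out

def basepoint_augmentation(x):
    # Prepend the origin so that every embedded path starts at zero.
    return np.vstack([np.zeros((1, x.shape[1])), x])

def time_augmentation(x):
    # Append normalised time as an additional coordinate.
    time = np.linspace(0.0, 1.0, len(x))[:, None]
    return np.hstack([x, time])

def augment(x, t, j):
    # Composition of mask, basepoint and time augmentation.
    return time_augmentation(basepoint_augmentation(mask_augmentation(x, t, j)))
```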
The signature spline flow is defined in the same spirit as the neural spline flow; except that the neural network-based CDF approximator is replaced by a signature-based one and augmentations are applied to the raw time series.
Definition 0 (Signature spline flow).
Let a discrete-time series of a given length, an order of truncation and a parameter space of linear functionals be fixed. Furthermore, define the feature map as a linear functional, parametrised by the model parameters, applied to the truncated signature of the augmented series.
We call a spline flow using this signature-based feature map a signature spline flow.
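Schematically, and reusing the helpers sketched above (the weight matrix `W`, the step index `t` and the coordinate index `j` are introduced here purely for illustration), the signature-based feature map and the resulting conditional spline CDF could be written as follows.

```python
def sig_spline_logits(x, t, j, W, depth):
    # Sig-spline feature map: the logits fed to the softmax are a linear
    # functional (rows of W, shape (K, n_features)) of the truncated
    # signature of the augmented history.
    feats = sequence_signature(augment(x, t, j), depth)
    return W @ feats

# the conditional CDF of coordinate j at step t is then, schematically,
#   linear_spline_cdf(u, sig_spline_logits(x, t, j, W, depth))
```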
As before, our objective is to minimize an adapted version of the KL-divergence of our model density with respect to the true density.
For a finite set of realizations we obtain the Monte Carlo approximation
(5)
where we collect all linear functionals into a single set of parameters.
The following theorem states that the calibration problem is convex in the cost function's parameters. The proof can be found in Appendix A.
Theorem 5 (Convexity).
The objective function is convex.
When working with a dataset of limited size, optimizing the cost function may lead to overfitting of the density's parameters. The following corollary shows that regularizing the model's parameters using a convex penalty function maintains the convexity of the calibration problem.
Corollary 0.
Let a convex penalty function of the parameters be given. Then the regularized objective, obtained by adding the penalty to the cost function, is convex.
Note that the class of admissible penalties includes the standard L1 and L2 penalty functions, so we may retain the convexity of our objective through these standard choices. The regularized objective can be particularly helpful to avoid overfitting in the context of time series generation, where we may only have a single realization of the time series.
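A sketch of the resulting convex calibration problem for a single conditional density, again assuming equal-width bins and identifying the linear functionals with a weight matrix, adds a ridge penalty to the multinomial negative log-likelihood; since the logits are linear in the weights, the objective can be handed to any convex or second-order solver.

```python
def regularized_nll(W, feats, bins, lam):
    # Convex objective: multinomial negative log-likelihood of the observed
    # bins under the softmax of linear signature features (rows of `feats`),
    # plus an L2 (ridge) penalty on the weight matrix W of shape (K, n_features).
    logits = feats @ W.T                                   # (N, K), linear in W
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -np.mean(logp[np.arange(len(bins)), bins])
    return nll + lam * np.sum(W ** 2)
```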
We now present our main theoretical result, that the sig-spline construction is able to approximate the conditional transition density of any time series arbitrarily well, given a high enough signature truncation order, and sufficiently many knots in the spline. Again the proof can be found in Appendix A.
Theorem 7 (Universality of Sig-Splines).
Let a unit-cube-valued Markov process with lagged memory be given and fix a desired accuracy. Then there exist an order of truncation, a number of bins and a set of linear functionals such that, for all paths of the given length, the sig-spline density is within the desired accuracy of the true conditional density
with respect to the norm.
6. Numerical results
This section delves into the evaluation of sig- and neural splines, examining their performance through a series of controlled data and real-world data experiments in subsections 6.2 to 6.4. Subsection 6.2 focuses on assessing generative performance using a VAR process with known parameters. In contrast, subsections 6.3 and 6.4 evaluate the performance of these splines on realized volatilities of multiple equity indices and spot prices, respectively.
6.1. Experiment outline
6.1.1. Training
The neural spline flow serves as the benchmark model to surpass in all experiments. Each neural spline flow consists of three hidden layers, each with 64 hidden dimensions which are used to construct the linear spline CDF. On the other hand, the sig-spline models are calibrated for orders of truncation ranging from 1 to 4, allowing us to observe how the performance of the sig-spline model varies with higher orders. Throughout all experiments, the models are calibrated using 2-dimensional time series data. The number of parameters used in each sig-spline model, assuming a 2-dimensional path, is reported in Table 1.
Before training, the dataset is divided into a train and test set. To account for the randomness in the train and test split and its potential impact on model performance, the models are calibrated using 10 different seeds. Performance metrics are computed as averages across all 10 calibration runs.
Within each calibration, early stopping [Prechelt, 2002] is applied to each individual conditional density estimator to mitigate overfitting on the train set. Early stopping halts the fitting of a conditional density estimator when the test set error increases consecutively for a certain number of times. In these experiments, the patience (the number of consecutive test set errors allowed before the fitting process stops) is set to 32. All models are trained using full-batch gradient descent.
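A minimal sketch of this training loop is shown below; it implements one common variant of the early-stopping rule, and the `grad_step` and `test_loss` callables are hypothetical placeholders for the model-specific update and evaluation.

```python
import numpy as np

def train_with_early_stopping(params, grad_step, test_loss, patience=32, max_iters=10_000):
    # Full-batch gradient descent with early stopping: training halts once the
    # test-set loss has failed to improve `patience` times in a row.
    best_loss, best_params, strikes = np.inf, params, 0
    for _ in range(max_iters):
        params = grad_step(params)        # one full-batch gradient update
        loss = test_loss(params)
        if loss < best_loss:
            best_loss, best_params, strikes = loss, params, 0
        else:
            strikes += 1
            if strikes >= patience:
                break
    return best_params
```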
Table 1. Number of parameters of the sig-spline models for a 2-dimensional path.

Order of truncation | 1 | 2 | 3 | 4
---|---|---|---|---
Parameters | 512 | 1664 | 5120 | 15488
6.1.2. Evaluation
After training, all calibrated models are evaluated using a set of test metrics. This involves sampling a batch of time series of length 4 from the calibrated model, computing standard statistics from the generated dataset, and comparing them with the empirical statistics of the real dataset. This comparison is done by calculating the difference between the statistics and applying the norm. Specifically, four statistics are compared for both the return process and the level process: the first two lags of the autocorrelation function (ACF), the skewness, the kurtosis, and the cross-correlation. Additionally, for the multi-asset spot return dataset, the ACF of the absolute returns is compared as a lower discrepancy would indicate that the generative model is capable of capturing volatility clustering.
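As an illustration of how such a discrepancy can be computed for a one-dimensional series, consider the sketch below; the specific norm used for the tables is not restated here, and the Euclidean norm serves only as a stand-in.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def acf(x, lags=2):
    # First `lags` autocorrelations of a one-dimensional series.
    x = x - x.mean()
    return np.array([np.corrcoef(x[:-k], x[k:])[0, 1] for k in range(1, lags + 1)])

def statistic_gap(real, fake):
    # Discrepancy between statistics of a real and a generated series; the
    # Euclidean norm is used here as a stand-in for the norm in the paper.
    stats = lambda y: np.concatenate([acf(y), [skew(y), kurtosis(y)]])
    return np.linalg.norm(stats(real) - stats(fake))
```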
All numerical results can be found in Appendix B. Each table presents the performance metrics of the models, with the best-performing metric highlighted in bold font. It is important to note that the designation of the best-performing model is merely indicative, as in several cases, the presence of error bounds prevents the identification of a clear best-performing model.
6.2. Vector autoregression
Assume a -dimensional process taking the form
where are matrices and is an adapted normally distributed -dimensional random variable. The process is assumed to be latent in the sense that it is not observed. The observed process is assumed to be -dimensional and defined as
where is an unknown non-linear function.
In the controlled experiment, fixed latent and observed dimensions are assumed. The decoder is represented by a neural network with two hidden layers, initialized randomly using the He initialization scheme [He et al., 2015] and using parametric ReLUs as activation functions, and the VAR dynamics are governed by the autoregressive matrices
and the covariance matrix
A total of 4096 lags of the latent process are sampled to create a simulated empirical dataset. These lags serve as the basis for generating the observed empirical time series using a randomly sampled decoder (refer to Figure 1). Subsequently, an autoencoder is trained to compress the generated time series back to its original latent dimension. This compression step is employed to enhance scalability and demonstrate that autoencoders can produce a lower-dimensional representation of the time series.
Figure 1. The observed time series generated via the decoder (top) and the compressed path (bottom).
Utilizing the calibrated autoencoder, the encoder component is applied to obtain the compressed time series. This compressed representation is then used to calibrate the conditional density approximators, enabling the modeling of the conditional density of the original time series based on the compressed data obtained from the autoencoder.
Tables B.1.1 - B.1.4 report the performance metrics for the compressed level and return process, and the observed level and return process respectively. Upon examining all the tables, it becomes evident that the performance of sig-spline models consistently improves with higher truncation orders. This holds particularly true for the autocorrelation function, revealing that a truncation order below 3 fails to adequately capture the proper dependence of the process.
When comparing all the tables, it becomes apparent that the neural spline model holds a slight advantage over the sig-spline model truncated at order 4. However, when considering the performance on the compressed process, the two models demonstrate very comparable performance.
6.3. Realized volatilities
The subsequent case study explores the effectiveness of the sig-spline model in approximating the dynamics of a real-world multivariate volatilities dataset derived from the MAN AHL Realized Library. To conduct the numerical evaluation, the med-realized volatilities of Standard & Poor’s 500 (SPX), Dow Jones Industrial Average (DJI), Nikkei 225 (N225), Euro STOXX 50 (STOXX50E), and Amsterdam Exchange (AEX) are extracted from the dataframe. These volatilities form a 5-dimensional time series spanning from January 1st, 2005, to December 31st, 2019. Figure 2 provides a visual representation of the corresponding historical volatilities.
An observation from Figure 2 reveals a strong correlation between both the levels and their returns (refer to Figure 3 for the corresponding cross-correlation matrices). Due to these high cross-correlations, an autoencoder is calibrated to compress the 5-dimensional time series into a 2-dimensional representation. This 2-dimensional time series, depicted in Figure 2, serves as the basis for calibrating the neural and sig-spline conditional density estimators.
Figure 2. Historical med-realized volatilities of SPX, DJI, N225, STOXX50E and AEX, together with their compressed 2-dimensional representation.
Figure 3. Cross-correlation matrices of the volatility levels and their returns.
The performance metrics for the compressed 2-dimensional return and level process are presented in Table B.2.1 and Table B.2.2, while Table B.2.3 and B.2.4 display the performance metrics for the original 5-dimensional process. Analysis of Table B.2.1 and B.2.2 indicates that, in most metrics, the neural spline demonstrates superior performance compared to the sig spline for the compressed process. This suggests that the neural network exhibits better ability to capture complex dependence structures in more intricate real-world datasets. However, it is interesting to note that this performance advantage of the neural spline does not directly translate to an advantage in the observed process, as observed in Table B.2.3 and B.2.4. In fact, the signature spline truncated at order 4 frequently exhibited better performance compared to the neural spline flow.
It is noteworthy that Table B.2.3 and Table B.2.4 highlight the kurtosis metrics, which stand out prominently. This observation arises from the high kurtosis exhibited by the med-realized volatilities, indicating that both the neural and sig-spline models struggle to capture the heavy-tailed nature of the data.
6.4. Multi-asset spot returns
The following real-world multi-asset spot return dataset is sourced from the MAN AHL Realized Library. It focuses on the SPX and DJI stock indices, covering the period from January 1st, 2005, to December 31st, 2021 (refer to the top figure of Figure 4).
During the calibration process, it was noted that sig-splines struggled to capture the strong cross-correlations in the returns of SPX and DJI. To address this issue, the spot time series underwent preprocessing by applying Principal Component Analysis (PCA) to the index returns, resulting in whitened returns. The preprocessed time series is depicted in Figure 4. Subsequently, the models were calibrated using the transformed return series.
Table B.3.1 and B.3.2 present the performance metrics for both the preprocessed return series and the observed return process. Notably, the cross-correlation metric for all sig-spline models closely compares with the neural spline baseline, thanks to the application of PCA transform during the preprocessing of the spot return series. Both tables demonstrate that sig- and neural splines perform relatively well, with sig-splines showcasing superior performance. Moreover, it is observed that higher orders of truncation lead to improved performance in autocorrelation metrics, which detect serial independence and volatility clustering.
Figure 4. SPX and DJI spot returns (top) and the PCA-whitened returns (bottom).
7. Conclusion
This paper introduced signature spline flows as a generative model for time series data. By employing the signature transform, sig-splines are constructed as an alternative to the neural networks used in neural spline flows. In section 5, we formally demonstrate the universal approximation capability of sig-splines, highlighting their ability to approximate any conditional density. Additionally, we establish the convexity of sig-spline calibration with respect to its parameters.
To assess their performance, we compared sig-splines with neural splines in section 6 using a simulated benchmark dataset and two real-world financial datasets. Our evaluation, based on standard test metrics, reveals that sig-splines perform comparably to neural spline flows.
The convexity and universality properties of sig-splines pique our interest in future research directions. We believe that pursuing these avenues could yield valuable insights and fruitful outcomes:
• Sig-ICA (Signature Independent Component Analysis): Exploring the application of the signature-based ICA introduced in [Schell and Oberhauser, 2023] could enhance the scalability of the sig-spline generative model. A Sig-ICA preprocessing step would allow calibrating a sig-spline model on each coordinate of the process separately, allowing for fewer parameters and more interpretability.
• Regularisation techniques: This paper has not extensively addressed methods for improving the generalization of sig-splines. Investigating regularization techniques specifically tailored to sig-splines could prove beneficial in enhancing their performance and robustness, particularly in scenarios with limited training data or complex dependencies.
References
- Rasul et al. [2020] Kashif Rasul, Abdul-Saboor Sheikh, Ingmar Schuster, Urs Bergmann, and Roland Vollgraf. Multivariate probabilistic time series forecasting via conditioned normalizing flows. arXiv preprint arXiv:2002.06103, 2020.
- Arribas et al. [2020] Imanol Perez Arribas, Cristopher Salvi, and Lukasz Szpruch. Sig-sdes model for quantitative finance, 2020.
- Buehler et al. [2020] Hans Buehler, Blanka Horvath, Terry Lyons, Imanol Perez Arribas, and Ben Wood. A data-driven market simulator for small data environments, 2020.
- Buehler et al. [2022] Hans Buehler, Phillip Murray, Mikko S. Pakkanen, and Ben Wood. Deep hedging: Learning to remove the drift under trading frictions with minimal equivalent near-martingale measures, 2022.
- Ni et al. [2020] Hao Ni, Lukasz Szpruch, Magnus Wiese, Shujian Liao, and Baoren Xiao. Conditional sig-wasserstein gans for time series generation, 2020.
- Ni et al. [2021] Hao Ni, Lukasz Szpruch, Marc Sabate-Vidales, Baoren Xiao, Magnus Wiese, and Shujian Liao. Sig-wasserstein gans for time series generation. In Proceedings of the Second ACM International Conference on AI in Finance, ICAIF ’21, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450391481. doi: 10.1145/3490354.3494393. URL https://doi.org/10.1145/3490354.3494393.
- Wiese et al. [2021] Magnus Wiese, Ben Wood, Alexandre Pachoud, Ralf Korn, Hans Buehler, Phillip Murray, and Lianjun Bai. Multi-asset spot and option market simulation, 2021.
- Wiese et al. [2020] Magnus Wiese, Robert Knobloch, Ralf Korn, and Peter Kretschmer. Quant gans: Deep generation of financial time series. Quantitative Finance, 20(9):1419–1440, 2020.
- Kobyzev et al. [2020] Ivan Kobyzev, Simon JD Prince, and Marcus A Brubaker. Normalizing flows: An introduction and review of current methods. IEEE transactions on pattern analysis and machine intelligence, 43(11):3964–3979, 2020.
- Papamakarios et al. [2019] George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. arXiv preprint arXiv:1912.02762, 2019.
- Durkan et al. [2019a] Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. Advances in Neural Information Processing Systems, 32:7511–7522, 2019a.
- Durkan et al. [2019b] Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Cubic-spline flows. arXiv preprint arXiv:1906.02145, 2019b.
- Friz and Hairer [2020] Peter K Friz and Martin Hairer. A course on rough paths. Springer, 2020.
- Lyons [2014] Terry Lyons. Rough paths, signatures and the modelling of functions on streams. arXiv preprint arXiv:1405.4537, 2014.
- Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, 27, 2014.
- Brock et al. [2019] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=B1xsqj09Fm.
- Yoon et al. [2019] Jinsung Yoon, Daniel Jarrett, and Mihaela Van der Schaar. Time-series generative adversarial networks. Advances in Neural Information Processing Systems, 32, 2019.
- Engel et al. [2019] Jesse Engel, Kumar Krishna Agrawal, Shuo Chen, Ishaan Gulrajani, Chris Donahue, and Adam Roberts. GANSynth: Adversarial neural audio synthesis. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=H1xQVn09FX.
- Reed et al. [2016] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In International conference on machine learning, pages 1060–1069. PMLR, 2016.
- Dinh et al. [2014] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation, 2014. URL https://arxiv.org/abs/1410.8516.
- Dinh et al. [2016] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.
- Teshima et al. [2020] Takeshi Teshima, Isao Ishikawa, Koichi Tojo, Kenta Oono, Masahiro Ikeda, and Masashi Sugiyama. Coupling-based invertible neural networks are universal diffeomorphism approximators. Advances in Neural Information Processing Systems, 33:3362–3373, 2020.
- Bogachev et al. [2005] Vladimir Igorevich Bogachev, Aleksandr Viktorovich Kolesnikov, and Kirill Vladimirovich Medvedev. Triangular transformations of measures. Sbornik: Mathematics, 196(3):309, 2005.
- Papamakarios et al. [2017] George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. Advances in neural information processing systems, 30, 2017.
- Wehenkel and Louppe [2019] Antoine Wehenkel and Gilles Louppe. Unconstrained monotonic neural networks. Advances in Neural Information Processing Systems, 32:1545–1555, 2019.
- Gemici et al. [2016] Mevlana C Gemici, Danilo Rezende, and Shakir Mohamed. Normalizing flows on riemannian manifolds. arXiv preprint arXiv:1611.02304, 2016.
- Brehmer and Cranmer [2020] Johann Brehmer and Kyle Cranmer. Flows for simultaneous manifold learning and density estimation, 2020.
- Wiese and Murray [2022] Magnus Wiese and Phillip Murray. Risk-neutral market simulation. arXiv preprint arXiv:2202.13996, 2022.
- Gierjatowicz et al. [2020] Patryk Gierjatowicz, Marc Sabate-Vidales, David Siska, Lukasz Szpruch, and Zan Zuric. Robust pricing and hedging via neural sdes. Available at SSRN 3646241, 2020.
- Dyer et al. [2021] Joel Dyer, Patrick W Cannon, and Sebastian M Schmon. Deep signature statistics for likelihood-free time-series models. In ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models, 2021. URL https://openreview.net/forum?id=OOlxsoRPyFL.
- Kidger et al. [2019] Patrick Kidger, Patric Bonnier, Imanol Perez Arribas, Cristopher Salvi, and Terry Lyons. Deep signature transforms. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/d2cdf047a6674cef251d56544a3cf029-Paper.pdf.
- Chen [1957] Kuo-Tsai Chen. Integration of paths, geometric invariants and a generalized baker-hausdorff formula. Annals of Mathematics, pages 163–178, 1957.
- Chen [2001] Kuo-Tsai Chen. Iterated integrals and exponential homomorphisms. Collected Papers of KT Chen, page 54, 2001.
- Chevyrev and Kormilitzin [2016] Ilya Chevyrev and Andrey Kormilitzin. A primer on the signature method in machine learning. arXiv preprint arXiv:1603.03788, 2016.
- Lyons et al. [2007] Terry J Lyons, Michael Caruana, and Thierry Lévy. Differential equations driven by rough paths. Springer, 2007.
- Király and Oberhauser [2019] Franz J Király and Harald Oberhauser. Kernels for sequentially ordered data. Journal of Machine Learning Research, 20(31):1–45, 2019.
- Cybenko [1989] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303–314, 1989.
- Ruder [2016] Sebastian Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
- Rumelhart et al. [1986] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. nature, 323(6088):533–536, 1986.
- Prechelt [2002] Lutz Prechelt. Early stopping-but when? In Neural Networks: Tricks of the trade, pages 55–69. Springer, 2002.
- He et al. [2015] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015.
- Schell and Oberhauser [2023] Alexander Schell and Harald Oberhauser. Nonlinear independent component analysis for discrete-time and continuous-time signals. The Annals of Statistics, 51(2):487–518, 2023.
- Böhning [1992] Dankmar Böhning. Multinomial logistic regression algorithm. Annals of the institute of Statistical Mathematics, 44(1):197–200, 1992.
- Vu [2016] Trung Vu. Multinomial logistic regression: Convexity and smoothness, 2016. URL https://trungvietvu.github.io/notes/2016/MLR.
Appendix A Proofs
A.1. Proof of Theorem 5
See 5
Proof.
The proof follows ideas from Böhning [Böhning, 1992]; a short summary of the argument was given by Vu [Vu, 2016]. The sum of two convex functions remains convex. We therefore consider the cost function (5) without loss of generality for a single sample and only a single conditional density; i.e. we consider the cost function of the conditional density defined as
where the feature vector is the signature of the masked path, and the indicator functions are given by
(6)
where the bins are determined by the predefined partition of the axis of the constructed CDF. We obtain the full loss function as
For legibility we drop the explicit dependence on the conditioning path in the notation. To ease the notation we identify the linear functionals and the signature with vectors in a real-valued vector space, so that the linear functionals can be thought of as weight vectors hereafter.
To show that the cost function is convex we derive the first- and second-order derivatives with respect to the model parameters. The first-order derivative for a single set of weights is
where and can be expressed in matrix-vector notation as
where denotes the Kronecker product. Furthermore, the second-order gradient is given for any as
which in matrix-vector notation can be expressed as
(7)
where denotes the weight matrix with diagonal entries
Finally, we need to show that the Hessian of the cost function with respect to the parameters is positive semidefinite. To demonstrate this, first recall that the eigenvalues of a Kronecker product of two square matrices are the pairwise products of the eigenvalues of the factors. Thus, it suffices to show that both factor matrices are positive semidefinite. For the latter matrix this is straightforward, since the associated quadratic form is a square and hence non-negative. For the former we can show that it is diagonally dominant with non-negative diagonal entries and conclude that it is positive semidefinite CITE. Thus,
(8)
which concludes the proof. ∎
Remark 4.
Note that the convexity of the cost function does not hold if the partition of the axis is itself parametrised.
A.2. Proof of Theorem 7
See 7
Proof.
We first note that by factoring the true and model densities into the product of the conditionals , we can write (dropping the explicit reference to the filtration to ease notation)
To use this factorisation, we make use of the following lemma.
Lemma 0.
Let and be two bounded sequences of real numbers. Then there exists such that
Proof.
Write . Then taking absolute values and applying the triangle inequality, taking we obtain the result. ∎
Hence, there exists such that we can write
Thus, since it suffices to establish that we can approximate each conditional arbitrarily well, without loss of generality we can focus on the univariate case. For any number of bins, define the piecewise constant approximation of the true conditional density as
with
and defined as above. Such piecewise constant functions are known to be dense in , hence given any we can find such that
according to the norm. For such , consider the log probabilities . Due to the universality of signatures, for each we can find a trunctation order and a weight vector such that . Define and , then take the weight matrix to have rows concatenated with zeros whenever . Thus, writing we have
Defining the vectors and similarly, by the Lipschitz property of the softmax function, with constant less than for , we have
Hence, with an application of the triangle inequality we obtain our result
∎
Appendix B Numerical results
B.1. VAR model
Model / Test metrics | ||||
---|---|---|---|---|
Sig-Spline (Order 1) | ||||
Sig-Spline (Order 2) | ||||
Sig-Spline (Order 3) | ||||
Sig-Spline (Order 4) | ||||
Neural spline flow |
Model / Test metrics | ||||
---|---|---|---|---|
Sig-Spline (Order 1) | ||||
Sig-Spline (Order 2) | ||||
Sig-Spline (Order 3) | ||||
Sig-Spline (Order 4) | ||||
Neural spline flow |
Model / Test metrics | ||||
---|---|---|---|---|
Sig-Spline (Order 1) | ||||
Sig-Spline (Order 2) | ||||
Sig-Spline (Order 3) | ||||
Sig-Spline (Order 4) | ||||
Neural spline flow |
Model / Test metrics | ||||
---|---|---|---|---|
Sig-Spline (Order 1) | ||||
Sig-Spline (Order 2) | ||||
Sig-Spline (Order 3) | ||||
Sig-Spline (Order 4) | ||||
Neural spline flow |
B.2. Realized volatilities
Model / Test metrics | ||||
---|---|---|---|---|
Sig-Spline (Order 1) | ||||
Sig-Spline (Order 2) | ||||
Sig-Spline (Order 3) | ||||
Sig-Spline (Order 4) | ||||
Neural spline flow |
Model / Test metrics | ||||
---|---|---|---|---|
Sig-Spline (Order 1) | ||||
Sig-Spline (Order 2) | ||||
Sig-Spline (Order 3) | ||||
Sig-Spline (Order 4) | ||||
Neural spline flow |
Model / Test metrics | ||||
---|---|---|---|---|
Sig-Spline (Order 1) | ||||
Sig-Spline (Order 2) | ||||
Sig-Spline (Order 3) | ||||
Sig-Spline (Order 4) | ||||
Neural spline flow |
Model / Test metrics | ||||
---|---|---|---|---|
Sig-Spline (Order 1) | ||||
Sig-Spline (Order 2) | ||||
Sig-Spline (Order 3) | ||||
Sig-Spline (Order 4) | ||||
Neural spline flow |
B.3. Multi-asset spot returns
Model / Test metrics | |||||
---|---|---|---|---|---|
Sig-Spline (Order 1) | |||||
Sig-Spline (Order 2) | |||||
Sig-Spline (Order 3) | |||||
Sig-Spline (Order 4) | |||||
Neural spline flow |
Model / Test metrics | |||||
---|---|---|---|---|---|
Sig-Spline (Order 1) | |||||
Sig-Spline (Order 2) | |||||
Sig-Spline (Order 3) | |||||
Sig-Spline (Order 4) | |||||
Neural spline flow |