Pricing American options under rough volatility using deep-signatures and signature-kernels
Abstract
We extend the signature-based primal and dual solutions to the optimal stopping problem recently introduced in [BPS23], by integrating deep-signature and signature-kernel learning methodologies. These approaches are designed for non-Markovian frameworks, in particular enabling the pricing of American options under rough volatility. We demonstrate and compare the performance within the popular rough Heston and rough Bergomi models.
Keywords: Signature, optimal stopping, rough volatility, deep learning, kernel learning.
MSC2020 classifications: 60G40, 60L10, 91G20, 91G60.
1 Introduction
Starting with the seminal paper [KLA20], (rough) path signatures [Che57] have been increasingly recognized as a powerful tool for numerical approximations to solutions of stochastic optimal control problems when the underlying system does not have the Markov property. While the workhorse theoretical methods for stochastic optimal control – dynamic programming / HJB equations on the one hand, Pontryagin maximum principle / BSDEs on the other hand – can, in principle, be formulated and analysed even when the underlying system is not Markovian, this comes at the expense of tractability.
For example, in the Markovian case, we can assume (under mild conditions) that optimal controls are of feedback form, i.e., can be expressed as functions of the current value of the underlying state variable. If we do not make the Markovian assumption, we only know that optimal controls are, say, progressively measurable w.r.t. the governing filtration, i.e., functionals of the whole past of the process generating that filtration. Similar remarks can be made for the value function as well as for the solutions of the corresponding BSDE systems.
From a numerical point of view, this observation increases the computational challenge of solving non-Markovian stochastic optimal control problems considerably. In essence, an approximation problem for a function on a finite-dimensional state space is replaced by one for a functional on path space (with additional measurability constraints).
Kalsi, Lyons and Perez Arribas suggested a general framework for solving stochastic optimal control problems in a model-free way (in particular, without assuming a Markovian problem) for the example of an optimal execution problem, see [KLA20]. In a nutshell, the approach consists of approximating the strategy as well as the value function as linear functionals of the signature of the underlying process. The linearization – based on the signature’s universality – allows them to rephrase the optimal execution problem as a deterministic optimization problem in terms of the expected signature.
Full linearization as in [KLA20] is only feasible if the unknown function (value function, control, …) is smooth enough, so as not to require a very deep degree of signature approximation – comparable to approximations with polynomials of high degree in finite dimensional spaces. Otherwise, several other strategies have been considered in the literature:
• Signatures can be interlaced with non-linear transformations of the path, e.g., deep neural networks, to increase the signature’s “expressivity”, see, for instance, [TBO21].
• Signatures – or, alternatively, log-signatures to reduce the dimension – can be used as features for other, non-linear approximation methods, e.g., deep neural networks. In the context of optimal stopping, this was first suggested in [BHRS23].
In addition, signature kernels, see [CO22], provide a classical kernel learning approach to regression problems in general, and can also be used for stochastic optimal control, in particular. [SCF+21] observed that the (non-truncated) signature kernel satisfies a Goursat PDE, and, hence, can be computed numerically, without requiring the (otherwise crucial) truncations of the signature.
Rough volatility models, see [BFF+23], are a popular class of stochastic volatility models in finance, i.e., models in which the stock price follows a stochastic volatility dynamics whose variance process is “rough”, e.g., an exponential of fractional Brownian motion with small Hurst index $H < 1/2$. Crucially, such models do not have the Markov property, so signature methods are ideally suited for computing approximate solutions to the optimal stopping problem in these models – in financial terms: to compute the prices of American or Bermudan options.
This paper builds on [BPS23], where adaptations of the classical Longstaff–Schwartz and dual algorithms for Bermudan option pricing based on linear functionals of the signature were introduced and analysed. In this paper, we further extend these algorithms by introducing versions based on non-linear functions of the signature as well as signature-kernel versions. We then apply them to the Bermudan options pricing problem for popular rough volatility models (specifically, the rough Bergomi model [BFG16] and the rough Heston model [EER19]), and numerically analyse their performance for these models under realistic model parameters.
The goal of this paper is to compute American (more precisely, Bermudan) option prices in the aforementioned rough volatility models, with special emphasis on numerical performance as well as on comparisons between the methods outlined before. Specifically, we solve primal and dual formulations of the optimal stopping problem – giving us lower and upper bounds of the option price, respectively – using three different signature methods each:
1. linear functionals of the truncated signature;
2. deep neural networks applied to truncated signatures;
3. linear combinations of the signature kernel.
Compared to the literature (see, for instance, [BPS23], based on linear functionals of the signature alone), we provide considerably sharper bounds for the option price, in the sense of the length of the interval between lower and upper bounds. More specifically, we find that for a realistic rough volatility model (the rough Bergomi model with small Hurst index, as suggested in [BFG16]), we obtain tight duality gaps of around one percent and below.
Regarding the comparison between the different methods, no method seems to consistently outperform the others. For the primal problem, all three methods tend to perform very well, with mild advantages for the linear and the signature kernel methods. The dual formulation, however, seems to lead to more difficult approximation and optimization problems, and the neural-network-based method has some advantages. Overall, we see the biggest improvements compared to [BPS23] for the dual formulation.
This conclusion is also supported by various other numerical studies performed, including a study of the dependence of the training error on the number of training samples. We also study the relative importance of the different components in the signature, specifically for the DNN-based method. For the primal formulation, while the most important signature components are those corresponding to powers of the increments, generally all components matter, and there is no apparent sparse structure. On the other hand, for the dual problem, only very few signature components are actually important, and there seems to be great potential for dimension reduction.
Acknowledgments
All authors gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689). CB and PL acknowledge support from DFG CRC/TRR 388 “Rough Analysis, Stochastic Dynamics and Related Fields”, Project B03.
2 A review on Monte-Carlo methods for optimal stopping
In this section we introduce the optimal stopping problem in a fairly general framework, and recall two general simulation-based procedures to derive lower and upper bounds of the optimal value. Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \in [0,T]}, \mathbb{P})$ be a filtered probability space supporting a state process $X = (X_t)_{t \in [0,T]}$, which we only assume to be $\alpha$-Hölder continuous almost surely for some $\alpha \in (0,1)$. Moreover, we consider an $(\mathcal{F}_t)$-adapted, continuous process $Z = (Z_t)_{t \in [0,T]}$, the cash-flow process, which for technical reasons we assume to satisfy suitable integrability conditions. Finally, the optimal stopping problem reads
$$ y_0 = \sup_{\tau \in \mathcal{S}} \mathbb{E}\big[ Z_\tau \big], \qquad (1) $$
where $\mathcal{S}$ denotes the set of $(\mathcal{F}_t)$-stopping times with values in $[0,T]$.
2.1 A general Longstaff-Schwartz algorithm
Replacing the continuous time interval $[0,T]$ by some finite grid $0 = t_0 < t_1 < \dots < t_J = T$, the dynamic programming principle [PS06, eq. (2.1.7)] for the discrete optimal stopping problem reads
$$ Y_{t_J} = Z_{t_J}, \qquad Y_{t_j} = \max\big( Z_{t_j},\, \mathbb{E}\big[ Y_{t_{j+1}} \mid \mathcal{F}_{t_j} \big] \big), \quad j = J-1, \dots, 0, \qquad (2) $$
and the stopping time $\tau^\ast = \min\{ t_j : Y_{t_j} = Z_{t_j} \}$ is optimal. Motivated by the famous Longstaff-Schwartz algorithm [LS01], this optimal stopping time can be obtained as $\tau_0$ from the following recursion:
$$ \tau_J = t_J, \qquad \tau_j = t_j\, \mathbf{1}_{\{ Z_{t_j} \ge \mathbb{E}[ Z_{\tau_{j+1}} \mid \mathcal{F}_{t_j} ] \}} + \tau_{j+1}\, \mathbf{1}_{\{ Z_{t_j} < \mathbb{E}[ Z_{\tau_{j+1}} \mid \mathcal{F}_{t_j} ] \}}, \quad j = J-1, \dots, 0. \qquad (3) $$
The remaining question is how to compute the continuation values $\mathbb{E}[ Z_{\tau_{j+1}} \mid \mathcal{F}_{t_j} ]$ for possibly non-Markovian state processes $X$, which will be the main topic of the forthcoming Section 3. For now, assume we are given a suitable family of basis functions $(\phi_k)_{k = 1, \dots, K}$, defined on the path of $X$ up to the current time, such that $\mathbb{E}[ Z_{\tau_{j+1}} \mid \mathcal{F}_{t_j} ] \approx \sum_{k=1}^K \theta^{(j)}_k \phi_k\big( X|_{[0,t_j]} \big)$ with some coefficients $\theta^{(j)} \in \mathbb{R}^K$. A general version of the Longstaff-Schwartz algorithm goes as follows:
1. Draw samples $X^{(m)}$ and $Z^{(m)}$, $m = 1, \dots, M$, and initialize $\tau^{(m)}_J = t_J$.
2. Then, for $j = J-1, \dots, 0$:
• solve the minimization problem
$$ \theta^{(j)} \in \arg\min_{\theta \in \mathbb{R}^K} \sum_{m=1}^M \Big( Z^{(m)}_{\tau^{(m)}_{j+1}} - \sum_{k=1}^K \theta_k\, \phi_k\big( X^{(m)}|_{[0,t_j]} \big) \Big)^2; $$
• set $\tau^{(m)}_j = t_j$ if $Z^{(m)}_{t_j} \ge \sum_{k=1}^K \theta^{(j)}_k\, \phi_k\big( X^{(m)}|_{[0,t_j]} \big)$, and $\tau^{(m)}_j = \tau^{(m)}_{j+1}$ otherwise.
3. Finally, draw $M_{\mathrm{test}}$ independent sample paths $\tilde X^{(m)}$ and $\tilde Z^{(m)}$, compute the stopping times $\tilde\tau^{(m)}_0$ using the coefficients $\theta^{(j)}$ from step 2, and compute the lower-biased estimator
$$ \hat y^{\,\mathrm{low}}_0 = \frac{1}{M_{\mathrm{test}}} \sum_{m=1}^{M_{\mathrm{test}}} \tilde Z^{(m)}_{\tilde\tau^{(m)}_0}. $$
Notice that the resimulation in the final step ensures that the $\tilde\tau^{(m)}_0$ are indeed stopping times, since the computed coefficients are independent of the new samples, and thus $\mathbb{E}\big[ \tilde Z_{\tilde\tau_0} \big]$ is a true lower bound for $y_0$. The Monte-Carlo approximation $\hat y^{\,\mathrm{low}}_0$ is therefore lower-biased but, strictly speaking, not a true lower bound, due to the Monte-Carlo error with respect to $M_{\mathrm{test}}$. Nevertheless, this error is typically very small, and we will refer to $\hat y^{\,\mathrm{low}}_0$ as a lower bound anyway.
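To make the procedure above concrete, the following minimal sketch implements the generic Longstaff-Schwartz lower bound for an arbitrary, pre-computed feature map (e.g. truncated signatures along each path). The function and variable names, as well as the plain least-squares regression, are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def longstaff_schwartz_lower_bound(feat, cf, feat_test, cf_test):
    """feat[m, j, :] : basis functions evaluated on path m up to the j-th exercise date
       cf[m, j]      : discounted cash-flow Z at the j-th exercise date along path m
       *_test        : independent re-simulated paths for the lower-biased estimate."""
    M, J, K = feat.shape
    thetas = [None] * J
    value = cf[:, -1].copy()                      # at the last exercise date we always exercise
    for j in range(J - 2, -1, -1):                # backward induction over exercise dates
        thetas[j], *_ = np.linalg.lstsq(feat[:, j], value, rcond=None)
        cont = feat[:, j] @ thetas[j]             # regression proxy for the continuation value
        value = np.where(cf[:, j] >= cont, cf[:, j], value)
    # final step: apply the *fixed* exercise rule to independent paths
    value_test = cf_test[:, -1].copy()
    for j in range(J - 2, -1, -1):
        cont = feat_test[:, j] @ thetas[j]
        value_test = np.where(cf_test[:, j] >= cont, cf_test[:, j], value_test)
    return value_test.mean()                      # lower-biased Monte-Carlo estimate
```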
2.2 A general dual procedure
While the Longstaff-Schwartz procedure described in the last section provides us with lower-bounds for the optimal value, it is desirable to additionally have an algorithm producing upper-bounds. To this end, it is useful to consider the (discretized) dual formulation of (1) due to [Rog02]
$$ y_0 = \inf_{M} \, \mathbb{E}\Big[ \max_{0 \le j \le J} \big( Z_{t_j} - M_{t_j} \big) \Big], \qquad (4) $$
where the minimization is over discrete martingales $M = (M_{t_j})_{j = 0, \dots, J}$ with $M_0 = 0$ with respect to the filtration $(\mathcal{F}_{t_j})$. By the martingale representation theorem [KS91, Theorem 4.5], if the underlying filtration is generated by a Brownian motion $W$, such martingales are of the form $M_t = \int_0^t H_s\, dW_s$ for some progressively measurable process $H$. On the other hand, due to the progressive measurability, the integrands are of the form $H_s = f\big( s, X|_{[0,s]} \big)$ for some (deterministic) functional $f$ on path space. Assume again that we are given some family of basis functions $(\phi_k)_{k=1,\dots,K}$ such that $H_s \approx \sum_{k=1}^K \theta_k \phi_k\big( X|_{[0,s]} \big)$ for some coefficients $\theta \in \mathbb{R}^K$. If the family is rich enough, it is reasonable to parametrize the space of martingales by the span of the basis martingales $M^k_t = \int_0^t \phi_k\big( X|_{[0,s]} \big)\, dW_s$, which leads to the following dual algorithm:
1. Draw sample paths of $X$ and $W$, and compute the basis martingales (e.g. using an Euler scheme)
$$ M^{k,(m)}_{t_j} = \sum_{i\,:\, t_i < t_j} \phi_k\big( X^{(m)}|_{[0,t_i]} \big)\, \big( W^{(m)}_{t_{i+1}} - W^{(m)}_{t_i} \big), \qquad k = 1, \dots, K, \quad m = 1, \dots, M. $$
2. Solve the minimization problem
$$ \hat\theta \in \arg\min_{\theta \in \mathbb{R}^K} \frac{1}{M} \sum_{m=1}^M \max_{0 \le j \le J} \Big( Z^{(m)}_{t_j} - \sum_{k=1}^K \theta_k\, M^{k,(m)}_{t_j} \Big). $$
3. Finally, draw $M_{\mathrm{test}}$ independent sample paths $\tilde X^{(m)}$, $\tilde Z^{(m)}$, compute the martingales $\tilde M^{(m)}_{t_j} = \sum_{k=1}^K \hat\theta_k\, \tilde M^{k,(m)}_{t_j}$ and the upper-biased estimator
$$ \hat y^{\,\mathrm{up}}_0 = \frac{1}{M_{\mathrm{test}}} \sum_{m=1}^{M_{\mathrm{test}}} \max_{0 \le j \le J} \Big( \tilde Z^{(m)}_{t_j} - \tilde M^{(m)}_{t_j} \Big). $$
Similarly to the primal case, the independent resimulation ensures that we obtain true martingales $\tilde M^{(m)}$, since $\hat\theta$ is independent of the new samples, and therefore $\mathbb{E}\big[ \max_{0 \le j \le J} ( \tilde Z_{t_j} - \tilde M_{t_j} ) \big]$ is a true upper bound. By the same abuse of terminology as in the primal case, we call the Monte-Carlo approximation $\hat y^{\,\mathrm{up}}_0$ an upper bound, keeping in mind that it is, strictly speaking, only upper-biased due to the Monte-Carlo error with respect to $M_{\mathrm{test}}$.
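A corresponding minimal sketch of the dual procedure is given below, assuming the basis martingales have already been simulated along each path. The choice of optimizer (a derivative-free scipy routine applied to the non-smooth pathwise maximum) is purely illustrative; in practice a more careful convex or linear-programming formulation is preferable.

```python
import numpy as np
from scipy.optimize import minimize

def dual_upper_bound(basis_mart, cf, basis_mart_test, cf_test):
    """basis_mart[m, j, k] : k-th basis martingale at the j-th date on path m (zero at j = 0)
       cf[m, j]            : discounted cash-flow Z at the j-th date on path m."""
    M, J, K = basis_mart.shape

    def objective(theta):                         # empirical version of the dual objective (4)
        mart = basis_mart @ theta                 # shape (M, J)
        return np.max(cf - mart, axis=1).mean()

    res = minimize(objective, x0=np.zeros(K), method="Powell")
    # re-simulation with the fixed coefficients -> upper-biased estimate
    mart_test = basis_mart_test @ res.x
    return np.max(cf_test - mart_test, axis=1).mean()
```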
3 Signature stopping policies and dual martingales
To numerically implement the two algorithms presented in the last section, we need to learn functionals defined on an infinite-dimensional path space. As motivated in the introduction, we will use the path signature, resp. the corresponding signature kernel, to tackle this problem. For the rough-path-theoretical details in this section, we refer to the excellent books [FV10, FH20], but see also [CS24] with a focus on machine learning.
Let $x \colon [0,T] \to \mathbb{R}^d$ be an $\mathbb{R}^d$-valued, $\alpha$-Hölder continuous path, $\alpha \in (0,1]$. The (rough) path signature of $x$ is given by the path $t \mapsto \mathbb{X}_{<\infty}(t) = \big( 1, \mathbb{X}^{(1)}(t), \mathbb{X}^{(2)}(t), \dots \big)$, defined (at least formally) as the collection of iterated integrals
$$ \mathbb{X}^{(n)}(t) = \int_{0 < s_1 < \dots < s_n < t} dx_{s_1} \otimes \dots \otimes dx_{s_n} \in (\mathbb{R}^d)^{\otimes n}, \qquad n \ge 1, $$
taking values in the extended tensor algebra
$$ T\big((\mathbb{R}^d)\big) = \prod_{n \ge 0} (\mathbb{R}^d)^{\otimes n}, $$
an algebra under a (naturally defined) non-commutative product $\otimes$. If $\alpha > 1/2$, the iterated integrals are understood in the Riemann-Stieltjes (Young) sense, see [FV10, Chapter 7.2]; less obvious is the case $\alpha \le 1/2$, since the iterated integrals do not have a meaning a priori. However, Lyons' theory of rough paths [Lyo98] allows us to give the latter a meaning, but higher-order information about the path is required. More precisely, we have to lift $x$ to a so-called $\alpha$-Hölder rough path, that is, $x$ together with postulated lower-order iterated integrals satisfying suitable algebraic and analytic conditions, see [FV10, Chapter 9]. Having such a rough path lift at hand, the notion of rough integration allows us to make sense of integration against $x$, and a signature lift can be defined, sharing all the properties of the path signature of smooth paths; this is due to Lyons' extension theorem [Lyo98, Theorem 3.7].
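For concreteness, truncated signatures of sampled paths can be computed numerically, e.g. with the iisignature package used later in Section 4. The following toy example (our own; the path and truncation level are purely illustrative) computes the level-4 signature of a two-dimensional, time-augmented Brownian path.

```python
import numpy as np
import iisignature

rng = np.random.default_rng(1)
n_steps, depth = 200, 4
t = np.linspace(0.0, 1.0, n_steps + 1)
w = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n_steps)) * np.sqrt(1.0 / n_steps)])
path = np.stack([t, w], axis=1)        # time-augmented path (t, W_t), shape (n_steps + 1, 2)

sig = iisignature.sig(path, depth)     # truncated signature, levels 1..depth, flattened
print(sig.shape)                       # (2 + 2**2 + 2**3 + 2**4,) = (30,)
```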
Remark 3.1
The authors of [BGLY16] show that the map $x \mapsto \mathbb{X}_{<\infty}$ is injective up to a certain equivalence class on path space, which can be eliminated by adding a monotone component to the path, e.g. the time-augmentation $t \mapsto (t, x_t)$. Moreover, by applying a so-called tensor normalization to the signature, the authors of [CO22] introduce the notion of robust signatures, which can be seen as a bounded version of the signature sharing all of its structural properties. For the theoretical results in this section, we always assume that our underlying process is already augmented by some monotone function, that the resulting pair can be lifted to a geometric rough path, see [FV10, Definition 9.16], and we work with the corresponding unique robust rough path signature for some fixed tensor normalization.
3.1 Linear signature learning
One of the most important algebraic properties of the signature is the following: for any two linear functionals $\ell_1, \ell_2$ on the tensor algebra, we can find a third linear functional (in fact, a closed-form expression, the so-called shuffle product, can be given) such that the pointwise product $\langle \ell_1, \mathbb{X}_{<\infty} \rangle \, \langle \ell_2, \mathbb{X}_{<\infty} \rangle$ is again a linear functional of the signature $\mathbb{X}_{<\infty}$. In words, the space of linear functionals of the signature forms an algebra, which is point-separating (since we assume to have a unique signature, see Remark 3.1). An application of a general Stone-Weierstrass result, see [CO22, Section 2.1], tells us that the set of linear functionals of the robust signature is dense in the set of bounded continuous functions on the corresponding path space. This fact can be used to deduce a global $L^2$-approximation result for linear functionals of the signature, see [BPS23, Theorem 2.8], which then yields the following convergence results. We will always assume to be in the framework described at the beginning of [BPS23, Section 3.1], where certain assumptions on the path space, as well as on the payoff process, are made precise. The following proposition summarizes the results of [BPS23, Propositions 3.3 and 3.8], to which we refer for more precise statements and detailed proofs.
Proposition 3.2
Using the same notation as in Section 2, we consider $M$ independent sample paths of $X$, and we use linear functionals of the signature truncated at some level $N$ as basis functions, both for the continuation values in the primal algorithm and for the integrands of the basis martingales in the dual algorithm. We denote the resulting estimators by
$$ \hat y^{\,\mathrm{low}}_{N,M} \quad\text{and}\quad \hat y^{\,\mathrm{up}}_{N,M}, \qquad (5) $$
obtained from the Longstaff-Schwartz and the dual algorithm of Section 2, respectively. Then, for new samples independent of the training samples, define the corresponding stopping times and martingales as in the final steps of the two algorithms. Then
$$ \hat y^{\,\mathrm{low}}_{N,M} \longrightarrow y_0 \quad\text{and}\quad \hat y^{\,\mathrm{up}}_{N,M} \longrightarrow y_0 \qquad (6) $$
as $N, M \to \infty$, where the convergence with respect to $M$ is almost sure convergence.
While the last result ensures convergence of the primal and dual procedures, it does not come with quantitative statements about convergence rates. Additionally, these algorithms can become computationally expensive, as the size of the signature grows exponentially with the truncation level. In [BPS23], several numerical experiments were performed for these two algorithms in non-Markovian frameworks, in particular for models driven by fractional Brownian motion with small Hurst parameters. The accuracy of the method can be measured by the duality gap between lower and upper bounds, and in some examples these gaps were observed to be quite significant even for high truncation levels. Two important sources for this gap can be described as follows: first, especially in very rough regimes, the considered signature levels might not suffice to capture the relevant past of the non-Markovian processes, so that information is lost when truncating the signature. Secondly, the functionals on path space we try to learn are highly non-linear, so that more sophisticated learning technologies are required to improve the performance. The goal of the next sections is to introduce non-linear extensions of the primal and dual algorithms, based on deep- and kernel-learning methodologies respectively, with the objective of reducing the duality gap.
3.2 Deep signature learning
Applying Deep Neural Networks (DNNs) on top of the signature has been considered several times in the literature, see for instance [KBPA+19], where, even more generally, the signature can act as a layer in a network. Here, we simply consider the truncated signature as the input of a DNN of the form
$$ \phi^{\mathrm{DNN}} = \psi_{L+1} \circ (\sigma \circ \psi_L) \circ \dots \circ (\sigma \circ \psi_1) \circ \mathbb{X}^{\le N}, $$
where the maps $\psi_i$ are affine, $\sigma$ is an activation function applied componentwise, and $\mathbb{X}^{\le N}$ denotes the signature truncated at level $N$. We call $\phi^{\mathrm{DNN}}$ a signature DNN with $L$ hidden layers, each of which has $q$ neurons. The signature map represents the input layer, and the first hidden layer, acting on the signature, can be seen as a vector of linear functionals of the truncated signature composed with the activation. The remaining hidden layers and the output layer are ordinary affine maps (composed with the activation for the hidden layers). We denote the set of all such signature DNNs with $L$ hidden layers, $q$ neurons per layer and activation function $\sigma$ by $\mathcal{N}^{N}_{L,q}$.
It is well known that already one-hidden-layer DNNs are universal, see, e.g., [LLPS93]. For this reason, and to ease the notation for the theoretical results, in the remainder of this section we fix $L = 1$. Nevertheless, all the results trivially extend to multiple hidden layers, and in the numerical experiments we mostly choose more than one hidden layer. Setting $\mathcal{N}^N_q := \mathcal{N}^N_{1,q}$, we can explicitly write this set as
$$ \mathcal{N}^N_q = \Big\{ \hat{\mathbf{x}} \mapsto c_0 + \sum_{i=1}^q c_i\, \sigma\big( \langle \ell_i, \mathbb{X}^{\le N}(\hat{\mathbf{x}}) \rangle + b_i \big) \;:\; c_i, b_i \in \mathbb{R},\ \ell_i \in \mathcal{W}_N \Big\}, \qquad (7) $$
where $\mathcal{W}_N$ denotes the set of all linear functionals on the $N$-step truncated signature, and the arguments $\hat{\mathbf{x}}$ are stopped $\alpha$-Hölder rough paths, see [BPS23, Section 2] for details. For fixed $q$ and any pair $(c, \ell)$, where $c = (c_0, \dots, c_q, b_1, \dots, b_q)$ collects the affine coefficients and $\ell = (\ell_1, \dots, \ell_q)$ are linear functionals on the $N$-step signature, we denote by $\phi^{c,\ell}$ the corresponding functional in the set $\mathcal{N}^N_q$. Revisiting the Longstaff-Schwartz algorithm presented in Section 2.1, it is tempting to define a sequence of deep stopping times, recursively defined similarly to (3), but with the conditional expectations replaced by $\phi^{c_j, \ell_j}\big( \hat{\mathbf{X}}|_{[0,t_j]} \big)$, where for $j = J-1, \dots, 0$,
$$ (c_j, \ell_j) \in \arg\min_{(c,\ell)} \mathbb{E}\Big[ \big( Z_{\tau_{j+1}} - \phi^{c,\ell}\big( \hat{\mathbf{X}}|_{[0,t_j]} \big) \big)^2 \Big]. \qquad (8) $$
In words, we recursively learn the continuation values as neural networks of truncated signatures. Similarly, we approximate the Doob martingale from Section 2.2 by deep martingales $M^{c,\ell}_t = \int_0^t \phi^{c,\ell}\big( \hat{\mathbf{X}}|_{[0,s]} \big)\, dW_s$, where
$$ (c^\ast, \ell^\ast) \in \arg\min_{(c,\ell)} \mathbb{E}\Big[ \max_{0 \le j \le J} \big( Z_{t_j} - M^{c,\ell}_{t_j} \big) \Big]. \qquad (9) $$
Thanks to the universality of both signatures and DNNs, the following result should not come as a surprise. As in the last section, we adopt the assumptions described at the beginning of [BPS23, Section 3.1].
Proposition 3.3
Assuming that the filtration is Brownian, that is, generated by the Brownian motion $W$, and that the activation function $\sigma$ is non-polynomial, the deep continuation values defined via (8) converge in $L^2$ to the true continuation values, and the deep dual values defined via (9) converge to $y_0$, as the truncation level $N$ and the number of neurons $q$ tend to infinity.
An outline of the proof can be found in Appendix A. Similar to the linear case, in practice we solve the sample average approximation of (8)-(9), that is
$$ (c_j, \ell_j) \in \arg\min_{(c,\ell)} \frac{1}{M} \sum_{m=1}^M \Big( Z^{(m)}_{\tau^{(m)}_{j+1}} - \phi^{c,\ell}\big( \hat{\mathbf{X}}^{(m)}|_{[0,t_j]} \big) \Big)^2, \qquad (c^\ast, \ell^\ast) \in \arg\min_{(c,\ell)} \frac{1}{M} \sum_{m=1}^M \max_{0 \le j \le J} \Big( Z^{(m)}_{t_j} - M^{c,\ell,(m)}_{t_j} \Big), \qquad (10) $$
for i.i.d. sample paths $\hat{\mathbf{X}}^{(m)}$, $m = 1, \dots, M$. The latter (non-convex) optimization problems can then be solved using stochastic gradient descent. More details about this and the DNN architecture will be discussed in Section 4. Let us conclude this section with a remark.
Remark 3.4
To address the complexity issue coming from the size of the signature, it can be helpful to consider the so-called log-signature. The latter is defined by $\log(\mathbb{X}_{<\infty})$, where $\log$ is the bijection given by the formal power series $\log(\mathbf{x}) = \sum_{n \ge 1} \frac{(-1)^{n-1}}{n} (\mathbf{x} - 1)^{\otimes n}$ on the tensor algebra.
It can be shown that the dimension of the truncated log-signature grows much more slowly than that of the truncated signature, see, e.g., [Rei17], and denoting by $\exp$ the inverse of $\log$, we have $\mathbb{X}_{<\infty} = \exp\big( \log(\mathbb{X}_{<\infty}) \big)$. Although the log-signature itself is no longer universal, Proposition 3.3 remains true for log-signature DNNs, that is, replacing the signature in (7) by its log-signature transform. In Remark A.1, after the proof of Proposition 3.3, we briefly explain why this is true and how to modify the proof.
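The dimension reduction just mentioned can be seen directly with the iisignature package; in the following small sketch (parameters are illustrative), `prepare` builds the basis data needed by `logsig`.

```python
import numpy as np
import iisignature

d, depth = 2, 4
s = iisignature.prepare(d, depth)      # one-off preparation for dimension d and level N = depth
path = np.random.default_rng(2).standard_normal((101, d)).cumsum(axis=0)

logsig = iisignature.logsig(path, s)   # log-signature in a Lyndon-word basis
sig = iisignature.sig(path, depth)     # ordinary truncated signature
print(len(logsig), len(sig))           # 8 vs 30 entries for d = 2, depth = 4
```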
3.3 Signature-kernel learning
Let us start by recalling some notions from general RKHS theory, see, e.g., [Ste08]. Given a feature map $\Phi \colon \mathcal{X} \to \mathcal{H}$, where $\mathcal{H}$ is a Hilbert space (the so-called feature space), one can define the associated kernel
$$ k(x, y) = \big\langle \Phi(x), \Phi(y) \big\rangle_{\mathcal{H}}, \qquad x, y \in \mathcal{X}. \qquad (11) $$
If the kernel is positive-definite, there exists a unique RKHS $\mathcal{H}_k$ with reproducing kernel $k$, see [Ste08, Theorem 4.21], in the sense that $k$ reproduces elements of $\mathcal{H}_k$, that is, $f(x) = \langle f, k(x, \cdot) \rangle_{\mathcal{H}_k}$ for all $f \in \mathcal{H}_k$, and it generates the Hilbert space, $\mathcal{H}_k = \overline{\mathrm{span}}\{ k(x, \cdot) : x \in \mathcal{X} \}$. In our case, the feature map maps a path to its path signature, and therefore the signature kernel is naturally defined by
$$ k_{\mathrm{sig}}(x, y) = \big\langle \mathbb{X}_{<\infty}, \mathbb{Y}_{<\infty} \big\rangle, $$
where $\langle \cdot, \cdot \rangle$ is the natural extension of the inner product on $\mathbb{R}^d$ to the tensor algebra, see [CS24, Definition 2.1.1]. It has been shown in [SCF+21] that the signature kernel solves a Goursat-type PDE with respect to the time variables. This kernel trick allows us to evaluate the signature kernel, which represents the whole (untruncated) signature, by numerically solving a second-order PDE. An important extension, designed for rougher inputs, was discussed in the recent work [LL24].
An important observation is that universality of the signature feature map is equivalent to universality of the signature kernel, see for instance [CO22, Section 6], but also [CS24, Chapter 2.1]. Returning to optimal stopping, this motivates us to seek the functionals in the primal and dual problems as solutions of the regularized minimization problems on the RKHS
$$ \min_{f \in \mathcal{H}_k} \; \mathcal{L}(f) + \lambda\, \| f \|^2_{\mathcal{H}_k}, \qquad (12) $$
where the loss $\mathcal{L}$ is either the primal or the dual loss function. More precisely, in the primal case, at each exercise date $t_j$, the loss is simply the mean-square error $\mathcal{L}(f) = \mathbb{E}\big[ \big( Z_{\tau_{j+1}} - f( X|_{[0,t_j]} ) \big)^2 \big]$. In the dual case, it is given by $\mathcal{L}(f) = \mathbb{E}\big[ \max_{0 \le j \le J} \big( Z_{t_j} - M^f_{t_j} \big) \big]$, where $M^f_t = \int_0^t f( X|_{[0,s]} )\, dW_s$. Then, replacing the losses by their sampled versions, the general representer theorem [SHS01, Theorem 1] reveals that the minimizers are of the form $f(\cdot) = \sum_{m=1}^M \alpha_m\, k_{\mathrm{sig}}\big( X^{(m)}, \cdot \big)$. Thus, let us define the minimizers
$$ f_j \in \arg\min_{f \in \mathcal{H}_k} \frac{1}{M} \sum_{m=1}^M \Big( Z^{(m)}_{\tau^{(m)}_{j+1}} - f\big( X^{(m)}|_{[0,t_j]} \big) \Big)^2 + \lambda\, \| f \|^2_{\mathcal{H}_k} \qquad (13) $$
for all exercise dates $t_j$; by the representer theorem, $f_j(\cdot) = \sum_{m=1}^M \alpha^{(j)}_m\, k_{\mathrm{sig}}\big( X^{(m)}|_{[0,t_j]}, \cdot \big)$. Similarly,
$$ f^\ast \in \arg\min_{f \in \mathcal{H}_k} \frac{1}{M} \sum_{m=1}^M \max_{0 \le j \le J} \Big( Z^{(m)}_{t_j} - M^{f,(m)}_{t_j} \Big) + \lambda\, \| f \|^2_{\mathcal{H}_k} \qquad (14) $$
for the kernel martingales $M^{f}_t = \int_0^t f\big( X|_{[0,s]} \big)\, dW_s$. The algorithms in Section 2 can then easily be translated to the kernel-learning framework by replacing the minimizers in the second step accordingly.
Remark 3.5
We expect that the convergence results of Proposition 3.2 can be obtained analogously in this kernel-learning framework, when sending $M \to \infty$ and $\lambda \to 0$. For instance, in the dual case it follows from the universality of the signature kernel that the closure of the span of the family of kernel martingales corresponds to the relevant space of square-integrable martingales. For the sample average approximation and the existence of minimizers, one can then argue similarly to [BPS23, Appendix A.2] and use the representer theorem to conclude.
Remark 3.6
The obvious advantage of this method is that no truncation of the feature map is necessary, and therefore, theoretically, it does not suffer from a loss of information, see Remark 3.1. Modulo the evaluation of the signature kernel, the approximation error only depends on the regularization $\lambda$ and the number of samples $M$. Moreover, at least for the mean-square loss, it seems possible to theoretically study convergence rates of such algorithms. This, however, involves a more precise understanding of the signature RKHS and of integral operators therein, a problem that is outside the scope of this paper, but planned for future research.
Having mentioned the theoretical advantages of the kernel method, evaluating the signature kernel, which means solving the Goursat PDE [SCF+21], becomes the main difficulty of this procedure. Due to the recursive nature of the regression problems in the Longstaff-Schwartz algorithm in Section 2.1, typically large sample sizes are required to ensure stability. Moreover, as we are especially interested in paths of low regularity, a fine discretization grid is required to solve the Goursat PDE. Combining these observations with the fact that the kernel ridge regressions involve computing and inverting the $M \times M$ Gram matrices, it becomes clear that a reduction of the computational cost is required.
To this end, we propose the following approach, related to the so-called Nyström method for kernel learning [DMC05, WS00]. We randomly select $m \ll M$ subsamples, denoting by $I$ the set of the selected indices, and in both the primal and dual optimization problems we restrict the minimization to functions of the form $f(\cdot) = \sum_{i \in I} \alpha_i\, k_{\mathrm{sig}}\big( X^{(i)}, \cdot \big)$. It follows that we only need to compute the reduced matrices
$$ K_{Mm} = \big( k_{\mathrm{sig}}( X^{(l)}, X^{(i)} ) \big)_{1 \le l \le M,\; i \in I} \quad\text{and}\quad K_{mm} = \big( k_{\mathrm{sig}}( X^{(i)}, X^{(i')} ) \big)_{i, i' \in I}. $$
For the primal case, that is, for the kernel ridge regression, we can additionally observe that the explicit solution is given by
$$ \alpha = \big( K_{Mm}^{\top} K_{Mm} + \lambda\, K_{mm} \big)^{-1} K_{Mm}^{\top}\, Y, $$
where $Y$ denotes the vector of regression targets $\big( Z^{(l)}_{\tau^{(l)}_{j+1}} \big)_{l \le M}$ at the given exercise date. Further details about the implementation will be presented in the next section.
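The closed form above translates directly into code. The following is a minimal sketch of the Nyström-restricted kernel ridge regression under our notation; the function name and argument layout are ours, not the accompanying repository's.

```python
import numpy as np

def nystrom_krr(K_Mm, K_mm, y, lam):
    """K_Mm : Gram block k(x_l, x_i), l = 1..M samples, i in the subsample set I (m columns)
       K_mm : Gram block k(x_i, x_i'), i, i' in I
       y    : regression targets (here: realized continuation cash-flows)
       lam  : ridge-regularization parameter."""
    A = K_Mm.T @ K_Mm + lam * K_mm        # (m, m) system instead of an (M, M) one
    alpha = np.linalg.solve(A, K_Mm.T @ y)
    return alpha                          # predictions on new data: K_new_m @ alpha
```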
4 Pricing American options under rough volatility
In this section we test the signature-based procedures for the problem of pricing American options in rough volatility models. In such models, the asset-price dynamics is given by
$$ dS_t = r\, S_t\, dt + S_t\, \sigma_t \big( \rho\, dW_t + \sqrt{1 - \rho^2}\, dW^{\perp}_t \big), \qquad (15) $$
where $W$ and $W^{\perp}$ are two independent Brownian motions, $\rho \in [-1, 1]$ is a correlation parameter, the volatility $\sigma$ is adapted and continuous, and $r$ is the interest rate. We denote by $X = \log(S)$ the log-price. Compared to classical diffusion models, we are interested in volatility processes driven by fractional Brownian motion, turning both the volatility and the price into non-Markovian processes. We will focus on the following two, arguably most popular, examples of rough volatility specifications.
Example 1
(Rough Bergomi [BFG16]) The rough Bergomi volatility, resp. its spot variance $v = \sigma^2$, is given by
$$ v_t = \xi_0(t)\, \mathcal{E}\Big( \eta\, \sqrt{2H} \int_0^t (t - s)^{H - 1/2}\, dW_s \Big), $$
where $\mathcal{E}$ denotes the stochastic exponential, $\xi_0$ is the initial forward variance curve, $\eta > 0$, and $H \in (0, 1/2)$ is the Hurst parameter. In all the numerical examples, we use one fixed choice of $\xi_0$ and $\eta$.
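For illustration only, the following rough-and-ready simulation of the rough Bergomi model discretizes the Volterra process with a simple Riemann-sum kernel; the experiments in this paper instead rely on the hybrid-scheme implementation of [MP18] mentioned in Section 4.1. All parameter values (`xi0`, `eta`, `rho`, `H`) are placeholders, not the values used in the tables.

```python
import numpy as np

def simulate_rough_bergomi(n_paths, n_steps, T=1.0, H=0.1, eta=1.9, rho=-0.9,
                           xi0=0.04, S0=100.0, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.linspace(dt, T, n_steps)
    dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)     # drives the volatility
    dWp = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)    # orthogonal noise
    # Riemann-Liouville fBm  W~_t = sqrt(2H) int_0^t (t-s)^{H-1/2} dW_s  (crude left-point kernel)
    kernel = np.zeros((n_steps, n_steps))
    for k in range(n_steps):
        kernel[k, :k + 1] = np.sqrt(2 * H) * (t[k] - t[:k + 1] + dt) ** (H - 0.5)
    W_frac = dW @ kernel.T
    v = xi0 * np.exp(eta * W_frac - 0.5 * eta ** 2 * t ** (2 * H))  # spot variance on the grid
    v_prev = np.hstack([np.full((n_paths, 1), xi0), v[:, :-1]])     # left-point variance for Euler
    dB = rho * dW + np.sqrt(1 - rho ** 2) * dWp                     # Brownian motion driving S
    logS = np.log(S0) + np.cumsum(np.sqrt(v_prev) * dB - 0.5 * v_prev * dt, axis=1)
    return np.exp(logS), v
```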
Example 2
(Rough Heston [EER19]) The rough Heston volatility, resp. its variance $v$, is defined as the (weak) solution to the Volterra-type CIR equation
$$ v_t = v_0 + \int_0^t \frac{(t-s)^{H - 1/2}}{\Gamma(H + 1/2)} \Big( \kappa\, (\theta - v_s)\, ds + \nu\, \sqrt{v_s}\, dW_s \Big). \qquad (16) $$
In all the numerical examples, we use one fixed set of parameters $(v_0, \kappa, \theta, \nu)$.
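A naive Euler-type discretization of the Volterra CIR equation (16) is sketched below, purely to illustrate the non-Markovian convolution structure; the experiments in this paper use the weak simulation scheme of [BB24] instead, and the parameter names and values here are generic placeholders.

```python
import numpy as np
from math import gamma

def simulate_rough_heston_variance(n_paths, n_steps, T=1.0, H=0.1,
                                   V0=0.02, kappa=0.3, theta=0.02, nu=0.3, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    V = np.full((n_paths, n_steps + 1), V0)
    t = np.linspace(0.0, T, n_steps + 1)
    for k in range(1, n_steps + 1):
        # fractional kernel (t_k - t_j)^{H-1/2} / Gamma(H+1/2), evaluated at the left points
        ker = (t[k] - t[:k] + dt) ** (H - 0.5) / gamma(H + 0.5)
        drift = (kappa * (theta - V[:, :k])) @ (ker * dt)
        diff = (nu * np.sqrt(np.maximum(V[:, :k], 0.0)) * dW[:, :k]) @ ker
        V[:, k] = np.maximum(V0 + drift + diff, 0.0)   # truncate at zero to keep V >= 0
    return V
```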
The problem of pricing American (resp. Bermudan) options consists of solving the discrete optimal stopping problem
$$ \sup_{\tau \in \mathcal{S}} \mathbb{E}\big[ e^{-r \tau}\, g(S_\tau) \big], \qquad (17) $$
where $\mathcal{S}$ is the set of stopping times taking values in the set of possible exercise dates $\{ t_1, \dots, t_J \} \subset [0, T]$, and $g$ is the payoff function. We will focus on so-called put options, which correspond to the payoff function $g(s) = (K - s)^{+}$ for a given strike price $K$.
4.1 Implementation details
The code accompanying this section can be found at https://github.com/lucapelizzari/Optimal_Stopping_with_signatures. Before discussing numerical experiments, let us specify in detail how the procedures introduced in Section 3 are implemented. All the signatures are computed using the package iisignature, see [RG18]. For the signature kernel, we rely on the PDE solvers from the package sigkernel to obtain the kernel as a finite-difference solution of the Goursat PDE derived in [SCF+21]. For the simulation of the rough Bergomi model, we use https://github.com/ryanmccrickerd/rough_bergomi, related to [MP18], and for the simulation of the rough Heston model https://github.com/SimonBreneis/approximations_to_fractional_stochastic_volterra_equations, related to [BB24].
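As an illustration of the kernel computation just mentioned, the following sketch builds a signature-kernel Gram matrix with the sigkernel package; the static kernel, the `dyadic_order` and the toy input paths are our own choices and need not coincide with the settings used in the accompanying repository.

```python
import torch
import sigkernel

paths = torch.rand(50, 120, 2, dtype=torch.float64)      # 50 time-augmented paths (t, X_t)
static_kernel = sigkernel.LinearKernel()                  # inner product on the state space
sig_kernel = sigkernel.SigKernel(static_kernel, dyadic_order=1)

gram = sig_kernel.compute_Gram(paths, paths, sym=True)    # (50, 50) signature-kernel Gram matrix
print(gram.shape)
```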
Linear signature stopping
This approach was already studied in detail in [BPS23], and we adopt the choices made there. In particular, the primal and the dual procedure each use the signature of a suitably augmented path, and in both cases we add Laguerre polynomials to the set of basis functions. For more details about these choices we refer to [BPS23, Section 4].
Deep signature stopping
Here we make a slightly different choice of basis compared to the linear approach, exploiting the universality of both the DNNs and the signature. First, it was observed, for instance in [BBFP23], that the price has a partially Markovian nature, and depends on the past only through the non-Markovian volatility process. Leaving rigorous arguments to [BBFP23], one can for instance note that the future evolution of the model depends on the past only through the current state and the history of the volatility, so that conditional expectations can be written as measurable functions of the current state and of the volatility path. To capture the relevant memory of the dynamics, in the primal case we lift the variance-augmented path to its standard (truncated) signature. In the dual case we simply lift a time-augmented path. The partial Markovianity then motivates applying the DNNs to the truncated signature together with the current Markovian state.
Let us now specify the DNN architecture: for the Longstaff-Schwartz algorithm, similarly to [LL21], we rely on the Leaky-ReLU activation function and the Adam optimizer to fit the models at each exercise date, with a fixed batch size and learning rate. Inspired by [LL21], we use a larger number of epochs to learn the conditional expectation at the last exercise date (the first regression in the algorithm), and then use the trained weights to initialize the DNN at the next exercise date. Doing so allows us to reduce the number of epochs for the remaining dates, which in turn reduces the computation time significantly. It is sufficient to consider a small number of hidden layers with a moderate number of neurons each. For the dual algorithm we rely on the classical ReLU activation function, again using the Adam optimizer with fixed batch size and learning rate to fit the model. Compared to the primal algorithm, deeper neural networks are required, which is due to the non-linear nature of the integrand of the Doob martingale.
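The primal architecture just described can be sketched as follows; the deep-learning framework (PyTorch here), the layer sizes, the learning rate and the input layout (truncated signature concatenated with the current state) are illustrative assumptions, with the actual settings fixed in the paper's repository.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SignatureDNN(nn.Module):
    """Feed-forward net acting on truncated signatures plus the current (Markovian) state."""
    def __init__(self, sig_dim, state_dim, width=64, n_hidden=2):
        super().__init__()
        layers, d_in = [], sig_dim + state_dim
        for _ in range(n_hidden):
            layers += [nn.Linear(d_in, width), nn.LeakyReLU()]
            d_in = width
        layers += [nn.Linear(d_in, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, sig, state):
        return self.net(torch.cat([sig, state], dim=-1)).squeeze(-1)

def fit_continuation_value(model, sig, state, target, epochs=30, lr=1e-3, batch=256):
    """One regression step of the Longstaff-Schwartz recursion: MSE fit of the continuation value."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(TensorDataset(sig, state, target), batch_size=batch, shuffle=True)
    for _ in range(epochs):
        for s, x, y in loader:
            opt.zero_grad()
            loss = ((model(s, x) - y) ** 2).mean()
            loss.backward()
            opt.step()
    return model
```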
Signature-kernel stopping
For the kernel ridge regressions in the methods based on the Longstaff-Schwartz algorithm, we choose the kernel of the time-augmented path. At each exercise date $t_j$, we randomly select $m$ subsamples, where the $i$-th sample is selected with a probability determined by the diagonal entry $k_{\mathrm{sig}}\big( X^{(i)}|_{[0,t_j]}, X^{(i)}|_{[0,t_j]} \big)$, noting that this only requires computing the diagonal of the Gram matrix.
For the dual procedure, we randomly select the subsamples according to probabilities built on the quadratic variation of the kernel martingales, that is, the $i$-th sample is selected with a probability determined by the realized quadratic variation of the corresponding kernel martingale.
Having selected the subsamples, we noticed that the best performance is achieved when solving (14) using a simple version of the neural-network technology described before, with one hidden layer and ReLU activation.
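A minimal sketch of this subsampling step is given below. We assume, purely for illustration, that the selection probabilities are taken proportional to the given per-sample weights (diagonal Gram entries in the primal case, realized quadratic variations in the dual case); the text above only states that these quantities determine the probabilities. The function name is ours.

```python
import numpy as np

def select_subsamples(weights, n_sub, seed=0):
    """weights : per-sample weights, e.g. diagonal Gram entries k(x_i, x_i) (primal case)
                 or realized quadratic variations of the kernel martingales (dual case)
       n_sub   : number m of subsamples to keep."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()                                   # assumed: probabilities proportional to weights
    rng = np.random.default_rng(seed)
    return rng.choice(len(p), size=n_sub, replace=False, p=p)
```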
4.2 American put option prices in rough Bergomi and rough Heston
In this section we compare all the signature-based methods for pricing Bermudan put options. In Tables 1-2, we present results for the rough Bergomi model with two different Hurst parameters and for a range of strikes. We fix the contract duration, the number of early-exercise dates, the interest rate and the correlation. In the first column we report the European price, that is, the price without early-exercise opportunity. The second column corresponds to lower bounds obtained in [GMZ20], see also [BTW20]. Finally, in the remaining columns, we compare the point estimate, which is the biased estimator from the Longstaff-Schwartz algorithm based on the training samples, with the lower and upper bounds obtained from independent testing samples. While the intervals in the linear case are taken from [BPS23], we derive the intervals for the deep-signature method using a fixed truncation level, a fixed discretization for the signature and the martingales, and equally many samples for training and testing, to ensure stability of the procedure. For the signature kernel, a smaller number of samples is used, with a fixed number of discretization points for solving the Goursat PDE. For each strike, we highlight the best lower, resp. upper, bound, and in the last column we present the relative duality gap with respect to these two values. In Table 3 we present the same considerations for the rough Heston model, with parameters motivated in [BB24] and a smaller regularization parameter. The trainings were repeated several times and the overall Monte-Carlo error is small. Finally, in Table 4 we compare the computational time (in seconds) of one training each, as well as of the offline computation of the signature, resp. the signature kernel.
In Figure 1 we present the dependence of the lower bounds, resp. the point estimates, on the regularization parameter. The optimal choice is the one yielding the highest possible lower bounds (blue lines) while keeping the distance to the point estimate reasonable, since a larger difference suggests overfitting. According to this criterion, we choose a different value of the regularization parameter for the rough Bergomi and for the rough Heston model.
First and most visibly, we observe that the deep-signature method delivers significantly better upper bounds than both other methods, with reasonable training and offline costs, see Table 4. Moreover, the deep method also mostly produces the smallest duality gap, that is, the smallest distance between lower and upper bound. However, one can also observe that for both the point estimates and the lower bounds, the linear and kernel methods, which are based on conventional convex optimization problems, slightly outperform the deep-signature method in general. It should be noted, however, that this comes at the price of higher offline computational costs, see Table 4, where we recall that the DNNs allow us to lift only the augmented volatility process. As discussed in [BPS23], for the linear procedure it is necessary to lift a three-dimensional path and to add polynomials of the state, which explains the increased computational time in the linear case. While the signature-kernel method improves on the linear approach, the complexity of computing the Gram matrix, i.e. of solving the Goursat PDE, see Table 4, leaves the signature-kernel procedure as the most expensive approach for solving the optimal stopping problem. To handle large data sets, it is therefore necessary to address the complexity issue for the kernel; a potential direction could be the random Fourier signature features of [TOS23], but this is outside the scope of this paper.
In summary, for our goal of achieving minimal duality gaps for arbitrarily large sample sizes, the best possible results can be achieved by combining a Longstaff-Schwartz algorithm based on the linear signature or the signature kernel, exploiting the simplicity of the training procedure, with the deep-signature approach for the upper bounds, which captures the non-linearity of the Doob-martingale integrand. If one has to choose a single method, we recommend the deep-signature method, simply because it shows the smallest duality gaps and overall computational times.
Strike | European | [GMZ20] | Linear [BPS23] point est. | Linear C.I. | Deep sig. point est. | Deep sig. C.I. | Sig. kernel point est. | Sig. kernel C.I. | Best gap
---|---|---|---|---|---|---|---|---|---
70 | 0.78 | 1.84 | 1.847 | [1.83, 1.90] | 1.831 | [1.82, 1.85] | 1.835 | [1.83, 1.89] | 1.08 % |
80 | 1.55 | 3.10 | 3.101 | [3.08, 3.19] | 3.086 | [3.07, 3.12] | 3.080 | [3.08, 3.16] | 1.28 % |
90 | 3.11 | 5.08 | 5.086 | [5.07, 5.17] | 5.058 | [5.04, 5.08] | 5.061 | [5.06, 5.15] | 0.78 % |
100 | 6.10 | 8.19 | 8.188 | [8.15, 8.27] | 8.132 | [8.11, 8.17] | 8.159 | [8.15, 8.26] | 0.24 % |
110 | 11.41 | 13.00 | 12.991 | [12.97, 13.09] | 12.922 | [12.87, 12.98] | 12.944 | [12.94, 13.03] | 0.07 % |
120 | 18.65 | 20.28 | 20.219 | [20.21, 20.51] | 20.161 | [20.14, 20.25] | 20.162 | [20.16, 20.27] | 0.19 % |
Strike | European | [GMZ20] | Linear [BPS23] point est. | Linear interval | Deep sig. point est. | Deep sig. interval | Sig. kernel point est. | Sig. kernel interval | Best gap
---|---|---|---|---|---|---|---|---|---
70 | 0.88 | 1.88 | 1.929 | [1.92, 1.99] | 1.921 | [1.91, 1.95] | 1.926 | [1.92, 2.01] | 1.53 % |
80 | 1.67 | 3.25 | 3.289 | [3.27, 3.37] | 3.281 | [3.26, 3.31] | 3.286 | [3.27, 3.36] | 1.20 % |
90 | 3.11 | 5.34 | 5.394 | [5.37, 5.50] | 5.383 | [5.35, 5.44] | 5.397 | [5.37, 5.53] | 1.28 % |
100 | 5.83 | 8.53 | 8.586 | [8.57, 8.77] | 8.555 | [8.52, 8.66] | 8.589 | [8.56, 8.75] | 1.03 % |
110 | 10.94 | 13.28 | 13.314 | [13.29, 13.59] | 13.281 | [13.21, 13.45] | 13.326 | [13.27, 13.46] | 1.18 % |
120 | 18.37 | 20.20 | 20.267 | [20.24, 20.66] | 20.163 | [20.14, 20.63] | 20.276 | [20.24, 20.44] | 0.97 % |
Strike | European | Linear [BPS23] point est. | Linear interval | Deep sig. point est. | Deep sig. interval | Sig. kernel point est. | Sig. kernel interval | Best gap
---|---|---|---|---|---|---|---|---
70 | 0.16 | 0.438 | [0.42, 0.53] | 0.426 | [0.42, 0.44] | 0.430 | [0.43, 0.46] | 2.27 % |
80 | 0.42 | 0.966 | [0.94, 1.11] | 0.961 | [0.95, 0.97] | 0.960 | [0.96, 1.01] | 1.03 % |
90 | 1.02 | 2.034 | [1.99, 2.28] | 2.031 | [2.02, 2.07] | 2.027 | [2.02, 2.11] | 2.42 % |
100 | 2.66 | 4.245 | [4.16, 4.66] | 4.248 | [4.24, 4.36] | 4.219 | [4.21, 4.44] | 2.75 % |
110 | 8.05 | 9.715 | [9.63, 10.52] | 9.660 | [9.66, 10.42] | 9.693 | [9.68, 10.14] | 4.53 % |
120 | 16.68 | 19.500 | [19.49, 20.04] | 19.487 | [19.48, 20.00] | 19.502 | [19.50, 19.56] | 0.30 % |
Method | Training primal (s) | Training dual (s) | Offline costs (s)
---|---|---|---
Linear | 1.73 | 610.10 | 508.32 |
Deep | 100.23 | 520.34 | 50.92 |
Kernel | 1.69 | 390.98 | 5564.53 |
On the other hand, it is important to note that we observe the primal signature-kernel method to be more stable than the deep-signature method with respect to smaller data sets. This unsurprising advantage of the kernel method can become important when one works with real-world rather than synthetic data, where the number of samples cannot be arbitrarily large, in which case the signature kernel might be favorable. This is illustrated in Figure 2 (a) for the rough Bergomi model, where we can observe more severe instability of the neural-network training for smaller sample sizes, both in the sense of smaller lower bounds and higher training variance. The Monte-Carlo errors reflect this variance, which is obtained after independently repeating the training several times for each sample size. For completeness, we show a similar plot for the dual case in Figure 2 (b), where it is confirmed again that the deep-signature method clearly outperforms the signature-kernel approach in every data-size regime. The variance in the dual procedure is much smaller, as we only solve one optimization problem during training, rather than one at each exercise date.
In Figure 3 we compare the option prices in the rough Bergomi model as a function of the correlation parameter $\rho$, for two very different Hurst regimes. The blue region reflects the pricing interval derived from the deep-signature approach, and once again we present the point estimates of all methods. In both cases we can observe that the pricing methods perform worse – in the sense of the duality gap – as we approach the uncorrelated case $\rho = 0$, while the intervals get tighter in more strongly correlated regimes.
In Figure 4 we compare the duality gap of the deep-signature method with respect to the number of discretization points and different Hurst regimes. More precisely, we consider the relative duality gap, i.e. the difference between the upper and the lower bound relative to the price estimate,
where the upper and lower bounds are obtained from the deep-signature method in the rough Bergomi model with Hurst parameter $H$ and a given number of discretization points. All other parameters are chosen as in Tables 1-2. We can observe that, in general, higher Hurst parameters lead to smaller duality gaps and also to faster convergence with respect to the number of discretization points. Notice that, apart from optimization and Monte-Carlo errors, the two main sources of the gap here are the discretization error from approximating iterated and stochastic integrals, and the non-Markovianity in rough regimes. When $H$ is close to $1/2$, a clear convergence rate emerges, and since this regime is "almost Markovian" (and Markovian for $H = 1/2$), the gap can be interpreted as the strong error occurring when approximating the stochastic integrals with an Euler scheme and a rough integrand. In general, however, it is not possible to separate these two error sources, so it becomes more difficult to interpret the observed convergence rate.
Finally, in Figure 5
we compare the importance of the different entries of the truncated signature in the deep-signature pricing methodologies. This is simply done by first training the primal and dual models; then, for each fixed feature, we shuffle the corresponding samples and measure the error between the point estimate and the shuffled estimate. This procedure is repeated independently several times for each feature, and in Figure 5 we plot the averaged errors. Notice that the higher this value is, the more influence the corresponding feature has on the computed option price. In both methods, we observe that the "Markovian" state features appear to have the biggest influence. Apart from that, we can observe that a few low-order iterated integrals involving time and the volatility, resp. the driving noise, seem to carry the most important information about the past of the volatility.
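The permutation-type importance measure just described can be sketched as follows; the estimator interface, the number of repetitions and all names are placeholders, not the paper's implementation.

```python
import numpy as np

def feature_importance(price_estimator, features, n_repeats=10, seed=0):
    """price_estimator : callable mapping a feature array (M, K) to a price (point) estimate
       features        : array of signature features, one row per sample path."""
    rng = np.random.default_rng(seed)
    base = price_estimator(features)
    K = features.shape[1]
    scores = np.zeros(K)
    for k in range(K):
        errs = []
        for _ in range(n_repeats):
            shuffled = features.copy()
            rng.shuffle(shuffled[:, k])        # break the link between feature k and the price
            errs.append(abs(price_estimator(shuffled) - base))
        scores[k] = np.mean(errs)
    return scores                              # larger score = more influential feature
```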
Appendix A Proof for deep-signature optimal stopping
Proof of Proposition 3.3.
The first step is to generalize the global approximation result [BPS23, Theorem 2.8]. Leaving details to [BPS23, Section 2], we are given a measure space $(\hat{\Lambda}, \mathcal{B}(\hat{\Lambda}), \mu)$, where $\hat{\Lambda}$ is the space of stopped, time-augmented, geometric $\alpha$-Hölder rough paths ([BPS23, Definition 2.1]), $\mathcal{B}(\hat{\Lambda})$ is the Borel $\sigma$-algebra, and $\mu$ is a measure fulfilling certain assumptions ([BPS23, Assumption 2.6]). Thanks to [BPS23, Theorem 2.8], for any $F \in L^2(\mu)$ and $\varepsilon > 0$, we can find a linear functional $\ell$ on the signature such that $\| F - \langle \ell, \mathbb{X}^{\le N} \rangle \|_{L^2(\mu)} < \varepsilon$, where $N$ denotes the truncation level of $\ell$. But due to the $L^2$-universality of DNNs, see [LLPS93, Proposition 2], for $q$ large enough we can find $\phi \in \mathcal{N}^N_q$ such that $\| \langle \ell, \mathbb{X}^{\le N} \rangle - \phi \|_{L^2(\mu)} < \varepsilon$. An application of the triangle inequality then proves that for any $F \in L^2(\mu)$, we can find a sequence of signature DNNs converging to $F$ in $L^2(\mu)$ as $N, q \to \infty$.
Now, in the primal case, it is in fact possible to show that for all exercise dates $t_j$, the deep continuation values $\phi^{c_j, \ell_j}$ converge to $\mathbb{E}[ Z_{\tau_{j+1}} \mid \mathcal{F}_{t_j} ]$ in $L^2$ as $N, q \to \infty$, from which the result follows immediately. Using backward induction, one notices that the claim clearly holds true for $j = J$. Assuming that the claim holds for $j + 1$, we denote by $\phi^{c_j, \ell_j}$ the element corresponding to the minimizer in (8). We can estimate exactly as it was done in [BPS23, Appendix A.1] up to equation (A.1), to reduce the claim to the convergence of the $L^2$-distance between $\phi^{c_j, \ell_j}$ and the continuation value $\mathbb{E}[ Z_{\tau_{j+1}} \mid \mathcal{F}_{t_j} ]$.
There is, however, a subtle difference here compared to the conclusion used in the linear case: the space $\mathcal{N}^N_q$ is not convex, which means that the minimizer with respect to (8) is not necessarily the orthogonal projection of $Z_{\tau_{j+1}}$ onto this set. We can still conclude by introducing an auxiliary minimizer defined as in (8), but with $Z_{\tau_{j+1}}$ replaced by the continuation value $\mathbb{E}[ Z_{\tau_{j+1}} \mid \mathcal{F}_{t_j} ]$. A short chain of estimates then yields the claim, where we use the triangle inequality in the first and third inequality, the fact that the auxiliary minimizer minimizes the distance to the continuation value in the second inequality, and the contraction property of conditional expectations in the last inequality. The resulting last term converges by the induction hypothesis, while the first one converges thanks to the global approximation result as $N, q \to \infty$.
Finally, let us also outline the proof of the dual case, which closely follows the techniques used in [BPS23, Appendix A.2]. First, an application of the global approximation result allows us to equivalently write (4) as
$$ y_0 = \inf_{N, q} \; \inf_{(c, \ell)} \; \mathbb{E}\Big[ \max_{0 \le j \le J} \big( Z_{t_j} - M^{c,\ell}_{t_j} \big) \Big]. \qquad (18) $$
Indeed, leaving the detailed argument to the proof of [BPS23, Theorem 3.7], this follows from the following observation: as already discussed at the beginning of Section 2.2, since the filtration is Brownian, by martingale representation it is enough to minimize (4) over progressively measurable integrands, which in turn can be approximated arbitrarily well by signature DNNs thanks to the global approximation result. Next, consider the dual value attained by the deep martingale $M^{c^\ast, \ell^\ast}$, where we recall that $(c^\ast, \ell^\ast)$ was defined as the minimizer in (9), and note that this value is an upper bound for $y_0$. Then, by (18), for every $\varepsilon > 0$ there exists a pair $(c, \ell)$ whose dual value exceeds $y_0$ by at most $\varepsilon$, so that the value attained by the minimizer $(c^\ast, \ell^\ast)$ is squeezed between $y_0$ and $y_0 + \varepsilon$, where the last inequality holds for $N$ and $q$ large enough. Since $\varepsilon > 0$ was arbitrary, the claim follows.
∎
Remark A.1
The same results remain true when working with the log-signature introduced in Remark 3.4. For the global approximation result, one can simply note that, for any linear functional $\ell$ on the truncated signature, $\langle \ell, \mathbb{X}^{\le N} \rangle = \langle \ell, \exp( \log( \mathbb{X}^{\le N} ) ) \rangle$, where $\exp$ is the (continuous!) inverse of the $\log$ map introduced in Remark 3.4. Using again [LLPS93, Proposition 2], we can approximate this composition with the exponential on the truncated log-signature by a DNN. In summary, log-signature DNNs are dense in the sense of the global approximation result discussed at the beginning of the proof. Then, all the arguments can be repeated, relying on the set of log-signature DNNs, which replaces the signature with the log-signature.
References
- [BB24] Christian Bayer and Simon Breneis. Efficient option pricing in the rough Heston model using weak simulation schemes. Quantitative Finance, pages 1–15, 2024.
- [BBFP23] Peter Bank, Christian Bayer, Peter K Friz, and Luca Pelizzari. Rough PDEs for local stochastic volatility models. arXiv preprint arXiv:2307.09216, 2023.
- [Bel13] Denis Belomestny. Solving optimal stopping problems via empirical dual optimization. The Annals of Applied Probability, 23(5):1988 – 2019, 2013.
- [BFF+23] Christian Bayer, Peter K Friz, Masaaki Fukasawa, Jim Gatheral, Antoine Jacquier, and Mathieu Rosenbaum. Rough volatility. SIAM, 2023.
- [BFG16] Christian Bayer, Peter Friz, and Jim Gatheral. Pricing under rough volatility. Quantitative Finance, 16(6):887–904, 2016.
- [BGLY16] Horatio Boedihardjo, Xi Geng, Terry Lyons, and Danyu Yang. The signature of a rough path: uniqueness. Advances in Mathematics, 293:720–737, 2016.
- [BHRS23] Christian Bayer, Paul P Hager, Sebastian Riedel, and John Schoenmakers. Optimal stopping with signatures. The Annals of Applied Probability, 33(1):238–273, 2023.
- [BPS23] Christian Bayer, Luca Pelizzari, and John Schoenmakers. Primal and dual optimal stopping with signatures. arXiv preprint arXiv:2312.03444, 2023.
- [BTW20] Christian Bayer, Raúl Tempone, and Sören Wolfers. Pricing American options by exercise rate optimization. Quantitative Finance, 20(11):1749–1760, 2020.
- [Che57] Kuo-Tsai Chen. Integration of paths, geometric invariants and a generalized Baker-Hausdorff formula. Annals of Mathematics, 65(1):163–178, 1957.
- [CO22] Ilya Chevyrev and Harald Oberhauser. Signature moments to characterize laws of stochastic processes. The Journal of Machine Learning Research, 23(1):7928–7969, 2022.
- [CS24] Thomas Cass and Cristopher Salvi. Lecture notes on rough paths and applications to machine learning. arXiv preprint arXiv:2404.06583, 2024.
- [DFM12] Vijay V Desai, Vivek F Farias, and Ciamac C Moallemi. Pathwise optimization for optimal stopping problems. Management Science, 58(12):2292–2308, 2012.
- [DMC05] Petros Drineas, Michael W Mahoney, and Nello Cristianini. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6(12), 2005.
- [EER19] Omar El Euch and Mathieu Rosenbaum. The characteristic function of rough Heston models. Mathematical Finance, 29(1):3–38, 2019.
- [FH20] Peter K Friz and Martin Hairer. A course on rough paths. Springer, 2020.
- [FV10] Peter K Friz and Nicolas B Victoir. Multidimensional stochastic processes as rough paths: theory and applications, volume 120. Cambridge University Press, 2010.
- [GMZ20] Ludovic Goudenege, Andrea Molent, and Antonino Zanette. Machine learning for pricing American options in high-dimensional Markovian and non-Markovian models. Quantitative Finance, 20(4):573–591, 2020.
- [KBPA+19] Patrick Kidger, Patric Bonnier, Imanol Perez Arribas, Cristopher Salvi, and Terry Lyons. Deep signature transforms. Advances in Neural Information Processing Systems, 32, 2019.
- [KLA20] Jasdeep Kalsi, Terry Lyons, and Imanol Perez Arribas. Optimal execution with rough path signatures. SIAM Journal on Financial Mathematics, 11(2):470–493, 2020.
- [KS91] Ioannis Karatzas and Steven Shreve. Brownian motion and stochastic calculus, volume 113. Springer Science & Business Media, 1991.
- [LL21] Bernard Lapeyre and Jérôme Lelong. Neural network regression for Bermudan option pricing. Monte Carlo Methods and Applications, 27(3):227–247, 2021.
- [LL24] Maud Lemercier and Terry Lyons. A high order solver for signature kernels. arXiv preprint arXiv:2404.02926, 2024.
- [LLPS93] Moshe Leshno, Vladimir Ya Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural networks, 6(6):861–867, 1993.
- [LS01] Francis A Longstaff and Eduardo S Schwartz. Valuing American options by simulation: a simple least-squares approach. The review of financial studies, 14(1):113–147, 2001.
- [Lyo98] Terry J Lyons. Differential equations driven by rough signals. Revista Matemática Iberoamericana, 14(2):215–310, 1998.
- [MP18] Ryan McCrickerd and Mikko S Pakkanen. Turbocharging Monte Carlo pricing for the rough Bergomi model. Quantitative Finance, 18(11):1877–1886, 2018.
- [PS06] Goran Peskir and Albert Shiryaev. Optimal stopping and free-boundary problems. Springer, 2006.
- [Rei17] Jeremy Reizenstein. Calculation of iterated-integral signatures and log signatures. arXiv preprint arXiv:1712.02757, 2017.
- [RG18] Jeremy Reizenstein and Benjamin Graham. The iisignature library: efficient calculation of iterated-integral signatures and log signatures. arXiv preprint arXiv:1802.08252, 2018.
- [Rog02] Leonard CG Rogers. Monte Carlo valuation of American options. Mathematical Finance, 12(3):271–286, 2002.
- [SCF+21] Cristopher Salvi, Thomas Cass, James Foster, Terry Lyons, and Weixin Yang. The signature kernel is the solution of a Goursat PDE. SIAM Journal on Mathematics of Data Science, 3(3):873–899, 2021.
- [SHS01] Bernhard Schölkopf, Ralf Herbrich, and Alex J Smola. A generalized representer theorem. In International conference on computational learning theory, pages 416–426. Springer, 2001.
- [Ste08] Ingo Steinwart. Support Vector Machines. Springer, 2008.
- [TBO21] Csaba Toth, Patric Bonnier, and Harald Oberhauser. Seq2tens: An efficient representation of sequences by low-rank tensor projections. In International Conference on Learning Representations, 2021.
- [TOS23] Csaba Toth, Harald Oberhauser, and Zoltan Szabo. Random Fourier signature features. arXiv preprint arXiv:2311.12214, 2023.
- [WS00] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. Advances in Neural Information Processing Systems, 13, 2000.