Temporal Point Process Graphical Models
Abstract
Many real-world systems can be modeled as a stream of events on the nodes of a graph. In this paper, we propose a class of graphical event models, named temporal point process graphical models, for representing the temporal dependencies among the different components of a multivariate point process. In our model, the intensity of an event stream can depend on the historical events in a nonlinear way. We provide a procedure that estimates the parameters of the model with a convex loss function in the high-dimensional setting. We also establish an error bound for our estimators that accounts for the approximation error introduced during implementation. We demonstrate the performance of our method with extensive simulations and a spike train data set.
keywords: graphical models; temporal point processes; high-dimensionality
1 Introduction
Graphical modeling of point processes is drawing increasing attention, as data in the form of point processes on a graph are emerging in scientific and business applications. Examples include biological neural networks with neural spike train data (see e.g. Paninski et al., 2007; Pillow et al., 2008; Bolding and Franks, 2018); social media data recording timestamps of each user's actions (Perry and Wolfe, 2013; Zhou et al., 2013; Fox et al., 2016); and high-frequency financial data recording times of market orders (Bacry et al., 2013; Aït-Sahalia et al., 2015; Bacry et al., 2015). Understanding the connectivity structure of these point processes, i.e., the edges of the graphs, is a challenging problem of growing interest. Our work is motivated by a recent spike train data set from Bolding and Franks (2018). This data set was collected from two brain regions important for olfaction in mice, namely the olfactory bulb and the piriform cortex. The neural activity in the olfactory bulb is highly concentration-dependent, while that in the piriform cortex is concentration-invariant. The study of Bolding and Franks (2018) suggests that there exists a complex excitation-inhibition mechanism between the olfactory bulb and the piriform cortex that allows the nervous system to produce a concentration-invariant representation of odors.
In this article, we propose a new temporal point process graphical model to address this problem. We model the spike train data of the neurons as a multivariate temporal point process, where each component of the point process represents the activity of one neuron. We allow future events to be excited or inhibited by past events in a non-additive manner. We organize the transfer coefficients into a matrix and impose a sparse structure, which aids interpretation in neuroscience and other applications (Zhao and Yu, 2006; Zou, 2006). Our loss function is convex, so many highly scalable algorithms, such as the alternating direction method of multipliers and the coordinate descent method, can be applied to parameter estimation. We establish a non-asymptotic error bound for the estimator and allow the number of nodes in the graph to diverge. We also analyze the approximation error introduced during the implementation phase.
There has been a great amount of work on graphical models, including the Gaussian graphical model (Yuan and Lin, 2007; Liu, 2013) and directed acyclic graphs (Kalisch and Bühlmann, 2007; Van de Geer et al., 2013; Zheng et al., 2018). In these models, each node of the graph corresponds to a random variable and the data are realizations of the random variables. The graphs of classical graphical models represent the structure of conditional independence among the random variables. In contrast, we aim at the case where each node of the graph represents a point process, so the graph as a whole can be viewed as a multivariate point process. In our setting, the graph encodes the local independence relationships among the components of the multivariate point process (Didelez, 2008). The basic idea of local independence is that once we know about specific past events, the intensity of future events is independent of other past events. In our model, local independence is fully characterized by the parameters (see Proposition 1).
There is a family of models targeting temporal point processes. The Hawkes process (Hawkes, 1971) assumes that historical events can trigger future events and has been widely used in modelling neuronal spike train data and social network data. The model we propose differs from existing research on the Hawkes process in many ways. Most prior art has focused on parameter estimation in the linear case, where the transfer function of the multivariate Hawkes process is the identity function. Hansen et al. (2015) considered estimating parameters in a linear Hawkes process with a fixed number of nodes using a least-square loss function and regularization. Bacry et al. (2020) organized the transfer coefficients as a matrix and imposed a low-rank structure. A crucial weakness of the linear Hawkes process is that it does not allow inhibitory relations, i.e., past events can only trigger more future events, never silence them. Our model extends to a nonlinear setting where inhibitory influences are allowed and the influence of past events from different nodes can accumulate in a non-additive manner.
Tang and Li (2021) also considered parameter estimation for a nonlinear Hawkes process. Our work differs from Tang and Li (2021) in several aspects. First, we focus on the estimation of sparse graphical models, while they focus on the estimation of a low-rank tensor of regression parameters. Second, we provide a stronger theoretical guarantee by proving a convergence rate for the global minimum of our penalized loss function, while their theory only shows that there exists a local minimum of the penalized loss function in the neighborhood of the true parameter. Finally, our loss function is convex while theirs is not. A convex loss function with restricted strong convexity allows us to find the global minimum efficiently.
The paper is organized as follows. We will finish this section with notation. We briefly introduce the temporal point process and our model in Section 2. In Section 3, we present the construction of our estimator and our main theorem. In Section 4, we analyze the approximation error introduced during the implementation phase. We show the results of simulations in Section 5 and an application to a spike train data set in Section 6. We conclude with a discussion in Section 7. Technical proofs are provided in the Appendix.
Some notation will be employed throughout this paper. Let $[p]$ denote the set $\{1, 2, \dots, p\}$. For a set $S \subseteq [p]$ and a vector $v$, we define $v_S$ as $(v_i)_{i \in S}$. Let $\|v\|_q$ denote the $\ell_q$-norm of a vector and $\|v\|_0$ denote the number of non-zero entries of a vector. For a matrix $A$, let $\|A\|_F$ denote the Frobenius norm. We use $B_r$ for a ball with radius $r$. We use $c, c_1, c_2, \dots$ to denote positive numbers that do not depend on $p$ or $T$ but can still depend on some values that we consider as constants in this paper, and their value can change from line to line.
2 The Temporal Point Process Graphical Model
We will begin with a very brief review of temporal point processes. For a more systematic and comprehensive discussion of point processes, we refer to Brémaud (1981) and Daley and Vere-Jones (2007).
Let $N$ be a simple point process on $\mathbb{R}$. $N$ corresponds to a sequence of real-valued random variables $\{\tau_i\}$, with $\tau_i < \tau_{i+1}$ and $|\tau_i| \to \infty$ almost surely. The $\tau_i$ are called the event times of $N$. For a set $A$ in the Borel $\sigma$-field of the real line, $N(A) = \#\{i : \tau_i \in A\}$. We write $N([t, t + dt])$ as $dN_t$, where $dt$ denotes an arbitrarily small increment of $t$. Let $\mathcal{F}_t$ denote the $\sigma$-field generated by $\{N(s) : s \le t\}$. The intensity of $N$ is defined as

$$\lambda(t) = \lim_{dt \downarrow 0} \frac{\mathbb{E}\big[N([t, t + dt]) \mid \mathcal{F}_{t^-}\big]}{dt}.$$
Now suppose we have $p$ simple point processes, with $\{\tau_{j,i}\}$ denoting the event times of the $j$-th process. Then we employ $N = (N_1, \dots, N_p)$ to denote a $p$-variate point process. Let $\mathcal{F}_t$ denote the $\sigma$-field generated by $\{N_j(s) : j \in [p], s \le t\}$. The intensity process is a $p$-variate $\mathcal{F}_t$-predictable process $\lambda(t) = (\lambda_1(t), \dots, \lambda_p(t))$ defined as

$$\lambda_j(t)\, dt = \mathbb{E}\big[dN_j(t) \mid \mathcal{F}_{t^-}\big], \quad j \in [p].$$
In our model, we assume this intensity process takes the form

$$\lambda_j(t) = f_j\Big(\mu_j + \sum_{k=1}^{p} \int_{-\infty}^{t^-} \omega_{jk}(t - s)\, dN_k(s)\Big), \quad j \in [p], \tag{1}$$

where the $f_j$ are link functions and can be nonlinear, the $\mu_j$ are the background intensities of the components of the multivariate point process, and the $\omega_{jk}$ are the transfer functions. $\lambda(t) = (\lambda_1(t), \dots, \lambda_p(t))$ is called the intensity process. The convolutional expression $\int_{-\infty}^{t^-} \omega_{jk}(t - s)\, dN_k(s)$ summarizes the historical information of the $k$-th component. In this paper, we consider parameterized transfer functions that can be written as

$$\omega_{jk}(t) = \beta_{jk}\, \kappa_k(t),$$

where the $\kappa_k$ are known kernel functions.
In graphical models, the edges encode conditional independence structures among the random variables (Maathuis et al., 2018; Qiao et al., 2019; Engelke and Hitz, 2020). For multivariate point processes, the question of interest is whether the intensity of one component depends on the history of another component, which can be represented by local independence for multivariate point processes (Didelez, 2008).
Definition 1 (local independence for multivariate point process)
Let $N = (N_1, \dots, N_p)$ be a multivariate point process. Let further $A$, $B$ and $C$ be disjoint subsets of $[p]$. We then say that the subprocess $N_A$ is locally independent of $N_B$ given $N_C$ over a time interval $\mathcal{T}$ if the intensity $\lambda_a(t)$ is measurable with respect to $\mathcal{F}^{A \cup C}_{t^-}$ for all $t \in \mathcal{T}$ and $a \in A$, where $\mathcal{F}^{S}_{t}$ denotes the $\sigma$-field generated by $\{N_k(s) : k \in S,\ s \le t\}$.
In temporal point process graphical models, the local independence structures are encoded in the transfer matrix $B = (\beta_{jk})_{j,k \in [p]}$, because $\lambda_j(t)$ is measurable with respect to $\mathcal{F}^{S}_{t^-}$ for an index set $S \ni j$ as long as $\beta_{jk} = 0$ for all $k \notin S$. We have the following proposition:
Proposition 1

For a temporal point process graphical model $N$, let $A$, $B$, $C$ be disjoint subsets of $[p]$ with $A \cup B \cup C = [p]$. $N_A$ is locally independent of $N_B$ given $N_C$ if and only if $\beta_{ab} = 0$ for all $a \in A$ and $b \in B$.
Thus by estimating the transfer matrix , we can recover the local independence relationships in .
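To make the model concrete, the following is a minimal sketch of how the intensity (1) can be evaluated at a time point, assuming a single shared kernel and a link function applied elementwise; the function and argument names are our own illustration, not code from the paper.

```python
import numpy as np

def intensity(t, events, mu, beta, kernel, link):
    """Evaluate lambda_j(t) for all nodes j under model (1).

    events : list of p numpy arrays holding the event times of each node
    mu     : (p,) background intensities
    beta   : (p, p) transfer matrix; beta[j, k] scales the influence of node k on node j
    kernel : vectorized kernel kappa, assumed to vanish outside its support
    link   : vectorized link function f applied elementwise
    """
    # influence process: x_k(t) = sum over past events s of node k of kappa(t - s)
    x = np.array([kernel(t - ev[ev < t]).sum() for ev in events])
    return link(mu + beta @ x)
```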
3 Estimation
In this section, we present an approach for estimating temporal point process graphical models. We will begin with the construction of our estimator in Section 3.1, then provide a detailed comparison between our method and the methods widely employed in prior work, least-square estimation and maximum likelihood estimation, in Section 3.2. In Section 3.3, we present a theoretical analysis that guarantees the performance of our method.
3.1 Our Method
In this paper, we focus on temporal point process graphical models with bounded link functions and compactly supported transfer functions, as stated in Assumption 1.
Assumption 1
There exists a constant $c_0$ such that $|f_j| \le c_0$, and the $f_j$ have continuous derivatives. The kernels $\kappa_k$ are supported on $[0, c_1]$ for some constant $c_1$ with bounded derivatives (and naturally we can pick $c_0$ such that $|\kappa_k| \le c_0$ also holds).
Similar versions of assumptions on the link function and transfer kernels are required in existing theory (Hansen et al., 2015; Chen et al., 2017b; Bacry et al., 2020). As we allow a nonlinear link function, the transfer kernels are no longer restricted to be non-negative. For simplicity of the theory, we assume that $f_j$ is bounded. This assumption covers most cases in applications.
Under Assumption 1, the temporal point process is Markovian, i.e., given the event times in $[t - c_1, t]$, the intensity process is independent of the earlier history. Together with the fact that the intensity process is bounded by $c_0$, it is straightforward to show that the empty state, where there is no event on $[t - c_1, t]$, is positive recurrent and the temporal point process is stable in distribution. For more discussion of stability for the temporal point process, we refer interested readers to Brémaud and Massoulié (1996) and Costa et al. (2018).
Suppose that we observe the event times of a stationary process on $[0, T]$. Under Assumption 1, we can rewrite the intensity in a compact form:

$$\lambda_j(t) = f_j\big(\beta_j^\top X(t)\big),$$

where $X(t) = \big(1, x_1(t), \dots, x_p(t)\big)^\top$ with $x_k(t) = \int_{t - c_1}^{t^-} \kappa_k(t - s)\, dN_k(s)$, and $\beta_j = (\mu_j, \beta_{j1}, \dots, \beta_{jp})^\top$. Inspired by the log-linear models in Negahban et al. (2012), we consider the following loss function for estimating $\beta = (\beta_1, \dots, \beta_p)$:

$$\ell_T(\beta) = \frac{1}{T} \sum_{j=1}^{p} \left[ \int_0^T \rho_j(t)\, F_j\big(\beta_j^\top X(t)\big)\, dt - \int_0^T \rho_j(t)\, \beta_j^\top X(t)\, dN_j(t) \right], \tag{2}$$

where $F_j$ is a primitive function of $f_j$, i.e., $F_j' = f_j$, and $\rho_j(t)$, $j \in [p]$, is a weighting process. If $p$ is large and the transfer matrix is intrinsically sparse for practical reasons, we can adopt the following penalized loss minimization (Tibshirani, 1996):

$$\hat{\beta} \in \arg\min_{\beta}\ \ell_T(\beta) + \lambda_T \sum_{j=1}^{p} \|\beta_j\|_1. \tag{3}$$
In generalized linear models, Negahban et al. (2012) employed a similar method to derive a convex loss function. The negative log-likelihood loss function for the generalized linear model is convex but not strongly convex in the high-dimensional setting, i.e., there are some directions with zero gradient and the Hessian matrix is not of full rank. Negahban et al. (2012) addressed this problem by introducing the concept of restricted strong convexity. For a decomposable penalty (such as the $\ell_1$-norm) and a convex loss function, the solutions of the optimization will always fall in a 'restricted set', restricted to which the loss function is strongly convex. We show that our loss function also admits a restricted strong convexity condition. This implies that the obtained estimator, regardless of the algorithm used or the initial value, attains the convergence rate we prove. In contrast, the estimator provided by Tang and Li (2021) is a local estimator, which retains its statistical properties only when the initial value is close enough to the true parameters.
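Since the penalized problem (3) is convex with a decomposable $\ell_1$ penalty, any standard first-order method applies. The following is a minimal proximal-gradient sketch, assuming a callable that returns the gradient of the smooth loss; the fixed step size and all names are our own choices rather than the paper's implementation.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of the l1 penalty: elementwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def l1_proximal_gradient(grad, beta0, lam, step=1e-2, n_iter=1000):
    """Minimize loss(beta) + lam * ||beta||_1 for a smooth convex loss.

    grad : callable returning the gradient of the smooth loss at beta
    lam  : penalty level lambda_T in (3)
    """
    beta = np.asarray(beta0, dtype=float).copy()
    for _ in range(n_iter):
        beta = soft_threshold(beta - step * grad(beta), step * lam)
    return beta
```

Convexity guarantees that, for a sufficiently small step size, the iteration converges to the global minimizer regardless of the initial value.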
3.2 Examples
For estimating a temporal point process graphical model with linear link functions, there are two widely used estimating strategies: the least-square estimator (Bacry et al., 2020; Hansen et al., 2015),

$$\hat{\beta}^{\mathrm{LS}} \in \arg\min_{\beta}\ \frac{1}{T} \sum_{j=1}^{p} \left[ \int_0^T \lambda_j(t; \beta)^2\, dt - 2 \int_0^T \lambda_j(t; \beta)\, dN_j(t) \right], \tag{4}$$

and the maximum likelihood estimator (Ozaki, 1979; Tang and Li, 2021),

$$\hat{\beta}^{\mathrm{MLE}} \in \arg\min_{\beta}\ \frac{1}{T} \sum_{j=1}^{p} \left[ \int_0^T \lambda_j(t; \beta)\, dt - \int_0^T \log \lambda_j(t; \beta)\, dN_j(t) \right]. \tag{5}$$

The score functions with respect to $\beta_j$ for the least-square loss and the negative log-likelihood loss are

$$\frac{2}{T} \int_0^T f_j'\big(\beta_j^\top X(t)\big)\, X(t)\, \big[\lambda_j(t; \beta)\, dt - dN_j(t)\big]$$

and

$$\frac{1}{T} \int_0^T \frac{f_j'\big(\beta_j^\top X(t)\big)}{f_j\big(\beta_j^\top X(t)\big)}\, X(t)\, \big[\lambda_j(t; \beta)\, dt - dN_j(t)\big].$$

It is seen that optimizing the least-square risk function is equivalent to optimizing (2) with the iterative reweighting $\rho_j(t) = 2 f_j'\big(\beta_j^\top X(t)\big)$, and optimizing the negative log-likelihood risk function is equivalent to optimizing (2) with the iterative reweighting $\rho_j(t) = f_j'\big(\beta_j^\top X(t)\big) / f_j\big(\beta_j^\top X(t)\big)$. Directly optimizing the risk function of the least-square estimator or the maximum likelihood estimator is a non-convex problem, but our iterative reweighting procedure permits us to approximate the non-convex problem with a sequence of convex problems.
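The reweighting scheme just described can be sketched as follows: each outer step freezes the weights $\rho_j(t)$ at the current iterate and solves one convex problem of the form (3). Both callables below are our own placeholders for the corresponding steps, not the paper's code.

```python
def iteratively_reweighted_fit(solve_convex, compute_weights, beta0, n_outer=10):
    """Approximate the non-convex MLE by a sequence of convex problems.

    solve_convex    : solves (3) for a fixed weighting process rho
    compute_weights : evaluates rho_j(t), e.g. f_j'(b^T X(t)) / f_j(b^T X(t)), at beta
    """
    beta = beta0
    for _ in range(n_outer):
        rho = compute_weights(beta)   # freeze the weights at the current iterate
        beta = solve_convex(rho)      # one convex subproblem with frozen weights
    return beta
```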
3.3 Theoretical Guarantees
Now we return to the $\ell_1$-penalized loss minimization (3). Our theoretical results indicate that the estimator is consistent even in the high-dimensional case. When unrestricted, it is possible to construct extreme graphs where, for instance, some components of the influence process are constantly pushed towards infinity, which makes the model hardly identifiable. To avoid such cases, we pose the following regularity conditions.
Assumption 2
(1) The mean influences $\mathbb{E}[x_k(t)]$ are uniformly bounded:

$$\big|\mathbb{E}[x_k(t)]\big| \le c_2$$

for some constant $c_2$ for every $k \in [p]$.

(2) The covariance matrix of the influence process is positive definite and has bounded eigenvalues:

$$0 < \sigma_{\min} \le \Lambda_{\min}\big(\mathrm{Cov}(X(t))\big) \le \Lambda_{\max}\big(\mathrm{Cov}(X(t))\big) \le \sigma_{\max} < \infty,$$

where $\Lambda_{\min}$ and $\Lambda_{\max}$ indicate the smallest and the largest eigenvalue of a matrix. That is, for any $v$ we have

$$\sigma_{\min} \|v\|_2^2 \le v^\top \mathrm{Cov}\big(X(t)\big)\, v \le \sigma_{\max} \|v\|_2^2,$$

and there exists some $R > 0$ such that

$$\Lambda_{\min}\Big(\mathbb{E}\big[X(t) X(t)^\top \mathbf{1}\{\|X(t)\|_\infty \le R\}\big]\Big) \ge \frac{\sigma_{\min}}{2}. \tag{6}$$

(3) The derivative $f_j'$ is bounded away from $0$ on a compact interval: there exists $c_3 > 0$ such that

$$f_j'(x) \ge c_3 \quad \text{for all } x \in [-c_0 R,\ c_0 R],$$

where $R$ is introduced above.

(4) There exist constants $0 < \rho_{\min} \le \rho_{\max} < \infty$ such that $\rho_{\min} \le \rho_j(t) \le \rho_{\max}$ for all $j$ and $t$ with probability $1$ for $T$ large enough.
We briefly discuss these assumptions. Assumption 2 (1) restricts the average influence of the process; it is analogous to having bounded intercept terms in vanilla Lasso problems (Tibshirani, 1996). Non-degenerate covariates (Assumption 2 (2)) are a standard requirement for estimation problems (condition GLM1 in Negahban et al., 2012; regularity condition 4 in Tang and Li, 2021). Here (6) asks for a version of non-degeneracy that looks stronger. It is not truly stronger: if we let $R \to \infty$, we will have

$$\Lambda_{\min}\Big(\mathbb{E}\big[X(t) X(t)^\top \mathbf{1}\{\|X(t)\|_\infty \le R\}\big]\Big) \to \Lambda_{\min}\Big(\mathbb{E}\big[X(t) X(t)^\top\big]\Big).$$

This involves a trade-off between the constants $R$ and $\sigma_{\min}$. One can show that (6) holds for a finite $R$ if $\|X(t)\|_\infty$ has a sub-exponential tail. Assumption 2 (3) is employed to ensure identifiability. To conclude, Assumptions 2 (2) and 2 (3) are required mainly for technical reasons. We propose Assumption 2 (4) to make sure that the weights are well-behaved.
Now we are ready to introduce the main theorem.
Theorem 1
Under Assumptions 1 and 2, there exist constants $c_4$, $c_5$ and $c_6$ such that, with the choice $\lambda_T = c_4 \sqrt{\log p / T}$, the bounds

$$\big\|\hat{\beta} - \beta^*\big\|_2 \le c_5 \sqrt{\frac{s \log p}{T}} \tag{7}$$

$$\big\|\hat{\beta} - \beta^*\big\|_1 \le c_5\, s \sqrt{\frac{\log p}{T}} \tag{8}$$

hold with probability greater than $1 - c_6 / p$, where $s = \|\beta^*\|_0$ and a star as a superscript denotes the true value of parameters.
Theorem 1 guarantees that with high probability, the estimator converges at the rate $\sqrt{s \log p / T}$, which is similar to established rates in prior works (Bacry et al., 2020; Hansen et al., 2015; Tang and Li, 2021). It is important to note that Theorem 1 considers an oracle procedure, where the tuning parameters depend on unknown parameters. When the link functions are linear, Hansen et al. (2015) explored selection guidelines in the case where the number of nodes is fixed but the dictionary for sieving transfer kernels is growing. A general theory for tuning parameter selection when $p$ diverges remains an open problem. A detailed proof of Theorem 1 is provided in the Appendix.
4 Implementation
4.1 Theory for Approximation
Implementation lies between statistical theory and the successful application of statistical methods. The concrete details of implementation are important but seldom mentioned in prior articles. In this section, we present a comprehensive analysis of the implemented version of our estimator.
For temporal point processes, the algorithm usually proceeds in a discrete manner (Tang and Li, 2021). The observation interval $[0, T]$ is equally divided into $M$ subintervals, the length of each subinterval is $\Delta = T / M$, and each integral is replaced by a numerical integral. Let $\Delta N_j(m)$ denote the number of events of node $j$ in the $m$-th subinterval and let $t_m = m \Delta$ for $m = 0, 1, \dots, M$. Then our loss function in implementation becomes

$$\tilde{\ell}_T(\beta) = \frac{1}{T} \sum_{j=1}^{p} \left[ \sum_{m=1}^{M} \rho_j(t_{m-1})\, F_j\big(\beta_j^\top X(t_{m-1})\big)\, \Delta - \sum_{m=1}^{M} \rho_j(t_{m-1})\, \beta_j^\top X(t_{m-1})\, \Delta N_j(m) \right]. \tag{9}$$
Generally, the discrete version (9) has additional approximation error compared to (2). To characterize this error, we decompose the difference between (9) and (2) into two parts: the numerical-integration error of the compensator term and the rounding error incurred by moving each event to the boundary of its subinterval. Writing $\epsilon_M$ for the resulting bound, we have

$$\sup_{\beta \in B_r} \big|\tilde{\ell}_T(\beta) - \ell_T(\beta)\big| \le \epsilon_M,$$

where $\epsilon_M \to 0$ as $M \to \infty$. The choice of $M$ is of great importance because it leads to a trade-off between computational efficiency and approximation accuracy. To ensure that the approximation error is dominated by the statistical noise $\sqrt{\log p / T}$, we assume:
Assumption 3
The number of subintervals $M$ is large enough that $\epsilon_M \le c_7 \sqrt{\log p / T}$.
The penalized loss minimization in implementation is

$$\tilde{\beta} \in \arg\min_{\beta}\ \tilde{\ell}_T(\beta) + \lambda_T \sum_{j=1}^{p} \|\beta_j\|_1. \tag{10}$$
By carefully analysing the approximation error, we can provide a similar convergence-rate bound for $\tilde{\beta}$:

Theorem 2

For the estimator $\tilde{\beta}$ in (10), under Assumptions 1-3, there exist constants $c_8$, $c_9$ and $c_{10}$ such that, with $\lambda_T = c_8 \sqrt{\log p / T}$, the bounds

$$\big\|\tilde{\beta} - \beta^*\big\|_2 \le c_9 \sqrt{\frac{s \log p}{T}} \tag{11}$$

$$\big\|\tilde{\beta} - \beta^*\big\|_1 \le c_9\, s \sqrt{\frac{\log p}{T}} \tag{12}$$

hold with probability greater than $1 - c_{10} / p$, where $s = \|\beta^*\|_0$ and a star as a superscript denotes the true value of parameters.
When $M$ satisfies Assumption 3, the approximation error is dominated by the statistical noise. There are other graphical models of point processes that divide the observation window into a number of bins and model the number of events in each bin, with a generalized linear model (Zhang et al., 2016) or with a Gaussian graphical model (Vinci et al., 2018). Our method is fundamentally different from these methods because discretization is employed only in the implementation of the method. Our final goal is to estimate the parameters in the generalized Hawkes process rather than in these binned models. A detailed proof is included in the supplementary material.
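As an illustration of the discretization just described, here is a minimal sketch that assembles the binned influence process and evaluates the loss (9) for one node, assuming a single shared kernel with compact support; all names and the grid convention are our own simplifications.

```python
import numpy as np

def build_design(event_times, T, M, kernel, support):
    """Grid values X(t_m) of the influence process and binned event counts.

    event_times : list of p numpy arrays of event times in [0, T]
    kernel      : vectorized transfer kernel, assumed to vanish outside [0, support]
    Returns X with shape (M, p + 1) (leading intercept column) and dN with shape (M, p).
    """
    p, delta = len(event_times), T / M
    grid = np.arange(M) * delta
    X, dN = np.ones((M, p + 1)), np.zeros((M, p))
    edges = np.arange(M + 1) * delta
    for k, ev in enumerate(event_times):
        dN[:, k] = np.histogram(ev, bins=edges)[0]
        for m, t in enumerate(grid):
            past = ev[(ev < t) & (ev > t - support)]   # events inside the kernel support
            X[m, k + 1] = kernel(t - past).sum()
    return X, dN

def discrete_loss_node(beta_j, X, dN_j, F, rho_j, T):
    """Discretized convex loss (9) for a single node j; rho_j and dN_j are (M,)."""
    eta = X @ beta_j                                   # linear predictor on the grid
    delta = T / X.shape[0]
    return (np.sum(rho_j * F(eta)) * delta - np.sum(rho_j * eta * dN_j)) / T
```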
5 Simulation
In this section, we investigate the performance of our estimation procedure in a simulation study. We propose two estimators: the naively weighted estimator with $\rho_j(t) \equiv 1$, and the iteratively reweighted maximum likelihood estimator, where $\rho_j(t)$ is iteratively updated as in Section 3.2.
We set the number of nodes to $p = 30$ or $60$. We consider two popular graphical structures: block and chain. For the block structure, the transfer matrix is a blockwise diagonal matrix consisting of 12 (for $p = 60$) or 6 (for $p = 30$) identical symmetric Toeplitz blocks. For the chain structure, the transfer matrix is a symmetric Toeplitz matrix. Their explicit forms are shown in Figures 1 and 2, where the excitatory connections (positive entries) are shown in blue and the inhibitory connections (negative entries) are shown in red. The background intensity is set to a common constant for all nodes. We consider two settings of link function and transfer kernel. For setting 1, we consider a bounded link function and a restricted linear transfer kernel.
For setting 2, we consider a sigmoid function as our link function and an exponential transfer kernel.
(Figures 1 and 2: the transfer coefficient matrices for the block and chain structures, with excitatory entries in blue and inhibitory entries in red.)
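Data from these settings can be generated by Ogata's thinning algorithm, using the fact that under Assumption 1 the total intensity is bounded. The sketch below is our own illustration under that assumption; the dominating rate `lam_max` and all names are placeholders, not the paper's simulation code.

```python
import numpy as np

def simulate_tppgm(mu, beta, kernel, link, T, lam_max, seed=None):
    """Simulate model (1) on [0, T] by thinning a Poisson process of rate lam_max.

    lam_max must dominate the total intensity sum_j lambda_j(t) at all times.
    """
    rng = np.random.default_rng(seed)
    p = len(mu)
    events = [np.empty(0) for _ in range(p)]
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)            # next candidate event time
        if t > T:
            break
        x = np.array([kernel(t - ev[ev < t]).sum() for ev in events])
        lam = link(mu + beta @ x)                      # current intensities of all nodes
        u = rng.uniform(0.0, lam_max)
        cum = np.cumsum(lam)
        if u < cum[-1]:                                # accept with prob sum(lam)/lam_max
            j = int(np.searchsorted(cum, u))           # node j chosen with prob lam_j/sum(lam)
            events[j] = np.append(events[j], t)
    return events
```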
The length of observation is $T = 200$ or $400$, with a correspondingly fixed number of sub-intervals $M$. We use 5-fold cross-validation to select the penalty level. We evaluate the estimation accuracy by the relative error and the relative mean square error of the estimated transfer coefficient matrix. Three methods are compared: the penalized naively weighted estimator (Sparse-Naive), the penalized iteratively reweighted maximum likelihood estimator (Sparse-MLE), and the vanilla maximum likelihood estimator (vanilla-MLE). The average relative error and relative mean square error based on 50 replications, with standard errors in parentheses, are reported below.
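For concreteness, one way to compute these two accuracy metrics is sketched below; the exact normalization used in the paper is not spelled out, so these definitions are our assumption.

```python
import numpy as np

def relative_error(B_hat, B_true):
    """Relative l1 error of the estimated transfer coefficient matrix."""
    return np.abs(B_hat - B_true).sum() / np.abs(B_true).sum()

def relative_mse(B_hat, B_true):
    """Relative mean square (Frobenius) error."""
    return ((B_hat - B_true) ** 2).sum() / (B_true ** 2).sum()
```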
Table 1: Average relative error of the estimated transfer coefficient matrix over 50 replications (standard errors in parentheses).

| Setting | Structure | p | T | Sparse-Naive | Sparse-MLE | vanilla-MLE |
|---|---|---|---|---|---|---|
| Setting 1 | Block | 30 | 200 | 0.718(0.049) | 0.667(0.025) | 1.510(0.057) |
| Setting 1 | Block | 30 | 400 | 0.525(0.024) | 0.547(0.032) | 0.942(0.025) |
| Setting 1 | Block | 60 | 200 | 1.010(0.077) | 0.805(0.022) | 4.460(0.134) |
| Setting 1 | Block | 60 | 400 | 0.776(0.037) | 0.656(0.011) | 2.128(0.040) |
| Setting 1 | Chain | 30 | 200 | 0.877(0.043) | 0.796(0.032) | 1.793(0.135) |
| Setting 1 | Chain | 30 | 400 | 0.645(0.028) | 0.656(0.034) | 1.066(0.048) |
| Setting 1 | Chain | 60 | 200 | 1.240(0.086) | 0.959(0.025) | 5.277(0.170) |
| Setting 1 | Chain | 60 | 400 | 0.916(0.049) | 0.787(0.022) | 2.403(0.063) |
| Setting 2 | Block | 30 | 200 | 0.895(0.037) | 0.884(0.033) | 2.812(0.163) |
| Setting 2 | Block | 30 | 400 | 0.716(0.023) | 0.723(0.026) | 1.612(0.052) |
| Setting 2 | Block | 60 | 200 | 1.035(0.043) | 1.048(0.032) | 14.62(0.611) |
| Setting 2 | Block | 60 | 400 | 0.862(0.025) | 0.872(0.025) | 3.906(0.083) |
| Setting 2 | Chain | 30 | 200 | 1.063(0.057) | 1.054(0.033) | 3.571(0.213) |
| Setting 2 | Chain | 30 | 400 | 0.883(0.037) | 0.882(0.038) | 1.826(0.086) |
| Setting 2 | Chain | 60 | 200 | 1.174(0.044) | 1.176(0.026) | 15.77(0.546) |
| Setting 2 | Chain | 60 | 400 | 1.031(0.029) | 1.024(0.024) | 4.451(0.137) |
Table 2: Average relative mean square error of the estimated transfer coefficient matrix over 50 replications (standard errors in parentheses).

| Setting | Structure | p | T | Sparse-Naive | Sparse-MLE | vanilla-MLE |
|---|---|---|---|---|---|---|
| Setting 1 | Block | 30 | 200 | 0.235(0.018) | 0.241(0.018) | 0.495(0.042) |
| Setting 1 | Block | 30 | 400 | 0.149(0.012) | 0.150(0.012) | 0.191(0.011) |
| Setting 1 | Block | 60 | 200 | 0.300(0.015) | 0.307(0.014) | 2.235(0.143) |
| Setting 1 | Block | 60 | 400 | 0.174(0.005) | 0.194(0.007) | 0.483(0.019) |
| Setting 1 | Chain | 30 | 200 | 0.326(0.029) | 0.318(0.026) | 0.843(0.301) |
| Setting 1 | Chain | 30 | 400 | 0.200(0.017) | 0.181(0.018) | 0.250(0.034) |
| Setting 1 | Chain | 60 | 200 | 0.418(0.027) | 0.399(0.021) | 3.523(0.296) |
| Setting 1 | Chain | 60 | 400 | 0.224(0.011) | 0.242(0.011) | 0.687(0.077) |
| Setting 2 | Block | 30 | 200 | 0.485(0.030) | 0.475(0.032) | 1.772(0.268) |
| Setting 2 | Block | 30 | 400 | 0.285(0.020) | 0.281(0.020) | 0.555(0.036) |
| Setting 2 | Block | 60 | 200 | 0.575(0.029) | 0.566(0.024) | 24.53(1.682) |
| Setting 2 | Block | 60 | 400 | 0.348(0.020) | 0.341(0.019) | 1.652(0.074) |
| Setting 2 | Chain | 30 | 200 | 0.672(0.083) | 0.620(0.045) | 3.575(0.729) |
| Setting 2 | Chain | 30 | 400 | 0.389(0.033) | 0.371(0.027) | 0.736(0.120) |
| Setting 2 | Chain | 60 | 200 | 0.769(0.064) | 0.724(0.058) | 27.85(1.561) |
| Setting 2 | Chain | 60 | 400 | 0.484(0.023) | 0.462(0.022) | 2.561(0.266) |
Table 3: Area under the ROC curve for the two proposed methods, averaged over the 50 simulation runs (standard errors in parentheses).

| Setting | Structure | p | T | Sparse-Naive | Sparse-MLE |
|---|---|---|---|---|---|
| Setting 1 | Block | 30 | 200 | 0.995(0.0026) | 0.997(0.0015) |
| Setting 1 | Block | 30 | 400 | 1.000(0.0001) | 1.000(0.0001) |
| Setting 1 | Block | 60 | 200 | 0.995(0.0016) | 0.997(0.0014) |
| Setting 1 | Block | 60 | 400 | 1.000(0.0001) | 1.000(0.0001) |
| Setting 1 | Chain | 30 | 200 | 0.954(0.0074) | 0.965(0.0063) |
| Setting 1 | Chain | 30 | 400 | 0.974(0.0025) | 0.978(0.0027) |
| Setting 1 | Chain | 60 | 200 | 0.968(0.0041) | 0.976(0.0032) |
| Setting 1 | Chain | 60 | 400 | 0.985(0.0012) | 0.988(0.0012) |
| Setting 2 | Block | 30 | 200 | 0.914(0.0143) | 0.916(0.0139) |
| Setting 2 | Block | 30 | 400 | 0.979(0.0044) | 0.981(0.0042) |
| Setting 2 | Block | 60 | 200 | 0.915(0.0137) | 0.918(0.0138) |
| Setting 2 | Block | 60 | 400 | 0.983(0.0039) | 0.985(0.0035) |
| Setting 2 | Chain | 30 | 200 | 0.837(0.0174) | 0.841(0.0189) |
| Setting 2 | Chain | 30 | 400 | 0.924(0.0090) | 0.925(0.0100) |
| Setting 2 | Chain | 60 | 200 | 0.840(0.0134) | 0.844(0.0145) |
| Setting 2 | Chain | 60 | 400 | 0.944(0.0056) | 0.948(0.0052) |
Tables 1 and 2 summarize the results based on 50 data replications. It is seen that both of our proposed methods outperform the vanilla maximum likelihood estimator, which is known to be asymptotically efficient (Ogata, 1978) for estimating parameters when the transfer function is linear. Overall, the iteratively reweighted penalized maximum likelihood estimator achieves better performance. But the loss surface for the penalized MLE is not convex, so there is no guarantee that the working algorithm will find the global minimum. The naively weighted estimator has a convex loss function, which guarantees that the algorithm converges to the global minimizer.
To demonstrate the accuracy of variable selection for our proposed methods, we calculated true-positive rates and false-positive rates, i.e., the rates at which edges of the network were correctly identified, for each method and tuning parameter. Plotting the true-positive rate versus the false-positive rate over a fine grid of tuning parameter values produces a ROC curve, with curves near the top-left corner indicating a method that performs well. Figure 3 shows the median of the ROC curves for our two proposed methods, and Table 3 provides the area under the ROC curve (averaged over the 50 simulation runs). It is seen that both methods are successful in recovering the graphical structure in all the settings we considered.
(Figure 3: median ROC curves of the two proposed methods under each simulation setting.)
Finally, we report the computation time. The naively weighted estimator has a convex loss function, does not need to recalculate the reweighting parameters, and thus runs faster than the iteratively reweighted MLE. The simulation code is programmed in Python. For the simulation example in setting 1, the average computing time was about 1.5 minutes for the Naive estimator and 1.7 minutes for the MLE. All computations were performed on an Intel(R) Core(TM) i9-10900K CPU @ 3.70GHz.
6 Application
We consider the task of recovering the connectivity network among neurons using the spike train data from the optogenetic study of Bolding and Franks (2018) discussed in Section 1. In this experiment, spike trains are recorded at 30 kHz in the olfactory bulb (OB) and the piriform cortex (PCx) of mice while a laser pulse is applied directly to the OB cells. For specially designed transgenic mice, light pulses elicit an increase in OB spiking that remains elevated for the duration of the stimulus. We consider the spike train data collected at three laser intensity levels from the OB cells and the PCx cells. For each intensity level, the observation lasts the same fixed duration. We choose the number of sub-intervals $M$ by pre-experiments.
(Figure 4: the numbers of firings of the OB cells and the PCx cells.)
Figure 4 shows the number of firings of the OB cells and the PCx cells. The firing numbers vary greatly among cells, so we use an adaptive link function, rescaled for each neuron, to better characterize the average intensity of different neurons. Based on the research on the duration of influence among neurons in Bolding and Franks (2018), we choose the support of the transfer kernel accordingly. Since our goal was to provide interpretable visualizations and to investigate the influence of the laser on the neural connectivity, we computed sparse graphs with a fixed target number of connected edges. We performed a bootstrap procedure by randomly selecting sub-intervals with replacement from the original data, finding a tuning parameter that achieves the target sparsity level, fitting the point process graphical model, and repeating this 50 times. The final graph was constructed from the excitatory and inhibitory edges that occurred in at least a fixed proportion of the bootstrap replications.
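A condensed sketch of this bootstrap procedure is given below, assuming a routine `fit_at_sparsity` that tunes the penalty to hit the target edge count and returns a signed transfer matrix estimate; both the routine and the retention threshold are our own placeholders.

```python
import numpy as np

def bootstrap_graph(segments, fit_at_sparsity, n_boot=50, keep_frac=0.9, seed=None):
    """Retain signed edges that recur in at least keep_frac of the bootstrap fits.

    segments        : list of data sub-intervals to resample with replacement
    fit_at_sparsity : maps a resampled data set to a signed adjacency (transfer) matrix
    """
    rng = np.random.default_rng(seed)
    n = len(segments)
    pos_votes, neg_votes = None, None
    for _ in range(n_boot):
        sample = [segments[i] for i in rng.integers(0, n, size=n)]
        B = fit_at_sparsity(sample)
        pos_votes = (B > 0) if pos_votes is None else pos_votes + (B > 0)
        neg_votes = (B < 0) if neg_votes is None else neg_votes + (B < 0)
    # +1 excitatory, -1 inhibitory, 0 absent
    return ((pos_votes >= keep_frac * n_boot).astype(int)
            - (neg_votes >= keep_frac * n_boot).astype(int))
```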
(Figure 5: the estimated networks for the three laser intensity groups.)
Figure 5 shows the networks estimated using the point process graphical model for the three groups. The bootstrapped graphical model estimated a sparser network than the target sparsity level in each of the three groups. We observe a few apparent patterns. First, the PCx cells are densely connected in all three groups, but one group shows increased connectivity relative to the others, suggesting that there may be a multi-layer excitatory-inhibitory neuron network in the piriform cortex that stabilizes the odor signal. Second, when the laser was applied to the OB cells, the number of edges from other cells to the OB cells increased and the signs of several connections changed, indicating that the laser affected the connectivity structure of the OB cells and the PCx cells and that this effect was heterogeneous. Finally, we observed that a few neurons fired less frequently and had little effect on other cells (such as OB-9, PCx-1, PCx-15, PCx-42 and PCx-44). Their role in the odor-processing mechanism needs to be further explored.
7 Conclusion and Future Work
In this paper, we have proposed the temporal point process graphical model, a new class of graphical models for learning the temporal dependencies among different components of a multivariate point process. We have shown that the local independence structure of the process is fully encoded in the transfer matrix. We have provided a class of estimators with a non-asymptotic estimation error bound, which extends the classical estimators of temporal point processes. Our model naturally includes the case of sparse temporal point process regression, where the predictors and the responses are not the same process. The performance of our estimator has been tested with simulations and a spike train data set.
Several challenges remain in the analysis of temporal point process graphical models. Cross-validation for selecting tuning parameters has proved successful for classical Lasso estimators (Chetverikov et al., 2021); however, a detailed theoretical analysis is needed to establish its performance in our case. Criteria for optimal tuning parameter selection for estimation and variable selection are left for future research. A second challenge is to assess the uncertainty of the estimated parameters, which is less addressed in existing work.
References
- Aït-Sahalia et al. [2015] Yacine Aït-Sahalia, Julio Cacho-Diaz, and Roger JA Laeven. Modeling financial contagion using mutually exciting jump processes. Journal of Financial Economics, 117(3):585–606, 2015.
- Bacry et al. [2013] Emmanuel Bacry, Sylvain Delattre, Marc Hoffmann, and Jean-François Muzy. Some limit theorems for hawkes processes and application to financial statistics. Stochastic Processes and their Applications, 123(7):2475–2499, 2013.
- Bacry et al. [2015] Emmanuel Bacry, Iacopo Mastromatteo, and Jean-François Muzy. Hawkes processes in finance. Market Microstructure and Liquidity, 1(01):1550005, 2015.
- Bacry et al. [2020] Emmanuel Bacry, Martin Bompaire, Stéphane Gaïffas, and Jean-Francois Muzy. Sparse and low-rank multivariate hawkes processes. Journal of Machine Learning Research, 21(50):1–32, 2020.
- Bolding and Franks [2018] K. A. Bolding and K. M. Franks. Recurrent cortical circuits implement concentration-invariant odor coding. Science, 361(6407):eaat6904, 2018.
- Brémaud and Massoulié [1996] Pierre Brémaud and Laurent Massoulié. Stability of nonlinear hawkes processes. The Annals of Probability, 24(3):1563 – 1588, 1996.
- Brémaud [1981] Pierre Brémaud. Point Processes and Queues. Springer-Verlag, 1981.
- Chen et al. [2017a] Shizhe Chen, Ali Shojaie, Eric Shea-Brown, and Daniela Witten. The multivariate hawkes process in high dimensions: Beyond mutual excitation. arXiv preprint arXiv:1707.04928, 2017a.
- Chen et al. [2017b] Shizhe Chen, Daniela Witten, and Ali Shojaie. Nearly assumptionless screening for the mutually-exciting multivariate hawkes process. Electronic Journal of Statistics, 11(1):1207, 2017b.
- Chetverikov et al. [2021] Denis Chetverikov, Zhipeng Liao, and Victor Chernozhukov. On cross-validated lasso in high dimensions. The Annals of Statistics, 49(3):1300–1317, 2021.
- Costa et al. [2018] Manon Costa, Carl Graham, Laurence Marsalle, and Viet Chi Tran. Renewal in hawkes processes with self-excitation and inhibition. Advances in Applied Probability, 52, 2018.
- Daley and Vere-Jones [2007] DJ Daley and David Vere-Jones. An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure. Springer Science & Business Media, 2007.
- Didelez [2008] Vanessa Didelez. Graphical models for marked point processes based on local independence. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):245–264, 2008.
- Engelke and Hitz [2020] Sebastian Engelke and Adrien S Hitz. Graphical models for extremes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 82(4):871–932, 2020.
- Fox et al. [2016] Eric W Fox, Martin B Short, Frederic P Schoenberg, Kathryn D Coronges, and Andrea L Bertozzi. Modeling e-mail networks and inferring leadership using self-exciting point processes. Journal of the American Statistical Association, 111(514):564–584, 2016.
- Hansen et al. [2015] Niels Hansen, Patricia Reynaud-Bouret, and Vincent Rivoirard. Lasso and probabilistic inequalities for multivariate point processes. Bernoulli, 21, 2015.
- Hawkes [1971] Alan G Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83–90, 1971.
- Hawkes and Oakes [1974] Alan G. Hawkes and David Oakes. A cluster process representation of a self-exciting process. Journal of Applied Probability, 11(03):493–503, 1974.
- Kalisch and Bühlmann [2007] Markus Kalisch and Peter Bühlmann. Estimating high-dimensional directed acyclic graphs with the PC-algorithm. Journal of Machine Learning Research, 8(3), 2007.
- Liu [2013] Weidong Liu. Gaussian graphical model estimation with false discovery rate control. The Annals of Statistics, 41(6):2948–2978, 2013.
- Maathuis et al. [2018] Marloes Maathuis, Mathias Drton, Steffen Lauritzen, and Martin Wainwright. Handbook of graphical models. CRC Press, 2018.
- Negahban et al. [2012] Sahand N Negahban, Pradeep Ravikumar, Martin J Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
- Ogata [1978] Yosihiko Ogata. The asymptotic behaviour of maximum likelihood estimators for stationary point processes. Annals of the Institute of Statistical Mathematics, 30(2):243–261, 1978.
- Ozaki [1979] Tohru Ozaki. Maximum likelihood estimation of Hawkes’ self-exciting point processes. Annals of the Institute of Statistical Mathematics, 31(1):145–155, 1979.
- Paninski et al. [2007] Liam Paninski, Jonathan Pillow, and Jeremy Lewi. Statistical models for neural encoding, decoding, and optimal stimulus design. Progress in brain research, 165:493–507, 2007.
- Perry and Wolfe [2013] Patrick O Perry and Patrick J Wolfe. Point process modelling for directed interaction networks. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(5):821–849, 2013.
- Pillow et al. [2008] Jonathan W Pillow, Jonathon Shlens, Liam Paninski, Alexander Sher, Alan M Litke, EJ Chichilnisky, and Eero P Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995–999, 2008.
- Qiao et al. [2019] Xinghao Qiao, Shaojun Guo, and Gareth M James. Functional graphical models. Journal of the American Statistical Association, 114(525):211–222, 2019.
- Shorack and Wellner [2009] Galen R Shorack and Jon A Wellner. Empirical processes with applications to statistics. SIAM, 2009.
- Tang and Li [2021] Xiwei Tang and Lexin Li. Multivariate temporal point process regression. Journal of the American Statistical Association, 0(0):1–16, 2021.
- Tibshirani [1996] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288, 1996.
- Van de Geer et al. [2013] Sara Van de Geer, Peter Bühlmann, et al. $\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs. Annals of Statistics, 41(2):536–567, 2013.
- Vinci et al. [2018] Giuseppe Vinci, Valérie Ventura, Matthew A Smith, and Robert E Kass. Adjusted regularization in latent graphical models: Application to multiple-neuron spike count data. The Annals of Applied Statistics, 12(2):1068, 2018.
- Yuan and Lin [2007] Ming Yuan and Yi Lin. Model selection and estimation in the gaussian graphical model. Biometrika, 94(1):19–35, 2007.
- Zhang et al. [2016] Chunming Zhang, Yi Chai, Xiao Guo, Muhong Gao, David Devilbiss, and Zhengjun Zhang. Statistical learning of neuronal functional connectivity. Technometrics, 58(3):350–359, 2016.
- Zhao and Yu [2006] Peng Zhao and Bin Yu. On model selection consistency of lasso. The Journal of Machine Learning Research, 7:2541–2563, 2006.
- Zheng et al. [2018] Xun Zheng, Bryon Aragam, Pradeep K Ravikumar, and Eric P Xing. Dags with no tears: Continuous optimization for structure learning. Advances in Neural Information Processing Systems, 31, 2018.
- Zhou et al. [2013] Ke Zhou, Hongyuan Zha, and Le Song. Learning triggering kernels for multi-dimensional hawkes processes. In International Conference on Machine Learning, pages 1301–1309. PMLR, 2013.
- Zou [2006] Hui Zou. The adaptive lasso and its oracle properties. Journal of the American statistical association, 101(476):1418–1429, 2006.
Appendix A Appendix
Here we show the proof of Theorem 1. The proof of Theorem 1 is divided into three steps. First, we show that the loss function admits the restricted strong convexity condition in Theorem 3. Second, we show in Theorem 4 that the estimator attains the claimed convergence rate as long as $\lambda_T$ dominates the statistical noise. In the last step, we show in Lemma 1 that $\lambda_T$ dominates the statistical noise with high probability.
Theorem 3
Theorem 4
Lemma 1
A.1 Proof of Theorem 3
We will show the restricted strong convexity for the loss function.
Proof. With a slight abuse of notation, we let denote the centralized influence function and denote .
For , define . By the Taylor expansion, there exists some such that
Let be positive numbers to be determined later, , where is the constant in Assumption 2. Define the functions and .
Let . Note that for any and . Then for , , and
which implies that it suffices to show that there exist strictly positive constants and , which depend only on and , such that
From this perspective, we only need to prove the inequality when .
Denote by the unit sphere and let . Let . For each , it suffices to show that
occurs with small probability. Note that only if .
Then
holds with probability greater than .
The last step is "peeling". For any , we have
Then we have
Then we conclude that by choosing and with some constant , the following
holds uniformly for all with probability greater than .
A.2 Proof of Theorem 4
Let be the difference of the penalized loss functions. Let be the optimal solution to the penalized loss minimization and define . We first show some properties of the global optimizer .
Notice that is a convex function. Then
gives that
Owing to
where the subscript denotes that the vector is restricted to the set , we have
Consider the set for . With the restricted strong convexity condition provided by Theorem 3, we have for any ,
holds with probability greater than , where . Further, conditioning on the event
and noting that , we can derive that
where the last inequality comes from for large enough and the choice of . To derive the final result, we need to show that . Consider the choice of . In this case, for every . If , then there exists a vector with and . As is convex, then
Note that . We arrive at a contradiction, and we conclude that .
A.3 Proof of Lemma 1
Recall the well-known Bernstein-type inequality for point processes, which can be found in Shorack and Wellner (2009):

$$P\left(\int_0^T H_t\, \big(dN_t - d\Lambda_t\big) \ge x \ \text{ and } \int_0^T H_t^2\, d\Lambda_t \le v\right) \le \exp\left(-\frac{x^2}{2(v + b x / 3)}\right), \tag{16}$$

where $H$ is a predictable process with $|H_t| \le b$, $\Lambda$ is the compensator of the point process $N$, and $x, v > 0$.
Condition on the event defined in Lemma 3 and define the predictable process ; then is a predictable process with . With Lemma 4, we have
holds simultaneously for all with probability greater than . On the event in Lemma 3, we have .
Taking and noticing that dominates , we have
holds for all with probability greater than , where are constants that do not depend on or . The second inequality can be proved similarly.
A.4 Ancillary Results
Lemma 2
is the number of events on node in the time interval , and there exists a constant that only depends on such that, with probability greater than , we have
Proof: By the idea of Poisson thinning, we have
and for ,
Because , we can pick only depending on such that with probability greater than , we have
Define the event and .
Lemma 3
Proof:
where . Thus
The desired inequality holds if we take both large enough:
A similar proof can show that
For simplicity of notation, we define the events
and
Theorem 5
Suppose that is a temporal point process with intensity satisfying the assumptions above. Suppose also that the functions have uniformly bounded support and
Then, for , we have
where
are positive constants that do not depend on or . Take and we have
The inequality above stems from the proof of Theorem 3 in Chen et al. (2017a) and is introduced when making assumptions about the tail of the transfer kernel (see Assumption 2 of Chen et al. (2017a) for more details). In our setting, the transfer kernels are compactly supported and we can choose any .
Lemma 4
For large enough, there exist constants that do not depend on or such that
holds.
Proof.
Generally, is not a function depending only on . However, for all , is supported in and we observe that for , we have
both only depend on . Thus
Let
With Theorem 5, we have
As for and , we have
and
Similarly we have
Thus . We have
Recall the well-known Bernstein-type inequality for point processes from Shorack and Wellner [2009]:
where .
Notice that
where the last equality is by Fubini's theorem. Let . Since the transfer kernel is bounded and supported on a compact set, is bounded by a constant that does not depend on or . Thus we have
Take and we have
Lemma 6
For a fixed positive constant and , let
We have for any arbitrary constant ,
where and are positive constants that do not depend on or .
Note that for any ,
Then
Taking , we obtain the desired result.