Stopping Times Occurring Simultaneously
ABSTRACT
Stopping times are used in applications to model random arrivals. A standard assumption in many models is that they are conditionally independent, given an underlying filtration. This is a widely useful assumption, but there are circumstances where it seems to be unnecessarily strong. We use a modified Cox construction along with the bivariate exponential introduced by Marshall and Olkin (1967) to create a family of stopping times, which are not necessarily conditionally independent, allowing for a positive probability for them to be equal. We show that our initial construction only allows for positive dependence between stopping times, but we also propose a joint distribution that allows for negative dependence while preserving the property of non-zero probability of equality. We indicate applications to modeling COVID-19 contagion (and epidemics in general), civil engineering, and to credit risk.
1 INTRODUCTION
Probability models are ubiquitous in modern society. The timing of a random event is often crucial to the analysis of the reliability of a system or to the danger of a default within a credit risk context. Such random times are, of course, referred to as stopping times. Examples of random times of intrinsic interest range from the banal, such as bus arrivals or customers arriving at a restaurant, to the less banal, such as the time a cancer metastasizes or an individual contracts a contagious disease such as COVID-19. Stopping times appear in Civil Engineering, where they model metal fatigue in aircraft, the time of collapse of a bridge, or extreme events such as the recent collapse of the condominium towers in Surfside, Florida. A particularly common use of stopping times is in the theory of Credit Risk, within the discipline of Mathematical Finance.
Indeed, we take the approach pioneered by researchers in Credit Risk, the seminal event being the publication of the book by David Lando in 2004 (see [25]). The “Cox Construction” that Lando uses begins with a filtration of observable events, satisfying the usual hypotheses (see Protter [30] for the formal definition of the usual hypotheses). To fix notation, let $(\Omega, \mathcal{F}, P)$ be the probability space on which $\mathbb{F} = (\mathcal{F}_t)_{t \ge 0}$ is the filtration of observable events. We will add a random time to our model by creating an exponential random variable $Z$ (of parameter 1) that is independent of $\mathbb{F}$ and all of $\mathcal{F}_\infty$. We choose an $\mathbb{F}$-predictable increasing process $A = (A_t)_{t \ge 0}$, and we create the desired random time $\tau$ by writing:
(1) $\tau = \inf\{t \ge 0 : A_t \ge Z\}$
$\tau$ is then a totally inaccessible stopping time for the filtration $\mathbb{G}$ obtained by enlarging $\mathbb{F}$ with $\tau$; that is, $\mathbb{G}$ is the smallest filtration satisfying the usual hypotheses that contains $\mathbb{F}$ and makes $\tau$ a stopping time.
With $\tau$ being a stopping time, the process $1_{\{t \ge \tau\}}$ is adapted and non-decreasing, hence it is a submartingale, whence there exists, via the Doob–Meyer decomposition theorem, a unique $\mathbb{G}$-predictable increasing process $\Lambda$ such that $1_{\{t \ge \tau\}} - \Lambda_t$ is a martingale. It is easy to show that $\Lambda$ is in fact the process $A$ of (1), stopped at $\tau$. The process $\Lambda$ is called the compensator of the stopping time $\tau$.
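As a small illustration of the Cox construction (a sketch, not from the paper, under the simplifying assumption of a constant deterministic intensity $\lambda$, so that $A_t = \lambda t$ and $\tau = \inf\{t : \lambda t \ge Z\} = Z/\lambda$):

```python
import random

def cox_time(lam: float) -> float:
    """Cox-construction stopping time with constant intensity lam:
    A_t = lam * t, so tau = inf{t : A_t >= Z} = Z / lam, with Z ~ Exp(1)."""
    z = random.expovariate(1.0)
    return z / lam

random.seed(0)
lam = 2.0
samples = [cox_time(lam) for _ in range(50_000)]
# With constant intensity, tau is exponential with rate lam, so the
# sample mean should be close to 1 / lam = 0.5.
print(sum(samples) / len(samples))
```

With a stochastic intensity one would instead draw a path of the intensity, integrate it to get $A$, and invert; the constant case shown here is the simplest sanity check.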
If we repeat this procedure to construct two stopping times $\tau_1$ and $\tau_2$, using two independent exponentials of parameter 1, $Z_1$ and $Z_2$, which are independent of each other as well as of the underlying filtration $\mathbb{F}$, then we have two totally inaccessible stopping times which are also conditionally independent of each other, and we have $P(\tau_1 = \tau_2) = 0$. This is the approach of Lando and many others, with some notable exceptions such as Bielecki et al. [3] and Jiao and Li [22].
In applications, the process $A$ is often assumed to be of the form
(2) $A_t = \int_0^t \lambda_s \, ds$
Sufficient conditions for $A$ to be of the form (2) are known (see, e.g., Ethier and Kurtz [14], Guo and Zeng [18], Zeng [36], and, for a general result, Janson et al. [20]).
In this paper we are concerned with models where one can have $P(\tau_1 = \tau_2) > 0$, and with some of the ramifications of such a model. This situation arises in applications when $\tau_1$ and $\tau_2$ are constructed as functionals of Cox processes with independent exponentials $Z_1$ and $Z_2$, but with an added complication: there is a third Cox-constructed time, and each observed stopping time is the minimum of its own underlying time and this common one, so that the two can coincide. This is a natural situation in Credit Risk, for example, as we indicate in Section 3, but also in other domains, such as Civil Engineering and disease contagion, which we treat in Section 4. The resulting stopping times of interest, $\tau_1$ and $\tau_2$, are no longer conditionally independent, and in simple cases the bivariate exponential distribution of Marshall and Olkin [28] comes into play. These models do not have densities on $\mathbb{R}^2_+$, leading to a two-dimensional cumulative distribution function with a singular component. We explore the consequences of this phenomenon in some detail, and we explain its utility for various kinds of applications in which the confluence of stopping times arises naturally in the modeling of random events.
One could argue that it is not of vital importance to have two stopping times with a positive probability of being equal, but rather just to have them be close to each other, even arbitrarily close. We study this situation in Section 2.5.
In many easy-to-imagine examples, the times $\tau_1$ and $\tau_2$ are positively correlated. It is possible to imagine, however, situations where they would naturally be negatively correlated. To cover that situation, we slightly modify our constructions, as exhibited in Section 3.2. As an example, consider the recent scandal with the Boeing 737 Max airplanes. The two horrifically deadly crashes occurred with the confluence of the airplane’s software malfunctioning and the panic of inexperienced and poorly trained pilots. Since airplanes are so carefully constructed, software failures are rare, which made the Boeing 737 Max failures unanticipated; unfortunately, however, poorly trained pilots have recently become rather common, especially in airlines based in poorly regulated countries. (The two examples were Lion Air in Indonesia and Ethiopian Airlines in Ethiopia.) The pairing of such “rare” and “common” events can be captured by negative correlation.
Our work in this paper falls into a popular thread of prior research. Cox and Lewis [11] studied the case of multiple event types occurring in a continuum but, differently from us, they do not generalize the model proposed by Marshall and Olkin [28]. Moreover, they mostly consider “regular” processes, i.e., processes in which events cannot occur at the same time; allowing simultaneous occurrences is an important contribution of our work. Diggle and Milne [13] also proposed a multivariate version of the Cox process, but their model does not allow for simultaneous events either. Another example of a multivariate point process is given in Brown et al. [7].
There is existing work on modeling simultaneous defaults but, different from ours, under Merton’s structural risk model (see Li [26] and Bielecki et al. [4]). Kay Giesecke [15], in a seminal paper concerning Credit Risk published in 2003, was the first (to our knowledge) to consider the Marshall and Olkin model of the bivariate exponential. Later, in 2013, Bielecki et al. [3] worked with a similar model, which is also discussed in Chapter 8 of Crépey et al. [12]. Sun et al. [31] in 2017 developed a model similar to the one we present here, but restricted their attention to multidimensional Lévy subordinator processes. In this paper, we develop the ideas present in Giesecke [15], Bielecki et al. [3], and Crépey et al. [12], and go beyond them.
Brigo et al. [5] argued that a reasonable trade-off between applicability and model flexibility might consist in modeling a multivariate survival indicator process such that each subvector of it is a continuous-time Markov chain, which, as they show, is a property of the Marshall–Olkin distribution. Hence, they claimed that the Marshall–Olkin distribution is the natural and unique choice for modeling default times. This strengthens our choice of joint distribution of the stopping times.
Another closely related work to ours is Lindskog and McNeil [27] who consider a Poisson shock model of arbitrary dimension with both fatal and not-necessarily-fatal shocks. Jiao and Li [22] also allow for simultaneous defaults without using a pure Cox process construction. Inspired by the Jacod Criterion, they use what is known as a density approach, and they can include cases where a stopping time meets another stopping time, specified in advance.
There are also other types of generalizations of Cox processes (see Gueye and Jeanblanc [16]), which generalize not the number of stopping times but the form of the process $A$: they assume that $A$ is not necessarily of the form given in equation (2).
The organization of this paper is as follows. Section 2 presents the survival function of two conditionally dependent (as opposed to conditionally independent) stopping times, an interpretation of it, a decomposition of it into its singular and absolutely continuous parts, and a series of results exploring the special properties of such a modeling approach. For example, not only do we treat the case where $P(\tau_1 = \tau_2) > 0$, but more generally we study when the two stopping times are ‘close’ to each other, in various metrics. Section 3 provides two generalizations of our model. In the first one, we extend to an arbitrary (but finite) number of such stopping times, and in the second one, we allow for a slightly different dependency between the stopping times. Section 4 shows the applicability of our results by providing examples in Epidemiology (such as the case of COVID-19 and its variants) and in Civil Engineering (e.g., the recent condo collapse of Champlain Towers in Florida).
2 THE SURVIVAL FUNCTION
As a starting point, we consider the case of two stopping times, which we will generalize to $K$ stopping times in Section 3.1. Let us fix a filtered probability space $(\Omega, \mathcal{F}, \mathbb{F} = (\mathcal{F}_t)_{t \ge 0}, P)$ large enough to support an $\mathbb{R}^d$-valued càdlàg stochastic process $X$ and three i.i.d. exponential random variables $Z_1, Z_2, Z_3$ with parameter 1, independent of $X$. Then, define:
(3) $\tau_1 = T_1 \wedge T_3, \qquad \tau_2 = T_2 \wedge T_3,$
where
(4) $T_i = \inf\{t \ge 0 : A^i_t \ge Z_i\}, \qquad i = 1, 2, 3,$
and $A^i_t = \int_0^t \lambda_i(X_s)\,ds$, where each $\lambda_i$ is a strictly positive continuous function.
Note that this characterization of the $\lambda_i$ implies that the processes $A^i$ are continuous and strictly increasing.
Theorem 1 (Survival function).
Using the definitions above, assume that $A^i_\infty = \infty$ a.s. for $i = 1, 2, 3$. Then, for $u, v \ge 0$,
$P(\tau_1 > u, \tau_2 > v) = E\left[e^{-A^1_u - A^2_v - A^3_{u \vee v}}\right].$
Proof.
Let $u, v \ge 0$. By definition of $\tau_1$ and $\tau_2$, conditionally on the path of $X$ we have
$P(\tau_1 > u, \tau_2 > v \mid X) = P(Z_1 > A^1_u)\,P(Z_2 > A^2_v)\,P(Z_3 > A^3_{u \vee v}) = e^{-A^1_u - A^2_v - A^3_{u \vee v}}.$
The result follows by taking the expectation. ∎
Remark 2.1.
Remark 2.2.
Note that when $X$ is deterministic (i.e., there is no randomness coming from $X$), Theorem 1 gives a generalization of the bivariate exponential (BVE) distribution introduced by Marshall and Olkin [28]. In the specific case where $\lambda_i(X_t) \equiv \lambda_i$, i.e., the intensities are constant, we recover the Marshall and Olkin BVE with parameters $(\lambda_1, \lambda_2, \lambda_3)$. That is, $P(\tau_1 > u, \tau_2 > v) = e^{-\lambda_1 u - \lambda_2 v - \lambda_3 (u \vee v)}$.
Remark 2.3.
Remark 2.4.
The process $\lambda_1(X_t) + \lambda_3(X_t)$ is usually known as the intensity of $\tau_1$ in the progressive enlargement of $\mathbb{F}$ with $\tau_1$. (See Corollary 2.26 in Aksamit and Jeanblanc [1].)
Remark 2.5.
The intensity of $\tau_i$, for $i = 1, 2$, in the progressive enlargement of $\mathbb{F}$ with $\tau_i$ is $\lambda_i(X_t) + \lambda_3(X_t)$.
Caveat. In the rest of the paper, for ease of notation, we sometimes write $\lambda_i$ instead of $\lambda_i(X_t)$, for $i = 1, 2, 3$.
2.1 Interpretation of Joint Distribution
One way to interpret the distribution given in Theorem 1 is the following. Suppose we have a two-component system where the lifetimes of the components are represented by $\tau_1$ and $\tau_2$, respectively. Each component dies after receiving a shock. Shocks are governed by three independent Cox processes $N^1, N^2, N^3$; in other words, $N^i$ is an inhomogeneous Poisson process with stochastic intensity $\lambda_i(X_t)$. For $i = 1, 2$, $N^i$ represents the number of shocks through time that affect only component $i$, while $N^3$ represents the number of shocks through time that affect both components.
Events in the process $N^1$ are shocks to component 1, events in the process $N^2$ are shocks to component 2, and events in the process $N^3$ are shocks to both components.
Hence, we get
(7) $P(\tau_1 > u, \tau_2 > v) = P(N^1_u = 0,\, N^2_v = 0,\, N^3_{u \vee v} = 0) = E\left[e^{-A^1_u - A^2_v - A^3_{u \vee v}}\right],$
which coincides with Theorem 1. In the second line of the previous expression, $u \vee v$ denotes $\max(u, v)$.
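The shock interpretation can be checked by direct simulation. The following is a sketch under the simplifying assumption of constant, deterministic shock rates (the rates below are illustrative); each observed time is the minimum of an individual shock time and the common shock time:

```python
import random

random.seed(1)
l1, l2, l3 = 1.0, 2.0, 3.0   # rates of the three shock processes
n, equal = 200_000, 0
for _ in range(n):
    t1 = random.expovariate(l1)   # shock hitting component 1 only
    t2 = random.expovariate(l2)   # shock hitting component 2 only
    t3 = random.expovariate(l3)   # common shock hitting both components
    tau1, tau2 = min(t1, t3), min(t2, t3)
    if tau1 == tau2:              # both components die from the common shock
        equal += 1
# In this constant-rate special case, P(tau1 = tau2) = l3/(l1+l2+l3) = 0.5.
print(equal / n)
```

The two stopping times coincide exactly on the event that the common shock arrives before both individual shocks, which is what produces the singular part of the joint distribution.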
2.2 Decomposition of the Joint Distribution
As found in Theorem 1 and assuming that , we have that the joint survival function is:
Theorem 2.
Under the conditions of Theorem 1 and assuming that , the absolutely continuous and singular parts of are given by:
(8) |
where
Proof.
Let be the waiting time to the first shock in the process defined in Section 2.1. (Recall that we are assuming that ). By the assumptions made in Section 2.1, are independent and the survival function of each one of them is given by:
(9) |
Hence, the density is equal to:
(10) |
can be written as,
(11) |
where and
Since , we get
(13) |
With all these elements, is obtained by subtraction.
We can verify that the singular part is indeed singular, since its mixed second partial derivative is zero when $u \ne v$. In contrast, the absolutely continuous part is indeed absolutely continuous, since its mixed second partial derivative is a density. ∎
Remark 2.6.
The value of $P(\tau_1 = \tau_2)$ corresponds to the total mass of the singular part. That is,
(14) $P(\tau_1 = \tau_2) = E\left[\int_0^\infty \lambda_3(X_s)\, e^{-A^1_s - A^2_s - A^3_s}\,ds\right].$
Corollary 2.1.
Let , where is a positive continuous function and is an -valued stochastic process adapted to the filtration and independent of . Then,
Proof.
By a similar calculation to get in the proof of Theorem 2, we get
The result follows by taking the expectation. ∎
2.3 Estimating the Probability of Equality in Two Stopping Times
Now we are interested in finding estimates for $P(\tau_1 = \tau_2)$ (see equation (14) in Section 2.2) under different assumptions on $\lambda_3$ (the intensity of $T_3$) or on $A^3$ (the compensator of $T_3$). As in Theorem 1, we assume $A^i_\infty = \infty$ a.s. for every $i$.
Example 2.1 (Constant intensity).
If $\lambda_i(X_t) = \lambda_i$, a constant, for all $t$ and $i = 1, 2, 3$, it follows that:
$P(\tau_1 = \tau_2) = \dfrac{\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3}.$
Example 2.2 (Same intensity).
If $\lambda_1(X_t) = \lambda_2(X_t) = \lambda_3(X_t)$ for all $t$, we have that $A^1_t = A^2_t = A^3_t$ for all $t$. Moreover, it is straightforward to get $P(\tau_1 = \tau_2) = \tfrac{1}{3}$.
Example 2.3 (Bounded intensity).
Assume a.s. for all , for positive, non-random, and integrable with a.s., where are positive real random variables, independent of everything else, and such that . Then, we get
Example 2.4 (Intensity bounded by sum of intensities).
If for a.s. all where are positive real random variables, independent of everything else, and such that . Then:
The proofs of these last two examples are straightforward conditioning arguments and, hence, left to the reader.
2.4 Conditional Probabilities
Throughout this section, let .
Proposition 2.1.
Proof.
By the definition of , we have
Recall that, by definition, are independent with density under equal to
This implies,
Take the expectation to get the result. ∎
Remark 2.7.
When taking the limit in the previous proposition, we recover, as expected, the result of Corollary 2.1.
The following two propositions are straightforward applications of Proposition 2.1. Hence, the proofs are left to the reader.
Proposition 2.2.
Proposition 2.3.
2.5 Distance Between Stopping Times
Proposition 2.4.
For ,
In particular, if and , we get
Proof.
The next proposition shows that the probability of the two stopping times both occurring in a small time interval, normalized by the interval's length, converges to the expectation of $\lambda_3$, which is the common part of the intensities of $\tau_1$ and $\tau_2$ (recall Remark 2.5).
An anonymous referee has pointed out a relation of the following proposition to Aven’s Lemma. This is, of course, correct; see Aven [2], Ethier and Kurtz [14], and Zeng [36].
Proposition 2.5.
Proof.
Let and note that:
Using Proposition 2.4 with and , dividing by , and using L’Hôpital’s rule to get the limit:
Finally, given is bounded by 1 and using a conditioning argument, we can conclude that:
∎
Proposition 2.5 motivates the following one:
Proposition 2.6.
If
Proof.
As , we can always find a sufficiently small such that . Hence, without loss of generality, we assume and proceed as in the proof of Proposition 2.5; unlike there, however, we use L’Hôpital’s rule twice to get the desired limit. ∎
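The limiting behavior described before Proposition 2.5 can be checked in closed form in the constant-intensity case (an assumption made here for tractability; the check is at time 0 and uses the Marshall–Olkin survival function of Remark 2.2 via inclusion–exclusion):

```python
import math

l1, l2, l3 = 1.0, 2.0, 3.0   # illustrative constant rates

def both_by(eps: float) -> float:
    # P(tau1 <= eps, tau2 <= eps) by inclusion-exclusion, using the
    # constant-rate survival function
    # P(tau1 > u, tau2 > v) = exp(-l1*u - l2*v - l3*max(u, v)):
    return (1.0
            - math.exp(-(l1 + l3) * eps)        # subtract P(tau1 > eps)
            - math.exp(-(l2 + l3) * eps)        # subtract P(tau2 > eps)
            + math.exp(-(l1 + l2 + l3) * eps))  # add back P(both > eps)

for eps in (0.1, 0.01, 0.001, 0.0001):
    print(both_by(eps) / eps)   # approaches l3 = 3.0 as eps -> 0
```

The normalized probability converges to the common intensity $\lambda_3$, not to the product of marginal rates, which is the signature of the common-shock dependence.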
The next couple of propositions measure, in two different metrics, how close the two stopping times are to each other.
Proposition 2.7.
(Distance in probability)
Proof.
Let and . Similarly, , , and be the second largest from . Also, let . Then,
where stands for the density of under , i.e., | ||||
Use this expression, along with the following equality, and then take the expectation to get the desired result
∎
Proposition 2.8 ( distance).
If and a.s. for (in other words, tends to 0 faster than tends to infinity as ), we have
Proof.
Let and stand for the cumulative and survival distribution functions of given . Similarly, let and stand for the joint cumulative and survival distribution functions of given .
We expand the square and handle each term separately. For , we use integration by parts in the following way:
To find , we exploit the result of Young [35] on integration by parts in two or more dimensions. If and is of bounded variation on finite intervals, then:
(15) |
This equality implies that, for :
(16) |
Hence,
(17) |
The result follows by taking the expectation. ∎
Remark 2.8.
In the case of independence of $\tau_1$ and $\tau_2$, since $\lambda_3(X_t) = 0$ for all $t$, we get that:
(18) |
3 GENERALIZATIONS
3.1 Generalization to K Stopping Times.
Given the interpretation introduced in Section 2.1, there is a natural way to extend our model to more than two stopping times. We explicitly motivate and present the case of three stopping times. Suppose we have a three-component system where the lifetimes of the components are represented by $\tau_1$, $\tau_2$, and $\tau_3$, respectively. Each component dies after receiving a shock, and shocks are governed by six Cox processes, which are independent given the underlying filtration:
Events in the process $N^i$ are shocks to component $i$ only (for $i = 1, 2, 3$), events in the process $N^{ij}$ are shocks to components $i$ and $j$ (for $1 \le i < j \le 3$), and events in the process $N^{123}$ are shocks to all three components.
In this way, considering and we have:
(19) |
By using a similar technique we can generalize to any number of stopping times, and in Section 4.1 we will present an application of this natural extension of our model. However, as the number of stopping times increases, handling the expression presented in equation (19) becomes cumbersome. In Jarrow et al. [21], we propose and study in detail a slightly different approach that makes it easier to generalize to more than two stopping times.
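The three-component shock system can be simulated directly. The sketch below uses constant illustrative rates (an assumption); in this setup, all three stopping times coincide exactly when the triple shock arrives before every other shock:

```python
import random

random.seed(2)
# Six shock processes for three components: one per singleton, one per
# pair, and one hitting all three components (constant illustrative rates).
rates = {
    frozenset({1}): 1.0, frozenset({2}): 1.0, frozenset({3}): 1.0,
    frozenset({1, 2}): 0.5, frozenset({1, 3}): 0.5, frozenset({2, 3}): 0.5,
    frozenset({1, 2, 3}): 2.0,
}
n, all_equal = 100_000, 0
for _ in range(n):
    shock = {s: random.expovariate(r) for s, r in rates.items()}
    # Component i dies at the first shock of any process that contains i.
    tau = {i: min(t for s, t in shock.items() if i in s) for i in (1, 2, 3)}
    if tau[1] == tau[2] == tau[3]:
        all_equal += 1
# All three coincide iff the triple shock is earliest among all shocks:
# 2.0 / (sum of all rates) = 2.0 / 6.5 ~ 0.3077.
print(all_equal / n)
```

The same dictionary-of-subsets pattern extends mechanically to $K$ components, at the cost of $2^K - 1$ shock processes, which is the combinatorial growth alluded to above.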
3.2 A More General Distribution
In this section, we generalize the result of Section 2 to obtain a distribution that allows $\tau_1$ and $\tau_2$ to have a negative covariance. (We thank Philip Ernst and Guodong Pang for suggesting this idea in an earlier version of this paper.) Recalling Equation (2.5) and that , one can see that the covariance of $\tau_1$ and $\tau_2$ in the previous model is:
(20) |
Since $\lambda_3(X_t) \ge 0$ for all $t$, it is clear that the covariance is always nonnegative.
To obtain a negative covariance, we drop the assumption that the underlying exponential random variables $Z_1, Z_2, Z_3$ are independent (recall equation (4), where we use these random variables to define $T_1, T_2, T_3$). Now we assume that $(Z_1, Z_2)$ follows the Gumbel bivariate distribution (see Gumbel [17]). That is, the joint survival function of $(Z_1, Z_2)$ is:
(21) $P(Z_1 > z_1, Z_2 > z_2) = e^{-z_1 - z_2 - \theta z_1 z_2}, \qquad z_1, z_2 \ge 0.$
Here $0 \le \theta \le 1$, and $Z_3$ remains a standard exponential random variable independent of $(Z_1, Z_2)$. The rest of the definitions given in (4) remain the same. Then, it is easy to check that:
(22) |
(23) |
Define $\tau_1$ and $\tau_2$ as in equation (3). Then, we can get the joint distribution of $(\tau_1, \tau_2)$:
Theorem 3 (Survival function).
Suppose the $\lambda_i$, for $i = 1, 2, 3$, are non-random positive continuous functions, which implies that the $A^i$ are continuous and strictly increasing. Assume further that $A^i_\infty = \infty$ a.s. Then,
Proof.
Let $u, v \ge 0$. By the definition of $\tau_1$ and $\tau_2$, we have,
The result follows by taking the expectation. ∎
Remark 3.1.
From Theorem 3, it is clear that $\tau_1$ is not independent of $\tau_2$.
Remark 3.2.
Proposition 3.1 (Probability of the two stopping times being equal).
(25) |
which is greater than zero as long as $\lambda_3$ is not identically equal to zero.
The proof of this proposition follows by an argument similar to the one used in the proof of Theorem 2 to get $P(\tau_1 = \tau_2)$, so we leave it to the reader.
This is the general set-up. For the rest of this section, to keep the computations tractable, we assume that $\lambda_i(X_t) = \lambda_i$, a constant, for $i = 1, 2, 3$ and all $t$. Under this assumption, note the following:
1.
2.
3. and , i.e., the $\tau_i$ are marginally exponential.
Before showing that one can get a negative covariance between the stopping times, we show in the next proposition that, by allowing the possibility of a negative covariance, the probability of the two stopping times being equal is smaller.
Proposition 3.2.
Fix some values of $\lambda_1, \lambda_2, \lambda_3$. If $\theta > 0$, the probability of $\tau_1$ being equal to $\tau_2$ is smaller than the corresponding probability in the case $\theta = 0$.
Proof.
Let $P_\theta$ and $P_0$ stand for the probability of an event under the law of $(\tau_1, \tau_2)$ when $\theta > 0$ and when $\theta = 0$, respectively.
Recall from Example 2.1 that:
(26) |
Also, setting in (25), we get:
(27) |
where the second equality follows by completing the square and is only valid if . The third equality is just a change of variables.
Then, using the well-known bound , we have that:
which shows, as desired:
(28) |
∎
Proposition 3.3 (Negative Covariance).
Suppose , and . If , then $\operatorname{Cov}(\tau_1, \tau_2) < 0$.
Proof.
Using the result of Young [35], as in the proof of Proposition 2.8 as well as Theorem 3, we have that:
(29) |
We reduce the previous equation even further by completing the square, for instance, take the first summand:
(30) |
Similarly, the second summand in (3.2) reduces to:
(31) |
Using the fact that and equations (30) and (31), we get that:
(32) |
Using the assumption and , we rewrite the previous expression to get:
(33) |
As , we have that and . Then:
(34) |
Hence, to get a negative covariance, it suffices to show:
(35) |
Set and note that our initial assumption implies that . Hence:
(36) |
By the proof in the Appendix, one can see that the previous integral is always smaller than one when , which is the case here because:
Hence:
(37) |
This shows that equation (35) holds and consequently, we have a negative covariance. ∎
Remark 3.3.
If , the condition of Proposition 3.3 reduces to , which is equivalent to . The interpretation is that, to get a negative covariance when , it suffices to have sufficiently large compared to .
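The negative-dependence mechanism can be checked numerically at the level of the underlying exponentials (a sketch, not the paper's proof). Hoeffding's covariance identity, $\operatorname{Cov}(Z_1, Z_2) = \int_0^\infty\!\int_0^\infty \left[S(x,y) - S_1(x)S_2(y)\right] dx\,dy$, applied to the Gumbel survival function $S(x,y) = e^{-x-y-\theta xy}$ with $\operatorname{Exp}(1)$ marginals, shows a negative covariance for $\theta > 0$:

```python
import math

def gumbel_cov(theta: float, h: float = 0.02, upper: float = 12.0) -> float:
    """Cov(Z1, Z2) for the Gumbel bivariate exponential via Hoeffding's
    identity, computed with a midpoint rule on [0, upper]^2:
    integrand = exp(-x - y - theta*x*y) - exp(-x)*exp(-y)."""
    steps = int(upper / h)
    cov = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        for j in range(steps):
            y = (j + 0.5) * h
            cov += (math.exp(-x - y - theta * x * y)
                    - math.exp(-x - y)) * h * h
    return cov

print(gumbel_cov(0.0))   # independent case: covariance 0
print(gumbel_cov(0.5))   # strictly negative for theta > 0
```

The truncation at `upper = 12` is harmless here because the integrand decays like $e^{-x-y}$; the grid step `h` trades accuracy for runtime.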
4 APPLICATIONS
4.1 Application to Epidemiology
Suppose we are interested in the probability of $K$ people getting infected with COVID-19 at exactly the same time. The time to infection of each person can be modeled as a stopping time. Some current models (see Britton and Pardoux [6]) assume independence of these stopping times, and thus the probability of them being equal is zero. However, using our model, we can weaken the independence assumption and conclude that:
Theorem 4.
If $(\tau_1, \ldots, \tau_K)$ follows the joint distribution described in Section 3.1, then:
(38) |
The innermost sum in the exponent is taken over all the possible combinations of . For example, if , could be , , etc; if , can only be
Remark 4.1.
To be more specific, if , then we get,
(39) |
Proof.
Using the notation from Section 3, events in the process are shocks to the components of the system. Hence
Then, we find the distribution of:
under the measure which stands for
(40) |
Hence, , given , has a continuous distribution with density equal to:
(41) |
The innermost sums are taken over all the possible combinations of . Then,
(42) |
The result follows by taking the expectation. ∎
Corollary 4.1.
If for all ; for all size 2 combinations in ; for all size 3 combinations in ; ; . Then, we have:
(43) |
Corollary 4.2.
If for all ; for all size 2 combinations in ; for all size 3 combinations in ; ; . Then, we have:
(44) |
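As an illustrative special case of the epidemic model (under assumptions made only for this sketch: each of $K$ people has an individual infection rate `lam`, there is a single "common exposure" shock of rate `mu`, and there are no pairwise or other intermediate shocks), all $K$ infection times coincide exactly when the common shock comes first:

```python
import random

random.seed(3)
K, lam, mu = 5, 1.0, 2.0        # illustrative rates
n, all_equal = 100_000, 0
for _ in range(n):
    common = random.expovariate(mu)                      # shared exposure
    individual = [random.expovariate(lam) for _ in range(K)]
    taus = [min(t, common) for t in individual]          # infection times
    if all(t == taus[0] for t in taus):                  # all K coincide
        all_equal += 1
# All K coincide iff the common shock precedes every individual one:
# mu / (K * lam + mu) = 2 / 7 ~ 0.2857.
print(all_equal / n)
```

This is the simplest instance of the $K$-dimensional construction; richer dependence (household, workplace, etc.) would add shock processes for intermediate subsets, as in Section 3.1.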
4.2 Application to Engineering
A classic problem in Operations Research, typically studied in queueing theory, is that of a complicated machine. The machine fails if one of its key parts fails. Knowing this, designers create a certain redundancy by doubling key components, so that if one fails, there is a back-up ready to assume its duties. To save money, however, one can have a single back-up for two components. Suppose $\tau_1$ is the (random) failure time of one component, and $\tau_2$ is the (again, random) failure time of the second component. If they fail at the same time, then the machine itself will fail, since the solitary back-up cannot replace both components simultaneously. One usually considers such a situation to be unlikely, even very unlikely. If it were to happen, however, we would be interested in $P(\tau_1 = \tau_2)$. In the conventional models, this probability is zero, since the failure times are each exponential and conditionally independent. However, if we consider a third time $T_3$, the (once again, random) time of an external shock (such as the failure of the air conditioning unit, or a power failure with a surge when the power resumes, etc.), and if we let
(45) $\tau_1 = T_1 \wedge T_3, \qquad \tau_2 = T_2 \wedge T_3,$
then we are in the case where $P(\tau_1 = \tau_2) > 0$. In special cases we can calculate this probability with precision, and in other, more complicated situations, we can give upper and lower bounds for it.
A different kind of example, recent and quite dramatic, is the collapse of Champlain Towers South, in Surfside, Florida (just north of Miami Beach). The twelve-story building fell at night (1:30 AM) and killed 98 people, who were in their apartments at the time, presumably even in their beds. The building had a pool deck above a covered garage. The columns holding up the pool deck were too thin and not strong enough to withstand the stresses imposed on them over four decades. Water was seen pouring into the parking garage only minutes before the collapse.
In the main building that collapsed, structural columns were too narrow to accommodate enough rebar, meaning that contractors had to choose between cramming extra steel into a too-small column (which can create air pockets that accelerate corrosion) and inadequately attaching floor slabs to their supports. Our model would have several stopping times, each representing the failure of a different component of the structure. Two important ones are: (1) the corrosion of the rebar supports within the concrete, due to the salt air and to the massive strains of the violent weather that plagues the Florida coast during hurricane season; (2) the use of a low-quality grade of concrete, violating local government regulations in the construction of the towers, leading to concrete integrity decay over 40 years of seaside weather.
One way to model this is to take a vector of two Cox constructions, using two independent exponentials $Z_1$ and $Z_2$ to construct our failure times $\tau_1$ and $\tau_2$. This gives us that $\tau_1$ and $\tau_2$ are conditionally independent, given the underlying filtration $\mathbb{F}$. This leads to $P(\tau_1 = \tau_2) = 0$, even if they have the exact same compensators.
Other models have been proposed, such as the general framework presented in the book of Anna Aksamit and Monique Jeanblanc [1], where the random variables $Z_1$ and $Z_2$ are multivariate exponentials with a joint density describing how they relate to each other. However, even in this more general setting, we have $P(\tau_1 = \tau_2) = 0$.
Assuming it is the stopping times occurring simultaneously that causes the collapse, we want a model that allows us to have $P(\tau_1 = \tau_2) > 0$. Let us assume we have three standard Cox constructions, with independent exponentials $Z_1, Z_2, Z_3$ and with different compensators $A^i$ for $i = 1, 2, 3$.
Call the three stopping times $T_1, T_2, T_3$. The time $T_3$ could be anything, such as a hurricane putting heavy stress on the building, an earthquake, or some other external factor. (Current forensic analysis, which is ongoing as we write this, suggests that the collapse of the pool deck created a seismic shock sufficient to precipitate the collapse of the south tower, and weakened the structural integrity of the north tower to such an extent that it was condemned and then deliberately destroyed.) In this case, however, we can take $T_3$ to be the time of the flooding of the parking structure under the swimming pool and pool deck in general.
Simulations commissioned by the newspaper The Washington Post and done by a team led by Khalid M. Mosalam of the University of California, Berkeley, show how that might have happened and indicate that it is a plausible scenario (Swaine et al. [32]).
As in (45), we define
(46) $\tau_1 = T_1 \wedge T_3, \qquad \tau_2 = T_2 \wedge T_3,$
where we take the stopping time $T_3$ to be the time of the collapse of the pool deck. As the noted engineer H. Petroski has pointed out [29], it is often the case that multiple things happen at once in order to precipitate a disaster such as the fall of Champlain Towers South. Indeed, the forensic engineer R. Leon of Virginia Tech is quoted in American Society of Civil Engineers, 2021 [34]: “I think it is way too early to tell,” said Roberto Leon, P.E., F.SEI, Dist.M.ASCE, the D.H. Burrows Professor of Construction Engineering in the Charles Edward Via Jr. Department of Civil and Environmental Engineering at Virginia Tech. “It’s going to require a very careful forensic approach here, because I don’t think the building collapsed just because of one reason. What we tend to find in forensic investigations is that three or four things have to happen for a collapse to occur that is so catastrophic.”
Professor Leon is a widely respected authority in forensic civil engineering, and the key insight for us is his last statement that three or four things have to happen simultaneously for a catastrophic collapse.
Less dramatic examples, quite pertinent to the recent attention being paid to the decay of infrastructure around the US, also illustrate the utility of this approach. A first example is the Interstate 10 Twin Span Bridge over Lake Pontchartrain, north of New Orleans, LA. It was rendered completely unusable by Hurricane Katrina, but the naive explanation was shown to be false by the fact that several other bridges with the same structural design remained intact. Upon investigation, it was determined that air trapped beneath the deck of the Interstate 10 bridges was a major contributing factor to the bridge’s collapse; while major, it was not the only contributing factor (Chen et al. [8]).
A final example is the derailment of an Amtrak train near Joplin, Montana, in September 2021. There were 154 people on board, and 44 passengers and crew were taken to area hospitals with injuries. The train was traveling at between 75 and 78 mph, just below the speed limit of 79 mph on that section of track, when its emergency brakes were activated. The two locomotives and two railcars remained on the rails, and eight cars derailed. Investigations of these types of events take years, but preliminary speculation is that the accident could have been caused by problems with the railroad or track, such as a rail that buckled under high heat, or the track itself giving way when the train passed over. Both might also be possible, leading to two stopping times $\tau_1$ and $\tau_2$ and a situation where $P(\tau_1 = \tau_2) > 0$. See Hanson and Brown [19].
5 APPENDIX
Showing that the integral in equation (36) is smaller than 1 for any is equivalent to showing the following inequality:
(47) |
This is equivalent to the problem of finding an upper bound for the complementary error function (erfc) (recall that $\operatorname{erfc}(x) = \tfrac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2}\,dt$), which has been widely studied in the literature. A classical result is the Chernoff–Rubin bound (see Chernoff [9]). More recent work can be found in Chiani et al. [10], Karagiannidis and Lioumpas [23], and Tanash and Riihonen [33]. However, to our knowledge, there is no bound of the type we need here.
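The classical Chernoff-type bound $\operatorname{erfc}(x) \le e^{-x^2}$ for $x \ge 0$ can be verified numerically on a grid (a generic sanity check, not the appendix's sharpened bound; the optimization below tightens the constant in the exponent):

```python
import math

# Check erfc(x) <= exp(-x**2) on a grid of x in [0, 10], and record the
# largest gap between the bound and erfc on that grid.
xs = [i * 0.001 for i in range(10_001)]
gaps = [math.exp(-x * x) - math.erfc(x) for x in xs]
assert all(g >= 0.0 for g in gaps)   # the bound holds everywhere on the grid
print(max(gaps))                     # worst-case slack of the bound
```

The bound is tight at $x = 0$ (both sides equal 1) and loosest near $x = 1/\sqrt{\pi}$, which is why optimizing the constant in the exponent, as done below, is worthwhile.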
We will show a slightly tighter bound than the one in (36); namely, we would like to optimize the following bound in
(48) |
When we say optimize, we mean that the bound is as close as possible to (yet still larger than) for all .
In other words, define
(49) |
We want to find such that:
1. for all and .
2. for all .
To satisfy the first item in the previous list, we need to be as large as possible, because is decreasing in . However, if we wish to have , cannot be indiscriminately large.
To find , let us first look at .
(50) |
We can find that:
(51) |
Hence, is increasing when , is maximized when , and is decreasing when .
Also note that
(52) |
Hence, to satisfy the second item in the list above (i.e., for all ), it suffices to pick such that , i.e.,
(53) |
Solving for , we find that:
(54) |
We can then conclude that the maximum of occurs when
(55) |
We can then numerically calculate that
(56) |
This last equation means that the maximum difference between and is
References
- Aksamit and Jeanblanc [2017] A. Aksamit and M. Jeanblanc. Enlargement of Filtration with Finance in View. SpringerBriefs in Quantitative Finance. Springer, Cham, 1 edition, 2017. doi: https://doi.org/10.1007/978-3-319-41255-9.
- Aven [1985] T. Aven. A Theorem for Determining the Compensator of a Counting Process. Scandinavian Journal of Statistics, 12(1):69–72, 1985. ISSN 03036898, 14679469. URL http://www.jstor.org/stable/4615974.
- Bielecki et al. [2013] T. Bielecki, A. Cousin, S. Crépey, and A. Herbertsson. In search of a grand unifying theory. Creditflux Newsletter Analysis, pages 20–21, July 2013.
- Bielecki et al. [2018] T. R. Bielecki, M. Jeanblanc, and A. D. Sezer. Joint densities of hitting times for finite state Markov processes. Turkish Journal of Mathematics, 42(2):586–608, 2018. doi: https://doi.org/10.3906/mat-1608-29. URL https://journals.tubitak.gov.tr/math/vol42/iss2/14.
- Brigo et al. [2016] D. Brigo, J.-F. Mai, and M. Scherer. Markov multi-variate survival indicators for default simulation as a new characterization of the Marshall–Olkin law. Statistics & Probability Letters, 114:60–66, 2016. ISSN 0167-7152. doi: https://doi.org/10.1016/j.spl.2016.03.013. URL https://www.sciencedirect.com/science/article/pii/S016771521530167X.
- Britton and Pardoux [2019] T. Britton and E. Pardoux. Chapter 1 Stochastic Epidemic Models, pages 5–19. Springer International Publishing, Cham, 2019. ISBN 978-3-030-30900-8.
- Brown et al. [1981] T. C. Brown, B. W. Silverman, and R. K. Milne. A class of two-type point processes. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 58:299–308, 1981. URL https://doi.org/10.1007/BF00542637.
- Chen et al. [2007] G. Chen, E. C. Witt III, D. Hoffman, R. Luna, and A. Sevi. Analysis of the Interstate 10 Twin Bridges Collapse During Hurricane Katrina. Science and the Storms: the USGS Response to the Hurricanes of 2005, pages 35–42, 2007. doi: 10.3133/cir13063D.
- Chernoff [1952] H. Chernoff. A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations. The Annals of Mathematical Statistics, 23(4):493–507, 1952. doi: 10.1214/aoms/1177729330. URL https://doi.org/10.1214/aoms/1177729330.
- Chiani et al. [2003] M. Chiani, D. Dardari, and M. K. Simon. New exponential bounds and approximations for the computation of error probability in fading channels. IEEE Transactions on Wireless Communications, 2(4):840–845, 2003. doi: 10.1109/TWC.2003.814350.
- Cox and Lewis [1972] D. R. Cox and P. A. W. Lewis. Multivariate Point Processes. Berkeley Symposium on Mathematical Statistics and Probability, pages 401–448, 1972. doi: 10.1525/9780520375918-024. URL https://doi.org/10.1525/9780520375918-024.
- Crépey et al. [2014] S. Crépey, T. R. Bielecki, and D. Brigo. Counterparty Risk and Funding: A Tale of Two Puzzles. Chapman & Hall, 2014.
- Diggle and Milne [1983] P. J. Diggle and R. K. Milne. Bivariate Cox Processes: Some Models for Bivariate Spatial Point Patterns. Journal of the Royal Statistical Society. Series B (Methodological), 45(1):11–21, 1983. ISSN 00359246. URL http://www.jstor.org/stable/2345617.
- Ethier and Kurtz [1986] S. N. Ethier and T. G. Kurtz. Markov Processes: Characterization and Convergence. Wiley, 1986. ISBN 9780471081869.
- Giesecke [2003] K. Giesecke. A Simple Exponential Model for Dependent Defaults. The Journal of Fixed Income, 13(3):74–83, 2003. ISSN 1059-8596. doi: 10.3905/jfi.2003.319362. URL https://jfi.pm-research.com/content/13/3/74.
- Gueye and Jeanblanc [2021] D. Gueye and M. Jeanblanc. Generalized Cox Model for Default Times. Working paper, June 2021. URL https://hal.archives-ouvertes.fr/hal-03264864.
- Gumbel [1960] E. J. Gumbel. Bivariate Exponential Distributions. Journal of the American Statistical Association, 55(292):698–707, 1960. ISSN 01621459. URL http://www.jstor.org/stable/2281591.
- Guo and Zeng [2008] X. Guo and Y. Zeng. Intensity Process and Compensator: A New Filtration Expansion Approach and the Jeulin-Yor Theorem. The Annals of Applied Probability, 18(1):120–142, 2008. ISSN 10505164. URL http://www.jstor.org/stable/25442749.
- Hanson and Brown [2021] A. B. Hanson and M. Brown. Cause of Montana Amtrak derailment that killed 3, injured dozens still under investigation. Great Falls Tribune, October 27, 2021.
- Janson et al. [2011] S. Janson, S. M’Baye, and P. Protter. Absolutely Continuous Compensators. International Journal of Theoretical and Applied Finance, 14(03):335–351, 2011. doi: 10.1142/S0219024911006565. URL https://doi.org/10.1142/S0219024911006565.
- Jarrow et al. [2022] R. Jarrow, P. Protter, and A. Quintos. Computing the Probability of a Financial Market Failure: A New Measure of Systemic Risk. Annals of Operations Research, 2022. doi: 10.1007/s10479-022-05146-9. URL https://doi.org/10.1007/s10479-022-05146-9.
- Jiao and Li [2015] Y. Jiao and S. Li. Generalized density approach in progressive enlargement of filtrations. Electronic Journal of Probability, 20:1 – 21, 2015. doi: 10.1214/EJP.v20-3296. URL https://doi.org/10.1214/EJP.v20-3296.
- Karagiannidis and Lioumpas [2007] G. K. Karagiannidis and A. S. Lioumpas. An Improved Approximation for the Gaussian Q-Function. IEEE Communications Letters, 11(8):644–646, 2007. doi: 10.1109/LCOMM.2007.070470.
- Lando [1998] D. Lando. On Cox processes and credit risky securities. Review of Derivatives Research, 2:99–120, 1998. doi: https://doi.org/10.1007/BF01531332.
- Lando [2004] D. Lando. Credit Risk Modeling: Theory and Applications. Princeton University Press, 2004. ISBN 0691089299.
- Li [2016] W. Li. Probability of Default and Default Correlations. Journal of Risk and Financial Management, 9(3), 2016. ISSN 1911-8074. doi: 10.3390/jrfm9030007. URL https://www.mdpi.com/1911-8074/9/3/7.
- Lindskog and McNeil [2003] F. Lindskog and A. J. McNeil. Common Poisson Shock Models: Applications to Insurance and Credit Risk Modelling. ASTIN Bulletin, 33(2):209–238, 2003. doi: 10.1017/S0515036100013441.
- Marshall and Olkin [1967] A. W. Marshall and I. Olkin. A Multivariate Exponential Distribution. Journal of the American Statistical Association, 62(317):30–44, 1967. ISSN 01621459. URL http://www.jstor.org/stable/2282907.
- Petroski [2021] H. Petroski. What lessons will be learned from the Florida Condo Collapse? The deadly catastrophic failure has put a lens on building maintenance. American Scientist, 109:278–281, September–October 2021.
- Protter [2005] P. Protter. Stochastic Integration and Differential Equations, volume 21. Springer-Verlag Berlin Heidelberg, 2nd edition, 2005. doi: 10.1007/978-3-662-10061-5.
- Sun et al. [2017] Y. Sun, R. Mendoza-Arriaga, and V. Linetsky. Marshall–Olkin distributions, subordinators, efficient simulation, and applications to credit risk. Advances in Applied Probability, 49(2):481–514, 2017. doi: 10.1017/apr.2017.10.
- Swaine et al. [2021] J. Swaine, E. Brown, J. S. Lee, A. Mirza, and M. Kelly. How a collapsed pool deck could have caused a Florida condo building to fall. The Washington Post, August 12, 2021.
- Tanash and Riihonen [2021] I. M. Tanash and T. Riihonen. Improved Coefficients for the Karagiannidis–Lioumpas Approximations and Bounds to the Gaussian Q-Function. IEEE Communications Letters, 25(5):1468–1471, 2021. doi: 10.1109/LCOMM.2021.3052257.
- Walpole [2021] B. Walpole. Quest for answers begins following Florida building collapse. American Society of Civil Engineers, Civil Engineering Source, June 25, 2021.
- Young [1916] W. Young. On multiple integration by parts and the second theorem of the mean. Proceedings of the London Mathematical Society, 2(16):273–293, 1916. URL https://londmathsoc.onlinelibrary.wiley.com/doi/pdf/10.1112/plms/s2-16.1.273.
- Zeng [2006] Y. Zeng. Compensators of Stopping Times. Ph.D. thesis, Cornell University, 2006.