Slack Reactants: A State-Space Truncation Framework to Estimate Quantitative Behavior of the Chemical Master Equation
Abstract
State space truncation methods are widely used to approximate solutions of the chemical master equation. While most methods of this kind truncate the state space directly, in this work we propose modifying the underlying chemical reaction network by introducing slack reactants that truncate the state space indirectly. More specifically, slack reactants yield an expanded chemical reaction network that imposes a truncation scheme based on user-defined properties, such as mass conservation. This network structure allows us to prove that special properties of the original model, such as irreducibility and complex balancing, are inherited.
We use the network structure imposed by slack reactants to prove the convergence of the stationary distribution and first arrival times. We then provide examples comparing our method with the stationary finite state projection and finite buffer methods. Our slack reactant system appears to be more robust than some competing methods with respect to calculating first arrival times.
1 Introduction
Chemical reaction networks (CRNs) are a fundamental tool in the modeling of biological systems, providing a concise representation of known chemical or biological dynamics. A CRN is defined by a family of chemical reactions of the form
$$\nu_1 S_1 + \dots + \nu_d S_d \longrightarrow \nu'_1 S_1 + \dots + \nu'_d S_d,$$
where $\nu_i$ and $\nu'_i$ are the numbers of copies of species $S_i$ consumed and produced in the reaction, respectively. The classical approach to modeling a CRN is to consider the concentrations $x(t) = (x_1(t), \dots, x_d(t))$, where $x_i(t)$ is the concentration of species $S_i$ at time $t$, and to use a system of nonlinear differential equations to describe the evolution of the concentrations.
Suppose we are interested in studying the typical enzyme-substrate system [30], given by the CRN
$$S + E \rightleftharpoons SE \longrightarrow P + E. \tag{1}$$
Given an initial condition in which the molecular numbers of both $S$ and $E$ are positive, a natural question to ask is how long the system takes to produce the first copy of $P$. Similarly, there exists a time in the future at which $S$ and $SE$ are fully depleted, resulting in a chemically inactive system, and one can ask when this occurs. These are both quantities that classical deterministic modeling leaves unanswered: by considering only continuous concentrations that change through infinitely many reactions occurring within infinitesimally small time intervals, the model provides no well-defined way to address questions at the level of single molecules.
Instead, by explicitly considering each individual copy of a molecule, we may formulate a continuous-time Markov chain. This stochastic modeling is especially important when the system consists of low copy numbers of species, in which case the influence of intrinsic noise is magnified [34, 11, 2]. Rather than deterministic concentrations $x(t)$, we consider a continuous-time Markov chain $X(t)$ describing the molecular number of each species at time $t$.
Questions about the enzyme-substrate system (1), such as the time of the first production of $P$, correspond to first passage times for various combinations of states. For a continuous-time Markov process $X(t)$, the first passage time to visit a set of system states $B$ is formally defined as $\tau_B = \inf\{t \ge 0 : X(t) \in B\}$. One can directly compute $\mathbb{E}[\tau_B]$ by using the transition rate matrix $Q$. With few exceptions (such as CRNs with only zero- or first-order reactions [28]), most chemical reaction networks of any notable complexity have an intractably large or infinite state space, i.e. they exhibit the curse of dimensionality. This direct approach can, therefore, suffer from the high dimensionality of the transition rate matrix.
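To make the direct approach concrete, the following sketch (using a hypothetical three-state chain, not a model from this paper) computes a mean first passage time from a transition rate matrix $Q$: restricted to the non-target states $A$, the vector of mean hitting times solves the linear system $Q_{AA}\tau = -\mathbf{1}$.

```python
import numpy as np

def mean_first_passage_time(Q, targets, start):
    """Mean first passage time E[tau_B] for a finite CTMC with generator Q.

    Restricting Q to the non-target states A, the mean hitting times solve
    Q_AA * tau = -1 (with tau = 0 on the target set B)."""
    n = Q.shape[0]
    A = [i for i in range(n) if i not in targets]
    tau = np.linalg.solve(Q[np.ix_(A, A)], -np.ones(len(A)))
    return tau[A.index(start)]

# Hypothetical pure-birth chain 0 -> 1 -> 2, each jump at rate 2.0:
# two exponential waiting times of mean 1/2, so E[tau] = 1.0 from state 0.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 0.0, -2.0,  2.0],
              [ 0.0,  0.0,  0.0]])
print(mean_first_passage_time(Q, {2}, start=0))  # 1.0
```

The same linear-system idea underlies the matrix computations used for the truncated models later in the paper, although the state enumeration there is multi-dimensional.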
An alternative approach is to estimate the mean first passage time by generating sample trajectories with stochastic simulation algorithms such as the Gillespie algorithm [15]. This overcomes the curse of dimensionality, since a single trajectory needs only to keep track of its current population numbers. Nevertheless, there remain circumstances under which it is more efficient to numerically evaluate the exact solution of the chemical master equation: in particular, when the Markov process rarely visits the target set $B$, significantly long simulations may be required to sample enough trajectories to estimate the mean first passage time [29].
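A bare-bones implementation of the Gillespie algorithm illustrates why only the current state is needed; the model below is an illustrative birth-death system with assumed rate constants 10 and 1, not an example taken from the paper.

```python
import random

def gillespie(propensities, updates, x0, t_max, rng):
    """Gillespie's direct method: simulate one trajectory up to time t_max.

    propensities: list of functions x -> rate; updates: stoichiometric vectors.
    Returns the state at time t_max."""
    t, x = 0.0, list(x0)
    while True:
        rates = [a(x) for a in propensities]
        total = sum(rates)
        if total == 0.0:          # no reaction can fire: state is absorbing
            return x
        t += rng.expovariate(total)
        if t > t_max:
            return x
        u, k = rng.random() * total, 0
        while u > rates[k]:       # choose reaction k with prob. rates[k]/total
            u -= rates[k]
            k += 1
        x = [xi + d for xi, d in zip(x, updates[k])]

# Illustrative birth-death model: birth at rate 10, death at rate 1 per copy.
rng = random.Random(1)
samples = [gillespie([lambda x: 10.0, lambda x: 1.0 * x[0]],
                     [[1], [-1]], [0], t_max=10.0, rng=rng)[0]
           for _ in range(200)]
print(sum(samples) / len(samples))  # close to the stationary mean 10
```

Each trajectory stores only its current population vector, so the memory cost is independent of the size of the state space.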
Fortunately, a broad class of state space reduction methods have recently been developed that allow for direct treatment of the transition rate matrix. These methods are based on truncation-and-augmentation of the state space [35, 36, 32, 16], the finite buffer method [7, 8] and linear programming [25, 26]; a recent review summarizes truncation-based methods [27]. The stationary finite state projection (sFSP) [16] and the finite buffer method [7, 8] are examples of such truncation-based methods, which we describe in detail below. They all satisfy provable error estimates comparing the truncated probability distribution with the original one. On the other hand, each of these methods has potential limitations for estimating mean first passage times and other quantitative features. For instance, sFSP depends on the choice of a designated state, which can significantly alter the estimate of first passage times.
In this paper we provide a new state space reduction algorithm, the slack reactant method, for stochastically modeled CRNs. In this algorithm we generate a new CRN from an existing CRN by adding one or more new species, so that the associated stochastic system satisfies mass conservation relations and is confined to finitely many states. In order to ensure dynamics equivalent to the original system, we define a mild form of non-mass-action kinetics for the new system. Since the state space reduction is implemented as a fully determined CRN, we can study the reduced model using well-known tools of CRN theory, such as deficiency zero theory, Lyapunov functions, etc., as long as they extend to this form of kinetics.
In Section 3.2 we provide an algorithm to produce a slack variable network given a desired set of mass conservation relations. In addition to its theoretical uses, this algorithm allows one to use existing software packages for chemical reaction networks, such as CERENA [23], StochDynTools [19] and FEEDME [24], to generate quantities such as the moment dynamics of the associated stochastic processes, using the network structures as inputs.
We employ classical truncation-based Markov chain approaches to prove convergence theorems for slack networks. For fixed time $t$, if the probability density of each slack system with conservation quantity $N$ converges to its stationary distribution uniformly in $N$, then the stationary distribution of the slack system converges to the original stationary distribution as $N$ tends to infinity. We further prove that, under a uniform tail condition on the first passage times, the mean first passage time of the original system can be approximated with slack systems confined to a sufficiently large truncated state space. We then show that the existence of a Lyapunov function for the original system guarantees that all the sufficient conditions for first passage time convergence are satisfied. Finally, we show that this truncation method is natural in the sense that a slack system admits, up to a normalizing constant, the same stationary distribution as the original system whenever the original system is complex balanced.
This paper is outlined as follows. In Section 3, we introduce the slack reactant method and include several illustrative examples. In Section 4, we demonstrate that the slack method compares favorably with other state space truncation methods (sFSP and the finite buffer method) when calculating mean first passage times. We prove convergence of the mean first passage time, and other properties, in Section 5, and establish inheritance of network properties such as accessibility in Section 6. In Section 7, we use slack reactants to estimate the mean first passage times for practical CRN models, such as a Lotka-Volterra population model and a system of protein synthesis with a slow toggle switch.
2 Stochastic chemical reaction networks
A chemical reaction network (CRN) is a graph that describes the evolution of a biochemical system governed by a set of species and a set of reactions. Each node in the graph represents a complex, and nodes are connected by directed edges when a single reaction transforms one complex into another. Each reaction thus consists of a source complex, a product complex, and a reaction rate constant. For example, the reaction transforming complex $y$ to complex $y'$ at rate $\kappa$ is written as follows:
$$y \xrightarrow{\kappa} y'. \tag{2}$$
A complex, such as $y$ or $y'$ above, is defined as a nonnegative integer combination of the species. That is, a complex $y$ is represented as $y = y_1 S_1 + \dots + y_d S_d$, where the $y_i$ are the stoichiometric coefficients indicating how many copies of each species belong to the complex. The full CRN is thus defined by the sets of species, complexes, and reactions, together with the reaction propensities.
When intrinsic noise plays a significant role in the system dynamics, we use a continuous-time Markov chain $X(t)$ to model the copy numbers of the species of a reaction network. The stochastic system treats individual reactions as discrete transitions between integer-valued states of the system. The probability density for $X(t)$ is denoted as $p(x, t) = P(X(t) = x)$, determined by the initial probability density $p(\cdot, 0)$; we occasionally make the initial state explicit when the context requires it. For each state $x$, the probability density obeys the chemical master equation (CME), which gives the time evolution of $p$ as a linear system of ordinary differential equations [14]:
$$\frac{d}{dt}\, p(t) = p(t)\, Q. \tag{3}$$
Here, $p(t)$ is the row vector of probabilities over the states, and the entry $Q_{ij}$ ($i \neq j$) is the transition rate at which the $i$-th state transitions to the $j$-th state. Letting $x^{(i)}$ and $x^{(j)}$ be the $i$-th and $j$-th states, the transition rate from $x^{(i)}$ to $x^{(j)}$ is
$$Q_{ij} = \sum \lambda_{y \to y'}\big(x^{(i)}\big),$$
where the sum runs over the reactions $y \to y'$ with $x^{(j)} = x^{(i)} + y' - y$, and $\lambda_{y \to y'}$ is the reaction intensity of a reaction $y \to y'$. The diagonal elements of $Q$ are defined as $Q_{ii} = -\sum_{j \neq i} Q_{ij}$.
Normally, reaction intensities satisfy the standard property
$$\lambda_{y \to y'}(x) > 0 \quad \text{if and only if} \quad x_i \ge y_i \ \text{for each species } S_i, \tag{4}$$
that is, a reaction can fire exactly when enough copies of each of its reactants are present.
A typical choice for the intensities is mass-action kinetics, which defines, for a reaction $y \to y'$ and a state $x$,
$$\lambda_{y \to y'}(x) = \kappa \prod_{i=1}^{d} \frac{x_i!}{(x_i - y_i)!}\, \mathbb{1}_{\{x_i \ge y_i\}},$$
where $\kappa$ is the rate constant. Here we used the convention $\frac{a!}{(a-b)!} = a(a-1)\cdots(a-b+1)$ for nonnegative integers $a \ge b$.
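The falling-factorial form of the mass-action intensity can be computed directly; the reaction and rate constant below are illustrative choices, not examples from the paper.

```python
def mass_action_intensity(kappa, y, x):
    """Mass-action intensity kappa * prod_i x_i (x_i - 1) ... (x_i - y_i + 1).

    y: stoichiometric coefficients of the source complex; x: current state.
    The product contains a zero factor whenever 0 <= x_i < y_i, so the
    intensity vanishes exactly when there are too few reactant copies."""
    rate = float(kappa)
    for xi, yi in zip(x, y):
        for j in range(yi):
            rate *= (xi - j)
    return rate

# Illustrative reaction 2A + B -> ... with kappa = 0.5 at state (x_A, x_B):
print(mass_action_intensity(0.5, (2, 1), (4, 3)))  # 0.5 * (4*3) * 3 = 18.0
print(mass_action_intensity(0.5, (2, 1), (1, 3)))  # factor (1 - 1) gives 0.0
```

Note that no explicit indicator is needed for nonnegative states: the falling factorial itself hits zero when a reactant is under-supplied.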
3 Construction of Slack Networks
In this Section, we introduce the method of slack reactants, which adds new species to an existing CRN so that the state space of the associated stochastic process is truncated to a finite subset of states. This model reduction accurately approximates the original system when the size of the truncation is large. We begin with a simple example to demonstrate the main idea of the method.
3.1 Slack reactants for a simple birth-death model
Consider a mass-action 1-dimensional birth-death model,
$$\emptyset \xrightarrow{\kappa_1} A \xrightarrow{\kappa_2} \emptyset. \tag{5}$$
For the associated stochastic process $A(t)$, the mass-action assumption defines the reaction intensities as $\lambda_1(a) = \kappa_1$ for the birth reaction and $\lambda_2(a) = \kappa_2 a$ for the death reaction in (5). Note that the count of species $A$ can be any positive integer, as the birth reaction can occur an unlimited number of times. Therefore the state space of this system consists of infinitely many states. Consider instead the CRN
$$Y \xrightarrow{\kappa_1} A, \qquad A \xrightarrow{\kappa_2} Y, \tag{6}$$
where we have introduced the slack reactant $Y$. This new network admits the conservation law $A(t) + Y(t) = N$, since in each reaction one species is consumed by one while the other is produced by one.
Since the purpose of this new network is to approximate the qualitative behavior of the original system (5), we minimize the contribution of the slack reactant to the associated stochastic system. Hence, we assign a special reaction intensity: instead of using mass-action kinetics, we choose
$$\lambda_{Y \to A}(a, y) = \kappa_1 \mathbb{1}_{\{y \ge 1\}}, \tag{7}$$
and we use the original mass-action intensity $\kappa_2 a$ for the reaction $A \to Y$. By forcing $Y$ to have "zero-order kinetics," we ensure that the computed rates remain the same as in the original network except on the imposed boundary. This choice of reaction intensities not only preserves the conservation law but also prevents the slack reactant from having negative counts, via the characteristic term $\mathbb{1}_{\{y \ge 1\}}$.
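To see these intensities in action, the following sketch (with illustrative rate constants $\kappa_1 = 4$, $\kappa_2 = 1$) builds the generator of the slack system (6) on the states $a = 0, \dots, N$ and checks that its stationary distribution is the Poisson distribution of the original birth-death model truncated to $\{0, \dots, N\}$, consistent with the inheritance of Poissonian stationary distributions discussed in Section 6.

```python
import numpy as np
from math import factorial

def slack_generator(k1, k2, N):
    """Generator of the slack system Y -> A, A -> Y on states a = 0..N.

    With y = N - a, the birth Y -> A fires at rate k1 * 1{y >= 1} and the
    death A -> Y keeps its original mass-action rate k2 * a."""
    Q = np.zeros((N + 1, N + 1))
    for a in range(N + 1):
        if a < N:
            Q[a, a + 1] = k1        # Y -> A, allowed while y = N - a >= 1
        if a > 0:
            Q[a, a - 1] = k2 * a    # A -> Y
        Q[a, a] = -Q[a].sum()
    return Q

def stationary(Q):
    """Solve pi Q = 0 with sum(pi) = 1 by least squares."""
    n = Q.shape[0]
    M = np.vstack([Q.T, np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    pi, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return pi

k1, k2, N = 4.0, 1.0, 30
pi = stationary(slack_generator(k1, k2, N))
poisson = np.array([(k1 / k2) ** a / factorial(a) for a in range(N + 1)])
poisson /= poisson.sum()
print(np.max(np.abs(pi - poisson)))  # numerically zero
```

Detailed balance on the birth-death chain makes the truncated Poisson form exact here, so the agreement is limited only by floating-point precision.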

3.2 Algorithm for constructing a network with slack reactants
In general, any conservation law can be imposed in a similar fashion by using slack reactants, encoding the desired truncation directly in the CRN. We have found this perspective to be advantageous for studying complex CRNs: rather than thinking about the software implementation of the truncation, it is often easier to design the truncation in terms of slack reactants and then implement the already-truncated system exactly.
We now provide an algorithm to automatically determine the slack reactants required for a specified truncation. It is often the case that a “natural” formulation arises (typically by replacing zero-order complexes with slack reactants) but when that is not the case, one can still find a useful approximation by following our algorithm.
Consider any CRN. Suppose we wish to apply a number of conservation bounds to reduce the state space of the associated chemical master equation, that is, finitely many inequalities of the form
$$c_{i1} x_1 + c_{i2} x_2 + \dots + c_{id} x_d \le N_i \tag{8}$$
for $i = 1, \dots, m$. Collecting the conservation vectors $c_i$ into the rows of the matrix $C$ and each integer $N_i$ into the vector $N$, we may concisely write
$$C x \le N, \tag{9}$$
where $\le$ denotes the element-wise less-than-or-equal operator and $x$ is the column vector of species counts.
We augment the vector $x$ with a vector $y = (y_1, \dots, y_m)^T$ of slack reactant counts to convert the inequality into an equality:
$$\begin{pmatrix} C & I \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = N, \tag{10}$$
where $I$ is the $m \times m$ identity matrix. This matrix equation imposes conservation laws among the species and the slack reactants $Y_1, \dots, Y_m$. In other words, we inject the variables $Y_i$ into the existing reactions in such a way as to enforce these augmented conservation laws.
To do so, we need to properly define several CRN quantities. Let $\mathcal{A}$ be the connectivity matrix: if the $k$-th reaction establishes an edge from the $i$-th complex to the $j$-th, then $\mathcal{A}_{ik} = -1$, $\mathcal{A}_{jk} = 1$, and the other entries in column $k$ are 0. Let $\Phi$ be the complex matrix. Each column of $\Phi$ corresponds to a unique complex and each row corresponds to a species; the entry $\Phi_{ij}$ is the multiplicity of species $S_i$ in the $j$-th complex (which may be 0). The stoichiometry matrix is simply $\Gamma = \Phi \mathcal{A}$, whose $k$-th column is the reaction vector of the $k$-th reaction. Note that the CRN admits a conservation law with vector $c$ if and only if $c^T \Gamma = 0$.
Now consider a modified network where we inject the slack reactants into the complexes. Let $\Psi_{ij}$ denote the (unknown) multiplicity of the slack reactant $Y_i$ in the $j$-th complex. We require the following to be true:
$$\begin{pmatrix} C & I \end{pmatrix} \begin{pmatrix} \Phi \\ \Psi \end{pmatrix} \mathcal{A} = 0. \tag{11}$$
Then by using the same connectivity matrix $\mathcal{A}$ and the new complex matrix $\begin{pmatrix} \Phi \\ \Psi \end{pmatrix}$, we obtain a new CRN that enforces the conservation laws (10). To find $\Psi$ satisfying (11), we choose
$$\Psi = L - C\Phi \tag{12}$$
for an $m \times n_c$ matrix $L$, where $n_c$ is the number of complexes, whose $i$-th row is the constant row $(w_i, w_i, \dots, w_i)$ for some positive integer $w_i$ chosen large enough that every entry of $\Psi$ is nonnegative. Since each column of $\mathcal{A}$ sums to zero, $L\mathcal{A} = 0$, and this guarantees (11).
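The choice (12) can be automated in a few lines. In the sketch below, `C` is the conservation matrix, `Phi` the complex matrix, and `A` the connectivity matrix described in this section; each $w_i$ is taken to be the maximum entry of the $i$-th row of $C\Phi$ so that all slack coefficients are nonnegative. Applied to the birth-death model (5), it recovers the slack network (6).

```python
import numpy as np

def slack_coefficients(C, Phi, A):
    """Stoichiometric coefficients of the slack reactants in each complex.

    C: (m x d) conservation matrix; Phi: (d x c) complex matrix;
    A: (c x r) connectivity matrix (each column: -1 at source, +1 at product).
    Returns Psi = L - C Phi with constant rows L_i = max of row i of C Phi,
    so that the augmented network satisfies (C I)(Phi; Psi) A = 0."""
    CPhi = C @ Phi
    w = CPhi.max(axis=1, keepdims=True)
    return w - CPhi

# Birth-death 0 <-> A: complexes (0, A); reactions 0 -> A and A -> 0.
Phi = np.array([[0, 1]])
A = np.array([[-1, 1],
              [1, -1]])
C = np.array([[1]])          # conservation bound on the count of A
Psi = slack_coefficients(C, Phi, A)
print(Psi)                   # [[1 0]]: Y appears in the empty complex only,
                             # i.e. Y <-> A, recovering network (6)
assert np.all((C @ Phi + Psi) @ A == 0)
```

The assertion verifies the augmented conservation law (11) directly from the matrices.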
A new network can be generated with the complex matrix and the connectivity matrix . We call this network a regular slack network with species and complexes . For the -th reaction corresponding to the -th column of the matrix , we denote by the reaction corresponding to the -th column of the matrix . That is, is the reaction obtained from by adding slack reactants.
We model the slack network by where each of the entries represents the count of species in the new network. Note that the count of each slack reactant is fully determined by species counts ’s because of the conservation law(s). As such, we do not explicitly model the ’s.
The intensity function of the slack system for a reaction is defined as
(13) |
where the indicator factor switches the reaction off whenever a slack reactant would be driven negative, and the remaining factor is the intensity of the corresponding reaction in the original network. We then denote by a slack system the new system with slack reactants obtained from the algorithm, equipped with this collection of kinetic rates.
Here we highlight that the connectivity matrix and the complex matrix of the original network are preserved in the new network with slack reactants. Thus the original network and the network obtained from the algorithm have the same connectivity. This identical connectivity allows the qualitative behavior of the original network, which depends solely on the network structure, to be inherited by the new network with slack reactants. We particularly exploit the inheritance of accessibility and the inheritance of the Poissonian stationary distribution in Section 6.
Remark 3.1.
A single conservation relation, such as with a non-negative vector , is sufficient to truncate the given state space into a finite state space. Hence, in this manuscript, we mainly consider a slack network that is obtained with a single conservation vector and a single slack reactant .
Remark 3.2.
Although we primarily think about bounding our species counts from above, we could also bound species counts from below by choosing negative integers for .
Example 3.1.
We illustrate our slack algorithm with an example CRN consisting of two species and five reactions indicated by edge labels on the following network:
(14) |
We enumerate the complexes in the order of and . We order the reactions according to their labels on the network. Thus the connectivity matrix and complex matrix are defined as follows:
Suppose we set a conservation bound for some . Then the matrix and by (12), we have .
When , the network with slack reactant is
(15) |
where we have the conservation relation , where .
When , the network with slack reactant is
(16) |
where we have the same conservation relation .
Here we explain why network (15) is preferred to (16): it is less intrusive. Let the stochastic processes associated with networks (15) and (16) start from the same initial state. The first process can reach the boundary state by firing the same reaction repeatedly, and this state corresponds to a state in the original network (14). On the other hand, the second process cannot reach that state, because the only states from which it could jump there are states at which no reaction in (16) can fire, since no species is present there.
Consequently, one state that is accessible in the original network (14) is lost in the system associated with network (16). It can be shown, however, that the stochastic process associated with network (15) preserves all the states of the original network. This occurs mainly because the matrix of slack coefficients for network (15) is sparser than that for network (16). We discuss how to minimize the effect of slack reactants in Section 3.3.
3.3 Optimal Slack CRNs for effective approximation of the original systems
The algorithm we introduced to construct a network with slack reactants is valid and unique up to the user-defined conservation bounds (8); its outcome is the matrix shown in (12), which indicates the stoichiometric coefficients of the slack reactants at each complex.

As we showed in Example 3.1, to minimize the 'intrusiveness' of a slack network, we can simplify it by setting as many slack stoichiometric coefficients to zero as possible. To do that, we choose the entries of $L$ such that $w_i$ is the maximum entry of the $i$-th row of $C\Phi$. We can further reduce the effect of the slack reactants by removing 'redundant' stoichiometric coefficients of slack reactants. For example, for a CRN,
(17) |
the algorithm generates the following new CRN with a single slack reactant
(18) |
However, by breaking up the connectivity, we can also generate another network
(19) |
The network in (18) is more intrusive than the network in (19) in the sense of accessibility. At any state where the slack reactant is depleted, the system associated with (18) remains unchanged forever because no reaction can take place. However, the corresponding reaction in (19) can still occur in that case. Hence (19) preserves the accessibility of the original system associated with (17), since every state is accessible from any other state in the original reaction system (17). We refer to such a system with slack reactants, generated by canceling redundant slack reactants, as an optimized slack system. In Section 6, we explore the accessibility of an optimized reaction network with slack reactants in a more general setting.
Finally, we can make a network with slack reactants a better approximation of a given CRN by choosing an optimized conservation relation in (8). First, we assume that only a single conservation law and a single slack reactant are added to the given CRN. For the purpose of truncating the state space to finitely many states, a single conservation law suffices, as all species can be bounded as in (8). Let this single conservation law be $c \cdot x \le N$.
Then the matrix $C$ in (9) consists of the single row vector $c$. By the definition of the intensities (13) for a network with slack reactants, some reactions are turned off when the slack reactant is depleted, i.e. when $c \cdot x = N$. Geometrically, a reaction pointing out of the hyperplane $c \cdot x = N$ is turned off (Figure 2).
Hence we optimize the estimation with slack reactants by minimizing the intrusiveness of these turned-off reactions. To do that, we choose the conservation vector $c$ that minimizes the number of reactions $y \to y'$ with $c \cdot (y' - y) > 0$.
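This count is easy to automate. The sketch below uses a small hypothetical set of reaction vectors (not the network (14)) and counts, for a candidate conservation vector $c$, the reactions with $c \cdot (y' - y) > 0$, i.e. those pointing out of the bounding hyperplane.

```python
import numpy as np

def num_turned_off(c, reaction_vectors):
    """Number of reactions pointing out of the hyperplane c . x = N.

    These are the reactions with c . (y' - y) > 0, which the slack
    intensities switch off on the boundary of the truncation."""
    return sum(int(np.dot(c, g) > 0) for g in reaction_vectors)

# Hypothetical reactions 0 -> A, A -> 0, 2A -> B, B -> 2A (species order A, B):
gammas = [(1, 0), (-1, 0), (-2, 1), (2, -1)]
print(num_turned_off((1, 1), gammas))  # 2: both 0 -> A and B -> 2A leave the plane
print(num_turned_off((1, 2), gammas))  # 1: only the birth 0 -> A leaves the plane
```

Between the candidate conservation vectors $(1,1)$ and $(1,2)$, the latter would be preferred here, since it turns off only the birth reaction on the boundary.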
4 Comparison to Other Truncation-Based Methods
In this Section we demonstrate that our method can resolve limitations in calculating mean first passage times observed in other state space truncation methods, namely sFSP and the finite buffer method. Both methods require the user to make decisions about the state space truncation that may introduce variability in the results. While all methods converge to the true result as the size of the state space increases, we show that our method is less dependent on user-defined quantities. This minimizes additional decision-making on the part of the user that can lead to suboptimal results, especially when the solution of the original system is not known.
4.1 Comparison to the sFSP method
A well-known state space truncation algorithm is the Finite State Projection (FSP) method [32]. For a given continuous-time Markov chain, the associated FSP model is restricted to a finite state space; if the process escapes this truncated space, it is sent to a designated absorbing state (see Figure 1B). For a fixed time $t$, the probability density function of the original system can be approximated by the associated FSP model with sufficiently many states. The long-term dynamics of the original system, however, is not well approximated, because the probability flow of the FSP model leaks to the designated absorbing state in the long run.
To fix this limitation of FSP, Gupta et al. proposed the stationary Finite State Projection (sFSP) method [16]. This method also projects the original state space onto a finite one, but instead of creating a designated absorbing state, it merges all outgoing transitions from the projected finite state space into a single state 'inside' that space (Figure 1C). The sFSP has been frequently used to estimate the long-term distribution of discrete stochastic models. However, if the truncated state space is not sufficiently large, this method can fail to provide an accurate estimate of the first passage time. To demonstrate this, we consider the following simple 2-dimensional model. In the network shown in Figure 3A, two proteins are dimerized while the monomer is produced at a relatively high rate. The state space of the original model is the full 2-dimensional positive integer grid. We estimate the time until the system reaches one of the two states indicated in red in Figure 3C, using the alternative methods.

For the sFSP, we project the original state space onto a rectangle by bounding both species counts, and we fix the origin as the designated state (Figure 3C). If the process associated with the sFSP model escapes the rectangle, it is transported to the designated state immediately. We also consider the slack network shown in Figure 3B, where we introduce a conservation law bounding the total molecular count. Let $\tau$ be the first passage time we want to estimate.
By using the inverse of the truncated transition rate matrix for each method, as detailed in [9], we obtain the mean of $\tau$ for different truncation sizes. We also obtain an 'almost' true mean of $\tau$ by using Gillespie simulations of the original process. As shown in Figure 3B, the slack network model provides a more accurate mean first passage time estimate for truncation sizes between 100 and 400 when the designated return state of the sFSP is the origin.
The inaccurate estimate from the sFSP is due to the choice of the return state. The sFSP model often escapes the confined space because the production rate of the monomer is relatively high. When it returns to the origin, it is more likely to visit the target states than the original process is.
Figure 3 shows that the mean first passage time of this system computed with sFSP depends significantly on the location of the chosen designated state. One of the two states is a particularly poor choice for sFSP, which illustrates that, without prior knowledge of the system, it can be difficult to know which designated states will perform well.
We display the behavior of individual solutions of the original model, the slack network, and the sFSP model in Figure 3D. The trajectory plots show that within the time interval , almost half of the 100 samples from both the original model and the slack network model stay far away from the target states, while all the 100 sample trajectories from the sFSP model stay close to the target states. We also illustrate this point in Figure 3E with heat maps of the three models at . Note that only in the case of the sFSP, the probability densities are concentrated at the target states.
4.2 Comparison to the finite buffer method
The finite buffer method was proposed to estimate the stationary probability landscape via state space truncation and a novel state space enumeration algorithm [7, 8]. For a given stochastic model associated with a CRN, the finite buffer method sets inequalities among species, such as (8), called buffer capacities. Then at each state, the transition rate of a reaction is set to zero if at least one of the inequalities would fail to hold after the reaction fires. We note that the algorithm for generating a slack network, described in Section 3.2, uses the same inequalities; thus the finite buffer method and the slack reactant method truncate the state space in the same way. We showed in Section 3.3 that this type of truncation can create 'additional' absorbing states. These additional absorbing states change the accessibility between states, which means that mean first passage times cannot be accurately estimated. The regular slack systems, however, preserve the network structure of the original network. Hence we are able to prove, as already noted, that regular slack networks inherit the accessibility of the original network as long as the original network is 'weakly reversible', as defined below.
We demonstrate this disparity between the finite buffer and slack methods with the following network. Consider the mass-action system (14) with a fixed initial state, and suppose we are interested in estimating the mean first passage time to a target state. Note that the state space is irreducible (i.e., every state is accessible from any other state) because the network consists of unions of cycles; this condition, being a union of cycles, is precisely what is meant by weakly reversible [13, 5]. Thus the original stochastic system has no absorbing state, and the target state is accessible from every state.
To use the finite buffer method on this network, we set a buffer capacity of the form (8), chosen large enough that the truncated state space contains the target state. When the associated stochastic process meets the buffer capacity, the reactions that would exceed it cannot fire. We now demonstrate that the truncated system has a new absorbing state. By first depleting one species and then the other using the available reactions, every state can, in finite time with positive probability, reach a state at which no reaction can occur: the remaining reactions either require at least one copy of a species that is absent or lead to states exceeding the buffer capacity. Therefore, the finite buffer method has introduced a new absorbing state that is not present in the original model, and from this state the target state cannot be reached.
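The creation of such absorbing states can be checked by brute force. Since the concrete stoichiometry of (14) is given by the network diagram, the sketch below instead uses a deliberately minimal hypothetical network, $\emptyset \to 2A$ and $A \to \emptyset$, with buffer capacity $a \le 1$: at $a = 0$ the only enabled reaction would overshoot the buffer, so the truncated system is absorbed there even though the original chain is not.

```python
from itertools import product

def truncated_absorbing_states(reactions, cap):
    """States of a buffer-truncated system with no outgoing transitions.

    reactions: list of (reactant_vector, product_vector); a reaction fires at
    state x iff x >= reactants componentwise and the resulting state still
    satisfies the buffer sum(x') <= cap. (Hypothetical helper for illustration.)"""
    d = len(reactions[0][0])
    absorbing = []
    for x in product(range(cap + 1), repeat=d):
        if sum(x) > cap:
            continue
        can_fire = False
        for react, prod in reactions:
            if all(xi >= ri for xi, ri in zip(x, react)):
                nxt = [xi - ri + pi for xi, ri, pi in zip(x, react, prod)]
                if sum(nxt) <= cap:
                    can_fire = True
                    break
        if not can_fire:
            absorbing.append(x)
    return absorbing

# Hypothetical network 0 -> 2A, A -> 0 with buffer a <= 1:
rxns = [([0], [2]), ([1], [0])]
print(truncated_absorbing_states(rxns, cap=1))  # [(0,)]
```

A scan like this over the truncated state space is a quick sanity check before trusting first passage time estimates from any buffer-style truncation.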
Now we show that the explicit network structure of our slack network formulation preserves the accessibility of the original system. We consider the same inequality as above and generate the slack network by using the algorithm of Section 3.2,
(20) |
Note that the associated stochastic process admits the conservation relation, so the slack reactant count is determined by the species counts. The absorbing state created by the finite buffer method cannot be reached here, since the only state from which it would be accessible violates the conservation law.
As we highlighted in Section 3.3, slack networks preserve the connectivity of the original network (14); hence the network (20) is also weakly reversible. Thus the state space of the stochastic process associated with the slack network is irreducible by Corollary 6.1. This implies that there is no absorbing state and the target state remains accessible, unlike for the stochastic process obtained with the finite buffer method.
5 Convergence Theorems for slack networks
In this section we establish theoretical results on the convergence of properties of a slack network to those of the original network. (Proofs of the theorems below are provided in Appendix B.) Many of these results rely on theorems from Hart and Tweedie [18], who studied when the probability density function of a truncated Markov process converges to that of the original Markov process. We employ the same ideas to show the convergence of a slack network to the original network.
By assuming "uniform mixing," we show the convergence of the stationary distribution of the slack system to the stationary distribution of the original system as the conservation quantity grows. Furthermore, we show the convergence of the mean first passage times of the slack network to the true mean first passage times. In particular, all of these conditions hold when there is a Lyapunov function for the original system.
In this Section, assume that a given CRN is well defined for all time, and let its slack network be obtained with a single conservation quantity $N$. We denote by $X(t)$ and $X^N(t)$ the stochastic processes associated with the original CRN and the slack network, respectively. We fix the same initial state for both systems. (This means we only consider slack systems where $N$ is large enough to accommodate the initial state.) Assume that both the original and slack systems are irreducible, and denote their state spaces by $\mathbb{S}$ and $\mathbb{S}^N$, respectively. (In Section 6, we prove accessibility properties that the slack system can inherit from the original system.)
Notice that every state in $\mathbb{S}^N$ satisfies the conservation inequality: for every $x \in \mathbb{S}^N$ we have $c \cdot x \le N$. It is possible that $\mathbb{S}^N \not\subseteq \mathbb{S}^{N+1}$ for some $N$. For simplicity, we assume that the truncated state space is always enlarged with the conservation quantity, that is, $\mathbb{S}^N \subseteq \mathbb{S}^{N+1}$ for each $N$. (For the general case, we could simply consider a subsequence of values of $N$ along which this holds.)
As defined in Section 2, $\lambda_{y \to y'}$ is the intensity of a reaction $y \to y'$ for the stochastic system $X(t)$. We denote by $\lambda^N_{y \to y'}$ the intensity of the corresponding reaction for the slack system $X^N(t)$. Finally, we let $p(x,t)$ and $p^N(x,t)$ be the probability density functions of $X(t)$ and $X^N(t)$, respectively. We begin with the convergence of the probability density function of the slack network to that of the original network as $N$ increases.
Theorem 5.1.
For any state $x$ and any $t \ge 0$, we have $\lim_{N \to \infty} p^N(x,t) = p(x,t)$.
A Markov process defined on a finite state space admits a stationary distribution. Hence $X^N$ admits a stationary distribution $\pi^N$. If the slack system satisfies a "uniform mixing" condition, that is, the convergence rate of $p^N(\cdot, t)$ to $\pi^N$ is uniform in $N$, then we have the following result:
Theorem 5.2.
Suppose admits a stationary distribution . Suppose further that there exists a positive function , which is independent of , such that and . Then
We now consider convergence of the mean first passage time of . Recall, we assumed that both stochastic processes have the same initial state and both state spaces and are irreducible. Hence, for any , each state in is accessible to for sufficiently large .
Theorem 5.3.
For a subset $B$ of the state space of the original process, let $\tau$ and $\tau^N$ be the first passage times to $B$ for the original system and for the slack system, respectively. Assume the following conditions:

1. the slack system admits a stationary distribution, and
2. .

Then for any ,

If we further assume that

3. , and
4. there exists such that for all and ,

then
Remark 5.1.
To obtain convergence of higher moments of the first passage times, we need only replace conditions 3 and 4 with
(21) |
respectively.
We now show that if a Lyapunov function exists for the original system, then the conditions in Theorem 5.2 and Theorem 5.3 hold. The Lyapunov function approach was proposed by Meyn and Tweedie [31] and has been used to study the long-term dynamics of Markov processes [6, 16, 4, 17], especially exponential ergodicity. Gupta et al. [16] use a linear Lyapunov function to show that the stationary distribution of an sFSP model converges to the stationary distribution of the original stochastic model, and use the Lyapunov function to explicitly compute the convergence rate. In particular, we show that Lyapunov functions exist for the examples we consider in Section 7.
Theorem 5.4.
Suppose there exists a function and positive constants and such that for all ,
-
1.
for all ,
-
2.
is an increasing function in the sense that
for each and , and
-
3.
Then the conditions in Theorem 5.3 hold.
6 Inheritance of slack networks
As we showed in Section 4.2, not all state space truncations preserve accessibility of states in the original system. (For the example in Section 4.2, the truncation created a new absorbing state.) Thus, it is desirable to obtain reduced models that are guaranteed to maintain the accessibility of the original system to predetermined target states. In this Section, we show that under mild conditions, both a regular slack system and an optimized slack system preserve the original accessibility. The proofs of the theorems introduced in this section are in Appendix C. The key to these results is the condition of weak reversibility.
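The failure mode recalled above, a truncation creating a new absorbing state, can be detected mechanically from the truncated generator: an absorbing state is one whose generator row has no positive off-diagonal entry. The 3-state chain below is an assumed toy example for illustration only.

```python
import numpy as np

def absorbing_states(Q):
    """Return indices of absorbing states of a finite CTMC generator Q,
    i.e. states with no positive off-diagonal transition rate."""
    n = Q.shape[0]
    return [i for i in range(n)
            if all(Q[i, j] == 0 for j in range(n) if j != i)]

# Toy truncated chain 2 -> 1 -> 0 with no way out of state 0:
# the truncation has made state 0 absorbing.
Q = np.array([[0.0,  0.0,  0.0],
              [1.0, -1.0,  0.0],
              [0.0,  2.0, -2.0]])
assert absorbing_states(Q) == [0]
```

A check of this kind could flag truncations that break the accessibility guarantees discussed in this section.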
Definition 6.1.
A reaction network is weakly reversible if each connected component of the network is strongly connected. That is, if there is a path of reactions from a complex to , then there is also a path of reactions from back to .
We note that the weak reversibility condition applies to the network graph of the CRN, which consists of complexes (nodes) and reactions (edges). Weak reversibility is a sufficient condition for irreducibility of the associated mass-action stochastic process. Indeed, this sufficiency holds even under general kinetics as long as condition (4) is satisfied [33]. Hence irreducibility of a regular slack network follows, since it preserves the weak reversibility of the original network and the kinetics of the regular slack system satisfies (4).
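Weak reversibility is a purely graph-theoretic property and can be checked mechanically. The sketch below (with hypothetical complexes labeled by strings) tests the equivalent condition that every directed edge of the reaction graph lies on a directed cycle, i.e. for each reaction u → v there is a directed path from v back to u.

```python
def is_weakly_reversible(edges):
    """Check weak reversibility of a reaction graph given as a list of
    directed edges (reactant complex, product complex)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)

    def reaches(src, dst):
        # depth-first search for a directed path src -> dst
        seen, stack = set(), [src]
        while stack:
            n = stack.pop()
            if n == dst:
                return True
            if n in seen:
                continue
            seen.add(n)
            stack.extend(adj.get(n, ()))
        return False

    # every edge u -> v must admit a return path v -> u
    return all(reaches(v, u) for u, v in edges)

# A <-> B <-> C is weakly reversible; the chain A -> B -> C is not.
assert is_weakly_reversible([("A", "B"), ("B", "A"), ("B", "C"), ("C", "B")])
assert not is_weakly_reversible([("A", "B"), ("B", "C")])
```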
Corollary 6.1.
Let be a weakly reversible CRN with intensity functions satisfying (4). Then the state space of the associated stochastic process with a regular slack network is a union of closed communication classes for any .
In case the original network is not weakly reversible, we can still guarantee that optimized slack systems have the same accessibility as the original system provided all species have a degradation reaction ().
Theorem 6.1.
Let be a reaction network such that . Suppose that the stochastic process associated with and beginning at the point is irreducible. Let be the stochastic process associated with an optimized slack network such that for every large enough. Then, for any subset of the state space of , there exists such that reaches almost surely for .
This theorem follows from the fact that a slack system only differs from the original system when it runs out of slack reactants. However, in an optimized slack system, degradation reactions remain allowable even with no slack reactants. Hence, our proof of Theorem 6.1 relies on the presence of all degradation reactions.
A slack network may also inherit its stationary distribution from the original reaction system. When the original system admits a stationary distribution of a product form of Poissons under the complex balance condition, a slack system inherits the same form of the stationary distribution as well. A reaction system is complex balanced if the associated deterministic mass-action system admits a steady state , such that
where is the deterministic mass-action rate [12]. If a reaction system is complex balanced, then its associated stochastic mass-action system admits a stationary distribution given by a product of Poisson distributions centered at the complex balanced steady state [3]. The following lemma shows that complex balancing of the original network is inherited by a regular slack network.
Lemma 6.1.
Suppose that is a reaction network whose mass-action deterministic model admits a complex balanced steady state . Then any regular slack network with slack reactants also admits a complex balanced steady state at .
Remark 6.1.
Note that a regular slack network also preserves the deficiency of the original network. Deficiency of a reaction network is an index such that
where is the number of complexes, is the number of connected components, and is the rank of the stoichiometry matrix of the reaction network. Deficiency characterizes the connectivity of the network structure, and surprisingly it can also determine the long-term behavior of the system dynamics regardless of the system parameters [12, 20, 3]. A regular slack network and the original network have the same number of complexes , and the same connectivity matrix , which implies they have the same number of connected components . Furthermore, using the notation from Section 3.2, the stoichiometry matrices are for the original network and
for a slack network, which means that they have the same rank . Together, these imply that the original network and its regular slack network have the same deficiency.
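The deficiency in the remark above is directly computable from the three ingredients it names. A hedged sketch for the toy network A ⇌ B (complexes encoded as species-count vectors; rates are irrelevant to deficiency) follows.

```python
import numpy as np

# Deficiency = (#complexes) - (#linkage classes) - rank(stoichiometry matrix).
complexes = [np.array([1, 0]), np.array([0, 1])]  # complexes A and B over species (A, B)
edges = [(0, 1), (1, 0)]                          # reactions A -> B and B -> A

n = len(complexes)

# Linkage classes: connected components of the undirected reaction graph,
# found with a small union-find.
parent = list(range(n))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for u, v in edges:
    parent[find(u)] = find(v)
ell = len({find(i) for i in range(n)})

# Stoichiometry matrix: each column is (product complex) - (reactant complex).
S = np.column_stack([complexes[v] - complexes[u] for u, v in edges])
s = np.linalg.matrix_rank(S)

deficiency = n - ell - s
assert deficiency == 0  # A <-> B: 2 complexes - 1 linkage class - rank 1
```

Since a regular slack network shares n, ℓ, and the rank s with the original network, the same computation yields the same value for both.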
Since the complex balancing is inherited with the same steady state values for , we have the following stochastic analog of inheritance of the Poissonian stationary distribution for regular slack systems.
Theorem 6.2.
Let be the stochastic mass-action system associated with a complex balanced network with an initial condition . Let be the stochastic system associated with a regular slack system with . Then, for the state space of , there exists a constant such that
where and are the stationary distributions of and , respectively.
Example 6.1.
Consider two networks,
(22) | |||
(23) |
Let and be systems (22) and (23), respectively, where is the conservation quantity . Under mass-action kinetics, the complex balanced steady state of (22) is , and is a complex balanced steady state of (23).
Now let and be the stationary distributions of and , respectively. By Theorem 6.4 of Anderson et al. [3], is a product form of Poissons such that
By plugging into the chemical master equation (3) of and showing that for each
we can verify that is a stationary solution of the chemical master equation (3) of . Since the state space of is , we choose a constant such that
Then is the stationary distribution of .
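The verification strategy of this example, plugging a product-form Poisson candidate into the master equation balance, can be checked numerically. The sketch below uses the simplest complex balanced network 0 → A (rate k1), A → 0 (rate k2 per molecule), with assumed rates; it is an illustration of the technique, not the networks (22)–(23) themselves.

```python
from math import exp, factorial

# Assumed rate constants for the birth-death network 0 -> A, A -> 0.
k1, k2 = 3.0, 1.5
c = k1 / k2  # complex balanced steady state

def poisson(n):
    """Candidate stationary distribution: Poisson with mean c."""
    return exp(-c) * c**n / factorial(n)

# Chemical master equation balance at each state n:
#   k1*pi(n-1) + k2*(n+1)*pi(n+1)  ==  (k1 + k2*n)*pi(n)
for n in range(20):
    inflow = (k1 * poisson(n - 1) if n > 0 else 0.0) \
             + k2 * (n + 1) * poisson(n + 1)
    outflow = (k1 + k2 * n) * poisson(n)
    assert abs(inflow - outflow) < 1e-12
```

The same plug-in check, restricted to the truncated state space and renormalized, verifies the slack system's stationary distribution.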
7 Applications of Slack Networks
In this Section, we demonstrate the utility of slack reactants in computing mean first-passage times for two biological examples. For both examples, we compute the mean first passage time via the matrix inversion approach as shown in Appendix A.
7.1 A Lotka-Volterra model with migration
Consider a Lotka-Volterra model with migration, shown in Figure 4A. In this model, species is the prey and species is the predator. Clearly, the state space of this model is infinite such that , . We will use slack reactants to determine the expected time to extinction of either species. More specifically, let . We will calculate the mean first arrival time to from an initial condition . (In our simulations in Figure 4, we chose .)
To generate our slack network, we choose a conservative bound with . As we discussed in Section 3.3, this minimizes the intrusiveness of slack reactants because the number of reactions such that is minimized. Using the algorithm introduced in Section 3.2, we generate the regular slack network (24) with a slack reactant ,
(24) |
As the slack reactant in reactions can be canceled, we further generate the optimized slack network shown in Figure 4A. We let , which is the conservation quantity of the new network.
Let be the first passage time from our initial condition to . First, we examine the accessibility of the set . Because our reaction network contains creation and destruction of all species (i.e., ) the original model is irreducible and any state is accessible to . Furthermore, Theorem 6.1 guarantees that the stochastic model associated with the optimized slack network is also accessible to from any state.
Next, by showing that there exists a Lyapunov function satisfying the conditions of Theorem 5.4 for the original model, we are guaranteed that the first passage times of our slack network converge to the true first passage times. (See Appendix E.1 for more detail.) Therefore, as the plot in Figure 4B shows, the mean first extinction time of the slack network converges to that of the original model as increases. The mean first passage time of the original model was obtained by averaging sample trajectories. These trajectories were computed serially on a commodity machine and took hours to run. In contrast, the mean first passage times of the slack systems were computed analytically on the same computer and took at most 13 seconds. Figure 4 also shows that using only samples is misleading, as the simulation average has not yet converged to the true mean first passage time. Finally, as expected from Theorem 5.1, the probability density of the slack network converges to that of the original network (see Figure 4C).
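The brute-force estimate against which the slack computation was compared can be sketched as a Gillespie simulation of extinction times. The reaction list and rate constants below are illustrative assumptions, not the parameters used in Figure 4.

```python
import random

def first_extinction_time(a, b, rng, max_steps=200_000):
    """SSA sketch for a Lotka-Volterra model with migration; returns the
    first time either species count hits zero (step cap guards long runs)."""
    k1, k2, k3, kin, kout = 1.0, 0.1, 1.0, 0.5, 0.05  # assumed rates
    t = 0.0
    for _ in range(max_steps):
        if a == 0 or b == 0:
            break
        # propensities: prey birth, predation, predator death,
        # prey immigration, prey emigration (assumed mass-action forms)
        props = [k1 * a, k2 * a * b, k3 * b, kin, kout * a]
        total = sum(props)
        t += rng.expovariate(total)   # exponential waiting time
        r = rng.random() * total      # choose a reaction proportionally
        if r < props[0]:
            a += 1                    # A -> 2A
        elif r < sum(props[:2]):
            a -= 1; b += 1            # A + B -> 2B
        elif r < sum(props[:3]):
            b -= 1                    # B -> 0
        elif r < sum(props[:4]):
            a += 1                    # 0 -> A
        else:
            a -= 1                    # A -> 0
    return t

rng = random.Random(0)
times = [first_extinction_time(10, 10, rng) for _ in range(50)]
mean_fpt = sum(times) / len(times)
assert mean_fpt > 0.0
```

Averaging such trajectories converges slowly, which is precisely the cost the slack-network matrix computation avoids.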
7.2 Protein synthesis with slow toggle switch
We now consider a protein synthesis model with a toggle switch; see Figure 5A. Protein species and may be created, but only when their respective genes or are in the active (unoccupied) state, and . Each protein acts as a repressor of the other by binding at the promoter of the opposite gene and forcing it into the inactive (occupied) state ( and ). In this system we consider only one copy of each gene, so that for all time. Thus, we focus primarily on the state space of the protein numbers .
The deterministic form of such systems is often referred to as a “bi-stable switch”, as it is characterized by steady states and ( “on” and “off”) and ( “off” and “on”). The stochastic form of the toggle switch has been shown to exhibit a multi-modal long-term probability density, as the system switches between these two deterministic states through rapid, significant changes in the numbers of proteins and by synthesis or degradation (depending on the state of the promoters) [1]. Figure 5C shows that the associated stochastic system admits a tri-modal long-term probability density. Thus the system transitions from one mode to the others and, for the kinetic parameters chosen in Figure 5, rarely leaves the region . Significant departures from a stable region of a genetic switch may be associated with irregular and diseased cell fates. As such, the first passage time of this system out of may indicate the appearance of an unexpected phenotype in a population. Because this event is rare, estimating first passage times with direct stochastic simulation, such as the Gillespie algorithm [15], is complicated by the long time taken to exit the region.
As in the previous example, slack systems provide a valuable tool for direct calculation of mean first passage times. In this example, we consider the time a trajectory beginning at state enters the target set and compute , the first passage time to .
Since the species corresponding to the status of promoters ( and ) are already bounded, we use the conservative bound to generate a regular slack network in Figure 5B with the algorithm introduced in Section 3.2. The original toggle switch model is irreducible (because of the degradation , and protein synthesis , reactions). Moreover by Theorem 6.1, the degradation reactions guarantee that the slack system is also accessible to from any state.
As shown in Figure 5D, the mean first passage time of the slack system appears to be converging to approximately . To prove that the limit of the slack system is indeed the original mean first passage time, we construct a Lyapunov function satisfying the conditions of Theorem 5.4. See Appendix E.2 for more details on the construction of the Lyapunov function.
8 Discussion & Conclusions
We propose a new state space truncation method for stochastic reaction networks (see Section 3). In contrast to other methods, such as FSP, sFSP and the finite buffer method, we truncate the state space indirectly by expanding the original chemical reaction network to include slack reactants. The truncation is imposed through user-defined conservation relations among species. The explicit network structure of slack reactants facilitates proofs of convergence (Section 5) and allows the use of existing software packages to study the slack network itself [23]. Indeed, any user-defined choice of conservation laws, conservation amounts, and stoichiometric structure can be used to construct a slack network with our algorithm. We provide guidelines for optimal user choices that can increase the similarity between the slack system and the original model (see Section 3.3).
Slack systems can be used to estimate the dynamical behavior of the original stochastic model. In Section 4, we used a simple example to show that the slack method can lead to a better approximation for the mean first passage time than the sFSP method and the finite buffer method. In particular, in Section 5 we provide theorems that show the slack system approximates the probability density and the mean first passage time of the original system. Because slack networks preserve network properties, such as weak reversibility, the slack system is also likely to have the same accessibility to a target state as the original model (see Section 6). In particular, we note that weak reversibility guarantees our slack truncation does not introduce absorbing states.
In Section 6, we show that this truncation method is natural in the sense that the stationary distributions of the original and slack system are identical up to multiplication by a constant when the original system is complex balanced. Finally, in Section 7, we use slack networks to calculate first passage times for two biological examples. We expect that this new theoretical framework for state space truncation will be useful in the study of other biologically motivated stochastic chemical reaction systems.
ACKNOWLEDGMENTS
This work is partially supported by NSF grants DMS1616233, DMS1763272, BCS1344279, Simons Foundation grant 594598 and the by the Joint DMS/NIGMS Initiative to Support Research at the Interface of the Biological and Mathematical Sciences (R01-GM126548).
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Appendix A First passage time for Markov processes with finitely many states
The Markov chain associated with a slack network always has a finite state space. There are many methods to analytically derive the mean first passage time of a Markov chain with a finite state space [22, 21, 37, 9]. In this paper, we use the Laplace transform method, which is also used in [37].
For a continuous time Markov process, let be the finite state space and let be the transition rate matrix, i.e. is the transition rate from state to state if and .
For a subset , we define a truncated transition matrix that is obtained by removing the th row and column from for . Then the mean first passage time to set starting from the -th state is the -th entry of , where is a column vector with each entry .
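The recipe above can be sketched as follows for a toy pure-death chain (an assumed example, not one of the paper's models): delete the rows and columns of the target set from the generator, then solve the resulting linear system against the all-ones vector.

```python
import numpy as np

# Toy chain 3 -> 2 -> 1 -> 0, degradation at rate n from state n;
# target set Theta = {0}.
Q = np.zeros((4, 4))
for n in range(1, 4):
    Q[n, n - 1] = n
    Q[n, n] = -n

theta = [0]
keep = [i for i in range(4) if i not in theta]

# Truncated generator: remove rows/columns indexed by Theta, then the
# vector of mean first passage times solves  -Q_trunc m = 1.
Q_trunc = Q[np.ix_(keep, keep)]
m = np.linalg.solve(-Q_trunc, np.ones(len(keep)))

# From state 3 the mean hitting time of 0 is 1/3 + 1/2 + 1 = 11/6.
assert np.isclose(m[-1], 11 / 6)
```

For a slack system the same computation applies verbatim, with Q the (finite) generator of the slack network and Theta the target set of the first passage problem.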
Appendix B Proofs of Convergence Theorems
In this section, we provide the proofs of the theorems introduced in Section 5. We use the same notations and the same assumptions as we used in Section 5.
Proof of Theorem 5.1: We employ the main idea shown in the proof of Theorem 2.1 in Hart and Tweedie [18]. Let a state and time be fixed. We consider large enough so that .
We use an FSP model on of the original system with the designated absorbing state . Let be the probability density function of this FSP model.
Let be the first time for to hit . We generate a coupling of and the FSP model restricted on so that they move together until and move independently after . Then for , because the FSP model has stayed in if and only if has never touched . Thus
(25) | ||||
Furthermore
where we used the fact that after , the FSP process is absorbed at . Thus
(26) |
Note that
Since increases to almost surely, as increases, monotonically increases in and converges to , as for each . Then by using the monotone convergence theorem, the term
converges to . Therefore, by taking in both sides of (26) the result follows.
Proof of Theorem 5.2: This proof is a slight generalization of the proof of Theorem 3.3 in Hart and Tweedie [18]. Since the convergence of is independent of , for any , we choose sufficiently large such that
for all . Then by using the triangle inequalities
(27) |
Note that, as we mentioned in the proof of Theorem 5.1, we have monotone convergence of to for each , as . Hence by the monotone convergence theorem, the first summation in (27) goes to zero as . Note further that from (25) we have
Hence the second summation in (27) satisfies
Note that by (25)
Furthermore and . Hence , as because almost surely, as .
Consequently, we have
Since we chose an arbitrary , this completes the proof.
In order to prove Theorem 5.3, we consider an ‘absorbing’ Markov process associated with and . Let and be Markov processes such that
That is, and are processes coupled to and , respectively. Furthermore, they are absorbed at once the coupled process ( for and for ) visits the set . These coupled processes satisfy the following relation.
Lemma B.1.
Let be fixed and be arbitrary. Suppose admits a stationary distribution . Suppose further that . Then there exists a finite subset such that and for sufficiently large .
Proof.
Since the probabilities of and are leaking to , we have for each and for each
(28) |
This can be formally proved as for each
In the same way, we can prove that for each .
Let be arbitrary. Then implies that for any subset , we have
(29) |
for sufficiently large . We use this property and (28), combined with the ‘monotonicity’ of the chemical master equation, to show the result.
First of all, note that we are assuming . Hence there is a constant such that
for all . Moreover by choosing sufficiently small in (29), we have the minimum . Thus there exists such that
for all sufficiently large and for each . Note that the chemical master equation is a system of ordinary differential equations, and and are the solutions to the system with initial conditions and . Hence, by the monotonicity of this system of ordinary differential equations, it follows that
(30) |
In the same way, for any it follows that
(31) |
The detailed proof of the monotonicity of the chemical master equation is shown in Lemma 3.2 of Enciso and Kim [10].
Secondly, there exists a finite set such that , because is a probability distribution. Furthermore, by (29), for any sufficiently large .
We prove Theorem 5.3 by using the ‘absorbing’ Markov processes and coupled to and , respectively.
Proof of Theorem 5.3: We first break the term to show that
Let be an arbitrarily small number. Let be the finite set we found in Lemma B.1. Then, by the triangle inequality,
(32) |
Note that by the same argument as in the proof of Theorem 5.1, we have the convergence
Since the summation is finite, taking in (32) gives
(33) |
Now, to show the convergence of the mean first passage times, note that
where the integrand is bounded by .
Condition (iii) in Theorem 5.3 implies that and . Hence the dominated convergence theorem and (33) imply that
The existence of a special Lyapunov function ensures that the conditions in Theorem 5.3 hold. In order to use the absorbing Markov processes, as we did in the previous proof, we define a Lyapunov function for based on a given Lyapunov function . Let and denote the intensity of a reaction for and , respectively. For a given function such that for any , we define such that
so that for any .
Note that if . Hence for each ,
(34) |
because and for every reaction . Moreover, if , then by the definition of for each reaction . Hence, for each
(35) |
From (34) and (35), we therefore conclude that for any
(36) |
where and .
Proof of Theorem 5.4: In this proof, we show that all the conditions in Theorem 5.3 are met by using the given function . First, we show that admits a stationary distribution. This follows straightforwardly from Theorem 3.2 in Hart and Tweedie [18], because condition 3 in Theorem 5.4 means that is a Lyapunov function for .
The condition basically means that every outward reaction (i.e. and ) in gives a non-negative drift as . We use condition 2 in Theorem 5.4 to show that is also a Lyapunov function for for any N.
We denote by the intensity of a reaction in . Suppose that a reaction is obtained from by adding a slack reactant. That is, , where is a projection function such that . Then, by the definition (13) of the intensity in a slack network, we have when , because is confined in . Furthermore, by condition 2 in Theorem 5.4, when , because this means that for some . This implies that if
(37) |
In case , we have that by the definition (13). Hence by condition 3 and (37) we have for any
(38) |
Thus is also a Lyapunov function for . Hence Theorem 6.1 (the Foster-Lyapunov criterion for exponential ergodicity) in Meyn and Tweedie [31] implies that for each there exist and , which depend only on and , such that
This guarantees that the condition in Theorem 5.2 holds with . Hence we have by Theorem 5.2.
Now, to show that the first passage time has finite mean, we apply the Foster-Lyapunov criterion to . Since (38) meets the conditions of Theorem 6.1 in Meyn and Tweedie [31], the probability distribution of converges in time to its stationary distribution exponentially fast. That is, for any subset , there exist and such that
where is the stationary distribution of . This, in turn, implies that
(39) |
where the second inequality follows as , which is because is eventually absorbed in as the original process is irreducible and closed.
Finally, we show that for any , there exists such that and for the first passage time . By the same reasoning we used to derive (38), we also derive using (36) that
(40) |
Hence, in the same way as for (39), we have the exponential ergodicity of by Theorem 6.1 in Meyn and Tweedie [31], and then we derive that
where is the stationary distribution of , which also has zero probability on , as is eventually absorbed in . Since and depend only on and , we can choose satisfying for any and .
Appendix C Proofs of Accessibility Theorems
In this section, we prove Theorem 6.1. We begin with a necessary lemma. In the following lemma, for a given reaction system , we generate a slack system admitting a single slack reactant . We denote by the vector for which the slack system admits the conservation bound for each state of the slack system. We also let be the maximum stoichiometric coefficient of the slack reactant in the slack network. Finally, note that and are in one-to-one correspondence, as every complex is obtained by adding the slack reactant to a complex with stoichiometric coefficient . That is, for some and . Hence, as in the proof of Theorem 5.4, we let be a one-to-one projection function such that where for some and .
Lemma C.1.
For a reaction system , let be a slack system with a single slack reactant and a conservation vector . Let be the maximum stoichiometric coefficient of the slack reactant in . If for a state , then
where and are reactions such that and .
Proof.
Let such that with the stoichiometric coefficient of . By the definition of shown in (13),
so long as . Since , the result follows. ∎
Proof of Theorem 6.1: We denote by the irreducible and closed state space of such that . We also denote by the communication class of such that for each . Then , and the irreducibility of guarantees that . Therefore there exists such that if . This implies that it is enough to show that is closed for large enough because is a finite subset. To prove this, we claim that there exists such that if , then admits no out-going flow.
To prove the claim by contradiction, assume that for a sequence with , there is an out-going flow from to a state . We then construct a sequence of reactions through which returns to from for all large enough .
First of all, note that , because each reaction is converted to in an optimized slack network by definition (see Section 3.3). Hence for each , by firing the reactions , can reach from .
The next aim is to show that for any fixed large enough, there exists a sequence of reactions such that i) for each , and ii) . This means that can reach from along the reactions in that order. To this end, we make use of the irreducibility of : since , there exists a sequence of reactions such that i) for each and ii) . By making large enough, we have that if , then
Hence by Lemma C.1, for each reaction such that and we have
(41) |
Finally since and for each , we have
(42) |
Hence (41) and (42) imply that if , then can re-enter from with positive probability.
Consequently, we have constructed a sequence of reactions along which can re-enter from for any such that . Since each is a communication class and , this, in turn, contradicts the assumption that . Hence the claim holds, so that is closed for any large enough.
Appendix D Proofs for Lemma 6.1 and Theorem 6.2
Proof of Lemma 6.1: In the deterministic model associated with , if we fix for all , then follows the same ODE system as the deterministic model associated with the original network. Therefore is a complex balanced steady state for . It has been shown that the existence of a single positive complex balanced steady state implies that all other positive steady states are complex balanced [12], which completes the proof.
Proof of Theorem 6.2: Let and be the conservative vector and the conservative quantity of the slack network , respectively. Then is a stationary solution of the chemical master equation (3) of ,
In particular, as shown in Theorem 6.4 of Anderson et al. [3], satisfies the stochastic complex balance for : for each complex and state ,
(43) |
Let be the one-to-one projection function that we defined for the proof of Theorem 6.1. Note that each reaction is defined so that it satisfies the conservation law
(44) |
where and . Then also satisfies the stochastic complex balance for , because (43) and (44) imply that for each complex and state ,
Then by summing up for each ,
and, in turn, is a stationary solution of the chemical master equation of . Since the state spaces of and differ, is a constant chosen such that the sum of over the state space of is one. Then .
Appendix E Lyapunov Functions for Example Systems
E.1 Lotka-Volterra with Migration
We construct a Lyapunov function satisfying the conditions of Theorem 5.4 for the Lotka-Volterra model with migration (Figure 4A). First, for both the original model and the slack network, we assume , which do not depend on .
Let with . The condition in Theorem 5.4 obviously holds. Note that the slack network of Figure 4A admits a conservation law and the state space of the system is . Then condition 2 in Theorem 5.4 also follows by the definition of .
Now, to show condition 3 in Theorem 5.4, we first note that for ,
(45) |
Since we assumed (see the caption of Figure 4), each coefficient of and is negative. Thus, there exists such that for each , the left-hand side of (45) satisfies
for some . Letting
condition 3 in Theorem 5.4 holds with and .
E.2 Protein Synthesis with Slow Toggle Switch
We also make use of the Lyapunov approach in Theorem 5.4 to prove that the first passage time of the slack system in Figure 5B converges to the original first passage time as the truncation size goes to infinity. Let be the stochastic system associated with the slack system. Recall that the slack network admits a conservation relation with . Hence we define a Lyapunov function where . By the definition of , it is obvious that conditions 1 and 2 in Theorem 5.4 hold, so we show that condition 3 in Theorem 5.4 holds.
For a reaction , it is clear that the term is negative. For each reaction in , it is also clear that the term is positive. However, the reaction intensity for a reaction in is linear in either or , while the reaction intensity for a reaction in is constant. Therefore there exists a constant such that if or , then
for some . Hence, letting
condition 3 in Theorem 5.4 holds with and .
References
- [1] M Ali Al-Radhawi, Domitilla Del Vecchio, and Eduardo D Sontag. Multi-modality in gene regulatory networks with slow promoter kinetics. PLoS computational biology, 15(2):e1006784, 2019.
- [2] David F Anderson and Daniele Cappelletti. Discrepancies between extinction events and boundary equilibria in reaction networks. Journal of mathematical biology, 79(4):1253–1277, 2019.
- [3] David F. Anderson, Gheorghe Craciun, and Thomas G. Kurtz. Product-form stationary distributions for deficiency zero chemical reaction networks. Bulletin of Mathematical Biology, 72(8):1947–1970, Nov 2010.
- [4] David F Anderson and Jinsu Kim. Some network conditions for positive recurrence of stochastically modeled reaction networks. SIAM Journal on Applied Mathematics, 78(5):2692–2713, 2018.
- [5] Elias August and Mauricio Barahona. Solutions of weakly reversible chemical reaction networks are bounded and persistent. IFAC Proceedings Volumes, 43(6):42–47, 2010.
- [6] Corentin Briat, Ankit Gupta, and Mustafa Khammash. Antithetic integral feedback ensures robust perfect adaptation in noisy biomolecular networks. Cell systems, 2(1):15–26, 2016.
- [7] Youfang Cao and Jie Liang. Optimal enumeration of state space of finitely buffered stochastic molecular networks and exact computation of steady state landscape probability. BMC Systems Biology, 2(1):30, 2008.
- [8] Youfang Cao, Anna Terebus, and Jie Liang. Accurate chemical master equation solution using multi-finite buffers. Multiscale Modeling & Simulation, 14(2):923–963, 2016.
- [9] Tom Chou and Maria R D’Orsogna. First passage problems in biology. In First-passage phenomena and their applications, pages 306–345. World Scientific, 2014.
- [10] German Enciso and Jinsu Kim. Constant order multiscale reduction for stochastic reaction networks. arXiv preprint arXiv:1909.11916, 2019.
- [11] German Enciso and Jinsu Kim. Embracing noise in chemical reaction networks. Bulletin of mathematical biology, 81(5):1261–1267, 2019.
- [12] Martin Feinberg. Complex balancing in general kinetic systems. Archive for rational mechanics and analysis, 49(3):187–194, 1972.
- [13] Martin Feinberg. Chemical reaction network structure and the stability of complex isothermal reactors—i. the deficiency zero and deficiency one theorems. Chemical engineering science, 42(10):2229–2268, 1987.
- [14] Daniel T Gillespie. A rigorous derivation of the chemical master equation. Physica A: Statistical Mechanics and its Applications, 188(1-3):404–425, 1992.
- [15] Daniel T Gillespie. Stochastic simulation of chemical kinetics. Annu. Rev. Phys. Chem., 58:35–55, 2007.
- [16] Ankit Gupta, Jan Mikelson, and Mustafa Khammash. A finite state projection algorithm for the stationary solution of the chemical master equation. The Journal of chemical physics, 147(15):154101, 2017.
- [17] Mads Christian Hansen, Wiuf Carsten, et al. Existence of a unique quasi-stationary distribution in stochastic reaction networks. Electronic Journal of Probability, 25, 2020.
- [18] Andrew G Hart and Richard L Tweedie. Convergence of invariant measures of truncation approximations to markov processes. 2012.
- [19] João Pedro Hespanha. StochDynTools — a MATLAB toolbox to compute moment dynamics for stochastic networks of bio-chemical reactions. Available at http://www.ece.ucsb.edu/~hespanha, May 2007.
- [20] Fritz Horn. Necessary and sufficient conditions for complex balancing in chemical kinetics. Archive for Rational Mechanics and Analysis, 49(3):172–186, 1972.
- [21] Jeffrey J Hunter. Generalized inverses and their application to applied probability problems. Technical report, Virginia Polytechnic Inst and State Univ Blacksburg Dept of Industrial …, 1980.
- [22] Jeffrey J Hunter. The computation of the mean first passage times for Markov chains. Linear Algebra and its Applications, 549:100–122, 2018.
- [23] Atefeh Kazeroonian, Fabian Fröhlich, Andreas Raue, Fabian J Theis, and Jan Hasenauer. CERENA: Chemical reaction network analyzer—a toolbox for the simulation and analysis of stochastic chemical kinetics. PLoS ONE, 11(1), 2016.
- [24] Jae Kyoung Kim and Eduardo Sontag. FEEDME — a MATLAB code to calculate stationary moments of feed-forward and complex-balanced networks. Available at http://www.ece.ucsb.edu/~hespanha, May 2007.
- [25] Juan Kuntz, Philipp Thomas, Guy-Bart Stan, and Mauricio Barahona. Approximations of countably-infinite linear programs over bounded measure spaces. arXiv preprint arXiv:1810.03658, 2018.
- [26] Juan Kuntz, Philipp Thomas, Guy-Bart Stan, and Mauricio Barahona. Bounding the stationary distributions of the chemical master equation via mathematical programming. The Journal of chemical physics, 151(3):034109, 2019.
- [27] Juan Kuntz, Philipp Thomas, Guy-Bart Stan, and Mauricio Barahona. Stationary distributions of continuous-time markov chains: a review of theory and truncation-based approximations. arXiv preprint arXiv:1909.05794, 2019.
- [28] Fernando López-Caamal and Tatiana T Marquez-Lago. Exact probability distributions of selected species in stochastic chemical reaction networks. Bulletin of mathematical biology, 76(9):2334–2361, 2014.
- [29] Shev MacNamara, Kevin Burrage, and Roger B Sidje. Multiscale modeling of chemical kinetics via the master equation. Multiscale Modeling & Simulation, 6(4):1146–1168, 2008.
- [30] Leonor Michaelis and Maud L Menten. Die Kinetik der Invertinwirkung. Biochem Z, 49:333–369, 1913.
- [31] Sean P Meyn and Richard L Tweedie. Stability of Markovian processes III: Foster–Lyapunov criteria for continuous-time processes. Advances in Applied Probability, 25(3):518–548, 1993.
- [32] Brian Munsky and Mustafa Khammash. The finite state projection algorithm for the solution of the chemical master equation. The Journal of chemical physics, 124(4):044104, 2006.
- [33] Loïc Paulevé, Gheorghe Craciun, and Heinz Koeppl. Dynamical properties of discrete reaction networks. Journal of mathematical biology, 69(1):55–72, 2014.
- [34] Johan Paulsson. Summing up the noise in gene networks. Nature, 427(6973):415–418, 2004.
- [35] E Seneta. Finite approximations to infinite non-negative matrices. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 63, pages 983–992. Cambridge University Press, 1967.
- [36] Richard L Tweedie. Truncation procedures for non-negative matrices. Journal of Applied Probability, 8(2):311–320, 1971.
- [37] Romain Yvinec, Maria R D'Orsogna, and Tom Chou. First passage times in homogeneous nucleation and self-assembly. The Journal of chemical physics, 137(24):244107, 2012.