
Power-engine-load form for
dynamic absolute concentration robustness

Badal Joshi (Department of Mathematics, California State University San Marcos, bjoshi@csusm.edu) and Gheorghe Craciun (Departments of Mathematics and Biomolecular Chemistry, University of Wisconsin-Madison, craciun@wisc.edu)
Abstract

In a reaction network, the concentration of a species with the property of dynamic absolute concentration robustness (dynamic ACR) converges to the same value independent of the overall initial values. This property endows a biochemical network with output robustness and is therefore essential for its functioning in a highly variable environment. It is important to identify the structure of the dynamical system, as well as the constraints, required for dynamic ACR. We propose a power-engine-load form of dynamic ACR and obtain results regarding convergence to the ACR value based on this form.
Keywords: biochemical reaction network, absolute concentration robustness, power-engine-load form, dynamical systems, robust network output, chemostat.

AMS subject codes: 34C20, 37N25, 37N35, 92C42

1 Introduction

Dynamic absolute concentration robustness (dynamic ACR), introduced in [1], is a property of dynamical systems wherein one variable converges to a unique value independent of the initial values. This variable is the dynamic ACR variable and the unique value is its ACR value. Dynamic ACR is significant for applications to biochemistry. Biochemical systems need to perform robustly in a wide variety of conditions. Dynamic ACR provides a mechanism for such robustness. In signal response circuits, an essential design feature is that the response depends on the signal but not on the internal state of the signaling circuit. Different initial states (which model the internal state of the signaling circuit when the signal arrives) generically result in different final states, especially when some conservation laws hold. However, if one of the variables is invariant across all final states, as is the case in dynamic ACR, then this variable can be taken to be the invariant signal response. The dependence on the signal strength is encoded in the reaction rate constants, and therefore the signal response will depend on the signal strength via the rate constants. However, the signal response will not depend on the initial values, i.e. the internal state of the signaling circuit.

Formally, dynamic ACR is the property that all trajectories (with some minor restrictions on allowed initial values) converge to the hyperplane $\{x_{i}=a_{i}^{*}\}$. It is desirable to identify sufficient conditions, either on the underlying reaction network or the dynamical system, that generate dynamic ACR. Necessary and sufficient conditions for dynamic ACR in complex balanced systems have been obtained in [1]. Moreover, a classification of small networks with dynamic ACR appears in [2].

Shinar and Feinberg [3] defined static absolute concentration robustness (static ACR) as the property that all positive steady states are located in the hyperplane $\{x_{i}=a_{i}^{*}\}$. Furthermore, they gave sufficient network conditions for the reaction network to have static ACR in some species. Additionally, they gave many biochemically realistic networks and showed that these networks satisfy their network conditions and therefore have static ACR. Many of the biochemical networks appearing in their paper may have the property of dynamic ACR as well, although this remains an open question. It is difficult to characterize dynamic ACR because it requires understanding the limiting behavior for arbitrary initial values. We propose a power-engine-load form (see (3.1)) for the differential equation satisfied by the supposed dynamic ACR variable, which may help establish dynamic ACR in some situations as we show in this paper. In Theorem 3.8, we establish sufficient conditions for convergence to the ACR value, based on properties of the terms appearing in the power-engine-load form equation.

Dynamic ACR has been experimentally observed in bacterial two-component signaling systems such as the EnvZ-OmpR system and the IDHKP-IDH system [4, 5, 6, 7, 8, 9]. Absolute concentration robustness (both static and dynamic) is related to the concept called robust perfect adaptation [10, 11, 12] studied from a control theory perspective. For a biochemical perspective on perfect adaptation, see [13, 14]. Structural requirements for robust perfect adaptation in biomolecular networks are studied in [15]. While our goal in this paper is not to explore the connections between the two fields of dynamical systems and control theory, we mention the above papers here as a help to any readers interested in understanding the connection further.

This article is organized as follows. Section 2 gives the background on reaction networks, mass action kinetics, and dynamic absolute concentration robustness (ACR). Section 3 introduces a power-engine-load form for dynamic ACR, examples, and convergence results. The main result of this paper is Theorem 3.8. Section 4 studies the dynamics of several reaction networks where we apply power-engine-load form and illustrate the use of Theorem 3.8. In many cases, the reaction network is coupled with inflows. While each network requires special analysis, we ultimately apply Theorem 3.8 to show dynamic ACR for the network under study.

2 Reaction networks

An example of a reaction is $A+B\to 2C$, which is a schematic representation of the process where a molecule of species $A$ and a molecule of species $B$ react with one another and result in two molecules of a species $C$. The abstract linear combination of species $A+B$ that appears on the left of the reaction arrow is called the reactant complex while $2C$ is called the product complex of the reaction $A+B\to 2C$. We assume that for any reaction the product complex is different from the reactant complex. A reaction network is a nonempty, finite set of species and a nonempty, finite set of reactions such that every species appears in at least one complex. We will use mass action kinetics for the rate of reactions.

2.1 Mass action systems

In mass action kinetics, each reaction occurs at a rate proportional to the product of the concentrations of species appearing in the reactant complex. We conventionally use lower case letters $a,b,c$ to denote the species concentrations of the corresponding species $A,B,C$, respectively. Under mass action kinetics, the reaction $A+B\to 2C$ occurs at rate $kab$ where $k$ is the reaction rate constant, conventionally placed near the reaction arrow, as follows: $A+B\xrightarrow{k}2C$. Consider the reaction network $\{A+B\xrightarrow{k_{1}}2C,~2C\xrightarrow{k_{2}}2A\}$. Application of mass action kinetics results in the following dynamical system, called a mass action system, that describes the time-evolution of the species concentrations.

\dot{a} = -k_{1}ab + 2k_{2}c^{2}
\dot{b} = -k_{1}ab
\dot{c} = 2k_{1}ab - 2k_{2}c^{2}

For further details on reaction networks and mass action systems, see for instance [16, 17]. We use standard notation and terminology of dynamical systems, such as steady states, stability, basin of attraction etc., see for instance [18, 19].

2.2 Dynamic absolute concentration robustness

Shinar and Feinberg defined (static) absolute concentration robustness (static ACR) as the property that all positive steady states are contained in a fixed hyperplane parallel to a coordinate hyperplane [3]. This means that for every positive steady state, one of the coordinates is invariant. Several network/structural conditions for static ACR have been identified [3, 20, 1, 2]. Static ACR is designed to model the property of a robust signal response despite variability in the internal state of the signaling circuit. However, the property accurately models output robustness only if there is convergence to a positive steady state for every initial value, and such convergence is not required in the definition of static ACR.

We defined dynamic absolute concentration robustness (dynamic ACR) as the property that all initial values lead to convergence to a fixed hyperplane parallel to a coordinate hyperplane [1] – a requirement somewhat more general than converging to a steady state in the hyperplane. We believe that this property better captures the idea of output robustness, when compared to static ACR, since it models long-term dynamics. Network conditions for dynamic ACR in small reaction networks were found in [2]. The condition found there is geometric in nature – for dynamic ACR in networks with two reactions and two species, the reactant polytope (line segment joining the two reactant complexes in their geometric embedding) must be parallel to a coordinate axis. We seek an extension of this condition in arbitrary networks. If the differential equation for the candidate ACR species has a certain special form (power-engine-load form), then it may have dynamic ACR provided that the power and load satisfy some additional conditions.

Dynamic ACR was defined for autonomous systems in [1]. We generalize to include non-autonomous systems. Throughout the paper, we assume that $\mathcal{D}$ is a dynamical system defined by $\dot{x}=F(x,t)$ with $x\in\mathbb{R}^{n}_{\geq 0}$ for which $\mathbb{R}^{n}_{\geq 0}$ is forward invariant.

Definition 2.1.

The kinetic subspace of $\mathcal{D}$ is defined to be the linear span of the image of $F$, denoted by ${\rm span}({\rm Im}(F))$. Two points $x,y\in\mathbb{R}^{n}_{\geq 0}$ are compatible if $y-x\in{\rm span}({\rm Im}(F))$. The sets $S,S^{\prime}\subseteq\mathbb{R}^{n}_{\geq 0}$ are compatible if there are $x\in S$ and $x^{\prime}\in S^{\prime}$ such that $x$ and $x^{\prime}$ are compatible. A compatibility class $S$ is a nonempty subset of $\mathbb{R}^{n}_{\geq 0}$ such that $x,y\in S$ if and only if $y-x\in{\rm span}({\rm Im}(F))$.

Definition 2.2.

$\mathcal{D}$ is a dynamic ACR system if there is an $i\in\{1,\ldots,n\}$ with $F_{i}\not\equiv 0$ and a positive $a_{i}^{*}\in\mathbb{R}_{>0}$ such that for any $(t_{0},x(t_{0}))\in\mathbb{R}\times\mathbb{R}^{n}_{>0}$ for which $x(t_{0})$ is compatible with $\{x\in\mathbb{R}^{n}_{>0}~|~x_{i}=a_{i}^{*}\}$, a unique solution to $\dot{x}=F(x,t)$ exists up to some maximal $T_{0}(t_{0},x(t_{0}))\in(t_{0},\infty]$, and $x_{i}(t)\xrightarrow{t\to T_{0}}a_{i}^{*}$. Any such $x_{i}$ and $a_{i}^{*}$ are called a dynamic ACR variable and its dynamic ACR value, respectively.

If the dynamical system $\dot{x}=F(x,t)$ does not have the possibility of a finite-time blow-up, then $T_{0}(t_{0},x(t_{0}))=\infty$ for any $(t_{0},x(t_{0}))\in\mathbb{R}\times\mathbb{R}^{n}_{>0}$. In this paper, we focus on dynamic ACR in systems where a unique solution is assumed to exist for all positive time.

3 Power-engine-load form of dynamic ACR

In [2], we discuss the network conditions for static and dynamic ACR in reaction networks with two species and two reactions. One of the conditions is that the reactant polytope (i.e. in the Euclidean embedding of the reaction network, the line segment joining the two reactant complexes) is parallel to one of the coordinate axes. A natural generalization of this geometric condition is the one given below, called power-engine-load form for dynamic ACR.

Suppose that $x\in\mathbb{R}^{n}$ and for some $1\leq i\leq n$, $x_{i}$ satisfies

\frac{dx_{i}}{dt} ~=~ \underbrace{f(x(t),t)}_{\text{power}} ~\cdot~ \underbrace{(x_{i}^{*}-x_{i}(t))}_{\text{engine}} ~+~ \underbrace{g(x(t),t)}_{\text{load}}    (3.1)

then under some reasonable conditions on the “power” $f(x(t),t)$ and the “load” $g(x(t),t)$, the variable $x_{i}$ will have dynamic ACR with the value $x_{i}^{*}$. An example of sufficient conditions is $g\equiv 0$, $f>0$, and $\int_{0}^{\infty}f(x(t),t)~dt=\infty$. We state the precise result and prove other convergence results in Section 3.2.
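As a quick numerical illustration of the zero-load case (a sketch only, using SciPy's solve_ivp; the power function $f(t)=1/(1+t)$, the target value $x_{i}^{*}=2$, the initial values, and the integration horizon are arbitrary choices made for demonstration), one can integrate a one-dimensional equation of the form (3.1) and observe convergence to $x_{i}^{*}$ from different starting points:

    # Sketch: dx/dt = f(t) * (x_star - x) with zero load g == 0.
    # Here f(t) = 1/(1+t) is positive and its integral diverges, so x(t) -> x_star,
    # although slowly: x(t) - x_star = (x(0) - x_star)/(1 + t).
    from scipy.integrate import solve_ivp

    x_star = 2.0                         # illustrative ACR value (assumption)

    def rhs(t, x):
        f = 1.0 / (1.0 + t)              # "power": positive, non-integrable tail
        return [f * (x_star - x[0])]     # "engine": (x_star - x); load g == 0

    for x0 in [0.5, 5.0, 10.0]:          # several initial values
        sol = solve_ivp(rhs, (0.0, 1e4), [x0], rtol=1e-8, atol=1e-10)
        print(x0, "->", sol.y[0, -1])    # each final value is close to x_star

Replacing $f$ by a function whose integral converges (for example $f(t)=e^{-t}$) makes the same sketch exhibit a limit different from $x_{i}^{*}$, consistent with part 4 of Theorem 3.4 below.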

3.1 Examples of power-engine-load form in mass action systems

Example 3.1.

The simplest example of a mass action system with power-engine-load form is the reaction network $0~\overset{k}{\underset{k^{\prime}}{\rightleftarrows}}~X$ and its associated ODE

\frac{dx}{dt} = k^{\prime}\left(k/k^{\prime}-x\right).

It is straightforward to show that $x$ has dynamic ACR with value $k/k^{\prime}$. ∎

Example 3.2.

Biologically interesting cases require a reactant complex with more than one species (so that a ‘reaction’ can occur) and some positive mass conservation law involving all species. The following well-known network is the simplest that satisfies these requirements of species interaction and mass conservation:

A+B\xrightarrow{k_{1}}2B, \quad B\xrightarrow{k_{2}}A.    (3.2)

The positive mass conservation law $a+b=\mathrm{const}$ is apparent from the mass action ODE system:

\dot{a} = k_{1}b\left(k_{2}/k_{1}-a\right), \quad \dot{b} = -k_{1}b\left(k_{2}/k_{1}-a\right).    (3.3)

The ODE for $a$ has power-engine-load form and it is known (see for instance [1]) that $a(t)\xrightarrow{t\to\infty}k_{2}/k_{1}$ for any initial value $(a(0),b(0))=(a_{0},b_{0})$ that satisfies $a_{0}+b_{0}\geq k_{2}/k_{1}$. ∎
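The convergence behavior of (3.3) can be checked numerically (a sketch; the rate constants $k_{1}=1$, $k_{2}=2$ and the initial values are illustrative assumptions, not values from the text):

    # Numerical check of Example 3.2: a' = k1*b*(k2/k1 - a), b' = -k1*b*(k2/k1 - a).
    from scipy.integrate import solve_ivp

    k1, k2 = 1.0, 2.0                 # illustrative rate constants
    k_star = k2 / k1                  # candidate ACR value

    def rhs(t, z):
        a, b = z
        drive = k1 * b * (k_star - a)
        return [drive, -drive]

    for a0, b0 in [(0.5, 3.0), (5.0, 1.0), (0.5, 0.5)]:
        sol = solve_ivp(rhs, (0.0, 200.0), [a0, b0], rtol=1e-8)
        # a -> k_star when a0 + b0 >= k_star; otherwise a -> a0 + b0 < k_star
        print((a0, b0), "->", sol.y[0, -1])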

Example 3.3.

In bacterial two-component signaling systems, the circuit mechanism for robust signal transduction from the cell environment to its interior involves a bifunctional component, a mechanism that is found in thousands of biological systems [8]. One such system is the E. coli IDHKP-IDH glyoxylate bypass regulation system whose core ACR module is

X+E~\overset{k_{1}}{\underset{k_{2}}{\rightleftarrows}}~C_{1}\xrightarrow{k_{3}}Y+E
Y+C_{1}~\overset{k_{4}}{\underset{k_{5}}{\rightleftarrows}}~C_{2}\xrightarrow{k_{6}}X+C_{1}.    (3.4)

It is known that (3.4) has static ACR in $Y$ [3]. The mass action ODE for the concentration of $Y$ can be written in power-engine-load form, using the fact that the static ACR value of $y$ is $\frac{k_{3}}{k_{4}}\left(1+\frac{k_{5}}{k_{6}}\right)$, as follows.

\dot{y} = k_{3}c_{1} - k_{4}c_{1}y + k_{5}c_{2}
        = k_{3}c_{1} - k_{4}c_{1}y + k_{5}\frac{k_{3}}{k_{6}}c_{1} + \frac{k_{5}}{k_{6}}\left(k_{6}c_{2}-k_{3}c_{1}\right)
        = k_{4}c_{1}\left(\frac{k_{3}}{k_{4}}\left(1+\frac{k_{5}}{k_{6}}\right)-y\right) + \frac{k_{5}}{k_{6}}\left(k_{6}c_{2}-k_{3}c_{1}\right).

The load is not identically zero in this case. We will prove in future work that $y$ has dynamic ACR with ACR value $k^{*}=\frac{k_{3}}{k_{4}}\left(1+\frac{k_{5}}{k_{6}}\right)$. As shown in Theorem 3.8, a sufficient condition for dynamic ACR is that for any positive initial value $\left(k_{6}c_{2}(t)-k_{3}c_{1}(t)\right)\xrightarrow{t\to\infty}0$ and $\int_{0}^{\infty}c_{1}(t)~dt=\infty$. Checking that the two conditions are satisfied requires some additional ideas which will be developed in future work. ∎
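The convergence of $y$ can also be explored numerically (a sketch, not a proof; the full mass action ODEs for (3.4) are written out below, and the rate constants and initial values are arbitrary illustrative choices with enough total substrate and enzyme):

    # Mass action ODEs for the core module (3.4); state z = (x, e, c1, y, c2).
    from scipy.integrate import solve_ivp

    k1, k2, k3, k4, k5, k6 = 1.0, 1.0, 2.0, 1.0, 1.0, 3.0
    y_star = (k3 / k4) * (1.0 + k5 / k6)      # static ACR value of y

    def rhs(t, z):
        x, e, c1, y, c2 = z
        v1 = k1 * x * e - k2 * c1             # net rate of X + E <-> C1
        v3 = k3 * c1                          # rate of C1 -> Y + E
        v4 = k4 * y * c1 - k5 * c2            # net rate of Y + C1 <-> C2
        v6 = k6 * c2                          # rate of C2 -> X + C1
        return [-v1 + v6,                     # dx/dt
                -v1 + v3,                     # de/dt
                v1 - v3 - v4 + v6,            # dc1/dt
                v3 - v4,                      # dy/dt
                v4 - v6]                      # dc2/dt

    for z0 in [(5.0, 1.0, 0.0, 0.1, 0.0), (2.0, 0.5, 0.2, 4.0, 0.1)]:
        sol = solve_ivp(rhs, (0.0, 500.0), z0, rtol=1e-9)
        print(z0, "-> y =", sol.y[3, -1], " (y* =", y_star, ")")

For these particular choices one observes numerically that $y$ approaches $y^{*}=k^{*}$; this is only an illustration, and the general convergence statement is what the text defers to future work.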

We now discuss conditions on the “power” $f(x(t),t)$ and the “load” $g(x(t),t)$ that either ensure or prevent dynamic ACR.

3.2 Convergence results for power-engine-load form dynamic ACR

Theorem 3.4 (Zero load).

Consider the dynamical system $\dot{x}=F(x,t)$ with continuously differentiable $F:\mathbb{R}^{n}_{\geq 0}\times\mathbb{R}_{\geq 0}\to\mathbb{R}^{n}$ and for which $\mathbb{R}^{n}_{\geq 0}$ is forward invariant. Suppose there is an $x_{i}^{*}\in\mathbb{R}_{>0}$ such that $F_{i}(x,t)|_{x\in\mathbb{R}^{n}_{>0}}=0$ if and only if $x_{i}=x_{i}^{*}$. The following hold for every solution $(x(t))_{t\geq 0}$ of $\dot{x}=F(x,t)$.

  1. $x_{i}^{*}-x_{i}(t)$ has the same sign as $x_{i}^{*}-x_{i}(0)$ for all $t\geq 0$.

  2. Suppose $x_{i}(0)\neq x_{i}^{*}$. Then $\frac{F_{i}(x(t),t)}{x_{i}^{*}-x_{i}(t)}$ has the same sign as $\frac{F_{i}(x(0),0)}{x_{i}^{*}-x_{i}(0)}$ for all $t\geq 0$.

  3. Suppose $x_{i}(0)\neq x_{i}^{*}$.

     (a) If $\frac{F_{i}(x(0),0)}{x_{i}^{*}-x_{i}(0)}>0$ then $|x_{i}(t)-x_{i}^{*}|$ is strictly decreasing on $[0,\infty)$ and
         $|x_{i}(\infty)-x_{i}^{*}| = |x_{i}(0)-x_{i}^{*}|\exp\left(-\int_{0}^{\infty}\frac{F_{i}(x(s),s)}{x_{i}^{*}-x_{i}(s)}\,ds\right).$

     (b) If $\frac{F_{i}(x(0),0)}{x_{i}^{*}-x_{i}(0)}<0$ then $|x_{i}(t)-x_{i}^{*}|$ is strictly increasing on $[0,\infty)$.

  4. Suppose $x_{i}(0)\neq x_{i}^{*}$. Then $x_{i}(t)\xrightarrow{t\to\infty}x_{i}^{*}$ if and only if
     $\int_{0}^{\infty}\frac{F_{i}(x(t),t)}{x_{i}^{*}-x_{i}(t)}\,dt=\infty.$
Proof.

If $x_{i}(0)=x_{i}^{*}$ then $\dot{x}_{i}|_{t=0}=F_{i}(x(0),0)=0$ and so $x_{i}(t)=x_{i}^{*}$ for all $t\geq 0$. If there is a $t>0$ such that $x_{i}(t)-x_{i}^{*}$ and $x_{i}(0)-x_{i}^{*}$ have different signs then by continuity of $x_{i}(t)$, there must be a $t^{\prime}\in(0,t)$ such that $x_{i}(t^{\prime})=x_{i}^{*}$. But then $x_{i}(t)=x_{i}^{*}$ for all $t\in\mathbb{R}$, which is a contradiction.

If $x_{i}(0)\neq x_{i}^{*}$ then by the previous part $\frac{F_{i}(x(t),t)}{x_{i}^{*}-x_{i}(t)}$ is defined for all time $t\geq 0$. Since $F_{i}(x,t)\neq 0$ for $x_{i}\neq x_{i}^{*}$, the second result follows. Since $x_{i}(t)\neq x_{i}^{*}$ for all $t\geq 0$, we can divide by $x_{i}^{*}-x_{i}$ and integrate to get

\frac{dx_{i}}{x_{i}^{*}-x_{i}} = \frac{F_{i}(x(t),t)}{x_{i}^{*}-x_{i}(t)}\,dt \implies \frac{x_{i}^{*}-x_{i}(t)}{x_{i}^{*}-x_{i}(0)} = \exp\left(-\int_{0}^{t}\frac{F_{i}(x(s),s)}{x_{i}^{*}-x_{i}(s)}\,ds\right).    (3.5)

By the previous result, the integrand has the same sign as $\frac{F_{i}(x(0),0)}{x_{i}^{*}-x_{i}(0)}$ for all positive time, and so result 3 follows. Finally, result 4 follows from taking $\lim_{t\to\infty}$ on both sides. ∎

Example 3.5.

Consider the mass action system (3.3), $\dot{a}=k_{1}b(k-a),~\dot{b}=-k_{1}b(k-a)$, where $k=k_{2}/k_{1}$. The variable $a$ has power-engine-load form with zero load and power $k_{1}b>0$ for every $(a,b)\in\mathbb{R}^{2}_{>0}$. It is clear that $a(0)=k$ if and only if $\dot{a}|_{t\geq 0}=0$. So assume that $a(0)\neq k$.

The mass action system with initial value $(a(0),b(0))=(a_{0},b_{0})\in\mathbb{R}^{2}_{\geq 0}$ can be solved explicitly after using the conservation relation $a(t)+b(t)=a_{0}+b_{0}$, thus reducing to the one-dimensional system $\dot{b}=-k_{1}b(k+b-a_{0}-b_{0})$. After rescaling time ($t\mapsto k_{1}t$) we may take $k_{1}=1$, and we get

b(t) = \begin{cases} \dfrac{a_{0}+b_{0}-k}{1+\left(\frac{a_{0}-k}{b_{0}}\right)e^{-(a_{0}+b_{0}-k)t}} & \text{ if } a_{0}+b_{0}\neq k,\\[2mm] \dfrac{k-a_{0}}{1+(k-a_{0})t} & \text{ if } a_{0}+b_{0}=k, \end{cases}

and $a(t)=a_{0}+b_{0}-b(t)$.

For $(a_{0},b_{0})\in\mathbb{R}^{2}_{>0}$, it is well-known that $a(t)\xrightarrow{t\to\infty}k$ if and only if $a_{0}+b_{0}\geq k$. We now argue that $\int_{0}^{\infty}b(t)~dt=\infty$ if and only if $a_{0}+b_{0}\geq k$ and $a_{0}\neq k$. Indeed, for any $a_{0}<k$,

\int_{0}^{\infty}\frac{k-a_{0}}{1+(k-a_{0})t}~dt = \infty.

Moreover,

\lim_{t\to\infty}\frac{a_{0}+b_{0}-k}{1+\left(\frac{a_{0}-k}{b_{0}}\right)e^{-(a_{0}+b_{0}-k)t}} = \begin{cases} a_{0}+b_{0}-k & \text{ if } a_{0}+b_{0}>k,\\ 0 & \text{ if } a_{0}+b_{0}<k, \end{cases}    (3.6)

implies that

\int_{0}^{\infty}\frac{a_{0}+b_{0}-k}{1+\left(\frac{a_{0}-k}{b_{0}}\right)e^{-(a_{0}+b_{0}-k)t}}~dt \begin{cases} =\infty & \text{ if } a_{0}+b_{0}>k,\\ <\infty & \text{ if } a_{0}+b_{0}<k, \end{cases}

because in the first case ($a_{0}+b_{0}>k$) the integrand does not go to zero, so the integral diverges, while in the latter case ($a_{0}+b_{0}<k$) the integrand is $\approx\exp\left((a_{0}+b_{0}-k)t\right)$, so the integral converges.

This shows that $a\xrightarrow{t\to\infty}k$ if and only if either $a(0)=k$ or $\int_{0}^{\infty}k_{1}b(t)~dt=\infty$. ∎

Corollary 3.6.

Suppose the hypotheses of Theorem 3.4 hold and $x_{i}(0)\neq x_{i}^{*}$. If

\liminf_{t\to\infty}\frac{tF_{i}(x(t),t)}{x_{i}^{*}-x_{i}(t)} \in (0,\infty],    (3.7)

then $x_{i}(t)\xrightarrow{t\to\infty}x_{i}^{*}$.

Proof.

By (3.7), there is a $\delta>0$ and a $t_{0}\geq 0$ such that $\frac{tF_{i}(x(t),t)}{x_{i}^{*}-x_{i}(t)}>\delta$ for all $t\geq t_{0}$. Then for any $T\geq t_{0}$,

\int_{0}^{\infty}\frac{F_{i}(x(t),t)}{x_{i}^{*}-x_{i}}\,dt \geq \int_{t_{0}}^{T}\frac{F_{i}(x(t),t)}{x_{i}^{*}-x_{i}}\,dt = \int_{t_{0}}^{T}\frac{1}{t}~\frac{tF_{i}(x(t),t)}{x_{i}^{*}-x_{i}}\,dt > \delta\int_{t_{0}}^{T}\frac{1}{t}\,dt \xrightarrow{T\to\infty}\infty,

and so by Theorem 3.4, $x_{i}\to x_{i}^{*}$. ∎

Example 3.7.

The condition (3.7) is sufficient but not necessary for $x_{i}\to x_{i}^{*}$. Consider the mass action system of the following reaction network:

2X_{1}\xrightarrow{k_{1}}X_{1}, \quad 2X_{2}\xrightarrow{k_{2}}X_{2},
X_{1}+X_{2}\xrightarrow{k_{2}}X_{2}, \quad Y+X_{1}~\overset{k_{5}}{\underset{k_{4}}{\rightleftarrows}}~X_{1},    (3.8)

where the rate constants are the same for the reactions $2X_{2}\to X_{2}$ and $X_{1}+X_{2}\to X_{2}$ ($X_{2}$ degrades both $X_{1}$ and $X_{2}$ at the same rate). The mass action ODEs are

\dot{x}_{2} = -k_{2}x_{2}^{2},
\dot{x}_{1} = -k_{1}x_{1}^{2} - k_{2}x_{1}x_{2},
\dot{y} = k_{5}x_{1}\left(\frac{k_{4}}{k_{5}}-y\right).    (3.9)

It is simple to check that given an arbitrary positive initial value $(y(0),x_{1}(0),x_{2}(0))=(b_{0},b_{1},b_{2})\in\mathbb{R}^{3}_{>0}$, the unique solution satisfies

x_{2}(t) = \frac{b_{2}}{1+k_{2}b_{2}t},
x_{1}(t) = \frac{b_{1}b_{2}k_{2}}{(1+b_{2}k_{2}t)(b_{2}k_{2}+b_{1}k_{1}\log(1+b_{2}k_{2}t))} \sim \frac{1}{k_{1}t\log(1+b_{2}k_{2}t)},

and so $\int_{t_{0}}^{\infty}x_{1}(t)\,dt=\infty$ for any $t_{0}>0$, which implies that $y\to k_{4}/k_{5}$ by Theorem 3.4. However, $\liminf_{t\to\infty}(tx_{1}(t))=0$, so Corollary 3.6 does not apply. ∎

When the load $g$ is nonzero, the power $f$ must overpower the load $g$ for convergence of $x_{i}$ to $x_{i}^{*}$.

Theorem 3.8.

Consider the dynamical system $\dot{x}=F(x,t)$ with $F:\mathbb{R}^{n}_{\geq 0}\times\mathbb{R}_{\geq 0}\to\mathbb{R}^{n}$ and for which $\mathbb{R}^{n}_{\geq 0}$ is forward invariant. Suppose that for some $i\in\{1,\ldots,n\}$ we have

F_{i}(x,t) = f(x,t)\cdot\left(x_{i}^{*}-x_{i}\right) + g(x,t),

with $f(x,t)>0$ in $\mathbb{R}^{n}_{>0}\times\mathbb{R}_{>0}$, and $g\not\equiv 0$. Let $(x(t))_{t\geq 0}$ be a solution of $\dot{x}=F(x,t)$ such that $\int_{0}^{\infty}f(x(t),t)\,dt=\infty$, and such that the limit $\alpha\coloneqq\lim_{t\to\infty}\frac{g(x(t),t)}{f(x(t),t)}$ exists, with $\alpha>-x_{i}^{*}$. Then we have $x_{i}(t)\xrightarrow{t\to\infty}x_{i}^{*}+\alpha$.

Proof.

Note that the equation $\frac{dx_{i}}{dt}=F_{i}(x,t)$ can be rewritten as

\frac{dx_{i}}{dt} = f(x,t)\left(x_{i}^{*}+\frac{g(x,t)}{f(x,t)}-x_{i}\right).    (3.10)

Then, if we denote $\widetilde{x}_{i}^{*}=x_{i}^{*}+\alpha$ and $\widetilde{g}=g/f-\alpha$, we can reduce our problem to showing that if $x_{i}(t)$ satisfies the equation

\frac{dx_{i}}{dt} = f(x,t)\left(\widetilde{x}_{i}^{*}-x_{i}+\widetilde{g}(x,t)\right),    (3.11)

and $\lim_{t\to\infty}\widetilde{g}(x(t),t)=0$, then $x_{i}(t)\xrightarrow{t\to\infty}\widetilde{x}_{i}^{*}$.


For any fixed $\varepsilon\in(0,\widetilde{x}_{i}^{*})$ we will now show that there exists some $T_{0}>0$ such that $x_{i}(t)\in(\widetilde{x}_{i}^{*}-\varepsilon,\widetilde{x}_{i}^{*}+\varepsilon)$ for all $t>T_{0}$. For this, let us first choose some $T_{1}>0$ such that $\widetilde{g}(x(t),t)\in(-\frac{\varepsilon}{2},\frac{\varepsilon}{2})$ for all $t>T_{1}$. Note that this implies that the interval $(\widetilde{x}_{i}^{*}-\varepsilon,\widetilde{x}_{i}^{*}+\varepsilon)$ is an invariant set of (3.10) for $t>T_{1}$. Therefore, if $x_{i}(T_{1})\in(\widetilde{x}_{i}^{*}-\varepsilon,\widetilde{x}_{i}^{*}+\varepsilon)$, we can choose $T_{0}=T_{1}$. Assume now that $x_{i}(T_{1})\notin(\widetilde{x}_{i}^{*}-\varepsilon,\widetilde{x}_{i}^{*}+\varepsilon)$, and for example $x_{i}(T_{1})\geq\widetilde{x}_{i}^{*}+\varepsilon$ (the case where $x_{i}(T_{1})\leq\widetilde{x}_{i}^{*}-\varepsilon$ is analogous).

Let us assume that the inequality $x_{i}(t)\geq\widetilde{x}_{i}^{*}+\varepsilon$ holds for all $t>T_{1}$; we will show that this leads to a contradiction. Indeed, for any such $t$ we have

\widetilde{x}_{i}^{*}-x_{i}+\widetilde{g}(x,t) < -\frac{\varepsilon}{2},

which implies that

\frac{dx_{i}}{dt} \leq -\frac{\varepsilon}{2}f(x,t),    (3.12)

for all $t>T_{1}$; but this, together with the hypothesis that $\int_{0}^{\infty}f(x(t),t)\,dt=\infty$, would imply that $\lim_{t\to\infty}x_{i}(t)=-\infty$, which contradicts our assumption that for all $t>T_{1}$ we have $x_{i}(t)\geq\widetilde{x}_{i}^{*}+\varepsilon$.

Therefore we obtain the desired conclusion that $x_{i}(t)$ will enter the interval $(\widetilde{x}_{i}^{*}-\varepsilon,\widetilde{x}_{i}^{*}+\varepsilon)$ in some finite time $T_{0}$, and will remain inside it for all $t>T_{0}$. ∎
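A one-dimensional numerical illustration of Theorem 3.8 with a nonzero load (a sketch only; the choices of $f$, $g$, $x_{i}^{*}$ and the initial value are arbitrary assumptions made for the demonstration):

    # Sketch: dx/dt = f(t)*(x_star - x) + g(t) with g/f -> alpha,
    # so by Theorem 3.8 the solution should converge to x_star + alpha.
    import numpy as np
    from scipy.integrate import solve_ivp

    x_star, alpha = 1.0, 0.5           # assumed ACR value and limiting load/power ratio

    def rhs(t, x):
        f = 2.0                        # constant positive power; its integral diverges
        g = alpha * f + np.exp(-t)     # load with g(t)/f(t) -> alpha
        return [f * (x_star - x[0]) + g]

    sol = solve_ivp(rhs, (0.0, 50.0), [4.0], rtol=1e-9)
    print(sol.y[0, -1], "vs x_star + alpha =", x_star + alpha)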

4 Applications of power-engine-load form

We use the power-engine-load form and Theorem 3.8 to prove dynamic ACR in certain reaction networks taken with inflows. The underlying reaction networks have dynamic ACR when there are no inflows or outflows, i.e. when the system is closed. We show that even when certain inflows are included, the property of dynamic ACR persists. Moreover, the ACR value for the open system (with inflows) remains the same in many instances as the ACR value for the closed/isolated system. In Section 4.4, we also consider an application where both inflows and outflows are included, and again given the right conditions the dynamic ACR property persists. In Section 4.7, we consider a simple enzyme catalysis network with a bifunctional enzyme. We show once again that dynamic ACR persists under many different possible inflow conditions. Each individual network requires special analysis in combination with application of Theorem 3.8.

The reaction networks that we consider here have a conserved quantity when the system is closed. When inflows (but not outflows) are included, the total concentration grows in an unbounded manner. This behavior can be realized in a chemostat (for some finite time) but there are also biologically realistic situations where this occurs. For instance, specific ion channels on the surface of a cell might be open and selectively permeable which results in influx of channel-specific ions from outside the cell volume. Our analysis shows that there are invariant quantities even while influxes are ongoing, that is even when the overall system is not near any long-term equilibrium. One variable, the ACR variable, can consistently converge to a robust value even when the overall trajectory does not converge. Robust convergence of one variable when the overall system is far from equilibrium can even occur when inflows and outflows are both present, as shown in Section 4.4. In all these cases, Theorem 3.8 is used to show convergence.

4.1 Motifs of static and dynamic ACR

In [2], we identified the network motifs with two reactions and two species which have both static ACR and dynamic ACR when treated as isolated or closed systems. Each motif represents an entire infinite set of reaction networks. We consider three such motifs shown in Figure 1 and we select one network for each motif. It is shown in [2] that these three are the only motifs (up to interchange of species labels) which have two reactions, two species, and have both properties of static ACR and dynamic ACR.

Figure 1: Motifs with two reactions and two species that have both static and dynamic ACR in one species AA.

In each network motif, the arrows represent reactions while the (dashed) line segment joining the two arrow tails is the so-called reactant polytope which plays an important role in the classification of ACR systems [2] – the reactant polytope is parallel to a coordinate axis in all ACR systems with two reactions and two species.

Each static and dynamic ACR motif (with two reactions and two species) has the following properties: the reactant polytope is parallel to a coordinate axis, the two reaction arrows are mutually parallel and they point towards each other. The three motifs are then characterized by the slope of the reaction vectors, which can be positive, zero, or negative. We will refer to the motifs accordingly as “positive/zero/negative slope motif”.

For each motif, we choose a particular network and show that when considered as an open system by allowing inflows, the ACR property is preserved in many cases. One representative of the negative slope motif, see (3.2), is often studied as a simple example of a conservative system with static ACR. We show that not only does it have dynamic ACR, but it also has many other striking robustness properties.

4.2 Positive slope motif

Our first example is a network based on the positive slope motif. For this motif, we choose a particular reaction network whose true/non-flow reactions are $\{A+B\to 0,~B\to A+2B\}$, and we include the inflows $\{0\to A,~0\to B\}$ with arbitrary time-dependent rates. So the reaction network is

A+B\xrightarrow{k_{1}}0, \quad B\xrightarrow{k_{2}}A+2B,
0\xrightarrow{g_{a}(t)}A, \quad 0\xrightarrow{g_{b}(t)}B,

whose mass action system is

\dot{a} = k_{1}b(k^{*}-a) + g_{a}(t),    (4.1)
\dot{b} = k_{1}b(k^{*}-a) + g_{b}(t),

where $k^{*}=k_{2}/k_{1}$.

Theorem 4.1.

Consider the mass action system (4.1) with time-dependent inflow rates $g_{a}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ and $g_{b}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$, such that $g_{a}(t)$ is bounded and

\int_{0}^{\infty}\left(g_{b}(t)-g_{a}(t)\right)dt = \infty,    (4.2)

then $a(t)\xrightarrow{t\to\infty}k^{*}$.

Proof.

Since $\dot{b}-\dot{a}=g_{b}-g_{a}$, by (4.2), we have that $b(t)-a(t)\xrightarrow{t\to\infty}\infty$, which implies that $b(t)\xrightarrow{t\to\infty}\infty$. Then the conditions of Theorem 3.8 hold for $a$ and from

\lim_{t\to\infty}\frac{g_{a}(t)}{b(t)} = 0,

we conclude that $a(t)\xrightarrow{t\to\infty}k^{*}$. ∎

Corollary 4.2.

Consider the mass action system (4.1) with constant inflows $g_{b}>g_{a}$. Then $a(t)\xrightarrow{t\to\infty}k^{*}$.
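Corollary 4.2 can be checked numerically as follows (a sketch; $k_{1}$, $k_{2}$, the constant inflow rates and the initial condition are illustrative assumptions):

    # Positive slope motif (4.1) with constant inflows g_b > g_a.
    from scipy.integrate import solve_ivp

    k1, k2 = 1.0, 3.0
    k_star = k2 / k1
    g_a, g_b = 0.2, 1.0               # constant inflows with g_b > g_a

    def rhs(t, z):
        a, b = z
        drive = k1 * b * (k_star - a)
        return [drive + g_a, drive + g_b]

    sol = solve_ivp(rhs, (0.0, 200.0), [10.0, 0.5], method="LSODA", rtol=1e-8)
    # b grows without bound while a settles near k_star = 3
    print("a(200) =", sol.y[0, -1], " b(200) =", sol.y[1, -1])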

4.3 Negative slope motif

A representative of the negative slope motif was used to illustrate static ACR by Shinar and Feinberg [3]. We show here that this network has even stronger robustness properties than previously known. In particular, even in the face of inflows which send the total concentration to infinity, the species with ACR still converges to a finite value, and moreover, the finite value is the same as its ACR value when there are no inflows.

Consider the following open reaction network:

A+B\xrightarrow{k_{1}}2B, \quad B\xrightarrow{k_{2}}A,    (4.3)
0\xrightarrow{g_{a}(t)}A, \quad 0\xrightarrow{g_{b}(t)}B,

whose mass action system is

\dot{a} = k_{1}b(k^{*}-a) + g_{a}(t),    (4.4)
\dot{b} = -k_{1}b(k^{*}-a) + g_{b}(t),

where $k^{*}=k_{2}/k_{1}$.

If $g_{a}$ and $g_{b}$ are constant, then the ACR value survives. We give a more general result below, allowing $g_{a}$ and $g_{b}$ to be arbitrary functions of time. One implication of the result is that the ACR value survives for arbitrary functions, provided that $g_{a}(t)$ does not increase too fast.

Theorem 4.3.

Consider the mass action system (4.4) with time-dependent inflow rates $g_{a}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ and $g_{b}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$, such that

G(t) \coloneqq \int_{0}^{t}\left(g_{a}(s)+g_{b}(s)\right)ds \xrightarrow{t\to\infty} \infty,    (4.5)

and

g_{a}(t)/G(t) \xrightarrow{t\to\infty} \alpha.    (4.6)

Then $a(t)\xrightarrow{t\to\infty}k^{*}+\alpha/k_{1}$ for any $(a(0),b(0))\in\mathbb{R}^{2}_{>0}$.

Proof.

We will first show that $b/G\to 1$. Note that $\dot{a}+\dot{b}=g_{a}+g_{b}$ implies that $a(t)+b(t)=a(0)+b(0)+G(t)$, and so $a+b\xrightarrow{t\to\infty}\infty$.

Define the following invertible change of coordinates:

\mathbb{R}_{\geq 0}^{2}\setminus\{(0,0)\} \to \mathbb{R}_{>0}\times[0,1], \qquad (a,b) \mapsto (x,\beta) = \left(a+b,~b/(a+b)\right).    (4.7)

In $(x,\beta)$ coordinates, the dynamical system (4.4) is:

\dot{x} = g_{a}(t)+g_{b}(t),    (4.8)
\dot{\beta} = k_{1}\beta\left(x(1-\beta)-k^{*}\right) + \frac{(1-\beta)g_{b}-\beta g_{a}}{x}.

The first equation has the solution $x(t)=x(0)+G(t)$, which implies that $x(t)$ grows monotonically to infinity. It then follows that for any $\delta\in(0,\frac{1}{2})$ there is a time $T_{\delta}>0$ such that for all $t>T_{\delta}$ we have $\dot{\beta}(t)>1$ whenever $\beta(t)\in(\delta,1-\delta)$, because under such assumptions $x(t)$ is large enough to imply that the positive term

k_{1}\beta x(1-\beta)

is much larger than all the negative terms on the right-hand side of the equation for $\dot{\beta}(t)$. Moreover, we obtain that if $\beta(0)\in(\delta,1-\delta)$ then the solution $\beta(t)$ becomes larger than $1-\delta$ for all $t>T_{\delta}+1$ (note that since $b(0)>0$ we have $\beta(0)>0$, so $\delta$ may be chosen smaller than $\beta(0)$), which implies that $\beta(t)\to 1$, and therefore $b/G\to 1$.

Finally, we check the hypotheses of Theorem 3.8. We may rewrite the equation for $\dot{a}$ in (4.4) as

\dot{a} = k_{1}b\left(\frac{\alpha}{k_{1}}+k^{*}-a\right) + G(t)\left(\frac{g_{a}(t)}{G(t)}-\alpha\frac{b}{G(t)}\right).    (4.9)

Clearly $b\to\infty$ and so $\int b\to\infty$. Moreover, we have that

\lim_{t\to\infty}\frac{G(t)}{b(t)}\left(\frac{g_{a}(t)}{G(t)}-\alpha\frac{b}{G(t)}\right) = \lim_{t\to\infty}\left(\frac{g_{a}(t)}{G(t)}-\alpha\right) = 0.

Therefore we may conclude from Theorem 3.8 that $a(t)\xrightarrow{t\to\infty}k^{*}+\alpha/k_{1}$ for any $(a(0),b(0))\in\mathbb{R}^{2}_{>0}$. ∎

Remark 4.4.

Theorem 4.3 is formally about a two-dimensional non-autonomous system. We show in examples below that the theorem can be applied to study the dynamics of higher dimensional systems. See Examples 4.6, 4.7 and 4.19.

Corollary 4.5.

Consider the mass action system (4.4) with time-dependent inflow rates $g_{a}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ and $g_{b}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$, such that $g_{a}(t)$ is a polynomial in $t$ (that takes only positive values). This includes the case of constant inflows. Then $a(t)\xrightarrow{t\to\infty}k^{*}$ for any $(a(0),b(0))\in\mathbb{R}^{2}_{>0}$.

Proof.

We only need to check the hypotheses of Theorem 4.3. Clearly (4.5) holds. Moreover for (4.6), we have

0 \leq \frac{g_{a}(t)}{G(t)} \leq \frac{g_{a}(t)}{\int_{0}^{t}g_{a}(s)ds} \xrightarrow{t\to\infty} 0,

and so $\alpha=0$. It follows that $a\xrightarrow{t\to\infty}k^{*}$. ∎

Example 4.6.

Consider the following reaction network for some positive integer $d$:

0 \to X_{1},    (4.10)
X_{j} \to X_{j}+X_{j+1}, \quad (1\leq j\leq d-1)
X_{d}+Y \xrightarrow{k_{1}} 2Y,
Y \xrightarrow{k_{2}} X_{d}.

Note that $d=1$ is allowed and corresponds to constant inflow of $X_{d}=X_{1}$. For $j\in\{1,\ldots,d-1\}$, $X_{j}\sim t^{j}$ and so Corollary 4.5 implies that $X_{d}\to k_{2}/k_{1}$. ∎

Now we consider the case where the inflows are exponentially growing in time.

Example 4.7.

Consider the following reaction network

Z \xrightarrow{\alpha} 2Z, \quad Z \to Z+A,    (4.11)
A+B \xrightarrow{k_{1}} 2B,
B \xrightarrow{k_{2}} A.

The species $Z$ could represent an exponentially growing virus which produces a toxin $A$. The differential equation for $Z$ is $\dot{z}=\alpha z$, which has the solution $z(t)=e^{\alpha t}$ (assuming that $z(0)=1$), and so the concentrations of $A$ and $B$ evolve according to:

\dot{a} = k_{1}b(k^{*}-a) + e^{\alpha t},    (4.12)
\dot{b} = -k_{1}b(k^{*}-a).

Now

G(t) = \int_{0}^{t}e^{\alpha s}ds = \frac{e^{\alpha t}-1}{\alpha},

which clearly goes to infinity, and moreover $g_{a}(t)/G(t)\to\alpha$. By Theorem 4.3, it then follows that

a(t) \xrightarrow{t\to\infty} k^{*}+\alpha/k_{1}.

A similar result holds even when both $A$ and $B$ have exponentially growing inflows. For instance, for the system

\dot{a} = k_{1}b(k^{*}-a) + e^{\alpha t},    (4.13)
\dot{b} = -k_{1}b(k^{*}-a) + e^{\beta t},

we have that

\lim_{t\to\infty}a(t) = \begin{cases} k^{*}+\alpha/k_{1} & \text{ if } \alpha>\beta,\\ k^{*}+\alpha/(2k_{1}) & \text{ if } \alpha=\beta,\\ k^{*} & \text{ if } \alpha<\beta. \end{cases}
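These three limiting values can be verified numerically (a sketch; $k_{1}$, $k_{2}$, the exponents and the time horizon are illustrative assumptions, and a stiff integrator is used because $b$ grows rapidly):

    # Exponential inflows in both species, system (4.13).
    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2 = 1.0, 2.0
    k_star = k2 / k1

    def make_rhs(alpha, beta):
        def rhs(t, z):
            a, b = z
            drive = k1 * b * (k_star - a)
            return [drive + np.exp(alpha * t), -drive + np.exp(beta * t)]
        return rhs

    # expected limits: k*+alpha/k1, k*+alpha/(2*k1), k* respectively
    for alpha, beta in [(0.2, 0.1), (0.2, 0.2), (0.1, 0.2)]:
        sol = solve_ivp(make_rhs(alpha, beta), (0.0, 60.0), [1.0, 1.0],
                        method="LSODA", rtol=1e-8, atol=1e-10)
        print((alpha, beta), "-> a =", sol.y[0, -1])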

Remark 4.8.

It is possible to find an explicit solution for the case of arbitrary time-dependent inflow $g_{a}(t)$ in $A$ and no inflow in $B$. We give such a solution here. One can see from the form of the solution, even though it is explicit, that the limiting value of $a(t)$ given in Theorem 4.3 is not obtained easily. Moreover, we emphasize the additional shortcoming that the explicit solution is for the restricted situation of no inflow in $B$.

Consider the following system:

\dot{a} = -k_{1}ab + k_{2}b + g_{a}(t), \quad \dot{b} = k_{1}ab - k_{2}b.    (4.14)

Let $G_{a}(t)=\int_{0}^{t}g_{a}(s)ds$, so $\dot{a}+\dot{b}=g_{a}(t)$ has the solution $a(t)+b(t)=a(0)+b(0)+G_{a}(t)$. We can use this to write an ODE in $a$ only:

\dot{a} = -k_{1}(a-k^{*})(a(0)+b(0)+G_{a}(t)-a) + g_{a}(t),    (4.15)

where $k^{*}=k_{2}/k_{1}$.

Theorem 4.9.

Let $k_{1}>0$ and $k$ be constants, let $g_{a}(t)$ be an arbitrary function of time $t$, and let $G_{a}(t)=\int_{0}^{t}g_{a}(s)~ds$. For any initial value $a(0)=a_{0}\geq 0$ and $b(0)=b_{0}\geq 0$,

\dot{a} = -k_{1}(a-k)(a_{0}+b_{0}+G_{a}(t)-a) + g_{a}(t)    (4.16)

has the solution

a(t) = a_{0}+b_{0}+G_{a}(t) - \frac{b_{0}q(t)}{1+k_{1}b_{0}Q(t)}    (4.17)

for all time $t\geq 0$, where

q(t) = \exp\left[k_{1}(a_{0}+b_{0}-k)t + k_{1}\int_{0}^{t}G_{a}(s)~ds\right],    (4.18)

and $Q(t)=\int_{0}^{t}q(s)~ds$.

Proof.

The proof is a simple verification. From their definitions, $q(0)=1$, $G_{a}(0)=0$, $Q(0)=0$, and so

a_{0}+b_{0}+G_{a}(0) - \frac{b_{0}q(0)}{1+k_{1}b_{0}Q(0)} = a_{0}+b_{0}-b_{0} = a_{0},

so the initial condition is satisfied. We note that $\dot{q}(t)=k_{1}q(t)\left(a_{0}+b_{0}-k+G_{a}(t)\right)$ and so

\dot{a}(t) = g_{a}(t) - b_{0}\frac{k_{1}q(t)\left(a_{0}+b_{0}-k+G_{a}(t)\right)}{1+k_{1}b_{0}Q(t)} + k_{1}\left(\frac{b_{0}q(t)}{1+k_{1}b_{0}Q(t)}\right)^{2},

and therefore

\dot{a} + k_{1}(a-k)(a_{0}+b_{0}+G_{a}(t)-a) - g_{a}(t)
= \dot{a} + k_{1}\left(a_{0}+b_{0}+G_{a}(t)-\frac{b_{0}q(t)}{1+k_{1}b_{0}Q(t)}-k\right)\left(\frac{b_{0}q(t)}{1+k_{1}b_{0}Q(t)}\right) - g_{a}(t) = 0,

which completes the verification. ∎

4.4 Negative slope motif with inflows and outflows

We can extend the results in the previous section even to the situation where one or both species are in outflow. We consider the case where both $A$ and $B$ are in outflow at equal rates. Outflow at equal rates for all species is a reasonable (and standard) assumption for chemostats. We will allow the inflows to be arbitrary functions of time. Suppose we have the following reaction network:

0\xrightarrow{g_{a}(t)}A, \quad 0\xrightarrow{g_{b}(t)}B,
A\xrightarrow{\ell}0, \quad B\xrightarrow{\ell}0,
A+B\xrightarrow{k_{1}}2B, \quad B\xrightarrow{k_{2}}A,

whose mass action system is:

\dot{a} = k_{1}b(k^{*}-a) + g_{a}(t) - \ell a,    (4.19)
\dot{b} = -k_{1}b(k^{*}-a) + g_{b}(t) - \ell b.
Figure 2: The reaction network $\{A+B\xrightarrow{k_{1}}2B,~B\xrightarrow{k_{2}}A\}$ coupled with an inflow $0\xrightarrow{g_{a}(t)}A$ (with $g_{a}(t)\xrightarrow{t\to\infty}\infty$) and outflows $A\xrightarrow{\ell}0,~B\xrightarrow{\ell}0$. When the inflow $g_{a}(t)$ is sub-exponential, $a$ converges to a value that is equivalent to changing the rate constant of the reaction $B\to A$ from $k_{2}$ to $k_{2}+\ell$. When the inflow $g_{a}(t)=\exp(\alpha t)$ is exponential, $a$ converges to a value that is equivalent to changing the rate constant of the reaction $B\to A$ from $k_{2}$ to $k_{2}+\alpha+\ell$.
Theorem 4.10.

Consider (4.19) with $g_{a},g_{b}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ such that $g(t)\coloneqq g_{a}(t)+g_{b}(t)\xrightarrow{t\to\infty}\infty$, and $\ell>0$. Assume that if $g_{b}\equiv 0$ then $b(0)>0$. Let

H(t) \coloneqq e^{-\ell t}\int_{0}^{t}g(s)e^{\ell s}~ds.

If $\alpha\coloneqq\lim_{t\to\infty}g_{a}(t)/H(t)$ exists and is finite, then

a(t) \xrightarrow{t\to\infty} k^{*}+\frac{\alpha}{k_{1}}.
Proof.

We will first show that $b/H\to 1$. Define the following invertible change of coordinates:

\mathbb{R}_{\geq 0}^{2}\setminus\{(0,0)\} \to \mathbb{R}_{>0}\times[0,1], \qquad (a,b) \mapsto (x,\beta) = \left(a+b,~b/(a+b)\right).

In $(x,\beta)$ coordinates, the dynamical system (4.19) is

\dot{x} = g(t) - \ell x,
\dot{\beta} = k_{1}\beta\left(x(1-\beta)-k^{*}\right) + \frac{(1-\beta)g_{b}-\beta g_{a}}{x}.

The equation for $\dot{x}$ can be solved explicitly using an integrating factor:

x(t) = e^{-\ell t}\left(\int_{0}^{t}g(s)e^{\ell s}~ds + x(0)\right) = x(0)e^{-\ell t} + H(t).

We now show that $x\to\infty$. By hypothesis, we have that $g\xrightarrow{t\to\infty}\infty$. So for any $N>0$, there is a $T_{N}>(\ln 2)/\ell$ such that $g(t)>N$ for $t>T_{N}$. So,

x(2T_{N}) \geq H(2T_{N}) \geq e^{-2\ell T_{N}}\int_{T_{N}}^{2T_{N}}g(s)e^{\ell s}~ds \geq e^{-2\ell T_{N}}N\int_{T_{N}}^{2T_{N}}e^{\ell s}~ds = N\frac{1-\exp(-\ell T_{N})}{\ell} \geq \frac{N}{2\ell}.

Since $N$ is arbitrary, $x\xrightarrow{t\to\infty}\infty$.

Now note that the equation for $\dot{\beta}$ is the same as the corresponding equation in Theorem 4.3 (in particular, the outflow parameter $\ell$ does not appear in $\dot{\beta}$). Therefore, the same reasoning as in Theorem 4.3 gives us that $b/x\xrightarrow{t\to\infty}1$ and $b/H\xrightarrow{t\to\infty}1$. Moreover, $a/x\to 0$ and

\frac{a}{b} = \frac{a/x}{b/x} \to 0.

Finally, we check the hypotheses of Theorem 3.8. Clearly $b\to\infty$ and so $\int b\to\infty$. Moreover, we have that

\lim_{t\to\infty}\frac{-\ell a+g_{a}(t)}{k_{1}b(t)} = \lim_{t\to\infty}\frac{1}{k_{1}}\left(-\ell\frac{a}{b}+\frac{g_{a}(t)/H(t)}{b/H(t)}\right) = \frac{\alpha}{k_{1}}.

Therefore, we conclude from Theorem 3.8 that $a(t)\xrightarrow{t\to\infty}k^{*}+\alpha/k_{1}$, provided that $b(0)>0$ or $g_{b}\not\equiv 0$. ∎

Remark 4.11.

In Theorem 4.10, we assumed equal outflows of species $A$ and $B$, $\ell_{a}=\ell_{b}=\ell$, a standard assumption for a chemostat. Simulations show that even when this assumption is relaxed and the outflow rates are allowed to be different, the property of dynamic ACR in $A$ is preserved. We conjecture that the ACR value is obtained by replacing $\ell$ with $\ell_{b}$, the outflow rate of species $B$, in the expression for $H(t)$ in Theorem 4.10.

Corollary 4.12.

Assume the hypotheses and notation of Theorem 4.10. Suppose that $g_{a}$ is differentiable. Then

\alpha = \lim_{t\to\infty}\left[\ell\frac{g_{a}(t)}{g(t)}+\frac{g_{a}^{\prime}(t)}{g(t)}\right],

and so

a(t) \xrightarrow{t\to\infty} k^{*}+\frac{1}{k_{1}}\lim_{t\to\infty}\left[\ell\frac{g_{a}(t)}{g(t)}+\frac{g_{a}^{\prime}(t)}{g(t)}\right].
Proof.

Note that

\alpha = \lim_{t\to\infty}\frac{g_{a}(t)}{H(t)} = \lim_{t\to\infty}\frac{g_{a}(t)e^{\ell t}}{\int_{0}^{t}g(s)e^{\ell s}ds}.

For any $g_{a}(t)\not\equiv 0$, both the numerator and the denominator go to infinity, so the result then follows from applying L'Hospital's rule. ∎

Example 4.13.

Consider the reaction network (4.10) in Example 4.6 with $d\geq 2$ (so that $g_{a}(t)\to\infty$), and additionally include outflows $X_{d}\xrightarrow{\ell}0,~Y\xrightarrow{\ell}0$, so the complete reaction network is

0 \to X_{1},    (4.20)
X_{j} \to X_{j}+X_{j+1}, \quad (1\leq j\leq d-1)
X_{d}+Y \xrightarrow{k_{1}} 2Y,
Y \xrightarrow{k_{2}} X_{d},
X_{d} \xrightarrow{\ell} 0, \quad Y \xrightarrow{\ell} 0.

As earlier, for $j\in\{1,\ldots,d-1\}$, $X_{j}\sim t^{j}$. Choose rate constants and initial values so that $X_{j}=t^{j}$ for $j\in\{1,\ldots,d-1\}$. So the concentrations of $X_{d}$ and $Y$ satisfy:

\dot{x}_{d} = k_{1}y(k^{*}-x_{d}) + t^{d-1} - \ell x_{d},
\dot{y} = -k_{1}y(k^{*}-x_{d}) - \ell y.

In the notation of Theorem 4.10, $g_{x_{d}}(t)=g(t)=t^{d-1}$ and by Corollary 4.12

\alpha = \ell + \lim_{t\to\infty}\frac{g_{a}^{\prime}(t)}{g(t)} = \ell,

and so

x_{d} \xrightarrow{t\to\infty} k^{*}+\frac{\ell}{k_{1}}.

See left panel of Figure 2 for a representative trajectory. ∎

Example 4.14.

Consider the reaction network (4.11) in Example 4.7 with outflows included, so the complete reaction network is

Z \xrightarrow{\alpha} 2Z, \quad Z \to Z+A,    (4.21)
A+B \xrightarrow{k_{1}} 2B,
B \xrightarrow{k_{2}} A,
B \xrightarrow{\ell} 0, \quad A \xrightarrow{\ell} 0.

The concentrations of $A$ and $B$ evolve according to:

\dot{a} = k_{1}b(k^{*}-a) + e^{\alpha t} - \ell a,    (4.22)
\dot{b} = -k_{1}b(k^{*}-a) - \ell b.

Noting that $g_{a}(t)=g(t)=e^{\alpha t}$, by Corollary 4.12,

a \xrightarrow{t\to\infty} k^{*}+\frac{\ell}{k_{1}}+\frac{\alpha}{k_{1}}.

See right panel of Figure 2 for a representative trajectory. ∎

4.5 A representative of the negative slope motif with stronger robustness properties

Another representative of the negative slope motif has even stronger robustness properties. Consider the following open reaction network:

A+2B \xrightarrow{k_{1}} 3B, \quad 2B \xrightarrow{k_{2}} A+B,    (4.23)
0 \xrightarrow{g_{a}(t)} A, \quad 0 \xrightarrow{g_{b}(t)} B,

whose mass action system is

\dot{a} = k_{1}b^{2}(k^{*}-a) + g_{a}(t),    (4.24)
\dot{b} = -k_{1}b^{2}(k^{*}-a) + g_{b}(t),

where $k^{*}=k_{2}/k_{1}$.

Theorem 4.15.

Consider the mass action system (4.24) with time-dependent inflow rates $g_{a}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ and $g_{b}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$, such that

G(t) \coloneqq \int_{0}^{t}\left(g_{a}(s)+g_{b}(s)\right)ds \xrightarrow{t\to\infty} \infty,    (4.25)

and

g_{a}(t)/(G(t))^{2} \xrightarrow{t\to\infty} \alpha.    (4.26)

Then $a(t)\xrightarrow{t\to\infty}k^{*}+\alpha/k_{1}$ for any $(a(0),b(0))\in\mathbb{R}^{2}_{>0}$.

Proof.

The proof is analogous to that of Theorem 4.3. We will first show that $b/G\to 1$. Note that $\dot{a}+\dot{b}=g_{a}+g_{b}$ implies that $a(t)+b(t)=a(0)+b(0)+G(t)$, and so $a+b\xrightarrow{t\to\infty}\infty$. After changing to the $(x,\beta)$ coordinates defined in (4.7), the dynamical system (4.24) is:

\dot{x} = g_{a}(t)+g_{b}(t),    (4.27)
\dot{\beta} = k_{1}\beta^{2}x\left(x(1-\beta)-k^{*}\right) + \frac{(1-\beta)g_{b}-\beta g_{a}}{x}.

The first equation has the solution $x(t)=x(0)+G(t)$, which implies that $x(t)$ grows monotonically to infinity. It then follows that for any $\delta\in(0,\frac{1}{2})$ there is a time $T_{\delta}>0$ such that for all $t>T_{\delta}$ we have $\dot{\beta}(t)>1$ whenever $\beta(t)\in(\delta,1-\delta)$, because under such assumptions $x(t)$ is large enough to imply that the positive term

k_{1}\beta^{2}x^{2}(1-\beta)

is much larger than all the negative terms on the right-hand side of the equation for $\dot{\beta}(t)$. Moreover, we obtain that if $\beta(0)\in(\delta,1-\delta)$ then the solution $\beta(t)$ becomes larger than $1-\delta$ for all $t>T_{\delta}+1$, which implies that $\beta(t)\to 1$, and therefore $b/G\to 1$.

Finally, we check the hypotheses of Theorem 3.8. We may rewrite the equation for $\dot{a}$ in (4.24) as

\dot{a} = k_{1}b^{2}\left(\frac{\alpha}{k_{1}}+k^{*}-a\right) + G^{2}(t)\left(\frac{g_{a}(t)}{G^{2}(t)}-\alpha\frac{b^{2}}{G^{2}(t)}\right).    (4.28)

Clearly $b\to\infty$ and so $\int b^{2}\to\infty$. Moreover, we have that

\lim_{t\to\infty}\frac{G^{2}(t)}{b^{2}(t)}\left(\frac{g_{a}(t)}{G^{2}(t)}-\alpha\frac{b^{2}}{G^{2}(t)}\right) = \lim_{t\to\infty}\left(\frac{g_{a}(t)}{G^{2}(t)}-\alpha\right) = 0.

Therefore we may conclude from Theorem 3.8 that $a(t)\xrightarrow{t\to\infty}k^{*}+\alpha/k_{1}$ for any $(a(0),b(0))\in\mathbb{R}^{2}_{>0}$. ∎

Theorem 4.16.

Consider the mass action system (4.24) with time-dependent inflow rates $g_{a}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ and $g_{b}:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$, such that

G(t) \coloneqq \int_{0}^{t}\left(g_{a}(s)+g_{b}(s)\right)ds \xrightarrow{t\to\infty} \infty.    (4.29)

We have that $a(t)\xrightarrow{t\to\infty}k^{*}$ for any $(a(0),b(0))\in\mathbb{R}^{2}_{>0}$ if one of the following holds:

  1. $g_{a}$ is bounded,

  2. $g_{a}(t)\to\infty$ and $\frac{\dot{g}_{a}(t)}{g_{a}(t)^{1/2}g(t)}\to 0$, where $g(t)=g_{a}(t)+g_{b}(t)$.

Proof.

If $g_{a}$ is bounded, then $g_{a}(t)/(G(t))^{2}\xrightarrow{t\to\infty}0$ and so the result follows immediately from Theorem 4.15. Suppose that $g_{a}(t)\xrightarrow{t\to\infty}\infty$ and $\frac{\dot{g}_{a}(t)}{g_{a}(t)^{1/2}g(t)}\to 0$. Since $g_{a}(t)\to\infty$, clearly $G(t)\to\infty$ and so by applying L'Hospital's rule, we have that

\lim_{t\to\infty}\frac{\sqrt{g_{a}(t)}}{G(t)} = \lim_{t\to\infty}\frac{\frac{d}{dt}\sqrt{g_{a}(t)}}{\frac{d}{dt}G(t)} = \lim_{t\to\infty}\frac{\dot{g}_{a}(t)}{2g_{a}(t)^{1/2}g(t)} = 0,

and so $g_{a}(t)/(G(t))^{2}\to 0$. Then from Theorem 4.15, $a(t)\xrightarrow{t\to\infty}k^{*}$. ∎

Remark 4.17.

When $g_{a}(t)\xrightarrow{t\to\infty}\infty$, the limit of $\frac{\dot{g}_{a}(t)}{g_{a}(t)^{1/2}g(t)}$ cannot be positive. Indeed, suppose that $\frac{\dot{g}_{a}(t)}{g_{a}(t)^{1/2}g(t)}\xrightarrow{t\to\infty}\alpha>0$; then after some finite time $T_{c}>0$, we have that for some positive constant $c>0$,

\frac{d}{dt}g_{a}(t) > c\alpha g_{a}(t)^{1/4}g(t) \geq c\alpha g_{a}(t)^{5/4},

and the differential equation $\dot{g}_{a}(t)=c\alpha g_{a}(t)^{5/4}$ has a finite time blow up.

Remark 4.18.

A simpler (though slightly more restrictive) result is obtained by replacing the term \frac{\dot{g}_a(t)}{g_a(t)^{1/2} g(t)} in Theorem 4.16 and Remark 4.17 with the term \frac{\dot{g}_a(t)}{g_a(t)^{3/2}}. Indeed, since g_a \leq g, the condition \frac{\dot{g}_a(t)}{g_a(t)^{3/2}} \to 0 implies the condition appearing in Theorem 4.16.

Example 4.19.

Consider the following reaction network

Z_1 \xrightarrow{1} 2Z_1,  (4.30)
Z_2 + Z_1 \xrightarrow{1} 2Z_2 + Z_1,
Z_2 \xrightarrow{1} Z_2 + A,
A + 2B \xrightarrow{k_1} 3B,
2B \xrightarrow{k_2} A + B.

For simplicity, we assume that the rate constants of the first three reactions are equal to 1. The variable z_1 satisfies the differential equation \dot{z}_1 = z_1, which has the exponentially growing solution z_1(t) = e^t for the initial value z_1(0) = 1. Then z_2 satisfies the differential equation

\dot{z}_2 = z_1 z_2,

which has the solution

z_2(t) = e^{e^t},

assuming that z_2(0) = e. The ODE system satisfied by the species A and B is

\dot{a} = k_1 b^2(k^* - a) + e^{e^t},  (4.31)
\dot{b} = -k_1 b^2(k^* - a).

Let g_a(t) \coloneqq e^{e^t}, and note that g = g_a here, since there is no inflow into B. Then clearly g_a(t) \to \infty and

\frac{\dot{g}_a(t)}{g_a(t)^{3/2}} = \frac{e^t e^{e^t}}{e^{3e^t/2}} = e^{t - e^t/2} \xrightarrow{t\to\infty} 0.

Then, by Theorem 4.16, a \to k^*. ∎
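
As a quick numerical check of Example 4.19, the sketch below integrates system (4.31) over a short horizon with a stiff (implicit) solver, since the inflow e^{e^t} grows doubly exponentially; the rate constants and initial values are illustrative choices.

```python
# Numerical sketch (illustrative parameters) for Example 4.19, system (4.31),
# with inflow g_a(t) = exp(exp(t)).  The horizon is kept short and an implicit
# solver is used because the inflow grows doubly exponentially.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 1.0, 2.0
kstar = k2 / k1                      # expected ACR value for a

def rhs(t, z):
    a, b = z
    engine = k1 * b**2 * (kstar - a)
    return [engine + np.exp(np.exp(t)), -engine]

sol = solve_ivp(rhs, (0.0, 2.5), [1.0, 1.0], method="Radau", rtol=1e-9, atol=1e-9)
print(f"a(2.5) = {sol.y[0, -1]:.4f},  b(2.5) = {sol.y[1, -1]:.1f},  k* = {kstar}")
```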

4.6 Zero slope motif

We consider the zero slope motif and the following open reaction network:

B \underset{k_1}{\overset{k_2}{\rightleftarrows}} A + B,  (4.32)
0 \xrightarrow{g_a(t)} A, \qquad 0 \xrightarrow{g_b(t)} B,

whose mass action system is

\dot{a} = k_1 b(k^* - a) + g_a(t),  (4.33)
\dot{b} = g_b(t),

where k^* = k_2/k_1.

Theorem 4.20.

Consider (4.33) with g_a, g_b : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0} such that \int_0^t b(s)\,ds \xrightarrow{t\to\infty} \infty. Then

a(t) \xrightarrow{t\to\infty} k^* + \frac{1}{k_1}\lim_{t\to\infty}\frac{g_a(t)}{b(0) + \int_0^t g_b(s)\,ds},

provided the limit exists as a finite value.

Proof.

The result follows immediately from applying Theorem 3.8. ∎
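
The following sketch checks the limit formula of Theorem 4.20 for system (4.33), using the illustrative (hypothetical) choices g_a(t) = t and g_b constant, for which the limit in the statement equals 1/g_b, so that a should approach k^* + 1/(k_1 g_b).

```python
# Numerical sketch (illustrative parameters) for Theorem 4.20, system (4.33).
# With g_a(t) = t and g_b constant, b(t) = b(0) + g_b*t, so
#   g_a(t) / (b(0) + int_0^t g_b ds) -> 1/g_b,
# and the theorem predicts a -> k* + 1/(k1*g_b).
from scipy.integrate import solve_ivp

k1, k2, gb = 1.0, 2.0, 2.0
kstar = k2 / k1
predicted = kstar + 1.0 / (k1 * gb)

def rhs(t, z):
    a, b = z
    return [k1 * b * (kstar - a) + t, gb]

sol = solve_ivp(rhs, (0.0, 500.0), [1.0, 1.0], method="LSODA", rtol=1e-8, atol=1e-10)
print(f"a(500) = {sol.y[0, -1]:.4f},  predicted limit = {predicted:.4f}")
```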

Remark 4.21.

It is worth contrasting the dynamic ACR properties of the negative slope motif with those of the other motifs. The negative slope motif retains particularly striking robustness properties when inflows are added, which is not the case for the other motifs. This provides a further argument in favor of the attention that the negative slope motif has attracted, beyond its having a positive conservation law and its relative simplicity.

4.7 Enzyme catalysis with bifunctional enzyme

Bifunctional enzymes are enzymes with two different (usually opposing) functions, for instance catalyzing both the production and the elimination of a substrate [8, 21]. A bifunctional enzyme might, for example, facilitate phosphorylation in one form and dephosphorylation in another. Bifunctional enzymes have been shown to play a critical role in generating static absolute concentration robustness [9, 21]. Here we show that an enzyme catalysis network with a bifunctional enzyme can produce dynamic absolute concentration robustness as well.

We consider a mass action system with inflows that results from the reaction network depicted below.

0 \xrightarrow{g_x} X, \quad 0 \xrightarrow{g_y} Y, \quad 0 \xrightarrow{g_e} E, \quad 0 \xrightarrow{g_c} C,  (4.34)
X + E \underset{k_2}{\overset{k_1}{\rightleftarrows}} C \xrightarrow{k_3} Y + E,
Y + C \xrightarrow{k_4} X + C.

The system of ODEs for the mass action system is

\dot{x} = -k_1 xe + k_2 c + k_4 cy + g_x,  (4.35)
\dot{y} = k_3 c - k_4 cy + g_y,
\dot{e} = -k_1 xe + (k_2 + k_3)c + g_e,
\dot{c} = k_1 xe - (k_2 + k_3)c + g_c.

We note that the variable y is in power-engine-load form:

\dot{y} = k_4 c\left(\frac{k_3}{k_4} - y\right) + g_y,

and therefore is a candidate for dynamic ACR. The key to proving dynamic ACR, with or without inflows, is to show that the "power" variable c is eventually bounded away from 0 for any initial value. We do this here for a variety of cases with inflows. When there is no inflow in Y (g_y = 0), and assuming that Y is a dynamic ACR species, it is easy to see that the dynamic ACR value must be k^* \coloneqq k_3/k_4, the same as its static ACR value. In the case of no inflows at all (g_x = g_y = g_e = g_c = 0), one can further show that y has dynamic ACR; that is, for any positive initial value with T_x \coloneqq x(0) + y(0) + c(0) \geq k^*, we have that y \to k^*. In other words, every initial value compatible with the hyperplane \{y = k^*\} results in convergence to the hyperplane. We defer the proof of this result to future work because it requires additional ideas not directly related to the main theme of the paper. Here we focus on certain interesting inflow cases.

In the theorem below, we show that yy can converge to the same value kk^{*} even in cases when some (or all) of the inflows are present.

Theorem 4.22.

Consider (4.35) with gc>0g_{c}>0 (while the other inflow rates gx,gy,geg_{x},g_{y},g_{e} are arbitrary nonnegative constants). Then ytkk3/k4y\xrightarrow{t\to\infty}k^{*}\coloneqq k_{3}/k_{4}.

Proof.

Introduce the invertible change of variables (x,y,e,c)(y,c,Tx,Te)04(x,y,e,c)\mapsto(y,c,T_{x},T_{e})\in\mathbb{R}^{4}_{\geq 0}, where Txx+y+cT_{x}\coloneqq x+y+c and Tee+cT_{e}\coloneqq e+c. The new variables satisfy

\dot{y} = k_4 c\left(k^* - y\right) + g_y,  (4.36)
\dot{c} = k_1\left((T_x - y - c)(T_e - c) - kc\right) + g_c,
\dot{T}_x = g_x + g_y + g_c =: g_1,
\dot{T}_e = g_e + g_c =: g_2,

where k(k2+k3)/k1k\coloneqq(k_{2}+k_{3})/k_{1}, kk3/k4k^{*}\coloneqq k_{3}/k_{4}, as well as 0c(t)Te(t)0\leq c(t)\leq T_{e}(t) and 0c(t)+y(t)Tx(t)0\leq c(t)+y(t)\leq T_{x}(t) for all t0t\geq 0. The hypothesis gc>0g_{c}>0 implies that g1>0g_{1}>0 and g2>0g_{2}>0. The evolution of TxT_{x} and TeT_{e} is uncoupled from that of yy and cc, and is easily seen to have the solution Tx(t)=Tx(0)+g1tT_{x}(t)=T_{x}(0)+g_{1}t and Te(t)=Te(0)+g2tT_{e}(t)=T_{e}(0)+g_{2}t. So it is natural to think of (4.36) as a two-dimensional non-autonomous system in the variables (y,c)(y,c).

Since the term (T_x - y - c)(T_e - c) in the equation for \dot{c} is nonnegative, we have that \dot{c} \geq -(k_2 + k_3)c + g_c, which is positive for any c < g_c/(k_2 + k_3); hence after some finite time, c(t) > g_c/(2(k_2 + k_3)). But this implies that \dot{y} < 0 for any sufficiently large y, and so y is bounded above by some \bar{y} > 0 after some finite time.

Now suppose that

c(t) < \frac{1}{2}\left(\min\left\{T_x(t) - \bar{y},\, T_e(t)\right\} - \frac{k}{2}\right),

so that both T_x(t) - \bar{y} - c(t) and T_e(t) - c(t) are larger than c(t) + k/2. Then, for t large enough that y(t) \leq \bar{y},

\dot{c} = k_1\left((T_x(t) - y - c)(T_e(t) - c) - kc\right) + g_c
  > k_1\left((T_x(t) - \bar{y} - c)(T_e(t) - c) - kc\right) + g_c
  > k_1\left(\left(c + k/2\right)^2 - kc\right) + g_c
  = k_1\left(c^2 + k^2/4\right) + g_c.

In other words, whenever c(t) lies below the threshold above, \dot{c} is bounded below by the positive constant k_1 k^2/4 + g_c. Since T_x(t) - \bar{y} \to \infty and T_e(t) \to \infty, this threshold tends to infinity, and therefore c \to \infty.

The result yky\to k^{*} then follows from applying Theorem 3.8. ∎
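
A numerical sketch of Theorem 4.22 is given below: system (4.35) is integrated with all four inflow rates equal to illustrative positive constants (so in particular g_c > 0), and y is expected to approach k^* = k_3/k_4 from different initial conditions; the parameter values are not taken from the paper.

```python
# Numerical sketch (illustrative parameters) for Theorem 4.22: the bifunctional
# enzyme system (4.35) with constant inflows and g_c > 0.
from scipy.integrate import solve_ivp

k1, k2, k3, k4 = 1.0, 1.0, 2.0, 1.0
gx, gy, ge, gc = 0.5, 0.3, 0.2, 1.0       # g_c > 0, as in Theorem 4.22
kstar = k3 / k4                            # expected ACR value for y

def rhs(t, z):
    x, y, e, c = z
    bind = k1 * x * e                      # rate of X + E -> C
    return [-bind + k2 * c + k4 * c * y + gx,
            k3 * c - k4 * c * y + gy,
            -bind + (k2 + k3) * c + ge,
            bind - (k2 + k3) * c + gc]

for z0 in [(1.0, 0.1, 1.0, 0.1), (0.2, 5.0, 0.5, 2.0)]:
    sol = solve_ivp(rhs, (0.0, 400.0), z0, method="LSODA", rtol=1e-8, atol=1e-10)
    print(f"z(0)={z0}:  y(400) = {sol.y[1, -1]:.4f}   (k3/k4 = {kstar})")
```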

We now present a case where specific inflows result in y converging to a finite value which, unless g_y = 0, differs from k^*, the ACR value of y when there are no inflows.

Theorem 4.23.

Consider (4.35) with gc=0g_{c}=0, ge=0g_{e}=0, gy0g_{y}\geq 0 and gx+gy>0g_{x}+g_{y}>0. Then

y \xrightarrow{t\to\infty} k^* + \frac{g_y}{k_4(e(0) + c(0))}.

In particular, yky\to k^{*} if gy=0g_{y}=0.

Proof.

Introduce the invertible change of variables (x,y,e,c)(y,c,Tx,Te)04(x,y,e,c)\mapsto(y,c,T_{x},T_{e})\in\mathbb{R}^{4}_{\geq 0}, where Txx+y+cT_{x}\coloneqq x+y+c and Tee+cT_{e}\coloneqq e+c. The new variables satisfy

\dot{y} = k_4 c\left(k^* - y\right) + g_y,  (4.37)
\dot{c} = k_1\left((T_x(t) - y - c)(T_e - c) - kc\right),
\dot{T}_x = g_x + g_y =: g_1 > 0,
T_e(t) = T_e(0) =: T_e,

where k(k2+k3)/k1k\coloneqq(k_{2}+k_{3})/k_{1}, kk3/k4k^{*}\coloneqq k_{3}/k_{4}, as well as 0c(t)Te0\leq c(t)\leq T_{e} and 0c(t)+y(t)Tx(t)0\leq c(t)+y(t)\leq T_{x}(t) for all t0t\geq 0.

Suppose first that y \leq k^* for all time. Fix an arbitrary \varepsilon \in (0, T_e) and suppose that c < T_e - \varepsilon. Then, for all sufficiently large t,

\dot{c} = k_1\left((T_x(t) - y - c)(T_e - c) - kc\right)
  > k_1\left((T_x(t) - k^* - c)\varepsilon - kc\right)
  > k_1\left((T_x(t) - k^* - T_e)\varepsilon - kT_e\right).

Since T_x(t) \uparrow \infty, the right-hand side is arbitrarily large after some finite time, so c(t) cannot remain below T_e - \varepsilon; since \varepsilon was arbitrary, this implies that c(t) \to T_e.

Now suppose that y(t)>ky(t)>k^{*} for some tt. It is clear that {y>k}\{y>k^{*}\} is invariant for any gy0g_{y}\geq 0. We may therefore assume, without loss of generality, that y(t)>ky(t)>k^{*} for all t0t\geq 0. For convenience we add 0=0c=(k3k4k)c0=0\cdot c=(k_{3}-k_{4}k^{*})c to x˙\dot{x} and rewrite the original system (4.35) as

\dot{x} = -k_1 xe + (k_2 + k_3)c - k_4 c(k^* - y) + g_x,  (4.38)
\dot{y} = k_4 c(k^* - y) + g_y,
\dot{e} = -k_1 xe + (k_2 + k_3)c,
\dot{c} = k_1 xe - (k_2 + k_3)c.

Let u(t) \coloneqq k_1 xe - (k_2 + k_3)c = k_1(xe - kc) and v(t) \coloneqq -k_4 c(k^* - y), so that the previous system, in terms of u and v, is

\dot{x} = -u(t) + v(t) + g_x, \quad \dot{y} = -v(t) + g_y, \quad \dot{e} = -u(t), \quad \dot{c} = u(t).  (4.39)

Note that y(t)>ky(t)>k^{*} is equivalent to v(t)>0v(t)>0. Furthermore,

\dot{u}(t) = k_1(\dot{x}e + x\dot{e} - k\dot{c})  (4.40)
  = k_1(-eu + ev + g_x e - xu - ku) = k_1\left(-u(e + x + k) + e(v + g_x)\right)
  > -k_1(e + x + k)u.

This shows that either u(t) > 0 for all times after some finite time has passed (if u ever becomes positive, it remains positive thereafter), or u(t) \leq 0 for all t \geq 0, in which case u is nondecreasing and, since \dot{c} = u and c \geq 0, we must have u(t) \uparrow 0. We argue that in both cases c must converge to a positive limit.

First suppose that u(t) \uparrow 0. Then \dot{x} \geq 0 and \dot{e} \geq 0, and so x(t)e(t) is nondecreasing in t. If xe \uparrow \infty, then u(t) = k_1(xe - kc) \uparrow \infty, since c is bounded above (by the constant T_e); but this contradicts u(t) \uparrow 0. So xe is bounded above and therefore converges to a limit that is at least x(0)e(0) > 0. But then from the equation \dot{c} = k_1 xe - (k_2 + k_3)c, we have that c(t) converges to a positive limit that is at least x(0)e(0)/k.

Now suppose that u(t)>0u(t)>0 for all large enough tt. Then c˙=u(t)>0\dot{c}=u(t)>0 for large enough tt, so that c(t)c(t) is increasing. Since c(t)c(t) is bounded above by TeT_{e}, c(t)c(t) must converge to a positive limit.

We conclude that cc converges to a positive limit in every case. From the equation y˙=k4c(ky)+gy\dot{y}=k_{4}c(k^{*}-y)+g_{y}, we conclude that yy converges to a positive limit. If cc converges to a limit less than TeT_{e}, then from c˙=k1((Tx(t)yc)(Tec)kc)\dot{c}=k_{1}\left((T_{x}(t)-y-c)(T_{e}-c)-kc\right), cc increases without bound because TxT_{x} increases without bound. This is a contradiction. Therefore cTec\to T_{e}. We may therefore conclude from Theorem 3.8 that

y \xrightarrow{t\to\infty} k^* + \frac{g_y}{k_4(e(0) + c(0))}. ∎
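
The limit in Theorem 4.23 can likewise be checked numerically. In the sketch below, g_c = g_e = 0 and g_x, g_y are illustrative positive constants (not taken from the paper), so T_e = e + c is conserved and y should approach k^* + g_y/(k_4(e(0)+c(0))).

```python
# Numerical sketch (illustrative parameters) for Theorem 4.23: system (4.35)
# with g_c = g_e = 0 and constant g_x, g_y > 0.  Here T_e = e + c is conserved.
from scipy.integrate import solve_ivp

k1, k2, k3, k4 = 1.0, 1.0, 2.0, 1.0
gx, gy = 0.5, 0.3
kstar = k3 / k4

def rhs(t, z):
    x, y, e, c = z
    bind = k1 * x * e                      # rate of X + E -> C
    return [-bind + k2 * c + k4 * c * y + gx,
            k3 * c - k4 * c * y + gy,
            -bind + (k2 + k3) * c,
            bind - (k2 + k3) * c]

x0, y0, e0, c0 = 1.0, 0.5, 1.5, 0.5
predicted = kstar + gy / (k4 * (e0 + c0))
sol = solve_ivp(rhs, (0.0, 400.0), [x0, y0, e0, c0], method="LSODA",
                rtol=1e-8, atol=1e-10)
print(f"y(400) = {sol.y[1, -1]:.4f},  predicted limit = {predicted:.4f}")
```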

5 Discussion

The property of dynamic ACR depends on global dynamics, which makes it difficult to establish even in simple systems. It is therefore helpful to have either network conditions or ODE characterizations. Extending our previous work on network motifs, which was restricted to two dimensions, we provide a power-engine-load form applicable in arbitrary dimensions. We give convergence results and apply them to certain network models in the chemostat setting, where inflows and sometimes outflows are present. Even when the overall trajectory does not converge, we show that the ACR variable converges to a definite finite quantity, the so-called ACR value. Moreover, in many cases this is the same value that the system would converge to as a closed system, i.e. without any inflows and outflows.

This work has implications for understanding the dynamics of systems which are far from equilibrium. We focus here on showing dynamic ACR in a few representative examples. In future work, we apply some of these results to analyze more complex and biochemically realistic systems.

Acknowledgments

B.J. was supported by NSF grant DMS-2051498. G.C. was supported by NSF grant DMS-2051568 and by a Simons Foundation fellowship. We thank the referee for careful reading and helpful comments.

References

  • [1] Badal Joshi and Gheorghe Craciun. Foundations of static and dynamic absolute concentration robustness. Journal of Mathematical Biology, 85(53), 2022.
  • [2] Badal Joshi and Gheorghe Craciun. Reaction network motifs for static and dynamic absolute concentration robustness. SIAM Journal on Applied Dynamical Systems, 22(2):501–526, 2023.
  • [3] Guy Shinar and Martin Feinberg. Structural sources of robustness in biochemical reaction networks. Science, 327(5971):1389–1391, 2010.
  • [4] Frank D Russo and Thomas J Silhavy. The essential tension: opposed reactions in bacterial two-component regulatory systems. Trends in Microbiology, 1(8):306–310, 1993.
  • [5] Weihong Hsing, Frank D Russo, Karen K Bernd, and Thomas J Silhavy. Mutations that alter the kinase and phosphatase activities of the two-component sensor EnvZ. Journal of Bacteriology, 180(17):4538–4546, 1998.
  • [6] Eric Batchelor and Mark Goulian. Robustness and the cycle of phosphorylation and dephosphorylation in a two-component regulatory system. Proceedings of the National Academy of Sciences, 100(2):691–696, 2003.
  • [7] Guy Shinar, Ron Milo, María Rodríguez Martínez, and Uri Alon. Input–output robustness in simple bacterial signaling systems. Proceedings of the National Academy of Sciences, 104(50):19931–19935, 2007.
  • [8] Uri Alon. An introduction to systems biology: design principles of biological circuits. CRC press, 2019.
  • [9] Yuval Hart and Uri Alon. The utility of paradoxical components in biological circuits. Molecular Cell, 49(2):213–221, 2013.
  • [10] Jinsu Kim and German Enciso. Absolutely robust controllers for chemical reaction networks. Journal of the Royal Society Interface, 17(166):20200031, 2020.
  • [11] Fangzhou Xiao and John C Doyle. Robust perfect adaptation in biomolecular reaction networks. In 2018 IEEE conference on decision and control (CDC), pages 4345–4352. IEEE, 2018.
  • [12] Daniele Cappelletti, Ankit Gupta, and Mustafa Khammash. A hidden integral structure endows absolute concentration robust systems with resilience to dynamical concentration disturbances. Journal of the Royal Society Interface, 17(171):20200437, 2020.
  • [13] Mustafa H Khammash. Perfect adaptation in biology. Cell Systems, 12(6):509–521, 2021.
  • [14] Eduardo D Sontag. Adaptation and regulation with signal detection implies internal model. Systems & Control Letters, 50(2):119–126, 2003.
  • [15] Ankit Gupta and Mustafa Khammash. Universal structural requirements for maximal robust perfect adaptation in biomolecular networks. Proceedings of the National Academy of Sciences, 119(43):e2207802119, 2022.
  • [16] Badal Joshi and Anne Shiu. A survey of methods for deciding whether a reaction network is multistationary. “Chemical Dynamics” – special issue of Mathematical Modelling of Natural Phenomena, 10(5):47–67, 2015.
  • [17] Polly Y Yu and Gheorghe Craciun. Mathematical analysis of chemical reaction systems. Israel Journal of Chemistry, 58(6-7):733–741, 2018.
  • [18] David G. Schaeffer and John W. Cain. Ordinary differential equations: Basics and beyond. Springer, 2018.
  • [19] John Guckenheimer and Philip Holmes. Nonlinear oscillations, dynamical systems, and bifurcations of vector fields, volume 42. Springer Science & Business Media, 2013.
  • [20] Nicolette Meshkat, Anne Shiu, and Angelica Torres. Absolute concentration robustness in networks with low-dimensional stoichiometric subspace. Vietnam Journal of Mathematics, 50:623–651, 2022.
  • [21] Badal Joshi and Tung D Nguyen. Bifunctional enzyme provides absolute concentration robustness in multisite covalent modification networks. arXiv preprint arXiv:2304.02679, 2023.