
Parameter inference and nonequilibrium identification for Markovian systems
based on coarse-grained observations

Bingjie Wu^1 and Chen Jia^1 (Correspondence: [email protected])
^1 Applied and Computational Mathematics Division, Beijing Computational Science Research Center, Beijing 100193, China.
Abstract

Most experiments can only detect a set of coarse-grained clusters of a molecular system, while the internal microstates are often inaccessible. Here, based on an infinitely long coarse-grained trajectory, we obtain a set of sufficient statistics that extract all statistical information from the coarse-grained observations. Based on these sufficient statistics, we set up a theoretical framework of parameter inference and nonequilibrium identification for a general Markovian system with an arbitrary number of microstates and arbitrary coarse-grained partitioning. Our framework identifies whether the sufficient statistics are enough for the empirical estimation of all unknown parameters, and it also provides a quantitative criterion that reveals nonequilibrium. Our nonequilibrium criterion generalizes the one obtained in [J. Chem. Phys. 132:041102 (2010)] for a three-state system with two coarse-grained clusters, and it is capable of detecting a larger nonequilibrium region than the classical criterion based on autocorrelation functions.

Introduction — Mesoscopic molecular systems are widely modeled as Markov processes with a large number of microstates Cornish-Bowden (2013); Sakmann (2013); Golding et al. (2005). In experiments, it often occurs that only a set of coarse-grained clusters can be detected, while the internal microstates of the system are inaccessible or indistinguishable. For instance, using live-cell imaging, one can obtain the time trace of the copy number of a protein in a single cell; however, it is difficult to determine whether the gene is in an active or an inactive state Golding et al. (2005). Due to the inability to accurately identify all microstates, the data obtained are usually the time trace of some coarse-grained states, which retain only a small number of degrees of freedom of the system Seifert (2019); Esposito (2012); Teza and Stella (2020). Other well-known examples of partial observations include ion channel opening Sakmann (2013), molecular docking and undocking to a sensor Skoge et al. (2013), and flagellar motor switching Berg (2003).

Given a sufficiently long trajectory of coarse-grained states, two natural and crucial questions arise: (i) Is it possible to determine the transition topology and even all transition rates between microstates? (ii) If a complete recovery of the transition rates is impossible, is it still possible to determine whether the system is in an equilibrium state or in a nonequilibrium steady state (NESS), and even to estimate the values of some macroscopic thermodynamic quantities? The detection of nonequilibrium is important since in an NESS there are nonzero net fluxes between microstates, indicating that the system is externally driven with concomitant entropy production Qian and Qian (2000); Li and Qian (2002); Witkoskie and Cao (2006); Tu (2008); Amann et al. (2010); Jia and Chen (2015); Skinner and Dunkel (2021); Harunari et al. (2022); Van der Meer et al. (2022); Van Vu and Saito (2023); Ghosal and Bisker (2023). Over the past two decades, numerous studies have partially answered these questions; however, a unified theory for general systems is still lacking.

Among these studies, some Bruno et al. (2005); Flomenbom et al. (2005); Flomenbom and Silbey (2006, 2008) focus on transition topology inference; some Deng et al. (2003); Xiang et al. (2006, 2024) focus on transition rate inference with a given transition topology; some Qian and Qian (2000); Li and Qian (2002); Witkoskie and Cao (2006); Tu (2008); Amann et al. (2010); Jia and Chen (2015) investigate nonequilibrium detection based on a two-state coarse-grained trajectory. Recently, several studies have estimated the values of some macroscopic thermodynamic quantities, such as entropy production and cycle affinities, based on partial observations Skinner and Dunkel (2021); Harunari et al. (2022); Van der Meer et al. (2022); Van Vu and Saito (2023); Ghosal and Bisker (2023). In this study, we develop a sufficient statistics approach and set up a theoretical framework of parameter recovery and nonequilibrium detection for a general Markovian system with a given transition topology. Our results depend only on data available from a sufficiently long trajectory of coarse-grained states.

Figure 1: Model. (a) An $N$-state Markovian system with $n$ coarse-grained states $A_1,\cdots,A_n$, each composed of multiple microstates. (b) Illustration of an infinitely long trajectory of coarse-grained states. (c) The ladder model. (d) The cyclic model. (e) The tree model. (f) The linear model.

Model — We consider an $N$-state molecular system modeled by a continuous-time Markov chain $(X_s)_{s\geq 0}$ with microstates $x=1,\cdots,N$ and generator matrix $Q=(q_{xy})$, where $q_{xy}$ denotes the transition rate from state $x$ to state $y$ whenever $x\neq y$ and $q_{xx}=-\sum_{y\neq x}q_{xy}$. Recall that the transition diagram of the Markovian system is a directed graph with vertex set $V=\{1,\cdots,N\}$ and edge set $E=\{(x,y):q_{xy}>0\}$. Experimentally, it often occurs that not all microstates can be observed — what can be observed are often some coarse-grained states, each being composed of multiple microstates. Specifically, we assume that the $N$ microstates can be divided into $n$ coarse-grained states $A_1,\cdots,A_n$ (Fig. 1(a)). We also assume that the observation is an infinitely long trajectory of coarse-grained states (Fig. 1(b)) Flomenbom et al. (2005); Flomenbom and Silbey (2006, 2008).

Based on the coarse-grained clusters, the generator matrix $Q$ can be represented in the block form

Q = \begin{pmatrix} Q_{A_1} & Q_{A_1A_2} & \cdots & Q_{A_1A_n} \\ Q_{A_2A_1} & Q_{A_2} & \cdots & Q_{A_2A_n} \\ \vdots & \vdots & \ddots & \vdots \\ Q_{A_nA_1} & Q_{A_nA_2} & \cdots & Q_{A_n} \end{pmatrix}.   (1)

The steady-state distribution $\pi=(\pi_{A_1},\cdots,\pi_{A_n})$ of the system must satisfy $\pi Q=\mathbf{0}$, where $\mathbf{0}=(0,\cdots,0)$ is the zero vector. For simplicity, we assume that each diagonal block $Q_{A_i}$ has distinct eigenvalues $-\lambda^i_1,\cdots,-\lambda^i_{|A_i|}$ (any matrix with repeated eigenvalues can be approximated to any degree of accuracy by matrices with distinct eigenvalues Horn and Johnson (2012)). Then there exists an invertible matrix $\Phi_{A_i}$ such that $Q_{A_i}=\Phi^{-1}_{A_i}\Lambda_{A_i}\Phi_{A_i}$, where $\Lambda_{A_i}=\mathrm{diag}(-\lambda^i_1,\cdots,-\lambda^i_{|A_i|})$ is a diagonal matrix. Note that each row of $\Phi_{A_i}$ is an eigenvector of $Q_{A_i}$. For convenience, the sum of the elements of each eigenvector is normalized to one, i.e., $\Phi_{A_i}\mathbf{1}^T=\mathbf{1}^T$, where $\mathbf{1}=(1,\cdots,1)$. Moreover, we set

\Lambda_{A_iA_j} = \Phi_{A_i} Q_{A_iA_j} \Phi_{A_j}^{-1} = (\lambda^{ij}_{kl}), \qquad \alpha_{A_i} = \pi_{A_i}\Phi_{A_i}^{-1} = (\alpha^i_k).   (2)

These quantities will play a crucial role in our analysis.
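To make the construction in Eqs. (1)-(2) concrete, the following sketch computes $\pi_{A_i}$, $\Lambda_{A_i}$, $\Lambda_{A_iA_j}$, and $\alpha_{A_i}$ numerically from a given generator matrix and partition. It is not part of the original derivation; the function and variable names (e.g. `decompose`, `parts`) and the example generator are our own, and standard NumPy eigen-decomposition routines are used.

```python
import numpy as np

def decompose(Q, parts):
    """Block quantities of Eqs. (1)-(2) for a generator Q and a list of index
    lists `parts`, one per coarse-grained state A_i (names are ours)."""
    # stationary distribution: pi Q = 0, pi 1^T = 1
    w, V = np.linalg.eig(Q.T)
    pi = V[:, np.argmin(np.abs(w))]
    pi = (pi / pi.sum()).real

    Phi, Lam_diag = [], []
    for idx in parts:
        B = Q[np.ix_(idx, idx)]
        wb, Vb = np.linalg.eig(B.T)                   # rows of Vb.T: left eigenvectors of B
        P = Vb.T / Vb.T.sum(axis=1, keepdims=True)    # normalize so that Phi_i 1^T = 1^T
        Phi.append(P)
        Lam_diag.append(np.diag(wb))                  # Lambda_{A_i} = diag(-lambda_k^i)
    n = len(parts)
    Lam_off = {(i, j): Phi[i] @ Q[np.ix_(parts[i], parts[j])] @ np.linalg.inv(Phi[j])
               for i in range(n) for j in range(n) if i != j}
    alpha = [pi[idx] @ np.linalg.inv(Phi[i]) for i, idx in enumerate(parts)]
    return [pi[idx] for idx in parts], Lam_diag, Lam_off, alpha

# illustrative three-state generator with partition A_1 = {1}, A_2 = {2, 3}
Q = np.array([[-3.0, 1.0, 2.0],
              [2.0, -5.0, 3.0],
              [1.0, 4.0, -5.0]])
pi_blocks, Lam_diag, Lam_off, alpha = decompose(Q, [[0], [1, 2]])
```

When a diagonal block has complex eigenvalues, the returned quantities are complex; this is expected and is precisely what the nonequilibrium criterion discussed below exploits.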

Sufficient statistics — In what follows, we assume that the system has reached a steady state. We next examine which information can be extracted from the infinitely long coarse-grained trajectory (Fig. 1(b)). Clearly, what can be observed from the trajectory are the jump times and the coarse-grained states before and after each jump within any finite period of time. In other words, for any given jump times $0\leq t_1<\cdots<t_m\leq t$ and any coarse-grained states $A_{i_1},\cdots,A_{i_{m+1}}$, we can estimate the following probability (density) since the trajectory is infinitely long Fredkin and Rice (1986):

\mathbb{P}(X_s\in A_{i_1}\ (s<t_1),\cdots,X_s\in A_{i_{m+1}}\ (t_m\leq s\leq t)).   (3)

These probabilities constitute all the statistical information that can be extracted from coarse-grained observations.

We first consider the case where there is no jump before time $t$. Based on the notation in Eq. (2), it is clear that

\mathbb{P}\left(X_s\in A_i\ (0\leq s\leq t)\right) = \pi_{A_i} e^{Q_{A_i}t}\mathbf{1}^T = \alpha_{A_i} e^{\Lambda_{A_i}t}\mathbf{1}^T = \sum_k \alpha^i_k e^{-\lambda^i_k t}.   (4)

Based on coarse-grained observations, we can estimate the probability on the left-hand side of the above equation for each time $t$. Since time $t$ is arbitrary, we can estimate the values of all $\alpha^i_k$ and $\lambda^i_k$, and hence $\alpha_{A_i}$ and $\Lambda_{A_i}$ can be determined. Similarly, if there is only one jump before time $t$, then for any $i\neq j$, we have

\mathbb{P}\left(X_s\in A_i\ (0\leq s<t_1),\ X_s\in A_j\ (t_1\leq s\leq t)\right) = \pi_{A_i} e^{Q_{A_i}t_1} Q_{A_iA_j} e^{Q_{A_j}(t-t_1)}\mathbf{1}^T = \alpha_{A_i} e^{\Lambda_{A_i}t_1} \Lambda_{A_iA_j} e^{\Lambda_{A_j}(t-t_1)}\mathbf{1}^T.   (5)

Since times $t_1$ and $t$ are arbitrary and since we have already determined $\alpha_{A_i}$ and $\Lambda_{A_i}$, we can also determine $\Lambda_{A_iA_j}$ from coarse-grained observations for any $i\neq j$ sup .
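As a quick sanity check, the identity in Eq. (5) can be verified numerically on a small example (Eq. (4) can be checked in the same way). The generator matrix, partition, and time points below are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-3.0, 1.0, 2.0, 0.0],
              [2.0, -6.0, 3.0, 1.0],
              [1.0, 4.0, -7.0, 2.0],
              [0.0, 2.0, 3.0, -5.0]])
parts = [[0, 1], [2, 3]]                          # A_1 = {1,2}, A_2 = {3,4}

w, V = np.linalg.eig(Q.T)                         # stationary distribution pi Q = 0
pi = (V[:, np.argmin(np.abs(w))] / V[:, np.argmin(np.abs(w))].sum()).real

def left_eig(block):
    """Rows of Phi are left eigenvectors of the block, normalized to unit row sum."""
    wb, Vb = np.linalg.eig(block.T)
    return wb, Vb.T / Vb.T.sum(axis=1, keepdims=True)

w1, Phi1 = left_eig(Q[np.ix_(parts[0], parts[0])])
w2, Phi2 = left_eig(Q[np.ix_(parts[1], parts[1])])
alpha1 = pi[parts[0]] @ np.linalg.inv(Phi1)                                   # alpha_{A_1}
Lam12 = Phi1 @ Q[np.ix_(parts[0], parts[1])] @ np.linalg.inv(Phi2)            # Lambda_{A_1 A_2}

t1, t = 0.4, 1.0
lhs = pi[parts[0]] @ expm(Q[np.ix_(parts[0], parts[0])] * t1) \
      @ Q[np.ix_(parts[0], parts[1])] @ expm(Q[np.ix_(parts[1], parts[1])] * (t - t1)) @ np.ones(2)
rhs = alpha1 @ np.diag(np.exp(w1 * t1)) @ Lam12 @ np.diag(np.exp(w2 * (t - t1))) @ np.ones(2)
print(float(lhs), float(np.real(rhs)))            # the two sides of Eq. (5) should agree
```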

Similarly to Eqs. (4) and (5), it can be proved that sup

\mathbb{P}(X_s\in A_{i_1}\ (0\leq s<t_1),\cdots,X_s\in A_{i_{m+1}}\ (t_m\leq s\leq t)) = \pi_{A_{i_1}} e^{Q_{A_{i_1}}t_1}\cdots Q_{A_{i_m}A_{i_{m+1}}} e^{Q_{A_{i_{m+1}}}(t-t_m)}\mathbf{1}^T = \alpha_{A_{i_1}} e^{\Lambda_{A_{i_1}}t_1}\cdots \Lambda_{A_{i_m}A_{i_{m+1}}} e^{\Lambda_{A_{i_{m+1}}}(t-t_m)}\mathbf{1}^T.   (6)

This shows that the probability given in Eq. (3) can be represented in terms of $\alpha_{A_i}$, $\Lambda_{A_i}$, and $\Lambda_{A_iA_j}$. Hence these three quantities contain all the coarse-grained statistical information. In fact, $\alpha_{A_i}$, $\Lambda_{A_i}$, and $\Lambda_{A_iA_j}$ are not independent. Since $Q\mathbf{1}^T=\mathbf{0}^T$, it is clear that $\Lambda\mathbf{1}^T=\mathbf{0}^T$, where

\Lambda = \begin{pmatrix} \Lambda_{A_1} & \Lambda_{A_1A_2} & \cdots & \Lambda_{A_1A_n} \\ \vdots & \vdots & \ddots & \vdots \\ \Lambda_{A_nA_1} & \Lambda_{A_nA_2} & \cdots & \Lambda_{A_n} \end{pmatrix}.   (7)

This implies that $\lambda^i_k=\sum_{j,l}\lambda^{ij}_{kl}$, where $\lambda^{ij}_{kl}$ are the elements of $\Lambda_{A_iA_j}$ defined in Eq. (2). Hence $\Lambda_{A_i}$ can be determined by all $\Lambda_{A_iA_j}$. Since $\pi Q=\mathbf{0}$, we have $\alpha\Lambda=\mathbf{0}$, where $\alpha=(\alpha_{A_1},\cdots,\alpha_{A_n})$. This equation, together with the normalization condition

\alpha\mathbf{1}^T = \sum_{i=1}^n \pi_{A_i}\Phi_{A_i}^{-1}\mathbf{1}^T = \sum_{i=1}^n \pi_{A_i}\mathbf{1}^T = 1,   (8)

shows that $\alpha_{A_i}$ can also be determined by all $\Lambda_{A_iA_j}$.

Recall that a family of statistics that contains all the statistical information of the observations is called a set of sufficient statistics Fisher (1922). Since the probability in Eq. (3) can be represented by $\alpha_{A_i}$, $\Lambda_{A_i}$, and $\Lambda_{A_iA_j}$, and since $\alpha_{A_i}$ and $\Lambda_{A_i}$ can be determined by all $\Lambda_{A_iA_j}$, it is clear that the $\Lambda_{A_iA_j}$ are sufficient statistics for infinitely long coarse-grained observations. In particular, the number of these sufficient statistics is given by

S = \sum_{1\leq i\neq j\leq n}|A_i||A_j| = 2\sum_{1\leq i<j\leq n}|A_i||A_j|.   (9)

Note that based on coarse-grained observations, two systems with different generator matrices but the same $\Lambda_{A_iA_j}$ are statistically indistinguishable.
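The count in Eq. (9) is easy to evaluate for any partitioning; a small helper (our own naming, with illustrative partitions) is:

```python
def num_sufficient_stats(sizes):
    """S = sum_{i != j} |A_i||A_j| = N^2 - sum_i |A_i|^2  (Eq. (9))."""
    N = sum(sizes)
    return N * N - sum(s * s for s in sizes)

# examples: a ladder-type partition with |A_i| = 2 for N = 8, and a two-cluster
# partition with |A_1| = 1, |A_2| = 3 for N = 4 (both illustrative)
print(num_sufficient_stats([2, 2, 2, 2]))   # 48 = N^2 - 2N
print(num_sufficient_stats([1, 3]))         # 6  = 2(N - 1)
```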

Parameter inference — In practice, a crucial question is whether all transition rates of a Markovian system with a given transition topology can be inferred from an infinitely long coarse-grained trajectory. Note that each transition rate $q_{xy}$ of the system corresponds to a directed edge in the transition diagram $(V,E)$. Hence the system has $|E|$ unknown parameters, where $|E|$ denotes the number of elements in the edge set $E$. Note that each $\Lambda_{A_iA_j}$ is uniquely determined by all transition rates; in the following, we write it as $\Lambda_{A_iA_j}(q_{xy})$. Based on coarse-grained observations, we can obtain estimates of the sufficient statistics $\Lambda_{A_iA_j}$, and hence all transition rates should satisfy the following set of equations:

\Lambda_{A_iA_j}(q_{xy}) = \tilde{\Lambda}_{A_iA_j}, \qquad \forall\ 1\leq i\neq j\leq n,   (10)

where $\tilde{\Lambda}_{A_iA_j}$ are the estimates of $\Lambda_{A_iA_j}$. Clearly, Eq. (10) has $|E|$ unknown parameters and $S$ equations, where $S$ is the number of sufficient statistics. Therefore, we obtain the following criteria regarding parameter inference: (i) when $S<|E|$, it is impossible to determine all transition rates of the system; (ii) when $S\geq|E|$, all transition rates can be determined if and only if Eq. (10) has a unique solution. In general, it is very difficult to give a simple criterion for the unique solvability of Eq. (10); however, it can be checked numerically using, e.g., Gröbner basis computation Weispfenning (1992), and can be proved theoretically in some simple examples. Next we focus on three examples.

1) The ladder model (Fig. 1(c)). This model is widely used to describe the allostery of receptors in living cells with strong cooperativity and high sensitivity Monod et al. (1965). Consider a receptor with two conformational states and $n$ ligand binding sites. According to the conformational state and the number of occupied binding sites, the receptor can be modeled by a Markovian system with $N=2n$ microstates. Due to technical limitations, we are often unable to distinguish between the two conformational states. The $N$ microstates and $n$ coarse-grained states are shown in Fig. 1(c). Clearly, we have $|A_1|=\cdots=|A_n|=2$ and there are $|E|=3N-4$ unknown parameters for the system. Note that for any $n\geq 2$, we have

S = N^2 - 2N \geq 3N - 4 = |E|.   (11)

Hence all parameters of the ladder model can be inferred if and only if Eq. (10) has a unique solution. In sup , we show that Eq. (10) is indeed uniquely solvable for the ladder model.

2) The cyclic model (Fig. 1(d)). Many crucial cellular biochemical processes can be modeled as cyclic Markovian systems, such as conformational changes of enzymes and ion channels Cornish-Bowden (2013); Sakmann (2013), the phosphorylation-dephosphorylation cycle Qian (2007), cell cycle progression Jia and Grima (2021), and gene state switching Jia and Li (2023). An $N$-state cyclic model has $|E|=2N$ unknown parameters. If there are only two coarse-grained states ($n=2$), then it is impossible to infer all unknown parameters in the case of $|A_1|=1$ and $|A_2|=N-1$ since $S=2(N-1)<|E|$. For any other coarse-grained partitioning, we have $S\geq 4(N-2)\geq|E|$ and hence parameter inference is generally possible. When $n\geq 3$, we also have $S\geq 4N-6\geq|E|$, and hence a complete parameter recovery can be made if Eq. (10) can be uniquely solved. In sup , we show that Eq. (10) is indeed uniquely solvable for the four-state cyclic model under any coarse-grained partitioning whenever $S\geq|E|$.

3) The tree and linear models (Fig. 1(e),(f)). We finally focus on an $N$-state system whose transition diagram is a tree. For any coarse-grained partitioning, it is easy to check that $S\geq 2(N-1)=|E|$. Hence all transition rates can be inferred if Eq. (10) has a unique solution. In particular, a system with linear transitions (Fig. 1(f)) can be viewed as a tree. The linear model is also widely used in biochemical studies Ullah et al. (2012). In sup , we show that Eq. (10) is indeed uniquely solvable for the linear model if there are only two coarse-grained states ($n=2$) with $|A_1|=1$ and $|A_2|=N-1$.
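The solvability of Eq. (10) can also be probed numerically. The sketch below (our own helpers `lam_blocks`, `build_Q`, and `infer_rates`; the four-state linear example, partition, and rate ranges are illustrative) fits the transition rates on a given edge set to target blocks $\tilde{\Lambda}_{A_iA_j}$ by nonlinear least squares. The fixed eigenvalue ordering inside `lam_blocks` is a pragmatic choice and can introduce discontinuities during the optimization, so convergence to the true rates is not guaranteed from every starting point.

```python
import numpy as np
from scipy.optimize import least_squares

def lam_blocks(Q, parts):
    """Off-diagonal blocks Lambda_{A_i A_j} = Phi_i Q_{A_i A_j} Phi_j^{-1} (Eq. (2)),
    with the eigenvalues of each diagonal block sorted to fix an ordering."""
    Phi = []
    for idx in parts:
        w, V = np.linalg.eig(Q[np.ix_(idx, idx)].T)
        P = V[:, np.argsort(w)].T
        Phi.append(P / P.sum(axis=1, keepdims=True))   # rows (left eigenvectors) sum to one
    n = len(parts)
    return {(i, j): Phi[i] @ Q[np.ix_(parts[i], parts[j])] @ np.linalg.inv(Phi[j])
            for i in range(n) for j in range(n) if i != j}

def build_Q(rates, edges, N):
    """Generator matrix with the given rates placed on the given directed edges."""
    Q = np.zeros((N, N))
    for (x, y), q in zip(edges, rates):
        Q[x, y] = q
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def infer_rates(lam_target, edges, parts, N):
    """Least-squares solution of Eq. (10); uniqueness must be checked separately."""
    def residual(log_q):
        blocks = lam_blocks(build_Q(np.exp(log_q), edges, N), parts)
        r = np.concatenate([(blocks[k] - lam_target[k]).ravel() for k in sorted(lam_target)])
        return np.concatenate([r.real, r.imag])
    sol = least_squares(residual, np.zeros(len(edges)))
    return np.exp(sol.x)

# illustrative test: a four-state linear model with A_1 = {1}, A_2 = {2,3,4}
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
rng = np.random.default_rng(1)
q_true = rng.uniform(1.0, 5.0, len(edges))
target = lam_blocks(build_Q(q_true, edges, 4), [[0], [1, 2, 3]])
print(np.round(q_true, 3))
print(np.round(infer_rates(target, edges, [[0], [1, 2, 3]], 4), 3))
```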

Nonequilibrium identification — When the number of sufficient statistics is less than the number of unknown parameters, it is impossible to determine all transition rates from coarse-grained observations. However, we may still identify whether the system is in an NESS. Our NESS criterion is based on the sufficient statistics $\Lambda_{A_iA_j}$. Recall that once the $\Lambda_{A_iA_j}$ are determined, the $\alpha_{A_i}$ are automatically determined by solving $\alpha\Lambda=\mathbf{0}$ and $\alpha\mathbf{1}^T=1$. In sup , we prove that if the system is in equilibrium, then the entries of $\alpha_{A_i}$ and $\Lambda_{A_iA_j}$ are all real numbers, and the following two conditions must be satisfied:
(i) (coarse-grained probability distribution condition)

\alpha^i_k \geq 0.   (12)

(ii) (coarse-grained detailed balance condition)

\alpha^i_k \lambda^{ij}_{kl} = \alpha^j_l \lambda^{ji}_{lk}.   (13)

Recall that $\alpha\mathbf{1}^T=1$. When the system is in equilibrium, we have $\alpha^i_k\geq 0$ and thus $\alpha=(\alpha_{A_1},\cdots,\alpha_{A_n})$ is a probability distribution. This is why Eq. (12) is called the coarse-grained probability distribution condition. On the other hand, in equilibrium, the system satisfies the detailed balance condition $\pi_x q_{xy}=\pi_y q_{yx}$. Eq. (13) can be viewed as the coarse-grained version of the detailed balance condition. The above result implies that if any one of Eqs. (12) and (13) is violated, then the system must be in an NESS. This gives a general criterion for detecting nonequilibrium based on coarse-grained observations.
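The two conditions can be checked mechanically once estimates of the $\Lambda_{A_iA_j}$ are available (for instance from the decomposition sketch after Eq. (2)). The helper below is our own minimal sketch: it reconstructs the diagonal entries from the row-sum constraint, solves $\alpha\Lambda=\mathbf{0}$ with $\alpha\mathbf{1}^T=1$, and tests Eqs. (12)-(13) up to an illustrative numerical tolerance.

```python
import numpy as np

def equilibrium_conditions(lam, sizes, tol=1e-9):
    """Return (Eq. (12) holds, Eq. (13) holds) given the off-diagonal blocks
    lam[(i, j)] = Lambda_{A_i A_j}; a False in either slot flags an NESS."""
    n = len(sizes)
    off = np.concatenate([[0], np.cumsum(sizes)])
    Lam = np.zeros((off[-1], off[-1]), dtype=complex)
    for (i, j), B in lam.items():
        Lam[off[i]:off[i+1], off[j]:off[j+1]] = B
    np.fill_diagonal(Lam, -Lam.sum(axis=1))            # Lambda 1^T = 0 fixes Lambda_{A_i}
    # alpha solves alpha Lambda = 0 with alpha 1^T = 1
    w, V = np.linalg.eig(Lam.T)
    alpha = V[:, np.argmin(np.abs(w))]
    alpha = alpha / alpha.sum()
    # (i) coarse-grained probability distribution condition (Eq. (12))
    cond12 = bool(np.all(np.abs(alpha.imag) < tol) and np.all(alpha.real > -tol))
    # (ii) coarse-grained detailed balance condition (Eq. (13))
    cond13 = True
    for (i, j), B in lam.items():
        if i < j:
            lhs = alpha[off[i]:off[i+1], None] * B
            rhs = (alpha[off[j]:off[j+1], None] * lam[(j, i)]).T
            cond13 = cond13 and np.allclose(lhs, rhs, atol=tol * max(1.0, np.abs(B).max()))
    return cond12, cond13
```

A typical usage would be to pass the dictionary of off-diagonal blocks and the block sizes produced by the earlier decomposition sketch.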

For a three-state cyclic system with two coarse-grained states $A_1=\{1\}$ and $A_2=\{2,3\}$, we have seen that it is impossible to infer all parameters. In Amann et al. (2010), the authors showed that this system is in an NESS when

TL^2 > LSM - M^2,   (14)

where $L=q_{12}+q_{13}$, $S=q_{21}+q_{23}+q_{31}+q_{32}$, $T=q_{21}q_{31}+q_{21}q_{32}+q_{23}q_{31}$, and $M=q_{12}(q_{23}+q_{31}+q_{32})+q_{13}(q_{21}+q_{23}+q_{32})$ are another set of sufficient statistics for the three-state system that can be determined from coarse-grained observations Amann et al. (2010); Jia and Chen (2015). Our result can be viewed as an extension of Eq. (14) to complex molecular systems with an arbitrary number of microstates and an arbitrary number of coarse-grained states.

In particular, when $N=3$ and $n=2$, our criterion reduces to Eq. (14). This can be seen as follows. First, since $\alpha\Lambda=\mathbf{0}$ and $\Lambda\mathbf{1}^T=\mathbf{0}^T$, it is easy to see that

\alpha^1_1\lambda^{12}_{12} = \alpha^2_2\lambda^2_1 = \alpha^2_2\lambda^{21}_{21}, \qquad \alpha^1_1\lambda^{12}_{13} = \alpha^2_3\lambda^2_2 = \alpha^2_3\lambda^{21}_{31}.   (15)

Hence the coarse-grained detailed balance condition is satisfied. On the other hand, since $\alpha\mathbf{1}^T=\alpha^1_1+\alpha^2_2+\alpha^2_3=1$, it follows from Eq. (15) that

\alpha^1_1 = \frac{\lambda^{21}_{21}\lambda^{21}_{31}}{\lambda^{21}_{21}\lambda^{21}_{31}+\lambda^{12}_{12}\lambda^{21}_{31}+\lambda^{12}_{13}\lambda^{21}_{21}}, \qquad
\alpha^2_2 = \frac{\lambda^{12}_{12}\lambda^{21}_{31}}{\lambda^{21}_{21}\lambda^{21}_{31}+\lambda^{12}_{12}\lambda^{21}_{31}+\lambda^{12}_{13}\lambda^{21}_{21}}, \qquad
\alpha^2_3 = \frac{\lambda^{12}_{13}\lambda^{21}_{21}}{\lambda^{21}_{21}\lambda^{21}_{31}+\lambda^{12}_{12}\lambda^{21}_{31}+\lambda^{12}_{13}\lambda^{21}_{21}}.   (16)

Furthermore, it is easy to check that sup

\lambda^{21}_{21}\lambda^{21}_{31} = T, \qquad \lambda^{12}_{12}\lambda^{21}_{31}+\lambda^{12}_{13}\lambda^{21}_{21} = M, \qquad
\lambda^{12}_{12}\lambda^{21}_{31}\lambda^{12}_{13}\lambda^{21}_{21} = \frac{T(LSM-M^2-TL^2)}{(\lambda^{12}_{13}-\lambda^{12}_{12})^2}.   (17)

Combining Eqs. (16) and (17), we immediately obtain

\alpha^1_1 = \frac{T}{T+M}\geq 0, \qquad \alpha^2_2+\alpha^2_3 = \frac{M}{T+M}\geq 0, \qquad
\alpha^2_2\alpha^2_3 = \frac{T(LSM-M^2-TL^2)}{(T+M)^2(\lambda^{12}_{13}-\lambda^{12}_{12})^2}.   (18)

Since $\alpha^1_1\geq 0$ and $\alpha^2_2+\alpha^2_3\geq 0$, the coarse-grained probability distribution condition is violated if and only if $\alpha^2_2\alpha^2_3<0$, which is obviously equivalent to Eq. (14).
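The equivalence with Eq. (14) can also be spot-checked numerically. The short script below is our own construction (the rate ranges, sample count, and tolerances are arbitrary): it draws random three-state systems, evaluates the Amann et al. inequality and the coarse-grained probability distribution condition, and counts disagreements, which should be zero apart from borderline cases.

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha_A2(Q):
    """alpha_{A_2} = pi_{A_2} Phi_{A_2}^{-1} for the partition A_1 = {1}, A_2 = {2, 3}."""
    w, V = np.linalg.eig(Q.T)                          # stationary distribution pi Q = 0
    pi = (V[:, np.argmin(np.abs(w))] / V[:, np.argmin(np.abs(w))].sum()).real
    wb, Vb = np.linalg.eig(Q[1:, 1:].T)                # left eigen-decomposition of Q_{A_2}
    Phi = Vb.T / Vb.T.sum(axis=1, keepdims=True)
    return pi[1:] @ np.linalg.inv(Phi)

mismatches = 0
for _ in range(1000):
    q12, q13, q21, q23, q31, q32 = rng.uniform(0.5, 5.0, 6)
    Q = np.array([[-(q12 + q13), q12, q13],
                  [q21, -(q21 + q23), q23],
                  [q31, q32, -(q31 + q32)]])
    L = q12 + q13
    S = q21 + q23 + q31 + q32
    T = q21 * q31 + q21 * q32 + q23 * q31
    M = q12 * (q23 + q31 + q32) + q13 * (q21 + q23 + q32)
    amann_ness = T * L**2 > L * S * M - M**2           # Eq. (14)
    a = alpha_A2(Q)
    ours_ness = not (np.all(np.abs(np.imag(a)) < 1e-9) and np.all(np.real(a) > 0))
    mismatches += int(amann_ness != ours_ness)
print("disagreements out of 1000 random rate sets:", mismatches)
```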

We now compare our NESS criterion with the classical criterion based on autocorrelation functions. For any observable $\phi$, recall that the autocorrelation function of the system, $B_\phi(t)=\mathrm{Cov}(\phi(X_0),\phi(X_t))$, is defined as the steady-state covariance between $\phi(X_0)$ and $\phi(X_t)$. For coarse-grained observations, the observable $\phi$ is usually chosen as $\phi(x)=i$ for all $x\in A_i$. It is well-known Qian et al. (2003) that if the system is in equilibrium, then

B_\phi(t) = \sum_{m=1}^{N-1} c_m e^{-\ell_m t}, \qquad \ell_m>0,\ c_m\geq 0,   (19)

where $-\ell_m<0$ are the nonzero eigenvalues of the generator matrix $Q$ and $c_m\geq 0$ are the corresponding coefficients (note that in this case, the autocorrelation function must be monotonic). Hence if there exists some $1\leq m\leq N-1$ such that either $\ell_m$ or $c_m$ is negative or not real, then the system must be in an NESS. Clearly, this NESS criterion is stronger than the criterion based on oscillatory or non-monotonic autocorrelation functions Qian and Qian (2000).
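For completeness, here is a minimal sketch (our own helper, not from the paper) of how the pairs $(\ell_m, c_m)$ in Eq. (19) can be extracted from a known generator via its eigen-decomposition; estimating them from data would instead require fitting the empirical autocorrelation function.

```python
import numpy as np

def autocorr_spectrum(Q, phi, tol=1e-9):
    """Decompose B_phi(t) = sum_m c_m exp(-ell_m t) over the nonzero eigenvalues of Q (Eq. (19))."""
    w, U = np.linalg.eig(Q)                      # right eigenvectors as columns
    Uinv = np.linalg.inv(U)
    wl, Vl = np.linalg.eig(Q.T)                  # stationary distribution: left null vector of Q
    pi = (Vl[:, np.argmin(np.abs(wl))] / Vl[:, np.argmin(np.abs(wl))].sum()).real
    c = ((pi * phi) @ U) * (Uinv @ phi)          # coefficient of exp(w_m t) in E[phi(X_0) phi(X_t)]
    keep = np.abs(w) > tol                       # drop the zero eigenvalue; its term equals (E[phi])^2
    return -w[keep], c[keep]

# usage sketch for a four-state system with A_1 = {1}, A_2 = {2,3,4}: phi = [1, 2, 2, 2];
# the autocorrelation criterion flags an NESS if any ell_m or c_m is negative or not real
```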

In sup , we have proved a stronger result — if Eqs. (12) and (13) hold, then we also have $\ell_m>0$ and $c_m\geq 0$. This implies that all NESS scenarios that can be identified by the autocorrelation criterion can also be identified by our criterion; in other words, our NESS criterion is mathematically stronger than the autocorrelation criterion. In particular, if there are only two coarse-grained states ($n=2$) with $|A_1|=1$ and $|A_2|=N-1$, then the two criteria are equivalent; in this case, the number of sufficient statistics is $S=2(N-1)$ and $\{\ell_1,\cdots,\ell_{N-1},c_1,\cdots,c_{N-1}\}$ is exactly a set of sufficient statistics sup . For other coarse-grained partitionings, the two criteria are in general not equivalent, and our criterion may extend the NESS region significantly beyond the one identified by the autocorrelation criterion.

We stress that our NESS criterion is only a sufficient condition; there may be some NESS scenarios that fail to be detected by our criterion. However, we prove in the End Matter that for any three-state system with two coarse-grained states ($N=3$ and $n=2$), our criterion is also a necessary condition. In other words, if Eqs. (12) and (13) are both satisfied, then among all three-state systems having the same sufficient statistics $\Lambda_{A_iA_j}$, there must exist a system which is in equilibrium.

Finally, as an example, we focus on a four-state fully connected system with two different coarse-grained partitionings (Fig. 2(a),(b)). The system has $12$ unknown parameters and we want to determine whether it is in an NESS. We first consider the partitioning $A_1=\{1\}$ and $A_2=\{2,3,4\}$ (Fig. 2(a)). In this case, our criterion is equivalent to the autocorrelation criterion. Fig. 2(c) illustrates the NESS region in the parameter space that can be identified by our criterion, or equivalently, the autocorrelation criterion (shown in shaded blue), and the region that fails to be identified by the two criteria (shown in green). The red line shows the region of equilibrium states. In this case, an NESS can only be detected in the parameter region far from the red line.

Figure 2: NESS identification for a four-state Markovian system. (a) System with coarse-grained partitioning $A_1=\{1\}$ and $A_2=\{2,3,4\}$. (b) System with coarse-grained partitioning $A_1=\{1,2\}$ and $A_2=\{3,4\}$. (c) Phase diagram in the $q_{14}$-$q_{41}$ plane for the system in (a). (d) Same as (c) but for the system in (b). In (c),(d), the red line shows the region of equilibrium states (EQ). The blue (and shaded blue) area shows the NESS region that can be identified by our criterion. The shaded blue area shows the NESS region that can be identified by the autocorrelation criterion. The green area shows the NESS region that fails to be detected by our criterion. The parameters are chosen as $q_{12}=1$, $q_{13}=q_{21}=2$, $q_{23}=q_{24}=4$, $q_{31}=q_{32}=q_{34}=3$, and $q_{42}=q_{43}=5$.

We then consider the other partitioning, i.e. $A_1=\{1,2\}$ and $A_2=\{3,4\}$ (Fig. 2(b)). Fig. 2(d) shows the NESS regions that can be identified by our criterion (shown in blue) and by the autocorrelation criterion (shown in shaded blue). Clearly, the entire NESS region can be captured by our criterion, but only a small subregion can be detected by the autocorrelation criterion. To gain deeper insight, note that there are $S=6$ sufficient statistics for the partitioning shown in Fig. 2(a), while for the one shown in Fig. 2(b), there are $S=8$ sufficient statistics. More sufficient statistics means that more information can be extracted from coarse-grained observations, which generally leads to a larger NESS region that can be identified by our criterion.

Estimation of sufficient statistics — We emphasize that our methods of parameter inference and nonequilibrium detection are based on an accurate estimation of all the sufficient statistics. In the End Matter, we have proposed a maximum likelihood approach to accurately inferring the sufficient statistics $\Lambda_{A_iA_j}$ as well as the transition rates $q_{xy}$ (when $S\geq|E|$). This method is then validated using synthetic time-trace data for the ladder, cyclic, and linear models generated using stochastic simulations.

Conclusions and discussion — Another crucial question beyond this study is whether the transition topology of the system can also be inferred based on coarse-grained observations. In fact, this is impossible in many cases due to the loss of information during coarse-graining; however, when there are two coarse-grained states, Refs. Bruno et al. (2005); Flomenbom et al. (2005); Flomenbom and Silbey (2006, 2008) have developed a canonical form method of finding all possible transition topologies of the underlying Markovian dynamics. Hence in this paper, we always assume that the transition topology of the system is given.

Here we established a framework of parameter inference and nonequilibrium identification based on coarse-grained observations for a general Markovian system with an arbitrary number of microstates and an arbitrary number of coarse-grained states when the underlying transition topology is given. We provided a criterion for evaluating whether the coarse-grained information is enough for estimating all transition rates and also a criterion for detecting whether the system is in an NESS. Our method is based on extracting a set of sufficient statistics that incorporates all statistical information of an infinitely long coarse-grained trajectory. Using these sufficient statistics, the problems of parameter recovery and nonequilibrium detection can be brought into a unified theoretical framework.

Acknowledgements — C. J. acknowledges support from the National Natural Science Foundation of China with grants No. U2230402 and No. 12271020.

References

  • Cornish-Bowden (2013) A. Cornish-Bowden, Fundamentals of enzyme kinetics (John Wiley & Sons, 2013).
  • Sakmann (2013) B. Sakmann, Single-channel recording (Springer Science & Business Media, 2013).
  • Golding et al. (2005) I. Golding, J. Paulsson, S. M. Zawilski, and E. C. Cox, Cell 123, 1025 (2005).
  • Seifert (2019) U. Seifert, Annual Review of Condensed Matter Physics 10, 171 (2019).
  • Esposito (2012) M. Esposito, Phys. Rev. E 85, 041125 (2012).
  • Teza and Stella (2020) G. Teza and A. L. Stella, Phys. Rev. Lett. 125, 110601 (2020).
  • Skoge et al. (2013) M. Skoge, S. Naqvi, Y. Meir, and N. S. Wingreen, Phys. Rev. Lett. 110, 248102 (2013).
  • Berg (2003) H. C. Berg, Annu. Rev. Biochem. 72, 19 (2003).
  • Qian and Qian (2000) H. Qian and M. Qian, Phys. Rev. Lett. 84, 2271 (2000).
  • Li and Qian (2002) G. Li and H. Qian, Traffic 3, 249 (2002).
  • Witkoskie and Cao (2006) J. B. Witkoskie and J. Cao, J. Phys. Chem. B 110, 19009 (2006).
  • Tu (2008) Y. Tu, Proc. Natl. Acad. Sci. USA 105, 11737 (2008).
  • Amann et al. (2010) C. P. Amann, T. Schmiedl, and U. Seifert, J. Chem. Phys. 132, 041102 (2010).
  • Jia and Chen (2015) C. Jia and Y. Chen, J. Phys. A: Math. Theor. 48, 205001 (2015).
  • Skinner and Dunkel (2021) D. J. Skinner and J. Dunkel, Phys. Rev. Lett. 127, 198101 (2021).
  • Harunari et al. (2022) P. E. Harunari, A. Dutta, M. Polettini, and É. Roldán, Phys. Rev. X 12, 041026 (2022).
  • Van der Meer et al. (2022) J. Van der Meer, B. Ertel, and U. Seifert, Phys. Rev. X 12, 031025 (2022).
  • Van Vu and Saito (2023) T. Van Vu and K. Saito, Phys. Rev. X 13, 011013 (2023).
  • Ghosal and Bisker (2023) A. Ghosal and G. Bisker, J. Phys. D: Appl. Phys. 56, 254001 (2023).
  • Bruno et al. (2005) W. J. Bruno, J. Yang, and J. E. Pearson, Proc. Natl. Acad. Sci. USA 102, 6326 (2005).
  • Flomenbom et al. (2005) O. Flomenbom, J. Klafter, and A. Szabo, Biophys. J. 88, 3780 (2005).
  • Flomenbom and Silbey (2006) O. Flomenbom and R. J. Silbey, Proc. Natl. Acad. Sci. USA 103, 10907 (2006).
  • Flomenbom and Silbey (2008) O. Flomenbom and R. Silbey, J. Chem. Phys. 128 (2008).
  • Deng et al. (2003) Y. Deng, S. Peng, M. Qian, and J. Feng, J. Phys. A: Math. Gen. 36, 1195 (2003).
  • Xiang et al. (2006) X. Xiang, X. Yang, Y. Deng, and J. Feng, J. Phys. A: Math. Gen. 39, 9477 (2006).
  • Xiang et al. (2024) X. Xiang, J. Zhou, Y. Deng, and X. Yang, Chaos 34, 023132 (2024).
  • Horn and Johnson (2012) R. A. Horn and C. R. Johnson, Matrix analysis (Cambridge university press, 2012).
  • Fredkin and Rice (1986) D. R. Fredkin and J. A. Rice, J. Appl. Probab. 23, 208 (1986).
  • (29) Supplemental Material.
  • Fisher (1922) R. A. Fisher, Philosophical transactions of the Royal Society of London. Series A 222, 309 (1922).
  • Weispfenning (1992) V. Weispfenning, J. Symb. Comput. 14, 1 (1992).
  • Monod et al. (1965) J. Monod, J. Wyman, and J.-P. Changeux, J. Mol. Biol. 12, 88 (1965).
  • Qian (2007) H. Qian, Annu. Rev. Phys. Chem. 58, 113 (2007).
  • Jia and Grima (2021) C. Jia and R. Grima, Phys. Rev. X 11, 021032 (2021).
  • Jia and Li (2023) C. Jia and Y. Li, SIAM J. Appl. Math. 83, 1572 (2023).
  • Ullah et al. (2012) G. Ullah, W. J. Bruno, and J. E. Pearson, J. Theor. Biol. 311, 117 (2012).
  • Qian et al. (2003) M. Qian, M.-P. Qian, and X.-J. Zhang, Phys. Lett. A 309, 371 (2003).
  • Kolmogoroff (1936) A. Kolmogoroff, Math. Ann. 112, 155 (1936).
  • Jiao et al. (2024) F. Jiao, J. Li, T. Liu, Y. Zhu, W. Che, L. Bleris, and C. Jia, PLoS Comput. Biol. 20, e1012118 (2024).

End Matter

Appendix A: Necessity of the NESS criterion — In the main text, we proposed an NESS criterion which states that the violation of the coarse-grained probability distribution condition (Eq. (12)) or the coarse-grained detailed balance condition (Eq. (13)) implies the presence of nonequilibrium. Note that this is a sufficient condition for detecting nonequilibrium. Here we prove that for any three-state system with two coarse-grained states $A_1=\{1\}$ and $A_2=\{2,3\}$, the above criterion is also a necessary condition. To this end, we only need to show that if Eqs. (12) and (13) are both satisfied, then among all three-state systems having the same sufficient statistics $\tilde{\Lambda}_{A_iA_j}$, there must exist a system which is in equilibrium. In fact, if Eqs. (12) and (13) hold, then we prove in sup that the entries of $\tilde{\Lambda}_{A_iA_j}$ are all positive. To proceed, we set

\Phi_{A_1}=1, \qquad \Phi_{A_2}=\begin{pmatrix}\frac{o_1\cos\theta}{\sqrt{\tilde{\alpha}_2^2}}&\frac{o_2\sin\theta}{\sqrt{\tilde{\alpha}_2^2}}\\ \frac{-o_1\sin\theta}{\sqrt{\tilde{\alpha}_3^2}}&\frac{o_2\cos\theta}{\sqrt{\tilde{\alpha}_3^2}}\end{pmatrix},   (20)

where $\theta$ is an undetermined constant and

o_1 = \sqrt{\tilde{\alpha}_2^2}\cos\theta - \sqrt{\tilde{\alpha}_3^2}\sin\theta, \qquad o_2 = \sqrt{\tilde{\alpha}_2^2}\sin\theta + \sqrt{\tilde{\alpha}_3^2}\cos\theta.   (21)

Clearly, we have $\Phi_{A_2}\mathbf{1}^T=\mathbf{1}^T$. Moreover, we set

Q = (q_{xy}) = \begin{pmatrix}\Phi_{A_1}^{-1}\tilde{\Lambda}_{A_1}\Phi_{A_1}&\Phi_{A_1}^{-1}\tilde{\Lambda}_{A_1A_2}\Phi_{A_2}\\ \Phi_{A_2}^{-1}\tilde{\Lambda}_{A_2A_1}\Phi_{A_1}&\Phi_{A_2}^{-1}\tilde{\Lambda}_{A_2}\Phi_{A_2}\end{pmatrix}.   (22)

Direct computations show that

q_{12} = \frac{o_1\cos\theta}{\sqrt{\tilde{\alpha}_2^2}}\tilde{\lambda}_{12}^{12} - \frac{o_1\sin\theta}{\sqrt{\tilde{\alpha}_3^2}}\tilde{\lambda}_{13}^{12}, \qquad
q_{13} = \frac{o_2\sin\theta}{\sqrt{\tilde{\alpha}_2^2}}\tilde{\lambda}_{12}^{12} + \frac{o_2\cos\theta}{\sqrt{\tilde{\alpha}_3^2}}\tilde{\lambda}_{13}^{12}, \qquad
q_{21} = \frac{\sqrt{\tilde{\alpha}_2^2}\cos\theta}{o_1}\tilde{\lambda}_{21}^{21} - \frac{\sqrt{\tilde{\alpha}_3^2}\sin\theta}{o_1}\tilde{\lambda}_{31}^{21}, \qquad
q_{31} = \frac{\sqrt{\tilde{\alpha}_2^2}\sin\theta}{o_2}\tilde{\lambda}_{21}^{21} + \frac{\sqrt{\tilde{\alpha}_3^2}\cos\theta}{o_2}\tilde{\lambda}_{31}^{21}, \qquad
q_{23} = \frac{o_2}{o_1}(\tilde{\lambda}^2_2-\tilde{\lambda}^2_1)\sin\theta\cos\theta, \qquad
q_{32} = \frac{o_1}{o_2}(\tilde{\lambda}^2_2-\tilde{\lambda}^2_1)\sin\theta\cos\theta.   (23)

Note that there are two cases: (i) if $\tilde{\lambda}_2^2>\tilde{\lambda}_1^2$, then we choose $\theta$ to be a very small positive number; (ii) if $\tilde{\lambda}_2^2<\tilde{\lambda}_1^2$, then we choose $\theta$ to be a very small negative number. In both cases, it is easy to check that $q_{xy}>0$ for any $x\neq y$ and hence $Q$ is indeed the generator matrix of a Markovian system. According to the definition of $Q$, it is clear that $\Lambda_{A_iA_j}(q_{xy})=\tilde{\Lambda}_{A_iA_j}$. This shows that the $\tilde{\Lambda}_{A_iA_j}$ are the sufficient statistics of the system.

On the other hand, it follows from Eq. (23) that

q_{12}q_{23}q_{31} = (\tilde{\lambda}^2_2-\tilde{\lambda}^2_1)\sin\theta\cos\theta\left(\frac{\cos\theta}{\sqrt{\tilde{\alpha}_2^2}}\tilde{\lambda}_{12}^{12}-\frac{\sin\theta}{\sqrt{\tilde{\alpha}_3^2}}\tilde{\lambda}_{13}^{12}\right)\times\left(\sqrt{\tilde{\alpha}_2^2}\sin\theta\,\tilde{\lambda}_{21}^{21}+\sqrt{\tilde{\alpha}_3^2}\cos\theta\,\tilde{\lambda}_{31}^{21}\right),
q_{13}q_{32}q_{21} = (\tilde{\lambda}^2_2-\tilde{\lambda}^2_1)\sin\theta\cos\theta\left(\frac{\sin\theta}{\sqrt{\tilde{\alpha}_2^2}}\tilde{\lambda}_{12}^{12}+\frac{\cos\theta}{\sqrt{\tilde{\alpha}_3^2}}\tilde{\lambda}_{13}^{12}\right)\times\left(\sqrt{\tilde{\alpha}_2^2}\cos\theta\,\tilde{\lambda}_{21}^{21}-\sqrt{\tilde{\alpha}_3^2}\sin\theta\,\tilde{\lambda}_{31}^{21}\right).   (24)

Since the coarse-grained detailed balance condition holds, we have

\tilde{\alpha}^1_1\tilde{\lambda}^{12}_{12} = \tilde{\alpha}^2_2\tilde{\lambda}^{21}_{21}, \qquad \tilde{\alpha}^1_1\tilde{\lambda}^{12}_{13} = \tilde{\alpha}^2_3\tilde{\lambda}^{21}_{31}.   (25)

This indicates that $\tilde{\alpha}^2_2\tilde{\lambda}^{21}_{21}\tilde{\lambda}^{12}_{13}=\tilde{\alpha}^2_3\tilde{\lambda}^{21}_{31}\tilde{\lambda}^{12}_{12}$. Inserting this into Eq. (24) yields $q_{12}q_{23}q_{31}=q_{13}q_{32}q_{21}$. This shows that the system satisfies the Kolmogorov cyclic condition and hence it is in equilibrium Kolmogoroff (1936).
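The construction can also be verified numerically. The snippet below is our own check, with arbitrary illustrative values of the sufficient statistics chosen to satisfy Eqs. (12)-(13): it builds $Q$ from Eqs. (20)-(22) for a small $\theta$ of the prescribed sign, and confirms the positivity of the off-diagonal rates and the Kolmogorov cyclic condition.

```python
import numpy as np

a1, a2, a3 = 0.2, 0.5, 0.3                       # alpha-tilde_1^1, alpha-tilde_2^2, alpha-tilde_3^2
l12, l13 = 2.0, 3.0                              # lambda-tilde_{12}^{12}, lambda-tilde_{13}^{12}
l21, l31 = a1 * l12 / a2, a1 * l13 / a3          # fixed by coarse-grained detailed balance (Eq. (25))
lam1 = l12 + l13                                 # lambda-tilde_1^1 (row sums of Lambda vanish)
lam2_1, lam2_2 = l21, l31                        # lambda-tilde_1^2, lambda-tilde_2^2

theta = 0.02 if lam2_2 > lam2_1 else -0.02       # small theta with the sign prescribed in the text
s2, s3 = np.sqrt(a2), np.sqrt(a3)
o1 = s2 * np.cos(theta) - s3 * np.sin(theta)     # Eq. (21)
o2 = s2 * np.sin(theta) + s3 * np.cos(theta)
Phi2 = np.array([[ o1 * np.cos(theta) / s2, o2 * np.sin(theta) / s2],
                 [-o1 * np.sin(theta) / s3, o2 * np.cos(theta) / s3]])        # Eq. (20)

Lam11, Lam12 = np.array([[-lam1]]), np.array([[l12, l13]])
Lam21, Lam22 = np.array([[l21], [l31]]), np.diag([-lam2_1, -lam2_2])
Q = np.block([[Lam11,                       Lam12 @ Phi2],
              [np.linalg.inv(Phi2) @ Lam21, np.linalg.inv(Phi2) @ Lam22 @ Phi2]])  # Eq. (22), Phi_{A_1} = 1

print("rows sum to zero:        ", np.allclose(Q.sum(axis=1), 0))
print("off-diagonal rates > 0:  ", np.all(Q[~np.eye(3, dtype=bool)] > 0))
print("Kolmogorov cyclic cond.: ", np.isclose(Q[0, 1] * Q[1, 2] * Q[2, 0],
                                              Q[0, 2] * Q[2, 1] * Q[1, 0]))
```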

Appendix B: Estimation of sufficient statistics — Our methods of parameter inference and nonequilibrium identification depend on an accurate estimation of all the sufficient statistics $\Lambda_{A_iA_j}$. Recall that once we have obtained estimates of $\Lambda_{A_iA_j}$, all the transition rates of the system can be determined by solving Eq. (10), and the system is in an NESS if any one of Eqs. (12) and (13) is violated. In applications, a crucial question is how to accurately estimate $\Lambda_{A_iA_j}$ using time-series measurements of coarse-grained states.

To answer this, here we assume that the coarse-grained states of the system can be observed at multiple discrete time points. The time resolution of the coarse-grained trajectory is assumed to be sufficiently high so that the jump times $0\leq t_1<\cdots<t_m$ of the trajectory can be determined accurately (this is equivalent to saying that the waiting time distributions between coarse-grained states can be measured accurately, an assumption widely used in previous studies Bruno et al. (2005); Flomenbom et al. (2005); Flomenbom and Silbey (2006, 2008)). The corresponding coarse-grained states before these jump times are denoted by $A_{i_1},\cdots,A_{i_m}$, respectively. For any Markovian system, we use the stochastic simulation algorithm to generate synthetic time-series recordings of coarse-grained states with exactly $m$ jumps, where the number of jumps is chosen to be $m=3\times 10^3, 3\times 10^4, 3\times 10^5$. In what follows, we refer to $m$ as the sample size.
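A minimal version of this synthetic-data generation step can be sketched as follows (our own implementation of the standard stochastic simulation algorithm; the function name and the choice of a stationary initial condition are ours):

```python
import numpy as np

def simulate_coarse_trajectory(Q, parts, m, rng=None):
    """Gillespie simulation of the microscopic chain, recording only the jumps
    between coarse-grained states until exactly m such jumps have occurred.
    Returns the jump times t_1 < ... < t_m and the labels of the coarse-grained
    states occupied before each jump."""
    rng = np.random.default_rng() if rng is None else rng
    N = Q.shape[0]
    which = np.empty(N, dtype=int)                    # microstate -> coarse-grained label
    for i, idx in enumerate(parts):
        which[idx] = i
    w, V = np.linalg.eig(Q.T)                         # stationary distribution pi Q = 0
    pi = (V[:, np.argmin(np.abs(w))] / V[:, np.argmin(np.abs(w))].sum()).real
    p0 = np.clip(pi, 0.0, None)
    x = rng.choice(N, p=p0 / p0.sum())                # initial microstate from the stationary law
    t, times, labels = 0.0, [], []
    while len(times) < m:
        rates = np.clip(Q[x], 0.0, None)
        rates[x] = 0.0
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        y = rng.choice(N, p=rates / total)
        if which[y] != which[x]:                      # only jumps between clusters are visible
            times.append(t)
            labels.append(which[x])
        x = y
    return np.array(times), np.array(labels)
```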

We then use a maximum likelihood method to estimate these sufficient statistics. Recall that Eq. (6) gives the probability density of each coarse-grained trajectory; hence for any given jump times $\mathcal{T}=\{t_1,\cdots,t_m\}$ and coarse-grained states $\mathcal{A}=\{A_{i_1},\cdots,A_{i_m}\}$, the likelihood function can be constructed as

L(\Lambda_{A_iA_j}|\mathcal{T},\mathcal{A}) = \alpha_{A_{i_1}} e^{\Lambda_{A_{i_1}}t_1}\cdots\Lambda_{A_{i_{m-1}}A_{i_m}} e^{\Lambda_{A_{i_m}}(t_m-t_{m-1})}\mathbf{1}^T,   (26)

where $\Lambda_{A_iA_j}$ are the sufficient statistics to be estimated, and we have shown that both $\alpha_{A_i}$ and $\Lambda_{A_i}$ are fully determined by $\Lambda_{A_iA_j}$. The estimates of the sufficient statistics can be obtained by maximizing the likelihood function, i.e.

\tilde{\Lambda}_{A_iA_j} = \underset{\Lambda_{A_iA_j}}{\mathrm{arg\,max}}\ L(\Lambda_{A_iA_j}|\mathcal{T},\mathcal{A}).   (27)
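A sketch of how the likelihood in Eq. (26) can be evaluated in practice is given below. The code is our own: it assumes a real-valued parameterization of the off-diagonal blocks (complex blocks would need a complex-aware variant) and uses a running rescaling to avoid numerical underflow. The resulting negative log-likelihood can then be passed to a generic optimizer such as scipy.optimize.minimize to realize Eq. (27).

```python
import numpy as np

def neg_log_likelihood(lam, sizes, times, labels):
    """Negative log of the likelihood in Eq. (26), given the off-diagonal blocks
    lam[(i, j)] = Lambda_{A_i A_j} (assumed real here), the block sizes, the jump
    times t_1 < ... < t_m, and the coarse-grained labels i_1, ..., i_m."""
    n = len(sizes)
    # lambda_k^i = sum_{j,l} lambda_{kl}^{ij}: these fix the diagonal blocks Lambda_{A_i}
    rates = [sum(lam[(i, j)].sum(axis=1) for j in range(n) if j != i) for i in range(n)]
    # alpha from alpha Lambda = 0 with alpha 1^T = 1
    off = np.concatenate([[0], np.cumsum(sizes)])
    Lam = np.zeros((off[-1], off[-1]))
    for (i, j), B in lam.items():
        Lam[off[i]:off[i+1], off[j]:off[j+1]] = B
    np.fill_diagonal(Lam, -np.concatenate(rates))
    w, V = np.linalg.eig(Lam.T)
    a = V[:, np.argmin(np.abs(w))]
    alpha_full = (a / a.sum()).real
    alpha = [alpha_full[off[i]:off[i+1]] for i in range(n)]

    dt = np.diff(np.concatenate([[0.0], times]))
    v, logL = alpha[labels[0]].copy(), 0.0
    for k in range(len(labels)):
        v = v * np.exp(-rates[labels[k]] * dt[k])        # e^{Lambda_{A_i} dt} is diagonal
        if k + 1 < len(labels):
            v = v @ lam[(labels[k], labels[k + 1])]
        s = np.abs(v).sum()                              # rescale to avoid underflow
        logL += np.log(s)
        v = v / s
    return -(logL + np.log(max(v.sum(), 1e-300)))
```

The (times, labels) pair returned by the simulation sketch above can be fed directly to this function for candidate blocks lam inside an optimization loop.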

The estimation accuracy of each entry $\lambda^{ij}_{kl}$ of $\Lambda_{A_iA_j}$ can be measured by the relative error

\kappa(\lambda^{ij}_{kl}) = \frac{\big|\tilde{\lambda}^{ij}_{kl}-\lambda^{ij}_{kl}\big|}{\big|\lambda^{ij}_{kl}\big|}.   (28)

If the relative error is less than $0.2$, then we believe that the estimated value of $\lambda^{ij}_{kl}$ reflects the realistic dynamic properties of the system Jiao et al. (2024). Moreover, let $\eta$ denote the proportion of those $\lambda^{ij}_{kl}$ ($1\leq i\neq j\leq n$, $k\in A_i$, $l\in A_j$) which have a relative error less than $0.2$. If $\eta\geq 75\%$, i.e. $75\%$ of the sufficient statistics have a relative error less than $0.2$, then we believe that an accurate estimation of the sufficient statistics has been made.
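The accuracy measures in Eq. (28) and the proportion $\eta$ are straightforward to compute; for instance (our own helper name and dictionary format):

```python
import numpy as np

def accuracy_fraction(lam_hat, lam_true, threshold=0.2):
    """Fraction eta of sufficient-statistic entries whose relative error (Eq. (28))
    is below the chosen threshold (0.2 in the text)."""
    err = np.concatenate([np.abs(lam_hat[k] - B).ravel() / np.abs(B).ravel()
                          for k, B in lam_true.items()])
    return float(np.mean(err < threshold))
```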

Figure 3: Accuracy of sufficient statistics estimation for the three models. (a) Plot of $\bar{\eta}$ as a function of $N$ and $m$ for the ladder model. Here $\bar{\eta}$, which characterizes the accuracy of estimation, is defined as the sample mean of $\eta$ over $100$ randomly selected parameter sets. (b) Same as (a) but for the cyclic model. (c) Same as (a) but for the linear model. In (a)-(c), all the transition rates $q_{xy}$ are randomly selected so that $\log_{10}q_{xy}\sim U[0,1]$, where $U[0,1]$ denotes the uniform distribution on the interval $[0,1]$. (d) The minimal sample size $m$ required for achieving accurate estimation as a function of $N$ for the three models. Empirically, if $\bar{\eta}\geq 75\%$, then we believe that an accurate estimation is made for the corresponding model.

Next we apply our inference method to three specific models: (i) the ladder model (Fig. 1(c)) with $N/2$ coarse-grained states

A_1=\{1,2\},\cdots,A_{N/2}=\{N-1,N\};   (29)

(ii) the cyclic model (Fig. 1(d)) with two coarse-grained states

A_1=\{1,2\}, \qquad A_2=\{3,\cdots,N\};   (30)

(iii) the linear model (Fig. 1(f)) with two coarse-grained states

A_1=\{1\}, \qquad A_2=\{2,3,\cdots,N\}.   (31)

All these models satisfy $S\geq|E|$ and hence a complete parameter recovery is generally possible. For each model, we perform parameter inference using the maximum likelihood method under $100$ randomly selected parameter sets, which cover large swathes of parameter space.

Let $\bar{\eta}$ denote the sample mean of $\eta$ over all $100$ parameter sets; clearly, it characterizes the estimation accuracy for the corresponding model. Fig. 3(a)-(c) show the value of $\bar{\eta}$ as a function of the number of microstates $N$ and the sample size $m$ for the three models. As expected, an increasing sample size leads to a higher estimation accuracy. Interestingly, we also find that the estimation accuracy, evaluated by $\bar{\eta}$, is insensitive to the number of microstates for the ladder model, while it is very sensitive to the number of microstates for the cyclic and linear models. This is possibly because the ladder model has more coarse-grained states ($n=N/2$) than the cyclic and linear models ($n=2$).

Another crucial question is what sample size is needed for an accurate estimation. Empirically, if $\bar{\eta}\geq 75\%$, then we believe that an accurate estimation is made for the corresponding model. Fig. 3(d) shows the minimal sample size $m$ required for achieving accurate estimation as a function of the number of microstates $N$. For the ladder model, the minimal sample size is roughly $m\approx 3\times 10^4$, insensitive to the number of microstates $N$. For the cyclic and linear models, the minimal sample size increases significantly with $N$. When $N=4$, all three models require a similar sample size of $m\approx 3\times 10^4$ for achieving accurate inference. However, when $N=8$, the cyclic model requires a sample size of $m\approx 5\times 10^5$ and the linear model requires a sample size of $m\approx 10^6$.

Thus far, we have only focused on the estimation of the sufficient statistics $\Lambda_{A_iA_j}$. Once the sufficient statistics have been determined, the transition rates $q_{xy}$ can also be recovered by solving Eq. (10) numerically. Similarly, we can define the counterpart of $\bar{\eta}$ for transition rate estimation. Supplementary Fig. 1 shows the accuracy of transition rate estimation as a function of $N$ and $m$ for the three models. Comparing Supplementary Fig. 1 with Fig. 3, it can be seen that the accuracy of transition rate estimation is similar to but slightly lower than that of sufficient statistics estimation.