
Compound matrices in systems and control theory

Eyal Bar-Shalom and Michael Margaliot. This research was partially supported by a research grant from the Israel Science Foundation (ISF). The authors are with the School of Electrical Engineering, and the Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv 69978, Israel. E-mail: [email protected]
Abstract

The multiplicative and additive compounds of a matrix play an important role in several fields of mathematics including geometry, multi-linear algebra, combinatorics, and the analysis of nonlinear time-varying dynamical systems. There is a growing interest in applications of these compounds, and their generalizations, in systems and control theory. This tutorial paper provides a gentle introduction to these topics with an emphasis on the geometric interpretation of the compounds, and surveys some of their recent applications.

I Introduction

Let $A\in\mathbb{R}^{n\times n}$. Fix $k\in\{1,\dots,n\}$. The $k$-multiplicative and $k$-additive compounds of $A$ play an important role in several fields of applied mathematics. Recently, there has been growing interest in applications of these compounds, and their generalizations, in systems and control theory (see, e.g., [43, 42, 21, 1, 5, 28, 44, 40, 2, 41, 13, 16, 14, 17, 15]).

This tutorial paper reviews the $k$-compounds, focusing on their geometric interpretations, and surveys some of their recent applications in systems and control theory, including $k$-positive systems, $k$-cooperative systems, and $k$-contracting systems.

The results described here are known, albeit never collected in a single paper. The only exception is the new generalization principle described in Section V.

We use standard notation. For a set $S$, $\operatorname{int}(S)$ is the interior of $S$. For scalars $\lambda_i$, $i\in\{1,\dots,n\}$, $\operatorname{diag}(\lambda_1,\dots,\lambda_n)$ is the $n\times n$ diagonal matrix with diagonal entries $\lambda_i$. Column vectors [matrices] are denoted by small [capital] letters. For a matrix $A$, $A^T$ is the transpose of $A$. For a square matrix $B$, $\operatorname{tr}(B)$ [$\det(B)$] is the trace [determinant] of $B$. $B$ is called Metzler if all its off-diagonal entries are non-negative. The positive orthant in $\mathbb{R}^n$ is $\mathbb{R}^n_+=\{x\in\mathbb{R}^n : x_i\geq 0,\; i=1,\dots,n\}$.

II Geometric motivation

$k$-compound matrices provide information on the evolution of $k$-dimensional polygons subject to linear time-varying dynamics. To explain this in a simple setting, consider the LTI system:

\dot{x} = \operatorname{diag}(\lambda_1,\lambda_2,\lambda_3)x,   (1)

with $\lambda_i\in\mathbb{R}$ and $x:\mathbb{R}_+\to\mathbb{R}^3$. Let $e^i$, $i=1,2,3$, denote the $i$th canonical vector in $\mathbb{R}^3$. For $x(0)=e^i$ we have $x(t)=\exp(\lambda_i t)x(0)$. Thus, $\exp(\lambda_i t)$ describes the rate of evolution of the line $e^i$ subject to (1). What about 2D areas? Let $S\subset\mathbb{R}^3$ denote the square generated by $e^i$ and $e^j$, with $i\neq j$. Then $S(t):=x(t,S)$ is the rectangle generated by $\exp(\lambda_i t)e^i$ and $\exp(\lambda_j t)e^j$, so the area of $S(t)$ is $\exp((\lambda_i+\lambda_j)t)$. Similarly, if $B\subset\mathbb{R}^3$ is the 3D box generated by $e^1,e^2$, and $e^3$ then the volume of $B(t):=x(t,B)$ is $\exp((\lambda_1+\lambda_2+\lambda_3)t)$ (see Fig. 1).

Since $\exp(At)=\operatorname{diag}(\exp(\lambda_1 t),\exp(\lambda_2 t),\exp(\lambda_3 t))$, this discussion suggests that it may be useful to have a $3\times 3$ matrix whose eigenvalues are the products of any two eigenvalues of $\exp(At)$, and a $1\times 1$ matrix whose eigenvalue is the product of all three eigenvalues of $\exp(At)$. With this geometric motivation in mind, we turn to recall the notions of the multiplicative and additive compounds of a matrix. For more details and proofs, see, e.g., [11, Ch. 6], [33].

Figure 1: The evolution of lines, areas, and volumes under the LTI (1).
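The geometric picture above can be checked numerically. The following sketch (NumPy; the rates $\lambda_i$ are arbitrary choices) evolves the edges of the unit square and unit box under (1) and compares the resulting area and volume with the predicted exponential rates:

```python
import numpy as np

# Arbitrary rates for the diagonal LTI system (1).
lam = np.array([-0.5, 0.2, -1.0])
t = 2.0

# Edges of the unit square spanned by e^1 and e^2, evolved for time t:
v1 = np.exp(lam[0] * t) * np.array([1.0, 0.0, 0.0])
v2 = np.exp(lam[1] * t) * np.array([0.0, 1.0, 0.0])

# Area of the evolved rectangle; should equal exp((lam_1 + lam_2) t):
area = np.linalg.norm(np.cross(v1, v2))

# Volume of the evolved unit box; should equal exp((lam_1 + lam_2 + lam_3) t):
volume = np.prod(np.exp(lam * t))
```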

III Compound Matrices

Let $A\in\mathbb{C}^{n\times m}$. Fix $k\in\{1,\dots,\min\{n,m\}\}$. Let $Q(k,n)$ denote the set of increasing sequences of $k$ integers in $\{1,\dots,n\}$, ordered lexicographically. For example,

Q(2,3) = \{\{1,2\},\{1,3\},\{2,3\}\}.

For $\alpha\in Q(k,n)$, $\beta\in Q(k,m)$, let $A[\alpha|\beta]$ denote the $k\times k$ submatrix obtained by taking the entries of $A$ in the rows indexed by $\alpha$ and the columns indexed by $\beta$. For example,

A[\{1,2\}|\{2,3\}] = \begin{bmatrix} a_{12} & a_{13}\\ a_{22} & a_{23} \end{bmatrix}.

The minor of $A$ corresponding to $\alpha,\beta$ is $A(\alpha|\beta):=\det(A[\alpha|\beta])$. For example, $Q(n,n)$ includes the single set $\alpha=\{1,\dots,n\}$, and $A(\alpha|\alpha)=\det(A)$.

Definition 1

Let $A\in\mathbb{C}^{n\times m}$ and fix $k\in\{1,\dots,\min\{n,m\}\}$. The $k$-multiplicative compound of $A$, denoted $A^{(k)}$, is the $\binom{n}{k}\times\binom{m}{k}$ matrix that contains all the $k$-order minors of $A$, ordered lexicographically.

For example, if $n=m=3$ and $k=2$ then

A^{(2)} = \begin{bmatrix} A(\{1,2\}|\{1,2\}) & A(\{1,2\}|\{1,3\}) & A(\{1,2\}|\{2,3\})\\ A(\{1,3\}|\{1,2\}) & A(\{1,3\}|\{1,3\}) & A(\{1,3\}|\{2,3\})\\ A(\{2,3\}|\{1,2\}) & A(\{2,3\}|\{1,3\}) & A(\{2,3\}|\{2,3\}) \end{bmatrix}.

In particular, Definition 1 implies that $A^{(1)}=A$, and if $n=m$ then $A^{(n)}=\det(A)$.
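Definition 1 translates directly into code. The following sketch (NumPy; 0-based indices, so the paper's index sets $\{1,\dots,n\}$ become $\{0,\dots,n-1\}$) builds $A^{(k)}$ from all $k$-order minors:

```python
import itertools
import numpy as np

def mult_compound(A, k):
    """k-multiplicative compound A^(k): the matrix of all k-order minors of A,
    with row and column index sets ordered lexicographically."""
    n, m = A.shape
    rows = list(itertools.combinations(range(n), k))
    cols = list(itertools.combinations(range(m), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols]
                     for r in rows])

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
A2 = mult_compound(A, 2)   # a 3x3 matrix of 2x2 minors
```

The two boundary cases stated above, $A^{(1)}=A$ and $A^{(n)}=\det(A)$, fall out immediately.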

Let $A\in\mathbb{C}^{n\times m}$, $B\in\mathbb{C}^{m\times p}$. The Cauchy–Binet formula (see, e.g., [12]) asserts that for any $k\in\{1,\dots,\min\{n,m,p\}\}$,

(AB)^{(k)} = A^{(k)} B^{(k)}.   (2)

Hence the term multiplicative compound. Note that for $n=m=p$, Eq. (2) with $k=n$ reduces to the familiar formula $\det(AB)=\det(A)\det(B)$.

Let $I_s$ denote the $s\times s$ identity matrix. Definition 1 implies that $I_n^{(k)}=I_r$, where $r:=\binom{n}{k}$. Hence, if $A\in\mathbb{R}^{n\times n}$ is non-singular then $(AA^{-1})^{(k)}=I_r$, and combining this with (2) yields $(A^{-1})^{(k)}=(A^{(k)})^{-1}$. In particular, if $A$ is non-singular then so is $A^{(k)}$.
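Both the Cauchy–Binet formula (2) and the inverse property are easy to verify numerically (a sketch; `mult_compound` is the minor-based construction from Definition 1, and the matrices are random choices):

```python
import itertools
import numpy as np

def mult_compound(A, k):
    """k-multiplicative compound: matrix of all k-order minors, lexicographic order."""
    rows = list(itertools.combinations(range(A.shape[0]), k))
    cols = list(itertools.combinations(range(A.shape[1]), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols]
                     for r in rows])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 4))
k = 2

# Cauchy-Binet: (AB)^(k) = A^(k) B^(k)
lhs = mult_compound(A @ B, k)
rhs = mult_compound(A, k) @ mult_compound(B, k)

# For a non-singular square matrix M: (M^{-1})^(k) = (M^(k))^{-1}
M = rng.standard_normal((4, 4))
inv_comp = mult_compound(np.linalg.inv(M), k)
comp_inv = np.linalg.inv(mult_compound(M, k))
```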

The $k$-multiplicative compound has an important spectral property. For $A\in\mathbb{C}^{n\times n}$, let $\lambda_i$, $i=1,\dots,n$, denote the eigenvalues of $A$. Then the eigenvalues of $A^{(k)}$ are all the products

\prod_{\ell=1}^{k} \lambda_{i_\ell}, \text{ with } 1\leq i_1<i_2<\dots<i_k\leq n.   (3)

For example, suppose that $n=3$ and $A=\begin{bmatrix} a_{11}&a_{12}&a_{13}\\ 0&a_{22}&a_{23}\\ 0&0&a_{33} \end{bmatrix}$. Then a calculation gives

A^{(2)} = \begin{bmatrix} a_{11}a_{22} & a_{11}a_{23} & a_{12}a_{23}-a_{13}a_{22}\\ 0 & a_{11}a_{33} & a_{12}a_{33}\\ 0 & 0 & a_{22}a_{33} \end{bmatrix},

so, clearly, the eigenvalues of $A^{(2)}$ are of the form (3).
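The spectral property (3) can also be checked numerically. In this sketch $A$ is taken symmetric so that all eigenvalues are real and sorting is unambiguous:

```python
import itertools
import numpy as np

def mult_compound(A, k):
    """k-multiplicative compound: matrix of all k-order minors, lexicographic order."""
    idx = list(itertools.combinations(range(A.shape[0]), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx] for r in idx])

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A = A + A.T                       # symmetric => real eigenvalues
lam = np.linalg.eigvalsh(A)

# Eigenvalues of A^(2) should be all pairwise products lam_i * lam_j, i < j:
prods = np.sort([lam[i] * lam[j] for i, j in itertools.combinations(range(4), 2)])
eigs2 = np.sort(np.linalg.eigvals(mult_compound(A, 2)).real)
```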

Definition 2

Let $A\in\mathbb{C}^{n\times n}$. The $k$-additive compound matrix of $A$ is defined by:

A^{[k]} := \frac{d}{d\epsilon}(I+\epsilon A)^{(k)}\Big|_{\epsilon=0}.

Note that this implies that $A^{[k]}=\frac{d}{d\epsilon}(\exp(A\epsilon))^{(k)}\big|_{\epsilon=0}$, and also that

(I+\epsilon A)^{(k)} = I + \epsilon A^{[k]} + o(\epsilon).   (4)

In other words, $A^{[k]}$ is the first-order term in the Taylor expansion of $(I+\epsilon A)^{(k)}$.

Let $\lambda_i$, $i=1,\dots,n$, denote the eigenvalues of $A$. Then the eigenvalues of $(I+\epsilon A)^{(k)}$ are the products $\prod_{\ell=1}^{k}(1+\epsilon\lambda_{i_\ell})$, and (4) implies that the eigenvalues of $A^{[k]}$ are all the sums

\sum_{\ell=1}^{k} \lambda_{i_\ell}, \text{ with } 1\leq i_1<i_2<\dots<i_k\leq n.

Another important implication of the definitions above is that for any $A,B\in\mathbb{C}^{n\times n}$ we have

(A+B)^{[k]} = A^{[k]} + B^{[k]}.

This justifies the term additive compound. Moreover, the mapping $A\mapsto A^{[k]}$ is linear.

The following result gives a useful explicit formula for $A^{[k]}$ in terms of the entries $a_{ij}$ of $A$. Recall that any entry of $A^{(k)}$ is a minor $A(\alpha|\beta)$. Thus, it is natural to index the entries of $A^{(k)}$ and $A^{[k]}$ using $\alpha,\beta\in Q(k,n)$.

Proposition 1

Fix $\alpha,\beta\in Q(k,n)$ and let $\alpha=\{i_1,\dots,i_k\}$ and $\beta=\{j_1,\dots,j_k\}$. Then the entry of $A^{[k]}$ corresponding to $(\alpha,\beta)$ is equal to:

1. $\sum_{\ell=1}^{k} a_{i_\ell i_\ell}$, if $i_\ell=j_\ell$ for all $\ell\in\{1,\dots,k\}$;

2. $(-1)^{\ell+m} a_{i_\ell j_m}$, if all the indices in $\alpha$ and $\beta$ agree, except for a single index $i_\ell\neq j_m$; and

3. $0$, otherwise.

Example 1

For $A\in\mathbb{R}^{4\times 4}$ and $k=3$, Prop. 1 yields

A^{[3]} = \begin{bmatrix} a_{11}+a_{22}+a_{33} & a_{34} & -a_{24} & a_{14}\\ a_{43} & a_{11}+a_{22}+a_{44} & a_{23} & -a_{13}\\ -a_{42} & a_{32} & a_{11}+a_{33}+a_{44} & a_{12}\\ a_{41} & -a_{31} & a_{21} & a_{22}+a_{33}+a_{44} \end{bmatrix}.

The entry in the second row and fourth column of $A^{[3]}$ corresponds to $(\alpha|\beta)=(\{1,2,4\}|\{2,3,4\})$. As $\alpha$ and $\beta$ agree in all indices except for the index $i_1=1$ and $j_2=3$, this entry is equal to $(-1)^{1+2}a_{13}=-a_{13}$.

Note that Prop. 1 implies in particular that $A^{[n]}=\operatorname{tr}(A)$.
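The defining derivative in Definition 2 suggests a direct numerical construction of $A^{[k]}$ (a finite-difference sketch; for exact entries one would implement the sign rule of Prop. 1 instead):

```python
import itertools
import numpy as np

def mult_compound(A, k):
    """k-multiplicative compound: matrix of all k-order minors, lexicographic order."""
    idx = list(itertools.combinations(range(A.shape[0]), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx] for r in idx])

def add_compound(A, k, eps=1e-6):
    """k-additive compound A^[k] via the defining derivative
    d/d(eps) (I + eps*A)^(k) at eps = 0 (central finite difference)."""
    I = np.eye(A.shape[0])
    return (mult_compound(I + eps * A, k) - mult_compound(I - eps * A, k)) / (2 * eps)

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A3 = add_compound(A, 3)   # should reproduce the 4x4 pattern of Example 1
```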

The next section describes applications of compound matrices for dynamical systems described by ODEs. For more details and proofs, see [33, 29].

IV Compound Matrices and ODEs

Fix an interval $[a,b]\subset\mathbb{R}$. Let $A:[a,b]\to\mathbb{R}^{n\times n}$ be a continuous matrix function, and consider the LTV system:

\dot{x}(t) = A(t)x(t), \quad x(a)=x_0.   (5)

The solution is given by $x(t)=\Phi(t,a)x(a)$, where $\Phi(t,a)$ is the solution at time $t$ of the matrix differential equation

\dot{\Phi}(s) = A(s)\Phi(s), \quad \Phi(a)=I_n.   (6)

Fix $k\in\{1,\dots,n\}$ and let $r:=\binom{n}{k}$. A natural question is: how do the $k$-order minors of $\Phi(t)$ evolve in time? The next result provides a beautiful formula for the evolution of $\Phi^{(k)}(t):=(\Phi(t))^{(k)}$.

Proposition 2

If $\Phi$ satisfies (6) then

\frac{d}{dt}\Phi^{(k)}(t) = A^{[k]}(t)\Phi^{(k)}(t), \quad \Phi^{(k)}(a)=I_r,   (7)

where $A^{[k]}(t):=(A(t))^{[k]}$.

Thus, the $k\times k$ minors of $\Phi$ also satisfy an LTV system. In particular, if $A(t)\equiv A$ and $a=0$ then $\Phi(t)=\exp(At)$, so $\Phi^{(k)}(t)=(\exp(At))^{(k)}$ and (7) gives

(\exp(At))^{(k)} = \exp(A^{[k]}t).
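This identity is easy to confirm numerically. The sketch below uses an eigendecomposition-based matrix exponential (adequate for the random, diagonalizable matrix used here; not a general-purpose `expm`) together with the finite-difference construction of $A^{[k]}$ from Definition 2:

```python
import itertools
import numpy as np

def mult_compound(A, k):
    idx = list(itertools.combinations(range(A.shape[0]), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx] for r in idx])

def add_compound(A, k, eps=1e-6):
    I = np.eye(A.shape[0])
    return (mult_compound(I + eps * A, k) - mult_compound(I - eps * A, k)) / (2 * eps)

def expm(M):
    """Matrix exponential via eigendecomposition (fine for diagonalizable M)."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
t = 0.7
lhs = mult_compound(expm(A * t), 2)   # (exp(At))^(2)
rhs = expm(add_compound(A, 2) * t)    # exp(A^[2] t)
```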

Note also that for $k=n$, Prop. 2 yields

\frac{d}{dt}\det(\Phi(t)) = \operatorname{tr}(A(t))\det(\Phi(t)),

which is the Abel–Jacobi–Liouville identity.

Roughly speaking, Prop. 2 implies that under the LTV dynamics (5), $k$-dimensional polygons evolve according to the dynamics (7).

We now turn to consider the nonlinear system:

\dot{x}(t) = f(t,x).   (8)

For the sake of simplicity, we assume that the initial time is zero and that the system admits a convex and compact state space $\Omega$. We also assume that $f\in C^1$. The Jacobian of the vector field $f$ is $J(t,x):=\frac{\partial}{\partial x}f(t,x)$.

Compound matrices can be used to analyze (8) via an LTV system called the variational equation associated with (8). To define it, fix $a,b\in\Omega$. Let $z(t):=x(t,a)-x(t,b)$, and for $s\in[0,1]$, let $\gamma(s):=sx(t,a)+(1-s)x(t,b)$, i.e., the line connecting $x(t,b)$ and $x(t,a)$. Then

\dot{z}(t) = f(t,x(t,a)) - f(t,x(t,b)) = \int_0^1 \frac{\partial}{\partial s} f(t,\gamma(s))\,ds,

and this gives the variational equation:

\dot{z}(t) = A^{ab}(t) z(t),   (9)

where

A^{ab}(t) := \int_0^1 J(t,\gamma(s))\,ds.   (10)

We can use the results above to describe a powerful approach for deriving useful “$k$-generalizations” of important classes of dynamical systems, including cooperative systems [36], contracting systems [3, 24], and diagonally stable systems [20].

V $k$-generalizations of dynamical systems

Consider the LTV (5). Suppose that $A(t)$ satisfies a specific property P; for example, P may be that $A(t)$ is Metzler for all $t$ (so the LTV is positive), or that $\mu(A(t))\leq-\eta<0$ for all $t$, where $\mu:\mathbb{R}^{n\times n}\to\mathbb{R}$ is a matrix measure (so the LTV is contracting). Fix $k\in\{1,\dots,n\}$. We say that the LTV satisfies the $k$-property if $A^{[k]}$ (rather than $A$) satisfies property P. For example, the LTV is $k$-positive if $A^{[k]}(t)$ is Metzler for all $t$; the LTV is $k$-contracting if $\mu(A^{[k]}(t))\leq-\eta<0$ for all $t$; and so on.

This generalization approach makes sense for two reasons. First, when $k=1$, $A^{[k]}$ reduces to $A^{[1]}=A$, so the $k$-property is clearly a generalization of the property. Second, we know that $A^{[k]}$ has a clear geometric meaning, as it describes the evolution of $k$-dimensional polygons along the dynamics.

The same idea can be applied to the nonlinear system (8) using the variational equation (9). For example, if $\mu(J(t,x))\leq-\eta<0$ for all $t\geq 0$ and $x\in\Omega$ then (8) is contracting: the distance between any two solutions (in the norm that induces $\mu$) decays at an exponential rate. If we replace this by the condition $\mu(J^{[k]}(t,x))\leq-\eta<0$ for all $t\geq 0$ and $x\in\Omega$ then (8) is called $k$-contracting. Roughly speaking, this means that the volume of $k$-dimensional polygons decays to zero exponentially along the flow of the nonlinear system. We now turn to describe several such $k$-generalizations.

VI $k$-contracting systems

$k$-contracting systems were introduced in [42] (see also the unpublished preprint [26] for some preliminary ideas). For $k=1$, these reduce to contracting systems. This generalization was motivated in part by the seminal work of Muldowney [29], who considered nonlinear systems that, in the new terminology, are $2$-contracting. He derived several interesting results for time-invariant $2$-contracting systems. For example, every bounded trajectory of a time-invariant, nonlinear, $2$-contracting system converges to an equilibrium (but, unlike in the case of contracting systems, the equilibrium is not necessarily unique).

For the sake of simplicity, we introduce the ideas in the context of an LTV system. The analysis of nonlinear systems is based on assuming that their variational equation is a $k$-contracting LTV system.

Recall that a vector norm $|\cdot|:\mathbb{R}^n\to\mathbb{R}_+$ induces a matrix norm $\|\cdot\|:\mathbb{R}^{n\times n}\to\mathbb{R}_+$ by:

\|A\| := \max_{|x|=1} |Ax|,

and a matrix measure $\mu(\cdot):\mathbb{R}^{n\times n}\to\mathbb{R}$ by:

\mu(A) := \lim_{\epsilon\downarrow 0} \frac{\|I+\epsilon A\|-1}{\epsilon}.

For $p\in\{1,2,\infty\}$, let $|\cdot|_p:\mathbb{R}^n\to\mathbb{R}_+$ denote the $L_p$ vector norm, that is, $|x|_1:=\sum_{i=1}^n |x_i|$, $|x|_2:=\sqrt{x^Tx}$, and $|x|_\infty:=\max_{i\in\{1,\dots,n\}} |x_i|$. The induced matrix measures are [39]:

\mu_1(A) = \max_{j\in\{1,\dots,n\}}\Big(a_{jj} + \sum_{i\neq j} |a_{ij}|\Big),   (11)
\mu_2(A) = \lambda_1\big((A+A^T)/2\big),
\mu_\infty(A) = \max_{i\in\{1,\dots,n\}}\Big(a_{ii} + \sum_{j\neq i} |a_{ij}|\Big),

where $\lambda_i(S)$ denotes the $i$-th largest eigenvalue of the symmetric matrix $S$, that is,

\lambda_1(S) \geq \lambda_2(S) \geq \cdots \geq \lambda_n(S).
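The three measures in (11) are one-liners to implement (a sketch; NumPy, with the test matrix an arbitrary choice):

```python
import numpy as np

def mu_1(A):
    """L1 matrix measure: max over columns j of a_jj + sum_{i != j} |a_ij|."""
    n = A.shape[0]
    return max(A[j, j] + sum(abs(A[i, j]) for i in range(n) if i != j)
               for j in range(n))

def mu_2(A):
    """L2 matrix measure: largest eigenvalue of (A + A^T)/2."""
    return np.linalg.eigvalsh((A + A.T) / 2)[-1]

def mu_inf(A):
    """L-infinity matrix measure: max over rows i of a_ii + sum_{j != i} |a_ij|."""
    n = A.shape[0]
    return max(A[i, i] + sum(abs(A[i, j]) for j in range(n) if j != i)
               for i in range(n))

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])
```

For this $A$, $\mu_1(A)=\max(-2+0.5,\,-3+1)=-1.5<0$, so the LTI system $\dot{x}=Ax$ is contracting w.r.t. the $L_1$ norm.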
Definition 3

The LTV (5) is called $k$-contracting with respect to (w.r.t.) the norm $|\cdot|$ if

\mu(A^{[k]}(t)) \leq -\eta < 0 \text{ for all } t\geq 0,   (12)

where $\mu$ is the matrix measure induced by $|\cdot|$.

Note that for $k=1$, condition (12) reduces to the standard infinitesimal condition for contraction [3]. For the $L_p$ norms, with $p\in\{1,2,\infty\}$, this condition is easy to check using the explicit expressions for $\mu_1$, $\mu_2$, and $\mu_\infty$. This carries over to $k$-contraction, as combining Prop. 1 with (11) gives [29]:

\mu_1(A^{[k]}) = \max_{\alpha\in Q(k,n)}\Big(\sum_{p=1}^{k} a_{\alpha_p,\alpha_p} + \sum_{j\notin\alpha}\big(|a_{j,\alpha_1}|+\cdots+|a_{j,\alpha_k}|\big)\Big),
\mu_2(A^{[k]}) = \sum_{i=1}^{k} \lambda_i\big((A+A^T)/2\big),
\mu_\infty(A^{[k]}) = \max_{\alpha\in Q(k,n)}\Big(\sum_{p=1}^{k} a_{\alpha_p,\alpha_p} + \sum_{j\notin\alpha}\big(|a_{\alpha_1,j}|+\cdots+|a_{\alpha_k,j}|\big)\Big).

For $k=n$, $A^{[n]}$ is the scalar $\operatorname{tr}(A)$, so condition (12) becomes $\operatorname{tr}(A(t))\leq-\eta<0$ for all $t\geq 0$.
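The explicit expression for $\mu_1(A^{[k]})$ can be cross-checked against a direct construction of $A^{[k]}$ (a sketch; `add_compound` uses the finite-difference definition, so the comparison is up to a small numerical tolerance):

```python
import itertools
import numpy as np

def mult_compound(A, k):
    idx = list(itertools.combinations(range(A.shape[0]), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx] for r in idx])

def add_compound(A, k, eps=1e-6):
    I = np.eye(A.shape[0])
    return (mult_compound(I + eps * A, k) - mult_compound(I - eps * A, k)) / (2 * eps)

def mu_1(A):
    n = A.shape[0]
    return max(A[j, j] + sum(abs(A[i, j]) for i in range(n) if i != j)
               for j in range(n))

def mu_1_compound(A, k):
    """mu_1(A^[k]) directly from the entries of A, without building A^[k]."""
    n = A.shape[0]
    return max(sum(A[i, i] for i in alpha)
               + sum(abs(A[j, i]) for j in range(n) if j not in alpha for i in alpha)
               for alpha in itertools.combinations(range(n), k))

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
```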

Combining Coppel’s inequality [6] with (7) yields the following result.

Proposition 3

If the LTV (5) is $k$-contracting then

\|\Phi^{(k)}(t)\| \leq \exp(-\eta t)\|\Phi^{(k)}(0)\| = \exp(-\eta t)

for all $t\geq 0$.

Geometrically, this means that under the LTV dynamics the volume of $k$-dimensional polygons converges to zero exponentially. The next example illustrates this.

Example 2

Consider the LTV (5) with $n=2$ and $A(t)=\begin{bmatrix}-1&0\\ -2\cos(t)&0\end{bmatrix}$. The corresponding transition matrix is $\Phi(t)=\begin{bmatrix}\exp(-t)&0\\ -1+\exp(-t)(\cos(t)-\sin(t))&1\end{bmatrix}$. This implies that the LTV is uniformly stable, and that for any $x(0)\in\mathbb{R}^2$ we have

\lim_{t\to\infty} x(t,x(0)) = \begin{bmatrix}0\\ x_2(0)-x_1(0)\end{bmatrix}.

The LTV is not contracting w.r.t. any norm, as there is more than a single equilibrium. However, $A^{[2]}(t)=\operatorname{tr}(A(t))\equiv-1$, so the system is $2$-contracting. Let $S\subset\mathbb{R}^2$ denote the unit square, and let $S(t):=x(t,S)$, that is, the evolution at time $t$ of the unit square under the dynamics. Fig. 2 depicts $S(t)+2t$ for several values of $t$, where the shift by $2t$ is for the sake of clarity. It may be seen that the area of $S(t)$ decays with $t$, and that $S(t)$ converges to a line.

Figure 2: Evolution of the unit square in Example 2.
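A short check of Example 2 (a sketch; the closed-form $\Phi(t)$ is taken from the text, and the derivative is verified by a central difference):

```python
import numpy as np

def A(t):
    return np.array([[-1.0, 0.0],
                     [-2.0 * np.cos(t), 0.0]])

def Phi(t):
    """Closed-form transition matrix of Example 2."""
    return np.array([[np.exp(-t), 0.0],
                     [-1.0 + np.exp(-t) * (np.cos(t) - np.sin(t)), 1.0]])

t = 0.7
h = 1e-6
dPhi = (Phi(t + h) - Phi(t - h)) / (2 * h)   # numerical d/dt Phi(t)
```

Since $A^{[2]}(t)=\operatorname{tr}(A(t))\equiv-1$, the Abel–Jacobi–Liouville identity gives $\det(\Phi(t))=\exp(-t)$: the area of the unit square contracts exponentially even though the system is not contracting.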

As noted above, time-invariant $2$-contracting systems have a “well-ordered” asymptotic behaviour [29, 23], and this has been used to derive a global analysis of important models from epidemiology (see, e.g., [22]). A recent paper [30] extended some of these results to systems that are not necessarily $2$-contracting, but can be represented as serial interconnections of $k$-contracting systems, with $k\in\{1,2\}$.

VII $\alpha$-compounds and $\alpha$-contracting systems

A recent paper [44] defined generalizations called the $\alpha$-multiplicative compound and the $\alpha$-additive compound of a matrix, where $\alpha$ is a real number. Let $A\in\mathbb{C}^{n\times n}$ be non-singular. If $\alpha=k+s$, where $k\in\{1,\dots,n-1\}$ and $s\in(0,1)$, then the $\alpha$-multiplicative compound of $A$ is defined by:

A^{(\alpha)} := (A^{(k)})^{1-s} \otimes (A^{(k+1)})^{s},

where $\otimes$ denotes the Kronecker product. This is a kind of “multiplicative interpolation” between $A^{(k)}$ and $A^{(k+1)}$. For example, $A^{(2.1)}=(A^{(2)})^{0.9}\otimes(A^{(3)})^{0.1}$. The $\alpha$-additive compound is defined just like the $k$-additive compound, that is,

A^{[\alpha]} := \frac{d}{d\epsilon}(I+\epsilon A)^{(\alpha)}\Big|_{\epsilon=0},

and it was shown in [44] that this yields

A^{[\alpha]} = \big((1-s)A^{[k]}\big) \oplus \big(sA^{[k+1]}\big),

where $\oplus$ denotes the Kronecker sum.

The system (8) is called $\alpha$-contracting w.r.t. the norm $|\cdot|$ if

\mu(J^{[\alpha]}(t,x)) \leq -\eta < 0,   (13)

for all $t\geq 0$ and all $x$ in the state space [44].

Using this notion, it is possible to restate the seminal results of Douady and Oesterlé [7] as follows.

Theorem 1

[44] Suppose that (8) is $\alpha$-contracting for some $\alpha\in[1,n]$. Then any compact and strongly invariant set of the dynamics has a Hausdorff dimension smaller than $\alpha$.

Roughly speaking, the dynamics contracts any set whose Hausdorff dimension is larger than $\alpha$.

The next example, adapted from [44], shows how these notions can be used to “de-chaotify” a nonlinear dynamical system by feedback.

Example 3

Thomas’ cyclically symmetric attractor [38, 4] is a popular example of a chaotic system. It is described by:

\dot{x}_1 = \sin(x_2) - bx_1,
\dot{x}_2 = \sin(x_3) - bx_2,   (14)
\dot{x}_3 = \sin(x_1) - bx_3,

where $b>0$ is the dissipation constant. The convex and compact set $D:=\{x\in\mathbb{R}^3 : b|x|_\infty\leq 1\}$ is an invariant set of the dynamics.

Fig. 3 depicts the solution of the system emanating from $\begin{bmatrix}1&-2&1\end{bmatrix}^T$ for $b=0.1$. Note the symmetric strange attractor.

The Jacobian $J_f$ of the vector field in (14) is

J_f(x) = \begin{bmatrix} -b & \cos(x_2) & 0\\ 0 & -b & \cos(x_3)\\ \cos(x_1) & 0 & -b \end{bmatrix},

and thus

J_f^{[2]}(x) = \begin{bmatrix} -2b & \cos(x_3) & 0\\ 0 & -2b & \cos(x_2)\\ -\cos(x_1) & 0 & -2b \end{bmatrix},

and $J_f^{[3]}(x)=\operatorname{tr}(J_f(x))=-3b$. Since $b>0$, this implies that the system is $3$-contracting w.r.t. any norm. Let $\alpha=2+s$, with $s\in(0,1)$. Then

J_f^{[\alpha]}(x) = \big((1-s)J_f^{[2]}(x)\big) \oplus \big(sJ_f^{[3]}(x)\big)
= \begin{bmatrix} -(2+s)b & (1-s)\cos(x_3) & 0\\ 0 & -(2+s)b & (1-s)\cos(x_2)\\ -(1-s)\cos(x_1) & 0 & -(2+s)b \end{bmatrix}.

This implies that

\mu_1(J_f^{[\alpha]}(x)) \leq 1-2b-s(b+1), \text{ for all } x\in D.

We conclude that for any $b\in(0,1/2)$ the system is $(2+s)$-contracting for any $s>\frac{1-2b}{1+b}$.
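The bound on $\mu_1(J_f^{[\alpha]})$ can be probed numerically by sampling points in $D$ (a sketch; the values of $b$, $s$, and the sample size are arbitrary choices):

```python
import numpy as np

def mu_1(A):
    n = A.shape[0]
    return max(A[j, j] + sum(abs(A[i, j]) for i in range(n) if i != j)
               for j in range(n))

def Jf_alpha(x, b, s):
    """J_f^[alpha](x) for alpha = 2 + s, from the closed form in the text."""
    c1, c2, c3 = np.cos(x)
    return np.array([[-(2 + s) * b, (1 - s) * c3, 0.0],
                     [0.0, -(2 + s) * b, (1 - s) * c2],
                     [-(1 - s) * c1, 0.0, -(2 + s) * b]])

b, s = 0.1, 0.9
bound = 1 - 2 * b - s * (b + 1)
rng = np.random.default_rng(5)
samples = rng.uniform(-1 / b, 1 / b, size=(1000, 3))   # points in D
worst = max(mu_1(Jf_alpha(x, b, s)) for x in samples)
```

With $b=0.1$ the condition requires $s>(1-2b)/(1+b)\approx 0.727$, so $s=0.9$ gives a strictly negative bound, i.e., the system is $2.9$-contracting.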

We now show how $\alpha$-contraction can be used to design a partial-state controller for the system, guaranteeing that the closed-loop system has a “well-ordered” behaviour. Suppose that the closed-loop system is:

\dot{x} = f(x) + g(x),

where $g$ is the controller. Let $\alpha=2+s$, with $s\in(0,1)$. The Jacobian of the closed-loop system is $J_{cl}:=J_f+J_g$, so

\mu_1(J_{cl}^{[\alpha]}) = \mu_1(J_f^{[\alpha]}+J_g^{[\alpha]}) \leq \mu_1(J_f^{[\alpha]}) + \mu_1(J_g^{[\alpha]}) \leq 1-2b-s(b+1)+\mu_1(J_g^{[\alpha]}).

This implies that the closed-loop system is $\alpha$-contracting if

\mu_1(J_g^{[\alpha]}(x)) < s(b+1)+2b-1 \text{ for all } x\in D.   (15)

Consider, for example, the controller $g(x_1,x_2)=c\operatorname{diag}(1,1,0)x$, with gain $c<0$. Then $J_g^{[\alpha]}=c\operatorname{diag}(2,1+s,1+s)$, and for any $c<0$ condition (15) becomes

(1+s)c < s(b+1)+2b-1.   (16)

This provides a simple recipe for determining the gain $c$ so that the closed-loop system is $(2+s)$-contracting. For example, when $s\to 0$, Eq. (16) yields $c<2b-1$, and this guarantees that the closed-loop system is $2$-contracting. Recall that in a $2$-contracting system every nonempty omega limit set is a single equilibrium, thus ruling out chaotic attractors and even non-trivial limit cycles [23]. Fig. 4 depicts the behaviour of the closed-loop system with $b=0.1$ and $c=2b-1.1$. The closed-loop system is thus $2$-contracting, and as expected every solution converges to an equilibrium.

Figure 3: A trajectory emanating from $x(0)=\begin{bmatrix}1&-2&1\end{bmatrix}^T$.
Figure 4: Several trajectories of the closed-loop system. The circles denote the initial conditions of the trajectories.

VIII $k$-positive systems

Ref. [40] introduced the notions of $k$-positive and $k$-cooperative systems. The LTV (5) is called $k$-positive if $A^{[k]}(t)$ is Metzler for all $t$. For $k=1$, this reduces to requiring that $A(t)$ is Metzler for all $t$. In this case the system is positive, i.e., the flow maps $\mathbb{R}^n_+$ to $\mathbb{R}^n_+$ (and also $\mathbb{R}^n_-:=-\mathbb{R}^n_+$ to $\mathbb{R}^n_-$) [10]. In other words, the flow maps the set of vectors with zero sign variations to itself.

$k$-positive systems map the set of vectors with up to $k-1$ sign variations to itself. To explain this, we recall some definitions and results from the theory of totally positive (TP) matrices, that is, matrices whose minors are all positive [9, 31].

For a vector $x\in\mathbb{R}^n\setminus\{0\}$, let $s^-(x)$ denote the number of sign variations in $x$ after deleting all its zero entries. For example, $s^-(\begin{bmatrix}-1&0&0&2&-3\end{bmatrix}^T)=2$. We define $s^-(0):=0$. For a vector $x\in\mathbb{R}^n$, let $s^+(x)$ denote the maximal possible number of sign variations in $x$ after setting every zero entry of $x$ to either $-1$ or $+1$. For example, $s^+(\begin{bmatrix}-1&0&0&2&-3\end{bmatrix}^T)=4$. These definitions imply that $0\leq s^-(x)\leq s^+(x)\leq n-1$ for all $x\in\mathbb{R}^n$.
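The sign-variation counts $s^-$ and $s^+$ are straightforward to compute (a sketch; $s^+$ enumerates all $\pm 1$ completions of the zero entries, which is fine for short vectors):

```python
import itertools

def s_minus(x):
    """Number of sign variations after deleting all zero entries; s^-(0) = 0."""
    y = [v for v in x if v != 0]
    return sum(1 for a, b in zip(y, y[1:]) if a * b < 0)

def s_plus(x):
    """Maximal number of sign variations over all ways of replacing each
    zero entry by -1 or +1."""
    zeros = [i for i, v in enumerate(x) if v == 0]
    best = 0
    for signs in itertools.product((-1.0, 1.0), repeat=len(zeros)):
        y = list(x)
        for i, sgn in zip(zeros, signs):
            y[i] = sgn
        best = max(best, sum(1 for a, b in zip(y, y[1:]) if a * b < 0))
    return best

x = [-1.0, 0.0, 0.0, 2.0, -3.0]   # the example vector from the text
```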

For any $k\in\{1,\dots,n\}$, define the sets

P_-^k := \{z\in\mathbb{R}^n : s^-(z)\leq k-1\},
P_+^k := \{z\in\mathbb{R}^n : s^+(z)\leq k-1\}.   (17)

In other words, these are the sets of all vectors with up to $k-1$ sign variations. Then $P_-^k$ is closed, and it can be shown that $P_+^k=\operatorname{int}(P_-^k)$. For example,

P_-^1 = \mathbb{R}_+^n \cup \mathbb{R}_-^n, \qquad P_+^1 = \operatorname{int}(\mathbb{R}_+^n) \cup \operatorname{int}(\mathbb{R}_-^n).
Definition 4

The LTV (5) is called $k$-positive on an interval $[a,b]$ if for any $a<t_0<b$,

x(t_0)\in P_-^k \implies x(t,x(t_0))\in P_-^k \text{ for all } t_0\leq t<b,

and is called strongly $k$-positive if

x(t_0)\in P_-^k \implies x(t,x(t_0))\in P_+^k \text{ for all } t_0<t<b.

In other words, the sets of vectors with up to $k-1$ sign variations are invariant sets of the dynamics.

An important property of TP matrices is their sign variation diminishing property: if $A\in\mathbb{R}^{n\times n}$ is TP and $x\in\mathbb{R}^n\setminus\{0\}$ then $s^+(Ax)\leq s^-(x)$. In other words, multiplying a vector by a TP matrix can only decrease the number of sign variations. For our purposes, we need a more specialized result. Recall that $A\in\mathbb{R}^{n\times n}$ is called sign-regular of order $k$ if its minors of order $k$ are all non-positive or all non-negative, and strictly sign-regular of order $k$ if its minors of order $k$ are all positive or all negative.

Proposition 4

[5] Let $A\in\mathbb{R}^{n\times n}$ be a non-singular matrix. Pick $k\in\{1,\dots,n\}$. Then the following two conditions are equivalent:

1. For any $x\in\mathbb{R}^n$ with $s^-(x)\leq k-1$, we have $s^-(Ax)\leq k-1$.

2. $A$ is sign-regular of order $k$.

Also, the following two conditions are equivalent:

I. For any $x\in\mathbb{R}^n\setminus\{0\}$ with $s^-(x)\leq k-1$, we have $s^+(Ax)\leq k-1$.

II. $A$ is strictly sign-regular of order $k$.

These tools allow us to characterize the behaviour of $k$-positive LTV systems.

Theorem 2

The LTV (5) is $k$-positive on $[a,b]$ iff $A^{[k]}(t)$ is Metzler for all $t\in(a,b)$. It is strongly $k$-positive on $[a,b]$ iff $A^{[k]}(t)$ is Metzler for all $t\in(a,b)$, and $A^{[k]}(t)$ is irreducible for all $t\in(a,b)$ except, perhaps, at isolated time points.

The proof is simple. Consider, for example, the second assertion in the theorem. The Metzler and irreducibility assumptions imply that the matrix differential system (7) is a positive linear system and, furthermore, that all the entries of $\Phi^{(k)}(t,t_0)$ are positive for all $t>t_0$. Thus, $\Phi(t,t_0)$ is strictly sign-regular of order $k$ for all $t>t_0$. Since $x(t,x(t_0))=\Phi(t,t_0)x(t_0)$, applying Prop. 4 completes the proof.

This line of reasoning demonstrates a general and useful principle: given conditions on $A^{[k]}$, we can apply standard tools from dynamical systems theory to the “$k$-compound dynamics” (7), and deduce results on the behaviour of the solution $x(t)$ of (5).

A natural question is: when is $A^{[k]}$ a Metzler matrix? This can be answered using Prop. 1 in terms of sign-pattern conditions on the entries $a_{ij}$ of $A$. This is useful because in fields like chemistry and systems biology, exact values of various parameters are typically unknown, but their signs may be inferred from various properties of the system [37].

Proposition 5

Let $A\in\mathbb{R}^{n\times n}$ with $n\geq 3$. Then

1. $A^{[n-1]}$ is Metzler iff $a_{ij}\geq 0$ for all $i,j$ with $i-j$ odd, and $a_{ij}\leq 0$ for all $i,j$ with $i\neq j$ and $i-j$ even;

2. for any odd $k$ in the range $1<k<n-1$, $A^{[k]}$ is Metzler iff $a_{1n},a_{n1}\geq 0$, $a_{ij}\geq 0$ for all $|i-j|=1$, and $a_{ij}=0$ for all $1<|i-j|<n-1$;

3. for any even $k$ in the range $1<k<n-1$, $A^{[k]}$ is Metzler iff $a_{1n},a_{n1}\leq 0$, $a_{ij}\geq 0$ for all $|i-j|=1$, and $a_{ij}=0$ for all $1<|i-j|<n-1$.

In Case 1), there exists a non-singular matrix $T$ such that $-TAT^{-1}$ is Metzler. In other words, there exists a coordinate transformation such that in the new coordinates the dynamics is competitive. Thus, $k$-positive systems, with $k\in\{1,\dots,n-1\}$, may be viewed as a kind of interpolation between cooperative and competitive systems. In Case 2), $A$ is in particular Metzler. Case 3) is illustrated in the next example.

Example 4

Consider the case n=3 and A=\begin{bmatrix}-1&1&-2\\ 0&1&0.1\\ -3&0&1\end{bmatrix}. Note that A is not Metzler, yet A^{[2]}=\begin{bmatrix}0&0.1&2\\ 0&0&1\\ 3&0&2\end{bmatrix} is Metzler (and irreducible). Thm. 2 guarantees that for any x(0) with s^{-}(x(0))\leq 1, we have

s^{-}(x(t,x(0)))\leq 1\text{ for all }t\geq 0. (18)

Fig. 5 depicts s^{-}(x(t,x(0)))=s^{-}(\exp(At)x(0)) for x(0)=\begin{bmatrix}4&-21&-1\end{bmatrix}^{T}. Note that s^{-}(x(0))=1. It may be seen that s^{-}(x(t,x(0))) decreases and then increases, but always satisfies the bound (18).

Figure 5: s^{-}(x(t,x(0))) as a function of t.
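The bound (18) can also be checked by direct simulation. In the sketch below (Python with SciPy; the function s_minus, our own helper, counts sign variations after deleting zero entries), we propagate x(0) through \exp(At) on a time grid and verify that s^{-}(x(t,x(0))) never exceeds 1:

```python
import numpy as np
from scipy.linalg import expm

def s_minus(x, tol=1e-9):
    # s^-(x): number of sign variations in x after deleting zero entries
    signs = [np.sign(v) for v in x if abs(v) > tol]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

A = np.array([[-1.0, 1, -2], [0, 1, 0.1], [-3, 0, 1]])
x0 = np.array([4.0, -21, -1])
print(s_minus(x0))  # 1
variations = [s_minus(expm(A * t) @ x0) for t in np.linspace(0, 5, 500)]
print(max(variations) <= 1)  # True, in accordance with (18)
```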

VIII-A Totally positive differential systems

A matrix A\in\mathbb{R}^{n\times n} is called a Jacobi matrix if A is tri-diagonal with positive entries on the super- and sub-diagonals. An immediate implication of Prop. 5 is that A^{[k]} is Metzler and irreducible for all k\in\{1,\dots,n-1\} iff A is Jacobi. It then follows that for any t>0 the matrices (\exp(At))^{(k)}, k=1,\dots,n, are positive, that is, \exp(At) is TP for all t>0. Combining this with Thm. 2 yields the following.
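This total positivity can be observed numerically. The sketch below (Python with NumPy/SciPy; mult_compound, which computes the matrix of all k\times k minors, is our own helper, and the specific Jacobi matrix is an arbitrary illustrative choice) checks that every minor of \exp(At) is positive for several values of t>0:

```python
import itertools
import numpy as np
from scipy.linalg import expm

def mult_compound(A, k):
    # k-multiplicative compound: all k x k minors, k-subsets in lexicographic order
    n = A.shape[0]
    subsets = list(itertools.combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in subsets]
                     for r in subsets])

# tri-diagonal with positive super- and sub-diagonal entries (a Jacobi matrix)
A = (np.diag([-1.0, 0.5, -2.0, 1.0])
     + np.diag([1.0, 2.0, 0.5], k=1)
     + np.diag([0.3, 1.0, 2.0], k=-1))
for t in (0.1, 1.0, 2.0):
    E = expm(A * t)
    is_TP = all(mult_compound(E, k).min() > 0 for k in range(1, 5))
    print(t, is_TP)  # True for every t > 0
```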

Proposition 6

[33] The following two conditions are equivalent.

  1. A is Jacobi.

  2. for any x_{0}\in\mathbb{R}^{n}\setminus\{0\} the solution of the LTI \dot{x}(t)=Ax(t), x(0)=x_{0}, satisfies

     s^{+}(x(t,x_{0}))\leq s^{-}(x_{0})\text{ for all }t>0.

In other words, s^{-}(x(t,x_{0})) and also s^{+}(x(t,x_{0})) are non-increasing functions of t, and may thus be considered as piecewise-constant Lyapunov functions for the dynamics.

Prop. 6 was proved by Schwarz [33], who considered only linear systems. It was recently shown [28] that important results on the asymptotic behaviour of time-invariant and periodic time-varying nonlinear systems whose Jacobian is a Jacobi matrix for all t,x [34, 35] follow from the fact that the associated variational equation is a totally positive LTV.

IX k-cooperative systems

We now review the applications of k-positivity to the time-invariant nonlinear system:

\dot{x}=f(x), (19)

with f\in C^{1}. Let J(x):=\frac{\partial}{\partial x}f(x). We assume that the trajectories of (19) evolve on a convex and compact state-space \Omega\subseteq\mathbb{R}^{n}.

Recall that (19) is called cooperative if J(x) is Metzler for all x\in\Omega. In other words, the variational equation associated with (19) is positive. The slightly stronger condition of strong cooperativity has far-reaching implications. By Hirsch’s quasi-convergence theorem [36], almost every bounded trajectory converges to the set of equilibria.

It is natural to define k-cooperativity by requiring that the variational equation associated with (19) is k-positive.

Definition 5

[40] The nonlinear system (19) is called [strongly] k-cooperative if the associated LTV (9) is [strongly] k-positive for any a,b\in\Omega.

Note that for k=1 this reduces to the definition of a cooperative [strongly cooperative] dynamical system.

One immediate implication of Definition 5 is the existence of certain invariant sets of the dynamics.

Proposition 7

Suppose that (19) is k-cooperative. Pick a,b\in\Omega. Then

a-b\in P^{k}_{-}\implies x(t,a)-x(t,b)\in P^{k}_{-}\text{ for all }t\geq 0.

If, furthermore, 0\in\Omega and 0 is an equilibrium point of (19), i.e. f(0)=0, then

a\in P^{k}_{-}\implies x(t,a)\in P^{k}_{-}\text{ for all }t\geq 0.

The sign-pattern conditions in Prop. 5 can be used to provide simple-to-verify sufficient conditions for [strong] k-cooperativity of (19). Indeed, if J(x) satisfies a sign-pattern condition for all x\in\Omega, then the integral of J in the variational equation (9) satisfies the same sign pattern, and thus so does A^{ab}. The next example, adapted from [40], illustrates this.

Example 5

Ref. [8] studied the nonlinear system

\dot{x}_{1} =f_{1}(x_{1},x_{n}),
\dot{x}_{i} =f_{i}(x_{i-1},x_{i},x_{i+1}),\quad i=2,\dots,n-1,
\dot{x}_{n} =f_{n}(x_{n-1},x_{n}), (20)

with the following assumptions: the state-space \Omega\subseteq\mathbb{R}^{n} is convex, f_{i}\in C^{n-1}, i=1,\dots,n, and there exist \delta_{i}\in\{-1,1\}, i=1,\dots,n, such that

\delta_{1}\frac{\partial}{\partial x_{n}}f_{1}(x) >0,
\delta_{2}\frac{\partial}{\partial x_{1}}f_{2}(x),\;\delta_{3}\frac{\partial}{\partial x_{3}}f_{2}(x) >0,
\vdots
\delta_{n-1}\frac{\partial}{\partial x_{n-2}}f_{n-1}(x),\;\delta_{n}\frac{\partial}{\partial x_{n}}f_{n-1}(x) >0,
\delta_{n}\frac{\partial}{\partial x_{n-1}}f_{n}(x) >0,

for all x\in\Omega. This is a generalization of the monotone cyclic feedback system analyzed in [25]. As noted in [8], we may assume without loss of generality that \delta_{2}=\delta_{3}=\dots=\delta_{n}=1 and \delta_{1}\in\{-1,1\}. Then the Jacobian of (20) has the form

J(x)=\begin{bmatrix}*&0&0&0&\dots&0&0&\operatorname{sgn}(\delta_{1})\\ >0&*&>0&0&\dots&0&0&0\\ 0&>0&*&>0&\dots&0&0&0\\ &&&&\vdots&&&\\ 0&0&0&0&\dots&0&>0&*\end{bmatrix},

for all x\in\Omega. Here * denotes “don’t care”. Note that J(x) is irreducible for all x\in\Omega.

If \delta_{1}=1 then J(x) is Metzler, so the system is strongly 1-cooperative.

If \delta_{1}=-1 then J(x) satisfies the sign pattern in Case 3 in Prop. 5, so the system is strongly 2-cooperative. (If n is odd then J(x) also satisfies the sign pattern in Case 1, so there is a coordinate transformation for which the system is also strongly competitive.)
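For a concrete instance with n=4 and \delta_{1}=-1, this can be verified numerically. The sketch below (Python/NumPy; all helper names are ours, the “don’t care” diagonal entries are filled with arbitrary values, and A^{[k]} is computed by a central-difference approximation) checks that J^{[2]} is Metzler and irreducible:

```python
import itertools
import numpy as np

def mult_compound(A, k):
    # k-multiplicative compound: all k x k minors, k-subsets in lexicographic order
    n = A.shape[0]
    subsets = list(itertools.combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in subsets]
                     for r in subsets])

def add_compound(A, k, h=1e-4):
    # k-additive compound: derivative of (I + hA)^{(k)} at h = 0 (central difference)
    I = np.eye(A.shape[0])
    return (mult_compound(I + h * A, k) - mult_compound(I - h * A, k)) / (2 * h)

def is_metzler(B, tol=1e-6):
    off = B - np.diag(np.diag(B))
    return off.min() >= -tol

def is_irreducible(B, tol=1e-6):
    # B is irreducible iff its adjacency graph is strongly connected
    n = B.shape[0]
    G = (np.abs(B) > tol).astype(float) + np.eye(n)
    return (np.linalg.matrix_power(G, n - 1) > 0).all()

# Jacobian with the cyclic feedback structure of (20), n = 4, delta_1 = -1;
# the diagonal ("don't care") entries are chosen arbitrarily
J = np.array([[-0.5, 0.0,  0.0, -1.0],
              [ 1.0, 0.2,  1.0,  0.0],
              [ 0.0, 1.0, -1.0,  1.0],
              [ 0.0, 0.0,  1.0,  0.3]])
J2 = add_compound(J, 2)
print(is_metzler(J2), is_irreducible(J2))  # True True
```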

The main result in [40] is that strongly 2-cooperative systems satisfy a strong Poincaré-Bendixson property.

Theorem 3

Suppose that (19) is strongly 2-cooperative. Pick a\in\Omega. If the omega limit set \omega(a) does not include an equilibrium then it is a closed orbit.

The proof of this result is based on the seminal results of Sanchez [32]. Yet, it is considerably stronger than the main result in [32], as it applies to any trajectory emanating from \Omega, and not only to so-called pseudo-ordered trajectories (see the definition in [32]).

The Poincaré-Bendixson property is useful because often it can be combined with a local analysis near the equilibrium points to provide a global picture of the dynamics. For a recent application of Thm. 3 to a model from systems biology, see [27].

X Conclusion

k-compound matrices describe the evolution of k-dimensional polygons along an LTV dynamics. This geometric property has important consequences in systems and control theory. This holds both for LTVs and for time-varying nonlinear systems, as their variational equation is an LTV.

Due to space limitations, we considered here only a partial list of applications. Another application, for example, is based on generalizing diagonal stability of the LTI \dot{x}=Ax to k-diagonal stability by requiring that there exists a diagonal and positive-definite matrix D such that DA^{[k]}+(A^{[k]})^{T}D is negative-definite [43].
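As a minimal numerical illustration (Python/NumPy sketch; the example matrix and the candidate D=I are our own choices, which succeed here simply because A^{[2]} happens to be symmetric and strictly diagonally dominant with negative diagonal), 2-diagonal stability can be tested directly from the definition:

```python
import itertools
import numpy as np

def mult_compound(A, k):
    # k-multiplicative compound: all k x k minors, k-subsets in lexicographic order
    n = A.shape[0]
    subsets = list(itertools.combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in subsets]
                     for r in subsets])

def add_compound(A, k, h=1e-4):
    # k-additive compound: derivative of (I + hA)^{(k)} at h = 0 (central difference)
    I = np.eye(A.shape[0])
    return (mult_compound(I + h * A, k) - mult_compound(I - h * A, k)) / (2 * h)

# a symmetric Jacobi matrix; its 2-additive compound is again symmetric
A = np.array([[-2.0,  1.0,  0.0],
              [ 1.0, -3.0,  1.0],
              [ 0.0,  1.0, -4.0]])
A2 = add_compound(A, 2)
D = np.eye(3)  # candidate diagonal positive-definite matrix
M = D @ A2 + A2.T @ D
print(np.linalg.eigvalsh(M).max() < 0)  # True: the LTI is 2-diagonally stable
```

In general, of course, finding a suitable D requires solving a linear matrix inequality rather than guessing D=I.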

Another interesting line of research is based on analyzing systems with inputs and outputs. A SISO system is called externally k-positive if any input with up to k sign variations induces an output with up to k sign variations [13, 16, 14, 17, 15]. For LTIs with a zero initial condition the input-output mapping is described by a convolution with the impulse response, and then external k-positivity is related to interesting results in statistics [18] and the theory of infinite-dimensional linear operators [19].

References

  • [1] R. Alseidi, M. Margaliot, and J. Garloff, “On the spectral properties of nonsingular matrices that are strictly sign-regular for some order with applications to totally positive discrete-time systems,” J. Math. Anal. Appl., vol. 474, pp. 524–543, 2019.
  • [2] R. Alseidi, M. Margaliot, and J. Garloff, “Discrete-time k-positive linear systems,” IEEE Trans. Automat. Control, vol. 66, no. 1, pp. 399–405, 2021.
  • [3] Z. Aminzare and E. D. Sontag, “Contraction methods for nonlinear systems: A brief introduction and some open problems,” in Proc. 53rd IEEE Conf. on Decision and Control, Los Angeles, CA, 2014, pp. 3835–3847.
  • [4] V. Basios, C. G. Antonopoulos, and A. Latifi, “Labyrinth chaos: Revisiting the elegant, chaotic, and hyperchaotic walks,” Chaos, vol. 30, no. 11, p. 113129, 2020.
  • [5] T. Ben-Avraham, G. Sharon, Y. Zarai, and M. Margaliot, “Dynamical systems with a cyclic sign variation diminishing property,” IEEE Trans. Automat. Control, vol. 65, pp. 941–954, 2020.
  • [6] W. A. Coppel, Stability and Asymptotic Behavior of Differential Equations.   Boston, MA: D. C. Heath, 1965.
  • [7] A. Douady and J. Oesterlé, “Dimension de Hausdorff des attracteurs,” C. R. Acad. Sc. Paris, vol. 290, pp. 1135–1138, 1980.
  • [8] A. S. Elkhader, “A result on a feedback system of ordinary differential equations,” J. Dyn. Diff. Equat., vol. 4, no. 3, pp. 399–418, 1992.
  • [9] S. M. Fallat and C. R. Johnson, Totally Nonnegative Matrices.   Princeton, NJ: Princeton University Press, 2011.
  • [10] L. Farina and S. Rinaldi, Positive Linear Systems: Theory and Applications.   John Wiley, 2000.
  • [11] M. Fiedler, Special Matrices and Their Applications in Numerical Mathematics, 2nd ed.   Mineola, NY: Dover Publications, 2008.
  • [12] D. Grinberg, “Notes on the combinatorial fundamentals of algebra,” 2020. [Online]. Available: https://arxiv.org/abs/2008.09862
  • [13] C. Grussler, T. B. Burghi, and S. Sojoudi, “Internally Hankel-positive systems,” 2021, arXiv preprint arXiv:2103.06962.
  • [14] C. Grussler, T. Damm, and R. Sepulchre, “Balanced truncation of k-positive systems,” 2020, arXiv preprint arXiv:2006.13333.
  • [15] C. Grussler and R. Sepulchre, “Strongly unimodal systems,” in Proc. 18th Euro. Control Conf., 2019, pp. 3273–3278.
  • [16] ——, “Variation diminishing Hankel operators,” in Proc. 59th IEEE Conf. on Decision and Control, 2020, pp. 4529–4534.
  • [17] ——, “Variation diminishing linear time-invariant systems,” 2020, arXiv preprint arXiv:2006.10030.
  • [18] I. Ibragimov, “On the composition of unimodal distributions,” Theory of Probability & Its Applications, vol. 1, no. 2, p. 255–260, 1956.
  • [19] S. Karlin, Total Positivity, Volume 1.   Stanford, CA: Stanford University Press, 1968.
  • [20] E. Kaszkurewicz and A. Bhaya, Matrix Diagonal Stability in Systems and Computation.   New York, NY: Springer, 2000.
  • [21] R. Katz, M. Margaliot, and E. Fridman, “Entrainment to subharmonic trajectories in oscillatory discrete-time systems,” Automatica, vol. 116, p. 108919, 2020.
  • [22] M. Y. Li and J. S. Muldowney, “Global stability for the SEIR model in epidemiology,” Math. Biosciences, vol. 125, no. 2, pp. 155–164, 1995.
  • [23] ——, “On R. A. Smith’s autonomous convergence theorem,” Rocky Mountain J. Math., vol. 25, no. 1, pp. 365–378, 1995.
  • [24] W. Lohmiller and J.-J. E. Slotine, “On contraction analysis for non-linear systems,” Automatica, vol. 34, pp. 683–696, 1998.
  • [25] J. Mallet-Paret and H. L. Smith, “The Poincaré-Bendixson theorem for monotone cyclic feedback systems,” J. Dyn. Differ. Equ., vol. 2, no. 4, pp. 367–421, 1990.
  • [26] I. R. Manchester and J.-J. E. Slotine, “Combination properties of weakly contracting systems,” 2014. [Online]. Available: https://arxiv.org/abs/1408.5174
  • [27] M. Margaliot and E. D. Sontag, “Compact attractors of an antithetic integral feedback system have a simple structure,” 2019. [Online]. Available: https://www.biorxiv.org/content/early/2019/12/08/868000
  • [28] ——, “Revisiting totally positive differential systems: A tutorial and new results,” Automatica, vol. 101, pp. 1–14, 2019.
  • [29] J. S. Muldowney, “Compound matrices and ordinary differential equations,” Rocky Mountain J. Math., vol. 20, no. 4, pp. 857–872, 12 1990.
  • [30] R. Ofir, M. Margaliot, Y. Levron, and J.-J. Slotine, “Serial interconnections of 1-contracting and 2-contracting systems,” 2021, submitted.
  • [31] A. Pinkus, Totally Positive Matrices.   Cambridge, UK: Cambridge University Press, 2010.
  • [32] L. A. Sanchez, “Cones of rank 2 and the Poincaré-Bendixson property for a new class of monotone systems,” J. Diff. Eqns., vol. 246, no. 5, pp. 1978–1990, 2009.
  • [33] B. Schwarz, “Totally positive differential systems,” Pacific J. Math., vol. 32, no. 1, pp. 203–229, 1970.
  • [34] J. Smillie, “Competitive and cooperative tridiagonal systems of differential equations,” SIAM J. Math. Anal., vol. 15, pp. 530–534, 1984.
  • [35] H. L. Smith, “Periodic tridiagonal competitive and cooperative systems of differential equations,” SIAM J. Math. Anal., vol. 22, no. 4, pp. 1102–1109, 1991.
  • [36] ——, Monotone Dynamical Systems: An Introduction to the Theory of Competitive and Cooperative Systems, ser. Mathematical Surveys and Monographs.   Providence, RI: Amer. Math. Soc., 1995, vol. 41.
  • [37] E. D. Sontag, “Monotone and near-monotone biochemical networks,” Systems and Synthetic Biology, vol. 1, pp. 59–87, 2007.
  • [38] R. Thomas, “Deterministic chaos seen in terms of feedback circuits: Analysis, synthesis, “labyrinth chaos”,” Int. J. Bifurc. Chaos., vol. 9, no. 10, pp. 1889–1905, 1999.
  • [39] M. Vidyasagar, Nonlinear Systems Analysis.   Englewood Cliffs, NJ: Prentice Hall, 1978.
  • [40] E. Weiss and M. Margaliot, “A generalization of linear positive systems with applications to nonlinear systems: Invariant sets and the Poincaré-Bendixson property,” Automatica, vol. 123, p. 109358, 2021.
  • [41] E. Weiss and M. Margaliot, “Is my system of ODEs k-cooperative?” IEEE Control Systems Letters, vol. 5, no. 1, pp. 73–78, 2021.
  • [42] C. Wu, I. Kanevskiy, and M. Margaliot, “k-order contraction: theory and applications,” 2020, submitted. [Online]. Available: https://arxiv.org/abs/2008.10321
  • [43] C. Wu and M. Margaliot, “Diagonal stability of discrete-time k-positive linear systems with applications to nonlinear systems,” 2020, submitted. [Online]. Available: https://arxiv.org/abs/2102.02144
  • [44] C. Wu, R. Pines, M. Margaliot, and J.-J. Slotine, “Generalization of the multiplicative and additive compounds of square matrices and contraction in the Hausdorff dimension,” 2020, arXiv preprint arXiv:2012.13441.