
Multi-Agent Recurrent Rendezvous Using Drive-Based Motivation

Craig Thompson and Paul Reverdy
Abstract

Recent papers have introduced the Motivation Dynamics framework, which uses bifurcations to encode decision-making behavior in an autonomous mobile agent. In this paper, we consider the multi-agent extension of the Motivation Dynamics framework and show how the framework can be extended to encode persistent multi-agent rendezvous behaviors. We analytically characterize the bifurcation properties of the resulting system, and numerically show that it exhibits complex recurrent behavior suggestive of a strange attractor.

1 Introduction

Consider a group of people monitoring wildlife in a large area. Each person has a different location where they monitor the nest or den of some local animal, say a songbird. An individual will record the bird, make notes on its behavior, and otherwise gather information. Then, once sufficient information has been gathered, all members of the group rendezvous with each other to compare notes. Then, the group splits up again to gather more information. Abstractly, this is a problem of recurrent rendezvous alternating with designated tasks. In this paper we focus on developing a control scheme to automatically stabilize such a recurrent rendezvous behavior for a system of autonomous mobile robots.

The multi-agent rendezvous problem sits at the intersection of robot motion planning and distributed control. In the problem, multiple autonomous robots must coordinate and arrive at a location in space which is not predefined. There may be constraints on what an agent knows about the space or the other agents. Additional challenges such as obstacles, adversaries, or noise may be present.

The multi-agent rendezvous problem has seen significant attention in recent decades [1, 5, 14, 3], with applications such as coverage and exploration [13, 22], persistent recharging [15, 16], and simultaneous location and mapping (SLAM) [24]. Analyzing rendezvous is typically done using graph based approaches [15, 17] (where one views the agents as nodes in a communication network, and analyzes various graph metrics) and dynamical systems tools [6].

We distinguish two cases of the rendezvous problem. First is the case of one-time, or non-recurrent, rendezvous. Here the rendezvous problem is solved once the agents reach consensus: if the problem begins at time $t_0$, the problem is solved if there is some time $t^* > t_0$ when the positions of the agents have reached (or are close enough to) a common point in the state space (possibly arbitrary). The second is the case of recurrent rendezvous. Here agents must always eventually rendezvous: the recurrent problem is solved if for every time $t > t_0$ there exists some later time $t^* > t$ when the agents reach a common point in the state space. There is some work on recurrent rendezvous, such as addressing the need for autonomous vehicles to repeatedly recharge [15, 16]; however, most attention has been given to the non-recurrent case. In this paper we focus on the recurrent case.

When designing controllers to execute repetitive tasks, one approach is for an individual agent to use a decision-making model to coordinate the tasks. Here we consider a task-based approach to the problem of persistent multi-agent rendezvous. Task-based autonomous behavior has been widely studied [7, 8, 11]: complex behaviors are constructed from smaller, simpler tasks, with each individual task encoded in a vector field (e.g., traveling to a target state is encoded as an attracting fixed point on the state space). The goal of the autonomous controller is then to coordinate these tasks so as to achieve the desired high-level behavior. Ultimately, this coordination can be framed as a form of decision making, which is a large and active area of research.

An individual in a decision making model has an action space $\mathcal{A}$ and state space $\mathcal{S}$. If $a_t \in \mathcal{A}$ and $s_t \in \mathcal{S}$ are the action and state at time $t$, we can define the history as the tuple $H_t = (s_0, a_0, s_1, \dots, a_{t-1}, s_t)$. A policy $\rho$ is a map $\rho : H_t \to a_t$. In some cases we have a memoryless policy, that is, a policy that depends only on the current state $s_t$. While we have used notation that implies discrete time, this generalizes to the continuous-time case in a natural way. For our applications it will be most reasonable to consider the continuous-time case, and that will be the assumption from here on.

In our task-based framework an action is choosing which of $n$ tasks to pursue. It is convenient for us to phrase our policy as an optimization problem. We consider an expanded state space $s = (x, v)$, where $x \in \mathbb{R}^d$ is our physical state and $v \in \mathbb{R}_+^n$ is our value state. The value state encodes the relative values of pursuing each of the $n$ tasks. Our policy is then to pick the action which coincides with the task of maximal value. When framing the action/policy/state system as a control system, it is useful to introduce a motivation state $m \in \mathcal{M}$ which tracks the current task being pursued. Here $\mathcal{M}$ is the space of possible motivation states, and it may be discrete or continuous.

Most decision making research reduces the problem to discrete decision states: given $n$ tasks, pick exactly one to pursue. This can be formulated mathematically: consider a motivation vector $m \in \{0, 1\}^n$ such that $m_i = 1$ iff $\arg\max_j(v_j) = i$, and $m_i = 0$ otherwise. We call this the discrete motivation case. A natural extension, and one that has been of recent interest, is the continuous motivation case, in which the motivation vector $m$ lives on the $(n-1)$-simplex $\Delta^n = \{x \in \mathbb{R}^n : x_i \geq 0, \sum_i x_i = 1\}$. Then, following the work in [18, 20, 21], we use a pitchfork bifurcation to encode a dynamical operation analogous to the $\arg\max$ operation from the discrete case. Following [20], we term the resulting dynamical system motivation dynamics. Considering a continuous motivation state for our tasks produces interesting dynamics in the control system that are not otherwise seen in the discrete case. Moreover, it is possible to make the entire system smooth, allowing for the use of smooth analysis tools as well as direct numerical integration.

The work we present here is complementary to other recent work using bifurcation theory to construct decision-making mechanisms, e.g., [10, 4, 19, 2]. In contrast to these works, here we focus not on the decision-making mechanism itself, but on connecting the mechanism to physically-embedded tasks, i.e., the requirement for the various agents to move and perform rendezvous in response to their decisions.

The main contribution of this paper is to address the persistent rendezvous problem using task-based navigation with a continuous motivation state. In the process, we develop a networked dynamical system that exhibits stable recurrent rendezvous encoded by an attractor. Concretely, our contributions are two-fold. First, we develop the multi-agent extension of a control scheme originally developed in [20] and expanded upon in [23]; the scheme utilizes the so-called unfolding pitchfork bifurcation to coordinate agents with binary tasks. We then adapt this framework to the persistent rendezvous problem and analyze the resulting dynamical system in a highly symmetric case. We derive analytical guarantees on the system behavior and show that it achieves persistent rendezvous under appropriate conditions. In Theorem 3.7 we consider a limiting form of the system and extract a closed-form critical parameter value for the breaking of system deadlock. Then, in Theorem 3.8, we treat the full system and, using results from the previous theorem, prove the existence of such critical parameter values, along with a guaranteed bound for breaking deadlock. Next, by varying the parameters $\lambda$ and $\sigma$ (defined below), we characterize distinct regimes of behavior for the system (see Figure 1). Finally, we discuss the results and provide a list of further research directions to be considered.

Figure 1: Plot of behavioral regimes. Theorem 3.7 addresses the $\lambda = \infty$ deadlock case, and Theorem 3.8 addresses the deadlock bifurcation line for finite $\lambda$.

2 Preliminaries

We use the Motivation Dynamics framework [20], extended here to the $N$-agent case. We assume that each agent $k$ has physical state $x_k \in \mathbb{R}^d$ and that its physical dynamics are fully actuated, i.e., $\dot{x}_k = u$ with $u \in \mathbb{R}^d$. Each agent has two tasks: one is to travel to a designated location, and the other is to rendezvous with the other agents. The values associated with these two tasks for agent $k$ are encoded in the value state $v_k \in \mathbb{R}_+^2$. The two-task assumption allows us to reduce the motivation space, $\Delta^2$, to an interval on the real line; it is convenient to consider the motivation state $m_k$ of an agent to live on the interval $[-1, 1]$, where each endpoint of the interval uniquely corresponds to one of the two tasks. Let the state of agent $k$ be denoted by $\xi_k = (x_k^T, m_k, v_k^T)^T$, and denote by $\Xi$ the matrix

\Xi = \left(\xi_1 \,\big|\, \cdots \,\big|\, \xi_N\right) = \begin{pmatrix} X \\ M \\ V \end{pmatrix}, \qquad (1)

where $X = (x_1 | \cdots | x_N)$, $M = (m_1, \dots, m_N)$, and $V = (v_1 | \cdots | v_N)$.

For the time being, we assume that the agents have two tasks, with value dynamics influenced only by their value state and the physical configuration of all agents. In this case agent $k$ is controlled by the closed-loop, fully continuous dynamical system

\dot{x}_k = f_{x_k}(X, M) = \frac{1+m_k}{2} F_{k,1}(X) + \frac{1-m_k}{2} F_{k,2}(X), \qquad (2a)
\dot{m}_k = f_{m_k}(M, V) = \sigma_k (1 - m_k^2)(m_k + 3\alpha_k), \qquad (2b)
\dot{v}_k = f_{v_k}(V, X) = \lambda_k (\varphi_k(X) - v_k), \qquad (2c)

where $\alpha_k = (v_{k,1} - v_{k,2})/(v_{k,1} + v_{k,2})$. Compactly, we have

\dot{\xi}_k = f_{\xi_k}(\Xi), \qquad (3)

and in matrix form

\dot{\Xi} = g(\Xi). \qquad (4)

We will also denote the dynamics of the matrices $X, M, V$ by $\dot{X} = f_X(X, M)$, $\dot{M} = f_M(M, V)$, and $\dot{V} = f_V(V, X)$.
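To make the structure of (2) concrete, the following sketch (ours, not from the paper's implementation) evaluates the right-hand side of one agent's closed-loop dynamics; the task fields `F1`, `F2` and task functions `phi1`, `phi2` are supplied as callables, and all names are illustrative.

```python
def agent_rhs(x, m, v, F1, F2, phi1, phi2, sigma=1.0, lam=1.0):
    """Right-hand side of (2a)-(2c) for a single agent with two tasks.

    x: physical state (list of floats), m: motivation in [-1, 1],
    v: value state (v1, v2) with v1 + v2 > 0.
    """
    # (2a): motivation-weighted blend of the two task vector fields
    x_dot = [(1 + m) / 2 * f1 + (1 - m) / 2 * f2
             for f1, f2 in zip(F1(x), F2(x))]
    # alpha: normalized value difference that unfolds the pitchfork in (2b)
    alpha = (v[0] - v[1]) / (v[0] + v[1])
    m_dot = sigma * (1 - m**2) * (m + 3 * alpha)
    # (2c): value states relax toward the task functions at rate lambda
    v_dot = [lam * (phi1(x) - v[0]), lam * (phi2(x) - v[1])]
    return x_dot, m_dot, v_dot
```

With the point-visitation and centroid tasks of Section 2.1 plugged in for the callables, integrating this right-hand side for each agent reproduces the networked system (4).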

2.1 Rendezvous with Designated Tasks

By convention we will have task 1 be the designated task of a given agent, and task 2 will be rendezvous. We choose point visitation as the designated task. We have

\varphi_{k,1}(X) = \frac{1}{2}\left\|x_k - x_k^*\right\|^2,

and $F_{k,1}(X) = -\nabla \varphi_{k,1}(X) = -(x_k - x_k^*)$, where $x_k^*$ is the designated point for agent $k$ to visit.

For the rendezvous task we choose navigation to the centroid. We set

\varphi_{k,2}(X) = \frac{N-1}{2N}\left\|x_k - \frac{1}{N-1}\sum_{\substack{j=1 \\ j \neq k}}^{N} x_j\right\|^2,

which gives

F_{k,2}(X) = -\nabla \varphi_{k,2}(X) = -\frac{N-1}{N}\left(x_k - \frac{1}{N-1}\sum_{j \neq k} x_j\right).

Thus our first task looks like navigation to a designated point, and our second task looks like navigation to the centroid of the other agents.
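As a sanity check (ours, not from the paper), one can verify numerically that $F_{k,2}$ is indeed the negative gradient of $\varphi_{k,2}$ with respect to $x_k$, e.g. by central finite differences:

```python
N = 3  # number of agents in this check

def centroid_others(X, k):
    """Centroid of all agents except k; X is a list of N planar points."""
    others = [X[j] for j in range(N) if j != k]
    return [sum(p[i] for p in others) / (N - 1) for i in range(2)]

def phi2(X, k):
    """Rendezvous task function phi_{k,2}."""
    c = centroid_others(X, k)
    return (N - 1) / (2 * N) * sum((X[k][i] - c[i]) ** 2 for i in range(2))

def F2(X, k):
    """Claimed negative gradient of phi_{k,2} in x_k."""
    c = centroid_others(X, k)
    return [-(N - 1) / N * (X[k][i] - c[i]) for i in range(2)]

def fd_grad(X, k, h=1e-6):
    """Central finite-difference gradient of phi_{k,2} in x_k."""
    g = []
    for i in range(2):
        Xp = [p[:] for p in X]; Xp[k][i] += h
        Xm = [p[:] for p in X]; Xm[k][i] -= h
        g.append((phi2(Xp, k) - phi2(Xm, k)) / (2 * h))
    return g
```

For any configuration `X`, `F2(X, k)` agrees with `-fd_grad(X, k)` up to the finite-difference error.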

3 The Symmetric Case

Assuming $N$ agents and $x_k \in \mathbb{R}^2$, denote by $R$ the $2 \times 2$ rotation matrix given by

R = \begin{pmatrix} \cos\left(\frac{2\pi}{N}\right) & -\sin\left(\frac{2\pi}{N}\right) \\ \sin\left(\frac{2\pi}{N}\right) & \cos\left(\frac{2\pi}{N}\right) \end{pmatrix}. \qquad (5)

We set the designated task locations as $x_k^* = R^{k-1} u$, where $u = (1, 0)^T$ (i.e., $N$ symmetrically distributed points on the unit circle). At this point it is prudent to make the change of coordinates $y_k = R^{-(k-1)} x_k$. Denoting the transformed $X$ matrix by $Y$, this results in

\varphi_{k,1}(Y) = \frac{1}{2}\left\|y_k - u\right\|^2,
\varphi_{k,2}(Y) = \frac{N-1}{2N}\left\|y_k - \frac{1}{N-1}\sum_{\substack{j=1 \\ j \neq k}}^{N} R^{j-k} y_j\right\|^2,

and

\dot{y}_k = f_{y_k}(Y, M) = \frac{1+m_k}{2}(u - y_k) + \frac{1-m_k}{2}\left(\frac{1}{N}\sum_{j \neq k} R^{j-k} y_j - \frac{N-1}{N} y_k\right).

Moreover, we will consider symmetric time scales for the motivation and value dynamics; that is, we set $\sigma_k = \sigma$ and $\lambda_k = \lambda$ for all $k$. Let us set $z_k = (y_k^T, m_k, v_k^T)^T$ and

Z = \left(z_1 \,\big|\, \cdots \,\big|\, z_N\right) = \begin{pmatrix} Y \\ M \\ V \end{pmatrix} \in \mathbb{R}^{5 \times N}.

Then we may compactly represent our symmetric-case system by the dynamical equation

\dot{Z} = f(Z). \qquad (6)

The symmetry of this case allows us to utilize a group invariance of the system. We first recall the following definition of equivariance for a map.

Definition 3.1.

Consider a map $h : \mathcal{A} \to \mathcal{A}$ and a group $\Gamma$ with a well-defined group action on elements $x \in \mathcal{A}$. We say $h$ is equivariant under $\Gamma$ if for all $\gamma \in \Gamma$ and $x \in \mathcal{A}$ we have

h(\gamma \cdot x) = \gamma \cdot h(x). \qquad (7)

With equivariance in mind, we now establish a symmetry group for the dynamics (6).

Lemma 3.2.

Let $C_n$ be the cyclic group generated by the permutation $\gamma : (1, 2, \dots, n) \mapsto (n, 1, 2, \dots, n-1)$ (i.e., the ``shift-by-one'' permutation on $n$ elements). Let $\gamma$ act on a matrix $M \in \mathbb{R}^{m \times n}$ by permuting the columns of $M$ in the same way. Then the mapping $f$ given in (6) is equivariant under $C_N$ for $N \geq 3$.

Proof.

Let $\gamma$ represent the shift-by-one group action described above. Since $\gamma$ generates $C_N$, it suffices to prove that

\gamma \cdot f(Z) = f(\gamma \cdot Z).

Let $\gamma$ map the $k$-th column of $Z$ to column $\gamma(k)$. We have

\gamma \cdot f_{y_k}(Y, M) = f_{y_{\gamma(k)}}(Y, M) = \frac{1+m_{\gamma(k)}}{2}(u - y_{\gamma(k)}) + \frac{1-m_{\gamma(k)}}{2}\left(\frac{1}{N}\sum_{j \neq \gamma(k)} R^{j-\gamma(k)} y_j - \frac{N-1}{N} y_{\gamma(k)}\right)

and

f_{y_k}(\gamma \cdot Y, \gamma \cdot M) = \frac{1+m_{\gamma(k)}}{2}(u - y_{\gamma(k)}) + \frac{1-m_{\gamma(k)}}{2}\left(\frac{1}{N}\sum_{j \neq k} R^{j-k} y_{\gamma(j)} - \frac{N-1}{N} y_{\gamma(k)}\right).

Thus we must show that

\sum_{j \neq \gamma(k)} R^{j-\gamma(k)} y_j = \sum_{j \neq k} R^{j-k} y_{\gamma(j)}.

One notes that $(\gamma(j) - \gamma(k)) \bmod N = (j - k) \bmod N$, and thus $R^{\gamma(j)-\gamma(k)} = R^{j-k}$. Then by re-indexing the sum we may write

\sum_{j \neq \gamma(k)} R^{j-\gamma(k)} y_j = \sum_{j \neq \gamma(k)} R^{\gamma^{-1}(j) - k} y_j = \sum_{j \neq k} R^{j-k} y_{\gamma(j)}.

This gives us $\gamma \cdot f_Y(Y, M) = f_Y(\gamma \cdot Y, \gamma \cdot M)$. For $f_V(V, Y)$ it suffices to show $\varphi_{\gamma(k)}(Y) = \varphi_k(\gamma \cdot Y)$, which follows by applying the same argument as the one given above for $f_Y$. Finally, one can see by inspection that $\gamma \cdot f_M(M, V) = f_M(\gamma \cdot M, \gamma \cdot V)$. Thus we have that for all $\gamma' \in C_N$,

\gamma' \cdot f(Z) = f(\gamma' \cdot Z).

\blacksquare
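The re-indexing identity at the heart of this proof is easy to spot-check numerically. In the sketch below (ours, not from the paper), planar positions are encoded as complex numbers so that $R^p$ acts as multiplication by $e^{2\pi i p / N}$; the shift, the index $k$, and the test positions are arbitrary.

```python
import cmath

N = 5
g = lambda j: (j + 1) % N                       # a cyclic shift (0-based indexing)
R = lambda p: cmath.exp(2j * cmath.pi * p / N)  # R^p as a complex rotation
y = [complex(j + 1, -j) for j in range(N)]      # arbitrary agent positions

k = 2
lhs = sum(R(j - g(k)) * y[j] for j in range(N) if j != g(k))
rhs = sum(R(j - k) * y[g(j)] for j in range(N) if j != k)
```

Up to floating-point error, `lhs` and `rhs` agree, reflecting the fact that $(\gamma(j) - \gamma(k)) \bmod N = (j - k) \bmod N$ and that $R^p$ depends only on $p \bmod N$.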

Definition 3.3.

Consider a vector space $\mathcal{V}$ with a well-defined group action for elements of a group $\Gamma$. We define the fixed-point subspace of a subgroup $G \subseteq \Gamma$ by

\mathrm{Fix}[G] = \left\{x \in \mathcal{V} \,\middle|\, \gamma \cdot x = x, \ \forall \gamma \in G\right\}. \qquad (8)

We note that the fixed-point subspace of $C_N$ consists of the matrices $Z$ whose columns are all equal (i.e., $z_i = z_j$ for all $i, j$). We will refer to this subspace as the symmetric subspace; as we will see later, it possesses some very interesting properties.

3.1 Deadlock

The symmetric case displays many interesting behaviors across different $(\sigma, \lambda)$ parameter regimes, and some of the boundaries between regimes admit direct analysis. One such boundary is that of the deadlock case.

Definition 3.4.

We define deadlock as an equilibrium point $Z_*$ of the system (6) such that $Z_*$ is asymptotically stable with $|m_k| < 1$ for all $k$. (That is, all motivation variables are in a state of ``indecision''.)

Indeed, if we search for deadlock on the symmetric subspace, we find a candidate equilibrium.

Lemma 3.5.

Let $N \in \mathbb{N}$ with $N \geq 3$. There is a potential deadlock equilibrium point on the symmetric subspace of (6), which we denote $Z_* = (Y_*^T, M_*^T, V_*^T)^T$, given by

y_k = (y^*, 0)^T, \quad m_k = 2y^* - 1, \quad v_{k,1} = \frac{1}{2}(y^* - 1)^2, \quad v_{k,2} = \frac{N-1}{2N}(y^*)^2, \quad \forall k = 1, \dots, N, \qquad (9)

where $\beta = y^* \in (0, 1)$ solves

(2N-1)\beta^3 - (3N-1)\beta^2 - (N-1)\beta + N - 1 = 0. \qquad (10)
Proof.

We are considering the symmetric subspace, and so set $z_1 = z_2 = \cdots = z_N$. In order to be a deadlock equilibrium, $|m_k| < 1$ by Definition 3.4; thus we must have $m_k = -3(v_{k,1} - v_{k,2})/(v_{k,1} + v_{k,2})$. Moreover, we have that $v_{k,1} = \varphi_{k,1}(Y)$ and $v_{k,2} = \varphi_{k,2}(Y)$. Substituting both into $f_{y_k}(Y, M) = 0$ gives the cubic in (10), and back-substituting gives the equilibrium values for $m_k$ and $v_k$.

We must verify that there is exactly one root of (10) in the interval $(0, 1)$. Evaluating the cubic at $\beta = -1, 0, 1, 2$ gives $-3N$, $N - 1$, $-N$, and $3N - 3$, respectively. For $N \geq 3$ there are thus three consecutive changes of sign, which implies that the three roots of the cubic are all real and lie in the intervals $(-1, 0)$, $(0, 1)$, and $(1, 2)$. Roots greater than $1$ or less than $0$ would violate the $|m| < 1$ assumption; thus we must take the root in $(0, 1)$.

\blacksquare
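The sign pattern used in the proof also yields a simple numerical recipe for $y^*$: since the cubic (10) is positive at $\beta = 0$ and negative at $\beta = 1$, bisection on $(0, 1)$ converges to the root. A sketch (function name is ours, not the paper's):

```python
def deadlock_root(N, tol=1e-12):
    """Root of (2N-1)b^3 - (3N-1)b^2 - (N-1)b + (N-1) = 0 in (0, 1)."""
    p = lambda b: (2 * N - 1) * b**3 - (3 * N - 1) * b**2 - (N - 1) * b + (N - 1)
    lo, hi = 0.0, 1.0  # p(0) = N - 1 > 0 and p(1) = -N < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For $N = 3$ this gives $y^* \approx 0.44$, consistent with the bound $y^* < 1/2$ used in the proof of Theorem 3.7.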

A detailed numerical search suggests that the deadlock equilibrium described in Lemma 3.5 is unique. On the basis of this numerical evidence we make the following claim.

Claim 3.6.

Let $N \geq 3$ and let $Z_*$ be the deadlock equilibrium defined in Lemma 3.5. Then $Z_*$ is the unique equilibrium on the symmetric subspace.

To study the stability of our system we must still contend with a Jacobian matrix of size $5N \times 5N$. To get a better sense of the structure of the Jacobian, and of how we might determine stability conditions, we first consider the dynamics in the singularly perturbed limit $\lambda \to \infty$.

3.1.1 The $\lambda \to \infty$ Limit

We split our system, denoting $W = (Y^T, M^T)^T$ as the slow variables and the value state $V$ as the fast variable. If we then take the $\lambda \to \infty$ limit, we have

v_{k,1} = \varphi_{k,1}(Y) \quad \text{and} \quad v_{k,2} = \varphi_{k,2}(Y).

This results in the slow manifold system

\dot{y}_k = f_{y_k}(Y, M) = \frac{1+m_k}{2}(u - y_k) + \frac{1-m_k}{2}\left(\frac{1}{N}\sum_{j \neq k} R^{j-k} y_j - \frac{N-1}{N} y_k\right), \qquad (11a)
\dot{m}_k = f_{m_k}(m_k, Y) = \sigma(1 - m_k^2)\left(m_k + 3\,\frac{\varphi_{k,1}(Y) - \varphi_{k,2}(Y)}{\varphi_{k,1}(Y) + \varphi_{k,2}(Y)}\right). \qquad (11b)

We will compactly represent the system by

\dot{W} = \tilde{f}(W), \qquad (12)

and we denote the corresponding deadlock point for the reduced system by

W_* = (Y_*^T, M_*^T)^T, \qquad (13)

where $Y_*$ and $M_*$ are the same as in (9). In Appendix A we derive a compact representation of the Jacobian $D\tilde{f}$, and by (24) we may write the characteristic polynomial as

\det(D\tilde{f} - \mu I_{3N}) = \det\left(\tilde{M} + \sum_{k=1}^{N} \tilde{D}_k \,\mathrm{adj}(\tilde{A} - \tilde{B} - \mu I_3)\, \tilde{C}_k\right) \det\left(\tilde{A} - \tilde{B} - \mu I_3\right)^{N-2}. \qquad (14)

This leads us to our first result.

Theorem 3.7.

There exists a critical value $\sigma = \sigma^*$ at which the singularly perturbed system (12) undergoes a bifurcation that breaks the stability of the deadlock point (13):

\sigma^* \coloneqq \frac{1}{4 y^* (1 - y^*)}. \qquad (15)
Proof.

From Lemma A.6, we have a compact form for the characteristic polynomial given by

\det(D\tilde{f} - \mu I_{3N}) = (G_1(\mu; \sigma) + G_2(\mu; \sigma))^2 (G_1(\mu; \sigma))^{N-2},

where $G_1$ and $G_2$ are given in the statement of Lemma A.6. Both $G_1(\mu;\sigma) + G_2(\mu;\sigma)$ and $G_1(\mu;\sigma)$ are cubic in $\mu$, so one can write closed-form expressions for their roots. The expression for the roots of $G_1(\mu;\sigma) + G_2(\mu;\sigma)$ is very complicated and does not admit direct analysis of the critical $\sigma$ value for bifurcation. However, $G_1(\mu;\sigma)$ factors nicely in $\mu$; specifically, it may be written as

(\mu + 1)\left[(\mu + 1)(4\sigma y^*(1 - y^*) - \mu) - \frac{2\sigma(1 - y^*)^2(1 + y^*)}{\varphi_1^* + \varphi_2^*}\right],

where $\varphi_i^* = \varphi_{k,i}(Y_*)$ for $i = 1, 2$ are the task function values at the deadlock point. Direct computation of the roots gives

\mu_1 = -1,
\mu_{2,3} = \frac{1}{2}\left(4\sigma y^*(1-y^*) - 1 \pm \sqrt{(4\sigma y^*(1-y^*) - 1)^2 + 16\sigma y^*(1-y^*) - \frac{8\sigma(1-y^*)^2(1+y^*)}{\varphi_1^* + \varphi_2^*}}\right).

If we can show that

16\sigma y^*(1-y^*) - \frac{8\sigma(1-y^*)^2(1+y^*)}{\varphi_1^* + \varphi_2^*} < 0,

then we can guarantee that $\mu_{2,3}$ have negative real parts for $\sigma < 1/(4y^*(1-y^*))$. Noting that the positive factor $8\sigma(1 - y^*)$ can be factored from the expression above, it suffices to show

2y^* - \frac{1 - (y^*)^2}{\varphi_1^* + \varphi_2^*} < 0,

which is the case, as $\varphi_i^* < 1/4$ for $i = 1, 2$ and $y^* < 1/2$ (this can be checked by looking for sign changes of the left-hand side of (10) on the interval $(0, 1/2)$).

Next we must show that the eigenvalues are complex, i.e., that

(4\sigma y^*(1-y^*) - 1)^2 + 16\sigma y^*(1-y^*) - \frac{8\sigma(1-y^*)^2(1+y^*)}{\varphi_1^* + \varphi_2^*} < 0

for $\sigma = 1/(4y^*(1-y^*))$. It suffices to show

2 - \frac{1 - (y^*)^2}{y^*(\varphi_1^* + \varphi_2^*)} < 0.

This once again holds due to the fact that $\varphi_i^* < 1/4$ for $i = 1, 2$ and $y^* < 1/2$.

It remains to show the transversality (non-tangency) condition, i.e., that the roots cross the imaginary axis with nonzero speed. The real part of the eigenvalues is given by $(4\sigma y^*(1-y^*) - 1)/2$, so its derivative with respect to $\sigma$ is $2y^*(1-y^*)$, which is positive since $y^* \in (0, 1)$.

\blacksquare

3.1.2 The case of finite λ\lambda

It is possible to derive an explicit form for the characteristic polynomial of the relaxed system, but that polynomial would still be of degree 5, and in general would require numerical computation of its roots. Indeed, given a fixed value of $\lambda$, we can numerically approximate the deadlock-breaking value of $\sigma$, and vice versa; the results of such a computation are shown in Figure 2. In place of deriving an exact form for the characteristic polynomial, we prove a lower bound on $\sigma$ that guarantees deadlock-breaking.

Theorem 3.8.

There exists a finite lower bound $\hat{\sigma}$ such that $\sigma > \hat{\sigma}$ implies that the equilibrium point $Z_*$ of the system (6) is not stable. The bound is given by

\hat{\sigma} = \frac{N\lambda + N + y^* - 1}{2N y^*(1 - y^*)}, \qquad (16)

where $y^*$ is the solution in the interval $(0, 1)$ to (10).

Proof.

By Lemma B.4 we have that

\mathrm{tr}(Df) = 2\left(1 - y^* - N - N\lambda\right) + 4N\sigma y^*(1 - y^*).

The trace of a matrix is the sum of its eigenvalues; thus, if the trace is positive, there must be at least one eigenvalue with positive real part. Setting $\mathrm{tr}(Df) \geq 0$ and solving for $\sigma$ gives the desired lower bound.

\blacksquare
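As a worked example (ours, not the paper's code), the two critical quantities can be evaluated numerically for $N = 3$ and $\lambda = 1$: $y^*$ is obtained by bisecting the cubic (10) on $(0, 1)$, after which (15) and (16) are direct substitutions.

```python
N, lam = 3, 1.0

# Root of the cubic (10) in (0, 1) by bisection: p(0) > 0 > p(1).
p = lambda b: (2 * N - 1) * b**3 - (3 * N - 1) * b**2 - (N - 1) * b + (N - 1)
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if p(mid) > 0 else (lo, mid)
y = 0.5 * (lo + hi)  # y* ~ 0.44 for N = 3

sigma_star = 1.0 / (4 * y * (1 - y))                           # (15), lambda -> infinity
sigma_hat = (N * lam + N + y - 1) / (2 * N * y * (1 - y))      # (16), finite lambda
```

This gives $\sigma^* \approx 1.01$ and $\hat{\sigma} \approx 3.68$. Note that $\hat{\sigma}$ is only a sufficient bound: Figure 3 shows deadlock already broken at $\sigma = 0.3$, $\lambda = 1$, well below $\hat{\sigma}$.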

Figure 2: The parameter space split into deadlock and non-deadlock regimes, for $N = 3$ agents.

As a demonstration, Figure 3 shows two examples of a subset of the state trajectories under deadlock and non-deadlock. We track the $x_{k,1}$, $m_k$, and $\alpha_k$ states and plot all three agents together. Here $\alpha_k = (v_{k,1} - v_{k,2})/(v_{k,1} + v_{k,2})$ is the normalized difference between the value states.

(a) Deadlock ($\sigma = 0.1$)
(b) No Deadlock ($\sigma = 0.3$)
Figure 3: Comparison of deadlock and non-deadlock trajectories. In both cases $\lambda = 1$ and the initial conditions are the same.

3.2 Synchronicity and Asynchronicity

When we consider $\sigma > \hat{\sigma}$, as given in (16), and finite $\lambda$, interesting dynamical regimes emerge. Specifically, the system produces both synchronous and asynchronous rendezvous, depending on initial conditions and parameter regimes. At this point, further direct analysis of the system becomes far less tractable and yields little information, so we investigate these regimes numerically. For the remainder of this section we consider the case $N = 3$, as it is the simplest case that still exhibits rich behavior.

Once in the deadlock-broken regime (see Figure 2), the system can experience either synchronous or asynchronous rendezvous (see Figure 4); for fixed $\sigma$ and $\lambda$, synchronicity depends only on the initial conditions of the system. We stress that the system still experiences recurrent rendezvous, even in the asynchronous case. One way to see this is to consider the average distance to the centroid across all agents as it varies over time. We define the rendezvous metric by

d(X(t)) \coloneqq \frac{1}{N}\sum_{k=1}^{N} \left\|x_k(t) - \frac{1}{N}\sum_{j=1}^{N} x_j(t)\right\|, \qquad (17)

or, for the transformed variable $Y$,

\tilde{d}(Y(t)) \coloneqq \frac{1}{N}\sum_{k=1}^{N} \left\|y_k(t) - \frac{1}{N}\sum_{j=1}^{N} R^{j-k} y_j(t)\right\|. \qquad (18)

We plot two examples in Figure 5.

(a) Synchronous Rendezvous
(b) Asynchronous Rendezvous
Figure 4: Comparison of synchronous and asynchronous trajectories. In both cases $\sigma = 4$ and $\lambda = 1$; the only difference is in the initial conditions.
(a) Synchronous Rendezvous
(b) Asynchronous Rendezvous
Figure 5: Comparison of the rendezvous metric given in (17) for the synchronous and asynchronous trajectories from Figure 4.

Despite not performing as well as the synchronous case, the asynchronous case shows that the agents repeatedly come close to one another; under reasonably relaxed constraints this constitutes recurrent rendezvous. In fact, numerical evidence supports the existence of an attractor of the system on which the asynchronous orbits live. The state space is 15-dimensional, so we can only plot slices of it. We consider the $\alpha = (\alpha_1, \alpha_2, \alpha_3)$ space of the 3-agent system, recalling that $\alpha_k = (v_{k,1} - v_{k,2})/(v_{k,1} + v_{k,2})$, and the $m = (m_1, m_2, m_3)$ space. Plotting trajectories in both spaces yields Figure 6.

Figure 6: Projections of the attractor that generates asynchronous trajectories. The left plot is a projection of the attractor into $\alpha$-space, and the right plot is a projection into $m$-space. In both plots the red line indicates the projection of the synchronous subspace (i.e., the $z_1 = z_2 = z_3$ subspace). Here $\lambda = 1$.

We can further characterize this attractor by taking a Poincaré section in the $\alpha$ space. Consider the vector $\alpha^* = (1, 1, 1)$. We construct our section by tracking when $\alpha(t) \cdot \alpha^* = 0$, i.e., when the trajectory punctures the plane defined by $\alpha^*$. Let $\tau_j \in [0, T]$ denote the $j$-th point in time at which $\alpha(\tau_j) \cdot \alpha^* = 0$. If we separate the punctures by the sign of the crossing and plot the even- and odd-indexed punctures separately, i.e., plot $\alpha(\tau_{2\ell})$ and $\alpha(\tau_{2\ell - 1})$ for $\ell = 1, 2, \dots$, we see that two limit cycles emerge in the Poincaré section (see Figure 7). These limit cycles further support the claim that the orbits live on an attractor.

Figure 7: The even- and odd-indexed punctures $\alpha(\tau_j)$ of the Poincaré section. The points indicate the actual punctures, while the lines connecting them indicate adjacency in the ordering; blue indicates odd indices, and green indicates even indices. The red line is the symmetry subspace $\alpha_1 = \alpha_2 = \alpha_3$. Here $\lambda = 1$.
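The bookkeeping behind Figure 7 amounts to detecting sign changes of $\alpha(t) \cdot \alpha^*$ along a sampled trajectory and splitting the punctures by parity. A sketch (ours; a real trajectory would come from integrating (6), and the exact puncture location could be refined by interpolation between samples):

```python
def section_punctures(alphas, a_star=(1.0, 1.0, 1.0)):
    """Split Poincare-section punctures of a sampled alpha-trajectory by parity."""
    dots = [sum(a[i] * a_star[i] for i in range(3)) for a in alphas]
    # a sign change between consecutive samples marks a puncture of the plane
    idx = [j for j in range(1, len(dots)) if dots[j - 1] * dots[j] < 0]
    odd = [alphas[j] for j in idx[0::2]]   # alpha(tau_1), alpha(tau_3), ...
    even = [alphas[j] for j in idx[1::2]]  # alpha(tau_2), alpha(tau_4), ...
    return odd, even
```

Plotting the two returned point sets separately yields the two limit cycles seen in the section.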

This attractor leads us to make the following claim:

Claim 3.9.

The system (6) has an attractor $\mathcal{A}_{\sigma,\lambda} \subset \mathbb{R}^{2N} \times [-1, 1]^N \times \mathbb{R}_+^{2N}$ whose characteristics depend on the parameters $\sigma, \lambda$.

Moreover, there exists a region of the parameter space in which $\mathcal{A}_{\sigma,\lambda}$ does not lie on the synchronous manifold and is not a fixed point.

This claim allows us to establish the following theorem.

Theorem 3.10.

Assume that Claim 3.9 holds, and let $Z(t)$ be a solution to (6). Fix $\varepsilon > 0$, and consider the rendezvous metric from (18). If $Z(0) \in \mathcal{A}_{\sigma,\lambda}$ and there exists a time $t_1 \geq 0$ such that $\tilde{d}(Y(t_1)) = \delta > 0$, then there exists a time $t_2 > t_1$ such that $\tilde{d}(Y(t_2)) < \delta + \varepsilon$.

Proof.

An attractor 𝒜\mathcal{A} of a dynamical system has two necessary properties: (1) 𝒜\mathcal{A} is forward invariant, i.e. any trajectory starting in 𝒜\mathcal{A} stays in 𝒜\mathcal{A}. (2) There is no proper subset of 𝒜\mathcal{A} which has this property.

Properties (1) and (2) imply given any ε>0\varepsilon>0, any point a𝒜a\in\mathcal{A}, and any trajectory zz such that z(t1)𝒜z(t_{1})\in\mathcal{A}, there is always a point in time t2>t1t_{2}>t_{1} such that z(t2)a<ε\|z(t_{2})-a\|<\varepsilon. If no such point in time existed, then we could define the ε\varepsilon-ball centered on aa by Bε(a)B_{\varepsilon}(a), and the set 𝒜Bε(a)\mathcal{A}\setminus B_{\varepsilon}(a) would satisfy property (1), which would directly contradict property (2).

Our claim in Theorem 3.10 then follows from the continuity of the rendezvous metric $\tilde{d}$: given $\varepsilon>0$, there exists $\eta>0$ such that if $\|Z(t_{2})-Z(t_{1})\|<\eta$, continuity of $\tilde{d}$ implies that $|\tilde{d}(Y(t_{2}))-\tilde{d}(Y(t_{1}))|<\varepsilon$. By the preceding argument, such times $t_{1},t_{2}$ must exist.

\blacksquare

The idea here is that if the orbit (which lives on this attractor) produces a small value $\delta$ for the rendezvous metric, then there is a future time when the same orbit will pass close enough to the same point to produce a rendezvous metric that is sufficiently close to $\delta$.
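This recurrence mechanism can be illustrated numerically. The sketch below is illustrative only: it uses the Lorenz system as a stand-in for the attractor of (6) (which is not reproduced here), and the distance to a reference state as a stand-in for the continuous observable $\tilde{d}$. Along an orbit on the attractor, a value attained by the observable is later approached again.

```python
import numpy as np

# Stand-in chaotic system (Lorenz) with a hand-rolled RK4 integrator.
def lorenz(z, s=10.0, r=28.0, b=8.0/3.0):
    x, y, w = z
    return np.array([s*(y - x), x*(r - w) - y, x*y - b*w])

def rk4_orbit(z0, dt=0.01, steps=40000):
    traj = np.empty((steps, 3))
    z = np.asarray(z0, dtype=float)
    for i in range(steps):
        k1 = lorenz(z)
        k2 = lorenz(z + 0.5*dt*k1)
        k3 = lorenz(z + 0.5*dt*k2)
        k4 = lorenz(z + dt*k3)
        z = z + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        traj[i] = z
    return traj

traj = rk4_orbit([1.0, 1.0, 1.0])
traj = traj[4000:]                    # discard transient: orbit near the attractor
obs = np.linalg.norm(traj - traj[0], axis=1)   # continuous observable along the orbit
delta = obs[1000]                     # value attained at some time t1
later = obs[1001:]                    # observable at all later sampled times
print("closest later return:", np.min(np.abs(later - delta)))
```

The printed gap is small: the orbit repeatedly sweeps the observable through previously attained values, which is the content of the theorem for the rendezvous metric.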

4 Discussion

In this paper, we have shown how the motivation dynamics architecture and associated results introduced in [20] can be extended to the multi-agent case. In particular, we have demonstrated the feasibility of using the unfolding pitchfork decision-making mechanism to coordinate recurrent rendezvous tasks between autonomous agents. Moreover, we have shown that the parameters of the multi-agent system (6) can be chosen in such a way as to guarantee that the system does not reach a deadlock state, and that there will always eventually be a rendezvous event. In the language of formal methods [12], our system satisfies the following specification: always ((eventually rendezvous) and (eventually visit designated location)). The connection between the structure of the dynamical system and the discrete specification that it satisfies is beyond the scope of this paper, but is of great interest for future work.

Several interesting research questions follow from the results of this paper. First, in the problem as it is presented in this paper, each agent has full knowledge of the others’ positions. In other words, the communication graph is complete and communication is uncorrupted by noise or delays. The results presented in this paper could naturally be generalized to the case of an incomplete but connected communications graph, or the cases where communication among agents is corrupted by stochastic noise or communication delays. These generalizations should be feasible given standard tools from the distributed consensus literature.

Second, the agents’ tasks may be more subtle than the tasks considered here, where each agent has two tasks: exactly one designated point-attractor task and one rendezvous task where the agents must precisely rendezvous at a point. If approximate rendezvous, e.g., to a neighborhood, is acceptable, what further results might be possible? In the case that there are $M$ point-attractor tasks and $N\neq M$ agents, how can the system automatically distribute these tasks among the agents?

Third, there is a fundamental mathematical question of characterizing the attractor whose existence we postulate in Claim 3.9. It is likely that symmetric bifurcation theory [9] will provide a set of tools to enable this analysis. Answering these questions will push forward knowledge of how the unfolding pitchfork can be used to coordinate high-level autonomous behaviors.

References

  • [1] S. Alpern. The rendezvous search problem. SIAM Journal on Control and Optimization, 33(3):673–683, 1995.
  • [2] A. Bizyaeva, T. Sorochkin, A. Franci, and N. E. Leonard. Control of agreement and disagreement cascades with distributed inputs, 2021.
  • [3] D. V. Dimarogonas and K. J. Kyriakopoulos. On the rendezvous problem for multiple nonholonomic agents. IEEE Transactions on Automatic Control, 52(5):916–922, 2007.
  • [4] A. Franci, M. Golubitsky, and N. E. Leonard. The dynamics of multi-agent multi-option decision making. arXiv:1909.05765, 2019.
  • [5] B. A. Francis and M. Maggiore. Flocking and rendezvous in distributed robotics. Springer, 2016.
  • [6] P. Friudenberg and S. Koziol. Mobile robot rendezvous using potential fields combined with parallel navigation. IEEE Access, 6:16948–16957, 2018.
  • [7] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez. Integrated task and motion planning. Annual review of control, robotics, and autonomous systems, 4:265–293, 2021.
  • [8] M. Ghallab, D. Nau, and P. Traverso. Automated planning and acting. Cambridge University Press, 2016.
  • [9] M. Golubitsky, I. Stewart, and D. G. Schaeffer. Singularities and groups in bifurcation theory: Vol. II. Number 69 in Applied Mathematical Sciences. Springer-Verlag, New York, 1988.
  • [10] R. Gray, A. Franci, V. Srivastava, and N. E. Leonard. Multi-agent decision-making dynamics inspired by honeybees. IEEE Transactions on Control of Network Systems, 5(2):793–806, 2018.
  • [11] E. Karpas and D. Magazzeni. Automated planning for robotics. Annual Review of Control, Robotics, and Autonomous Systems, 3:417–439, 2020.
  • [12] H. Kress-Gazit, M. Lahijanian, and V. Raman. Synthesis for Robots: Guarantees and Feedback for Robot Behavior. Annual Review of Control, Robotics, and Autonomous Systems, 1(1):211–236, 2018.
  • [13] B. Li, B. Moridian, and N. Mahmoudian. Underwater multi-robot persistent area coverage mission planning. In OCEANS 2016 MTS/IEEE Monterey, pages 1–6. IEEE, 2016.
  • [14] J. Lin, A. Morse, and B. Anderson. The multi-agent rendezvous problem: an extended summary. In Cooperative Control, pages 257–289. Springer, 2005.
  • [15] N. Mathew, S. L. Smith, and S. L. Waslander. A graph-based approach to multi-robot rendezvous for recharging in persistent tasks. In 2013 IEEE International Conference on Robotics and Automation, pages 3497–3502. IEEE, 2013.
  • [16] N. Mathew, S. L. Smith, and S. L. Waslander. Multirobot rendezvous planning for recharging in persistent tasks. IEEE Transactions on Robotics, 31(1):128–142, 2015.
  • [17] M. Meghjani and G. Dudek. Multi-robot exploration and rendezvous on graphs. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5270–5276. IEEE, 2012.
  • [18] D. Pais, P. M. Hogan, T. Schlegel, N. R. Franks, N. E. Leonard, and J. A. Marshall. A mechanism for value-sensitive decision-making. PloS one, 8(9):e73216, 2013.
  • [19] P. Reverdy. Dynamical, value-based decision making among $n$ options, 2020.
  • [20] P. B. Reverdy and D. E. Koditschek. A dynamical system for prioritizing and coordinating motivations. SIAM Journal on Applied Dynamical Systems, 17(2):1683–1715, 2018.
  • [21] P. B. Reverdy, V. Vasilopoulos, and D. E. Koditschek. Motivation dynamics for autonomous composition of navigation tasks. IEEE Transactions on Robotics, 2021.
  • [22] N. Roy and G. Dudek. Collaborative robot exploration and rendezvous: Algorithms, performance bounds and observations. Autonomous Robots, 11(2):117–136, 2001.
  • [23] C. Thompson and P. Reverdy. Drive-based motivation for coordination of limit cycle behaviors. In 2019 IEEE 58th Conference on Decision and Control (CDC), pages 244–249. IEEE, 2019.
  • [24] X. S. Zhou and S. I. Roumeliotis. Multi-robot SLAM with unknown initial correspondence: The robot rendezvous case. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1785–1792. IEEE, 2006.

Appendix A Jacobian in the $\lambda\to\infty$ Limit

In this appendix, we compute the Jacobian for our system (6) in the singular limit $\lambda\to\infty$. We also derive an expression for its characteristic polynomial.

Lemma A.1.

Consider the system (6) in the limit $\lambda\to\infty$ given by (12). The Jacobian of (12) at the equilibrium point (13) has the form

D\tilde{f}=\begin{pmatrix}\tilde{A}&\tilde{B}_{12}&\tilde{B}_{13}&\cdots\\ \tilde{B}_{21}&\tilde{A}&\tilde{B}_{23}&\cdots\\ \tilde{B}_{31}&\tilde{B}_{32}&\tilde{A}&\cdots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix} (19)

where

\tilde{A}=\begin{pmatrix}\frac{1-y^{*}-N}{N}&0&\frac{1}{2}\\ 0&\frac{1-y^{*}-N}{N}&0\\ \frac{8\sigma y^{*}(1-y^{*})(2(y^{*})^{2}-2y^{*}-1)}{\varphi_{1}^{*}+\varphi_{2}^{*}}&0&4\sigma y^{*}(1-y^{*})\end{pmatrix}, (20)

with $\varphi_{1}^{*}=\frac{1}{2}(1-y^{*})^{2}$, $\varphi_{2}^{*}=\frac{N-1}{2N}(y^{*})^{2}$, and

\tilde{B}_{ij}=\begin{pmatrix}\frac{1-y^{*}}{N}R^{j-i}&0_{2\times 1}\\ -\frac{8\sigma y^{*}(1-y^{*})(y^{*}-2)}{(N-1)(\varphi_{1}^{*}+\varphi_{2}^{*})}(y^{*},0)R^{j-i}&0\end{pmatrix},\quad\text{with}\quad R^{k}=\begin{pmatrix}\cos\left(\frac{2\pi k}{N}\right)&-\sin\left(\frac{2\pi k}{N}\right)\\ \sin\left(\frac{2\pi k}{N}\right)&\cos\left(\frac{2\pi k}{N}\right)\end{pmatrix}. (21)
Proof.

Direct computation.

\blacksquare

A.1 Computation of Characteristic Polynomial of $D\tilde{f}$

First we state a lemma which allows us to compactly represent the Jacobian of (12) at the equilibrium point.

Lemma A.2.

The Jacobian (19) of (12) at the equilibrium given by (13) can be compactly represented by

D\tilde{f}=I_{N}\otimes(\tilde{A}-\tilde{B})+\tilde{V}^{T}\tilde{U}, (22)

where $\tilde{A}$ is given in (20), $\otimes$ indicates the Kronecker product, and $\tilde{B}\coloneqq\tilde{B}_{ii}$, where $\tilde{B}_{ij}$ is given in (21) (note that $\tilde{B}_{ii}$ has no dependence on $i$). The block matrices $\tilde{V}$ and $\tilde{U}$ are given by $\tilde{V}\coloneqq(\tilde{C}_{1}^{T}~\tilde{C}_{2}^{T}~\cdots~\tilde{C}_{N}^{T})$ and $\tilde{U}\coloneqq(\tilde{D}_{1}~\tilde{D}_{2}~\cdots~\tilde{D}_{N})$, where the components $\tilde{C}_{i}$ and $\tilde{D}_{i}$ are given by

\tilde{C}_{i}\coloneqq\begin{pmatrix}\frac{1-y^{*}}{N}R^{-i}&0_{2\times 1}\\ -\frac{8\sigma y^{*}(1-y^{*})(y^{*}-2)}{(N-1)(\varphi_{1}^{*}+\varphi_{2}^{*})}(y^{*},0)R^{-i}&0\end{pmatrix},\qquad\tilde{D}_{j}\coloneqq\begin{pmatrix}R^{j}&0_{2\times 1}\\ 0_{1\times 2}&0\end{pmatrix}.
Proof.

Observe that $\tilde{C}_{i}\tilde{D}_{j}=\tilde{B}_{ij}$. We then have that

\begin{pmatrix}\tilde{B}_{11}&\tilde{B}_{12}&\tilde{B}_{13}&\cdots\\ \tilde{B}_{21}&\tilde{B}_{22}&\tilde{B}_{23}&\cdots\\ \tilde{B}_{31}&\tilde{B}_{32}&\tilde{B}_{33}&\cdots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix}=\tilde{V}^{T}\tilde{U}.

Thus it follows that

D\tilde{f}=\begin{pmatrix}\tilde{A}&\tilde{B}_{12}&\tilde{B}_{13}&\cdots\\ \tilde{B}_{21}&\tilde{A}&\tilde{B}_{23}&\cdots\\ \tilde{B}_{31}&\tilde{B}_{32}&\tilde{A}&\cdots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix}=I_{N}\otimes(\tilde{A}-\tilde{B})+\tilde{V}^{T}\tilde{U}.

\blacksquare
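The block identity used in this proof, that $\tilde{V}^{T}\tilde{U}$ has $(i,j)$ block $\tilde{C}_{i}\tilde{D}_{j}$, is easy to sanity-check numerically. The sketch below uses arbitrary random $3\times 3$ blocks rather than the paper's specific $\tilde{C}_{i},\tilde{D}_{j}$:

```python
import numpy as np

# If V = (C_1^T C_2^T ... C_N^T) and U = (D_1 D_2 ... D_N) are horizontal
# stacks of 3x3 blocks, then V^T U is the block matrix with (i,j) block C_i D_j.
rng = np.random.default_rng(1)
N = 4
C = [rng.normal(size=(3, 3)) for _ in range(N)]
D = [rng.normal(size=(3, 3)) for _ in range(N)]

V = np.hstack([c.T for c in C])   # 3 x 3N
U = np.hstack(D)                  # 3 x 3N
full = V.T @ U                    # 3N x 3N

for i in range(N):
    for j in range(N):
        block = full[3*i:3*i+3, 3*j:3*j+3]
        assert np.allclose(block, C[i] @ D[j])
print("block structure verified")
```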

We can compute the characteristic polynomial by using the representation given in (22) and the Matrix Determinant Lemma, which we will state here without proof.

Lemma A.3.

Let $A$ be an $n\times n$ invertible matrix, and let $U$ and $V$ be $m\times n$ matrices. Then

\det(A+V^{T}U)=\det(I_{m}+UA^{-1}V^{T})\det(A). (23)
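As a quick numerical sanity check of (23), with generic random matrices (not quantities from the paper):

```python
import numpy as np

# Matrix Determinant Lemma: det(A + V^T U) = det(I_m + U A^{-1} V^T) det(A),
# for A an n-by-n invertible matrix and U, V both m-by-n.
rng = np.random.default_rng(0)
n, m = 6, 2
A = rng.normal(size=(n, n)) + n*np.eye(n)   # diagonal shift keeps A invertible
U = rng.normal(size=(m, n))
V = rng.normal(size=(m, n))

lhs = np.linalg.det(A + V.T @ U)
rhs = np.linalg.det(np.eye(m) + U @ np.linalg.inv(A) @ V.T) * np.linalg.det(A)
print(lhs, rhs)   # the two determinants agree to rounding error
```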

With this we may reduce the determinant of $D\tilde{f}-\mu I_{3N}$ to the product of determinants of $3\times 3$ matrices.

Lemma A.4.

Let $N\in\mathbb{N}$ with $N\geq 3$. Let $\tilde{A},\tilde{B},\tilde{C}_{k},\tilde{D}_{k},\tilde{V},\tilde{U}$ be defined as in Lemma A.2. Then we may write the characteristic polynomial of $D\tilde{f}$ as

\det(D\tilde{f}-\mu I_{3N})=\det\left(\tilde{M}+\sum_{k=1}^{N}\tilde{D}_{k}\,\mathrm{adj}(\tilde{A}-\tilde{B}-\mu I_{3})\tilde{C}_{k}\right)\det\left(\tilde{A}-\tilde{B}-\mu I_{3}\right)^{N-2}, (24)

where M~\tilde{M} is given by

\tilde{M}\coloneqq\begin{pmatrix}\det(\tilde{A}-\tilde{B}-\mu I_{3})&0&0\\ 0&\det(\tilde{A}-\tilde{B}-\mu I_{3})&0\\ 0&0&1\end{pmatrix}.
Proof.

We have that

\begin{align*}
\det(D\tilde{f}-\mu I_{3N})&=\det\left(I_{N}\otimes(\tilde{A}-\tilde{B}-\mu I_{3})+\tilde{V}^{T}\tilde{U}\right)\\
&=\det\left(I_{3}+\tilde{U}(I_{N}\otimes(\tilde{A}-\tilde{B}-\mu I_{3}))^{-1}\tilde{V}^{T}\right)\det\left(I_{N}\otimes(\tilde{A}-\tilde{B}-\mu I_{3})\right)\\
&=\det\left(I_{3}+\tilde{U}(I_{N}\otimes(\tilde{A}-\tilde{B}-\mu I_{3})^{-1})\tilde{V}^{T}\right)\det\left(\tilde{A}-\tilde{B}-\mu I_{3}\right)^{N}\\
&=\det\left(I_{3}+\sum_{k=1}^{N}\tilde{D}_{k}(\tilde{A}-\tilde{B}-\mu I_{3})^{-1}\tilde{C}_{k}\right)\det\left(\tilde{A}-\tilde{B}-\mu I_{3}\right)^{N}\\
&=\det\left(I_{3}+\tfrac{1}{\det(\tilde{A}-\tilde{B}-\mu I_{3})}\sum_{k=1}^{N}\tilde{D}_{k}\,\mathrm{adj}(\tilde{A}-\tilde{B}-\mu I_{3})\tilde{C}_{k}\right)\det\left(\tilde{A}-\tilde{B}-\mu I_{3}\right)^{N}\\
&=\det\left(\tilde{M}+\sum_{k=1}^{N}\tilde{D}_{k}\,\mathrm{adj}(\tilde{A}-\tilde{B}-\mu I_{3})\tilde{C}_{k}\right)\det\left(\tilde{A}-\tilde{B}-\mu I_{3}\right)^{N-2}.
\end{align*}

The second equality in the derivation comes from the Matrix Determinant Lemma, and the final equality comes from the fact that the determinant of an $n\times n$ matrix is $n$-linear in the rows of the matrix and that $\tilde{D}_{k}\,\mathrm{adj}(\tilde{A}-\tilde{B}-\mu I_{3})\tilde{C}_{k}$ is all $0$ below the second row.

\blacksquare

Before computing the characteristic polynomial we need to evaluate the expression $\sum_{k=1}^{N}\tilde{D}_{k}\,\mathrm{adj}(\tilde{A}-\tilde{B}-\mu I_{3})\tilde{C}_{k}$. To do this we need the following lemma.

Lemma A.5.

Let $N\in\mathbb{N}$ with $N\geq 3$. Define $R$ as in (5). Then we have

\sum_{k=1}^{N}R^{k}\begin{pmatrix}a&0\\ 0&b\end{pmatrix}R^{-k}=\frac{N}{2}\begin{pmatrix}a+b&0\\ 0&a+b\end{pmatrix}. (25)
Proof.

Denote $\theta_{k}=\frac{2\pi k}{N}$. We have that

R^{k}\begin{pmatrix}1&0\\ 0&0\end{pmatrix}R^{-k}=\begin{pmatrix}\cos^{2}\theta_{k}&\sin\theta_{k}\cos\theta_{k}\\ \sin\theta_{k}\cos\theta_{k}&\sin^{2}\theta_{k}\end{pmatrix}=\frac{1}{2}\begin{pmatrix}1+\cos(2\theta_{k})&\sin(2\theta_{k})\\ \sin(2\theta_{k})&1-\cos(2\theta_{k})\end{pmatrix}.

We also have that

\sum_{k=1}^{N}\cos(2\theta_{k})=\sum_{k=1}^{N}\sin(2\theta_{k})=0.

This comes from the fact that the sum of the roots of unity is $0$. That is,

\sum_{k=1}^{N}\exp\left(\frac{i2\pi mk}{N}\right)=0\qquad\text{for }m=1,\dots,N-1.

Thus

\sum_{k=1}^{N}R^{k}\begin{pmatrix}1&0\\ 0&0\end{pmatrix}R^{-k}=\frac{N}{2}\begin{pmatrix}1&0\\ 0&1\end{pmatrix}.

Similarly one can show that

\sum_{k=1}^{N}R^{k}\begin{pmatrix}0&0\\ 0&1\end{pmatrix}R^{-k}=\frac{N}{2}\begin{pmatrix}1&0\\ 0&1\end{pmatrix}.

Thus we have that

\sum_{k=1}^{N}R^{k}\begin{pmatrix}a&0\\ 0&b\end{pmatrix}R^{-k}=a\sum_{k=1}^{N}R^{k}\begin{pmatrix}1&0\\ 0&0\end{pmatrix}R^{-k}+b\sum_{k=1}^{N}R^{k}\begin{pmatrix}0&0\\ 0&1\end{pmatrix}R^{-k}=\frac{N}{2}\begin{pmatrix}a+b&0\\ 0&a+b\end{pmatrix}.

\blacksquare
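Lemma A.5 is also easy to verify numerically for a given $N$; here is a minimal sketch at arbitrary values of $a$ and $b$:

```python
import numpy as np

# Check of (25): for R the rotation by 2*pi/N,
# sum_{k=1}^{N} R^k diag(a, b) R^{-k} = (N/2)(a+b) I_2.
N, a, b = 5, 1.3, -0.7
theta = 2*np.pi/N
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

total = np.zeros((2, 2))
for k in range(1, N+1):
    Rk = np.linalg.matrix_power(R, k)
    total += Rk @ np.diag([a, b]) @ Rk.T   # R^{-k} = (R^k)^T for rotations
print(total)                                # approximately (N/2)(a+b) I_2
```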

Now we are in a position to compute the characteristic polynomial.

Lemma A.6.

The characteristic polynomial of $D\tilde{f}$ is given by

(G_{1}(\mu;\sigma)+G_{2}(\mu;\sigma))^{2}(G_{1}(\mu;\sigma))^{N-2}, (26)

where

G_{1}(\mu;\sigma)=(\mu+1)\left[(\mu+1)(4\sigma y^{*}(1-y^{*})-\mu)-\frac{2\sigma(1-y^{*})^{2}(1+y^{*})}{\varphi_{1}^{*}+\varphi_{2}^{*}}\right]

and

G_{2}(\mu;\sigma)=\frac{1-y^{*}}{2}(g_{1}(\mu;\sigma)+g_{3}(\mu;\sigma))-\frac{4N\sigma(1-y^{*})(y^{*}-2)(y^{*})^{2}}{(N-1)(\varphi_{1}^{*}+\varphi_{2}^{*})}g_{2}(\mu;\sigma)

with

\begin{align*}
g_{1}(\mu;\sigma)&=-(1+\mu)\left[4\sigma y^{*}(1-y^{*})-\mu\right]\\
g_{2}(\mu;\sigma)&=\frac{1+\mu}{2}\\
g_{3}(\mu;\sigma)&=-(1+\mu)\left[4\sigma y^{*}(1-y^{*})-\mu\right]+\frac{4\sigma(1-y^{*})^{2}(1+y^{*})}{\varphi_{1}^{*}+\varphi_{2}^{*}}
\end{align*}
Proof.

The adjugate of $\tilde{A}-\tilde{B}-\mu I_{3}$ has the form

\mathrm{adj}(\tilde{A}-\tilde{B}-\mu I_{3})=\begin{pmatrix}g_{1}(\mu;\sigma)&0&g_{2}(\mu;\sigma)\\ 0&g_{3}(\mu;\sigma)&0\\ g_{4}(\mu;\sigma)&0&g_{5}(\mu;\sigma)\end{pmatrix}

where $g_{1}$, $g_{2}$, and $g_{3}$ are given as in the statement of the lemma. The expressions for $g_{4}$ and $g_{5}$ are not necessary, as one can compute

\tilde{D}_{k}\,\mathrm{adj}(\tilde{A}-\tilde{B}-\mu I_{3})\tilde{C}_{k}=\begin{pmatrix}\tilde{Q}_{k}&0_{2\times 1}\\ 0_{1\times 2}&0\end{pmatrix}

where

\tilde{Q}_{k}=R^{k}\left[\frac{1-y^{*}}{N}\begin{pmatrix}g_{1}(\mu;\sigma)&0\\ 0&g_{3}(\mu;\sigma)\end{pmatrix}-\frac{4N\sigma(1-y^{*})(y^{*}-2)y^{*}}{(N-1)(\varphi_{1}^{*}+\varphi_{2}^{*})}\begin{pmatrix}y^{*}g_{2}(\mu;\sigma)&0\\ 0&0\end{pmatrix}\right]R^{-k}

Applying Lemma A.5 gives us $\sum_{k=1}^{N}\tilde{Q}_{k}=I_{2}G_{2}(\mu;\sigma)$.

Now note that $G_{1}(\mu;\sigma)$ is precisely $\det(\tilde{A}-\tilde{B}-\mu I_{3})$. If we combine these derivations with the form of $\det(D\tilde{f}-\mu I_{3N})$ given in (24), we arrive at the desired expression:

\det(D\tilde{f}-\mu I_{3N})=(G_{1}(\mu;\sigma)+G_{2}(\mu;\sigma))^{2}(G_{1}(\mu;\sigma))^{N-2}.

\blacksquare

Appendix B Jacobian for Finite $\lambda$

In this appendix we consider the full system (6). Using a procedure similar to that of Appendix A, we derive a simplified expression for the characteristic polynomial and trace of the Jacobian of (6) at the equilibrium point (9).

Lemma B.1.

Consider the system equilibrium given by (9). The Jacobian of (6) at that equilibrium has the form

Df=\begin{pmatrix}A&B_{12}&B_{13}&\cdots\\ B_{21}&A&B_{23}&\cdots\\ B_{31}&B_{32}&A&\cdots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix} (27)

where there are $5N$ rows and columns, and $A$ and $B_{ij}$ are $5\times 5$ matrices. Specifically we have

A=\begin{pmatrix}\frac{1-y^{*}-N}{N}&0&\frac{1}{2}&0&0\\ 0&\frac{1-y^{*}-N}{N}&0&0&0\\ 0&0&4\sigma y^{*}(1-y^{*})&\frac{24\sigma y^{*}(1-y^{*})\varphi_{2}^{*}}{(\varphi_{1}^{*}+\varphi_{2}^{*})^{2}}&-\frac{24\sigma y^{*}(1-y^{*})\varphi_{1}^{*}}{(\varphi_{1}^{*}+\varphi_{2}^{*})^{2}}\\ \lambda(y^{*}-1)&0&0&-\lambda&0\\ \lambda y^{*}&0&0&0&-\lambda\end{pmatrix}, (28)

with $\varphi_{1}^{*}=\frac{1}{2}(1-y^{*})^{2}$, $\varphi_{2}^{*}=\frac{N-1}{2N}(y^{*})^{2}$, and

B_{ij}=\begin{pmatrix}\frac{1-y^{*}}{N}R^{j-i}&0_{2\times 3}\\ 0_{2\times 2}&0_{2\times 3}\\ \frac{-\lambda}{N-1}(y^{*},0)R^{j-i}&0_{1\times 3}\end{pmatrix},\quad\text{with}\quad R^{k}=\begin{pmatrix}\cos\left(\frac{2\pi k}{N}\right)&-\sin\left(\frac{2\pi k}{N}\right)\\ \sin\left(\frac{2\pi k}{N}\right)&\cos\left(\frac{2\pi k}{N}\right)\end{pmatrix}. (29)
Proof.

Direct computation.

\blacksquare

B.1 Computation of Characteristic Polynomial of $Df$

Lemma B.2.

The Jacobian of (6) at the equilibrium given by (9) can be compactly represented by

Df=I_{N}\otimes(A-B)+V^{T}U, (30)

where $\otimes$ indicates the Kronecker product. Here $B=B_{ii}$, where $B_{ij}$ is given by (29) (note that $B_{ii}$ has no dependence on $i$). Additionally we have

C_{i}\coloneqq\begin{pmatrix}\frac{1-y^{*}}{N}R^{-i}&0_{2\times 3}\\ 0_{2\times 2}&0_{2\times 3}\\ \frac{-\lambda}{N-1}(y^{*},0)R^{-i}&0_{1\times 3}\end{pmatrix},\qquad D_{j}\coloneqq\begin{pmatrix}R^{j}&0_{2\times 3}\\ 0_{3\times 2}&0_{3\times 3}\end{pmatrix},

and block matrices $V\coloneqq(C_{1}^{T}~C_{2}^{T}~\cdots~C_{N}^{T})$ and $U\coloneqq(D_{1}~D_{2}~\cdots~D_{N})$.

Proof.

Observe that $C_{i}D_{j}=B_{ij}$. We then have that

\begin{pmatrix}B_{11}&B_{12}&B_{13}&\cdots\\ B_{21}&B_{22}&B_{23}&\cdots\\ B_{31}&B_{32}&B_{33}&\cdots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix}=V^{T}U.

Thus it follows that

Df=\begin{pmatrix}A&B_{12}&B_{13}&\cdots\\ B_{21}&A&B_{23}&\cdots\\ B_{31}&B_{32}&A&\cdots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix}=I_{N}\otimes(A-B)+V^{T}U.

\blacksquare

Lemma B.3.

Let $N\in\mathbb{N}$ with $N\geq 3$. Let $A,B,C_{k},D_{k},V,U$ be defined as in Lemma B.2. Then we may write the characteristic polynomial of $Df$ as

\det(Df-\mu I_{5N})=\det\left(M+\sum_{k=1}^{N}D_{k}\,\mathrm{adj}(A-B-\mu I_{5})C_{k}\right)\det\left(A-B-\mu I_{5}\right)^{N-2}, (31)

where MM is given by

M=\begin{pmatrix}\det(A-B-\mu I_{5})&0&0&0&0\\ 0&\det(A-B-\mu I_{5})&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1\end{pmatrix}.

B.2 Trace of $Df$

Lemma B.4.

The trace of $Df$ is given by

\text{Tr}(Df)=2\left(1-y^{*}-N-N\lambda\right)+4N\sigma y^{*}(1-y^{*}). (32)
Proof.

This follows from direct computation using (27) and (28).

\blacksquare
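The computation behind (32) amounts to summing the diagonal of $A$ in (28) $N$ times, since the off-diagonal blocks $B_{ij}$ do not contribute to the trace. A minimal numerical check at arbitrary parameter values (chosen here only for illustration):

```python
import numpy as np

# Tr(Df) = N * Tr(A): the diagonal of A in (28) contributes
# (1-y*-N)/N twice, 4*sigma*y*(1-y*) once, and -lambda twice.
ystar, N, lam, sigma = 0.37, 5, 2.1, 0.8

trA = 2*(1 - ystar - N)/N + 4*sigma*ystar*(1 - ystar) - 2*lam
lhs = N*trA                                               # trace of the block matrix
rhs = 2*(1 - ystar - N - N*lam) + 4*N*sigma*ystar*(1 - ystar)   # formula (32)
print(lhs, rhs)   # the two expressions agree
```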