
Two Dimensional Swarm Formation in Time-invariant External Potential:
Modelling, Analysis, and Control

Yanran Wang [email protected]    Takashi Hikihara [email protected] Department of Electrical Engineering, Kyoto University.
Abstract

Clustering formation has been observed in many organisms in Nature. It has desirable properties for designing energy-efficient protocols for Wireless Sensor Networks (WSNs). In this paper, we present a new approach to energy-efficient WSN protocols which investigates how the cluster formation of sensors responds to an external time-invariant energy potential. In this approach, the necessity of data transmission to the Base Station is eliminated, thereby conserving energy in WSNs. We define a swarm formation topology and estimate the curvature of the external potential manifold by analyzing the change of the swarm formation in time. We also introduce a dynamic formation control algorithm for maintaining the defined swarm formation topology in the external potential.

Wireless Sensor Networks, nonlinear dynamics, Koopman operator theory, dynamic mode decomposition, curvature, swarm formation

I Introduction

Wireless Sensor Networks (WSNs) have attracted much attention due to their ability to provide ubiquitous and multi-faceted situational awareness, with a host of applications ranging from structural health monitoring, habitat surveillance, and target detection to power system management, smart car parking, and wireless luggage tags Erol-Kantarci and Mouftah (2011); Gao and Jin ; Li et al. (2002); Ling et al. (2009); Srikanth et al. . A WSN depends on spatially distributed sensor nodes to measure and collect the desired environmental data within their sensing range, and then transmit them to a control center called the Base Station (BS). The ideal WSN should be autonomous, robust, scalable, and have an extended network lifetime. However, WSNs are usually deployed in hostile environments where energy resources are limited. As energy is constrained and data transmission is the most energy-costly operation, WSN algorithms need to be architected so that data transmission, especially to the BS, is minimized. Clustering has become an often-utilized technique in designing energy-efficient algorithms for WSNs.

Cluster formations can be readily observed in Nature, such as bird flocking and fish schooling. These organism systems express collective motion behaviours and are studied in a relatively novel interdisciplinary field of research, Swarm Intelligence (SI). Individual agents in the swarm are simple agents with limited sensing abilities and computational rules, interacting with each other locally. Nevertheless, the swarm as a whole demonstrates emergent global behaviours which are unknown to the individuals. SI systems have the desirable properties of being distributed, autonomous, scalable, and robust Blum and Li (2008). All of these are key in designing algorithms for WSNs.

A plethora of clustering algorithms have been developed Gomathi and Sivanesan (2019); Jin et al. (2008); Mehdi Afsar and Tayarani-N (2014). However, besides the main objective of energy conservation, the existing clustering algorithms mainly focus on the optimization of sensor protocol routines to enhance WSNs in scalability, fault tolerance, data aggregation, load balancing, and network topology stability. In all of them, data transmission to the BS is unavoidable. To address the energy efficiency challenge from a new perspective, this paper combines the SI concept with WSNs. We focus on designing a WSN algorithm where the desired environmental information is obtained by analyzing the change in sensor cluster formations (swarm formation) rather than through information collected directly by individual sensors, thereby minimizing the energy expenditure in data transmission between sensors and eliminating the necessity of a BS.

We identify the environmental information as an external potential hypersurface $M$ of $(n-1)$ dimensions ($n=3$ or $n=4$). Mathematically, $M$ is a Riemannian manifold defined by the set of solutions to a single equation

F(x_{1},\ldots,x_{n})=0, \qquad (1)

where $F$ is a $C^{\infty}$ function. We introduce a formation analysis algorithm which uses the swarm formation in the external potential to estimate the curvature, which is invariant under isometry, of the manifold $M$.

This paper is organized as follows. In section II, we introduce the topology of the swarm lattice formation and demonstrate how to construct such a formation in an arbitrary external potential. In section III, we explain the theoretical basis and computations for the formation analysis algorithm. In section IV, we formulate a formation control algorithm which shows how the swarm lattice formation can be obtained by dynamical control of the individual agents. In section V, we discuss the simulation results of both the formation analysis and formation control algorithms.

II Swarm formation

The topology definition of the swarm formation is inspired by Reynolds Reynolds (1987) and Olfati-Saber Olfati-Saber (2006). In this paper, we consider line formation. In the line formation, agents are divided into two groups: one head agent and the following agents. The head agent acts as an initial stimulus to the swarm motion. Its dynamics can be predefined and are not affected by the following agents in the swarm.

Define a path graph $P_{n}$ as a pair $(\mathcal{V},\mathcal{E})$ that consists of a set of vertices $\mathcal{V}=\{v_{1},v_{2},\dots,v_{n}\}$ and a set of edges $\mathcal{E}\subseteq\{\{v_{i},v_{i+1}\}\}$, where $i=1,2,\dots,n-1$. Each vertex represents an agent of the swarm, while the edges represent the inter-agent communications. Let $q_{i}$ denote the position of agent $v_{i}$ for all $v_{i}\in\mathcal{V}$. The vector $q=(q_{1},\dots,q_{n})^{T}$ is the configuration of all agents of the swarm. The inter-agent distance, that is, the length of the edges, is defined to be the geodesic length between two connected vertices over $G(q)$. To maintain identical inter-agent distances, we consider an algebraic constraint on the edges,

\mathrm{dis}(\mathcal{E}_{v_{i},v_{i+1}})=d,\qquad\forall\,v_{i}\in\mathcal{V},\quad d\in\mathbb{R}. \qquad (2)

A configuration $q$ that satisfies the set of constraints in (2) is referred to as a lattice formation.

To formulate a lattice formation in an arbitrary external potential $M$, we need to first define the trajectory of the head agent, then construct a representation of the edges, a parallel vector field that is metrically orthogonal to the head agent trajectory, and finally calculate the geodesic deviation vector field. The trajectories of the following agents are the integral curves of the geodesic deviation vector field. The theoretical basis Lee (1997) and details of the formulation are discussed as follows.

Let $U\subseteq\mathbb{R}^{n}$ be a non-empty open subset and $F:U\rightarrow\mathbb{R}$ a $C^{\infty}$ function defining the external potential. Let $M\subseteq U\times\mathbb{R}$ be the graph of $F$. The closed subset $M$ of $U\times\mathbb{R}$ projects homeomorphically onto $U$, with inverse $(x_{1},\ldots,x_{n})\mapsto(x_{1},\ldots,x_{n},F(x_{1},\ldots,x_{n}))$, which is a smooth mapping from $U$ to $U\times\mathbb{R}$. Hence $M$ is a closed smooth submanifold of $U\times\mathbb{R}$. Using the standard Riemannian metric on $U\times\mathbb{R}\subseteq\mathbb{R}^{n+1}$, the induced metric $g$ on $M$ at a point $p\in M$ is

g(p)=\langle\partial_{q_{i}}|_{p},\partial_{q_{j}}|_{p}\rangle_{p}\,dq_{i}(p)\otimes dq_{j}(p) \qquad (3)

with coordinate chart $\{q_{i}\}$ on $M$. Each $\partial_{q_{i}}|_{p}\in T_{p}M$ can be represented as a linear combination of $\{\partial_{x_{i}}|_{p}\}\subset T_{p}(\mathbb{R}^{n+1})$, given as

\partial_{q_{i}}|_{p}=\partial_{x_{i}}|_{p}+\partial_{x_{i}}F(p)\,\partial_{x_{n+1}}|_{p}. \qquad (4)

Consider the aforementioned graph $M$ as a $C^{\infty}$ Riemannian manifold. Given a curve $C:[a,b]\longrightarrow M$, a vector field $X$ along $C$ is any section of the tangent bundle $TM$ over $C$ ($X:[a,b]\longrightarrow TM$ with projection $\pi:TM\longrightarrow M$ such that $\pi\circ X=C$). If $M$ is a smooth manifold, all vector fields on the manifold are also smooth. We denote the collection of all smooth vector fields on the manifold $M$ as $\mathfrak{X}(M)$. For a Riemannian manifold $(M,g)$, the Levi-Civita connection $\nabla_{g}$ on $M$ is the unique connection on $TM$ that has both metric compatibility and torsion freeness. The Christoffel symbols of the second kind are the connection coefficients (in a local chart) of the Levi-Civita connection, denoted as

\Gamma^{a}_{\,\,bc}=\frac{1}{2}g^{ad}(\partial_{c}g_{db}+\partial_{b}g_{dc}-\partial_{d}g_{bc}). \qquad (5)

For a Riemannian manifold $(M,g)$, a curve is called a geodesic with respect to the connection $\nabla_{g}$ if its acceleration is zero, that is, a curve $\gamma$ where $\nabla_{\dot{\gamma}}\dot{\gamma}=0$. A geodesic curve in an $n$-dimensional Riemannian manifold can be expressed as a system of second-order ordinary differential equations,

\frac{d^{2}\gamma^{\lambda}}{dt^{2}}+\Gamma^{\lambda}_{\,\,\mu\nu}\frac{d\gamma^{\mu}}{dt}\frac{d\gamma^{\nu}}{dt}=0. \qquad (6)

Geodesics are locally the shortest paths between two points on the manifold.

We predefine the trajectory of the head agent to be a geodesic curve on the manifold. Based on (6), for a two-dimensional manifold $(M,g)$, the head agent trajectory $h(t)$ can be expressed as a system of ordinary differential equations

\begin{split}\dot{h}_{1}&=h_{3},\\ \dot{h}_{2}&=h_{4},\\ \dot{h}_{3}&=-\Gamma^{x}_{\,\,xx}(h_{3})^{2}-2\Gamma^{x}_{\,\,xy}h_{3}h_{4}-\Gamma^{x}_{\,\,yy}(h_{4})^{2},\\ \dot{h}_{4}&=-\Gamma^{y}_{\,\,xx}(h_{3})^{2}-2\Gamma^{y}_{\,\,xy}h_{3}h_{4}-\Gamma^{y}_{\,\,yy}(h_{4})^{2},\end{split} \qquad (7)

where the basis $\{x,y\}$ is used in the indices, and $\dot{h}$ denotes the first derivative with respect to time. Each Christoffel symbol depends entirely on the metric at a given point of $M$ in terms of the basis $\{x,y\}$. Given initial conditions $[x_{1},x_{2},\dot{x}_{1},\dot{x}_{2}]$, (7) is guaranteed to have a solution according to the Picard–Lindelöf theorem. This choice of head agent trajectory is made to simplify the agent's dynamic control. On the geodesic, the head agent travels with constant velocity given its initial position and velocity, thus requiring no outside reference beacon.
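As a concrete illustration, the system (7) can be integrated numerically. The sketch below is a minimal example, not the paper's simulation code: it uses the elliptic paraboloid $F(x,y)=(x^{2}+y^{2})/a$ of section V with illustrative values of $a$ and the step size, and exploits the fact that for a graph surface the Christoffel symbols take the closed form $\Gamma^{k}_{\,\,ij}=\partial_{i}\partial_{j}F\,\partial_{k}F/(1+|\nabla F|^{2})$. It then checks the claim above, that the metric speed stays constant along the computed geodesic.

```python
import numpy as np

# Head-agent geodesic on the graph of F(x, y) = (x^2 + y^2) / a.
a = 10.0

def partials(x, y):
    """First and second partial derivatives of F for this surface."""
    Fx, Fy = 2 * x / a, 2 * y / a
    Fxx, Fxy, Fyy = 2 / a, 0.0, 2 / a
    return Fx, Fy, Fxx, Fxy, Fyy

def geodesic_rhs(state):
    """Right-hand side of (7); state = [x, y, xdot, ydot].
    For a graph z = F(x, y), the Christoffel symbols reduce to
    Gamma^k_ij = (d_i d_j F)(d_k F) / (1 + |grad F|^2)."""
    x, y, vx, vy = state
    Fx, Fy, Fxx, Fxy, Fyy = partials(x, y)
    w = 1.0 + Fx ** 2 + Fy ** 2
    q = Fxx * vx ** 2 + 2 * Fxy * vx * vy + Fyy * vy ** 2
    return np.array([vx, vy, -q * Fx / w, -q * Fy / w])

def rk4_step(state, dt):
    """One classical Runge-Kutta step for the geodesic ODE."""
    k1 = geodesic_rhs(state)
    k2 = geodesic_rhs(state + 0.5 * dt * k1)
    k3 = geodesic_rhs(state + 0.5 * dt * k2)
    k4 = geodesic_rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def metric_speed(state):
    """Speed of the curve measured in the induced metric g of (3)."""
    x, y, vx, vy = state
    Fx, Fy, _, _, _ = partials(x, y)
    return np.sqrt((1 + Fx ** 2) * vx ** 2 + 2 * Fx * Fy * vx * vy
                   + (1 + Fy ** 2) * vy ** 2)

state = np.array([0.0, 0.0, np.cos(np.pi / 18), np.sin(np.pi / 18)])
s0 = metric_speed(state)
for _ in range(2000):  # integrate to t = 20 with dt = 0.01
    state = rk4_step(state, 0.01)
speed_drift = abs(metric_speed(state) - s0)
```

The near-zero `speed_drift` reflects the geometric fact used in the text: a geodesic is traversed at constant metric speed, so the head agent needs no outside reference beacon.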

The lengths of the edges of the lattice formation need to be identical. To represent the edges, a parallel vector field $K$ that is orthogonal to the head agent trajectory $h(t)$ is constructed as

\nabla_{K}K=0. \qquad (8)

$K$ forms a family of geodesics, where we require the initial position and velocity conditions $(u^{j}_{0},v^{j}_{0})$ of the $j$th geodesic $\gamma_{j}\in K$ to be $u^{j}_{0}=h(t_{j})$ and $v^{j}_{0}=v$ with $g(v,\dot{h}(t_{j}))=0$, $j\in\mathbb{Z}$. The frequency of inter-agent communications is defined by the number of geodesics in $K$ within a given traveling time.

The separation vector $s(t)$ connects a point $\gamma(t)$ on one geodesic to a point $\gamma(t)+s(t)$ on a nearby geodesic at the same time. For the parallel vector field $K$, we can construct a separation vector field $S$ such that the $s\in S$ are the separation vectors described above. For the swarm lattice formation, the following agents' trajectories are the integral curves of $S$. Fig. 1 gives a visual example of a two-dimensional swarm lattice formation traveling in an external potential.

Figure 1: Two-dimensional visual example of swarm lattice formation in external potential $F=\sin(\frac{x_{1}}{a})+\cos(\frac{x_{2}}{a})$, $a=2$. Travel time $t=20$. Head agent in red, with initial condition $[0,\,0,\,\cos(\pi/18),\,\sin(\pi/18)]$; following agents in orange.

III Formation analysis

In Fig. 1, one can notice the change in the swarm lattice formation as it travels, due to the curvature of the external potential manifold. As an invariant property under isometry, the curvature tensor gives valuable information about the external potential manifold itself. We quantify this change as the acceleration of the separation vectors $s\in S$ along $K$, which is equivalent to the change in the difference of neighbouring agents' velocities. The geodesic deviation equation relates the acceleration of the separation vector between two neighbouring geodesic curves to the Riemann curvature tensor. The theoretical elements Lee (1997); Ivancevic and Ivancevic (2007) and the analysis method are described in detail as follows.

The Riemann curvature tensor is a $(1,3)$ tensor defined through the Lie bracket on $(M,g)$ as

R(X,Y)Z=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,Y]}Z, \qquad (9)

where $X,Y,Z\in\mathfrak{X}(M)$, and $R(X,Y)Z$ is vector-valued. $R(X,Y)Z$ can be expressed in a local chart as

R^{\rho}_{\,\,\,\sigma\mu\nu}=\partial_{\mu}\Gamma^{\rho}_{\,\,\nu\sigma}-\partial_{\nu}\Gamma^{\rho}_{\,\,\mu\sigma}+\Gamma^{\rho}_{\,\,\mu\lambda}\Gamma^{\lambda}_{\,\,\nu\sigma}-\Gamma^{\rho}_{\,\,\nu\lambda}\Gamma^{\lambda}_{\,\,\mu\sigma}. \qquad (10)

By our definition in section II, the separation vector $s$ can be written as

s(t)=\widetilde{\gamma}(t)-\gamma(t), \qquad (11)

between two nearby geodesics. The separation acceleration field $W$ is

W=\nabla_{\dot{\gamma}}\nabla_{\dot{\gamma}}S=\nabla_{\dot{\gamma}}V, \qquad (12)

where $V$ is the separation velocity field. In a local chart, $W$ and $V$ can be expressed as

V^{\rho}=\frac{dS^{\rho}}{dt}+\Gamma^{\rho}_{\,\,\mu\nu}\dot{\gamma}^{\mu}S^{\nu}, \qquad (13)
W^{\rho}=\frac{dV^{\rho}}{dt}+\Gamma^{\rho}_{\,\,\lambda\sigma}\dot{\gamma}^{\lambda}V^{\sigma}, \qquad (14)

where $\dot{\gamma}\equiv\frac{d\gamma}{dt}$. Combining (13) and (14) gives

\begin{split}W^{\rho}=&\frac{d^{2}S^{\rho}}{dt^{2}}+2\Gamma^{\rho}_{\,\,\mu\nu}\dot{\gamma}^{\mu}\frac{dS^{\nu}}{dt}+\frac{\partial\Gamma^{\rho}_{\,\,\mu\nu}}{\partial\gamma^{\lambda}}\dot{\gamma}^{\lambda}\dot{\gamma}^{\mu}S^{\nu}\\ &+\Gamma^{\rho}_{\,\,\mu\nu}\ddot{\gamma}^{\mu}S^{\nu}+\Gamma^{\rho}_{\,\,\lambda\sigma}\Gamma^{\sigma}_{\,\,\mu\nu}\dot{\gamma}^{\mu}\dot{\gamma}^{\lambda}S^{\nu},\end{split} \qquad (15)

where $\ddot{\gamma}\equiv\frac{d^{2}\gamma}{dt^{2}}$. Rearranging (11) gives

\widetilde{\gamma}(t)=\gamma(t)+s(t), \qquad (16)

where the separation vector $s(t)$ is treated as an expansion parameter. Making use of the fact that both $\gamma$ and $\widetilde{\gamma}$ are geodesics, we have

0=\frac{d^{2}S^{\rho}}{dt^{2}}+2\Gamma^{\rho}_{\,\,\mu\nu}\dot{\gamma}^{\mu}\frac{dS^{\nu}}{dt}+\frac{\partial\Gamma^{\rho}_{\,\,\mu\nu}}{\partial\gamma^{\lambda}}\dot{\gamma}^{\mu}\dot{\gamma}^{\nu}S^{\lambda}. \qquad (17)

Inserting (17) into (15), the first-order geodesic deviation equation is

\begin{split}W^{\rho}=&-\Big(\frac{\partial\Gamma^{\rho}_{\,\,\mu\lambda}}{\partial\gamma^{\nu}}-\frac{\partial\Gamma^{\rho}_{\,\,\mu\nu}}{\partial\gamma^{\lambda}}+\Gamma^{\sigma}_{\,\,\mu\lambda}\Gamma^{\rho}_{\,\,\sigma\nu}-\Gamma^{\rho}_{\,\,\lambda\sigma}\Gamma^{\sigma}_{\,\,\mu\nu}\Big)\dot{\gamma}^{\mu}S^{\nu}\dot{\gamma}^{\lambda}\\ \equiv&-R^{\rho}_{\,\,\,\mu\nu\lambda}\dot{\gamma}^{\mu}S^{\nu}\dot{\gamma}^{\lambda}.\end{split} \qquad (18)

A vector field $J$ along a geodesic $\gamma$ is called a Jacobi field if

\ddot{J}+R(J,\dot{\gamma})\dot{\gamma}=0, \qquad (19)

where $\ddot{J}\equiv\nabla_{\frac{d}{dt}}\nabla_{\frac{d}{dt}}J$ and $\dot{\gamma}\equiv\frac{d\gamma}{dt}$. Obviously, the deviation vector field $S$ is a Jacobi field.

Furthermore, sectional curvature is an equivalent but more geometrical description of the curvature of Riemannian manifolds. Let the tangent 2-plane $\Pi_{p}$ be the two-dimensional subspace of $T_{p}M$ defined as $\Pi_{p}\equiv\mathrm{span}\{u,v\}$, with $u,v\in T_{p}M$. The sectional curvature $\kappa$ of $(M,g)$ at a point $p\in M$ with respect to the plane $\Pi_{p}$ is defined as

\begin{split}\kappa(\Pi_{p})&=\kappa(X_{p},Y_{p})\\ &=\frac{\langle R(X,Y)X,Y\rangle_{p}}{|X|^{2}_{p}|Y|^{2}_{p}-\langle X,Y\rangle^{2}_{p}}.\end{split} \qquad (20)

Substituting the vector fields $K$ and $S$ constructed in section II into (20), we have

\kappa(K,S)=\frac{\langle R(K,S)K,S\rangle}{|K|^{2}|S|^{2}-\langle K,S\rangle^{2}}. \qquad (21)

Moreover, $S$ is in fact a Jacobi field along $K$ that is also orthogonal to $K$. That is, $W\equiv\nabla^{2}_{\tau}S=-R(K,S)K$ and $\langle K,S\rangle=0$. Combining this with the fact that the integral curves of $K$ are geodesics set to have velocity $|\dot{\gamma}|=1$, the equation for the sectional curvature simplifies to

W=-\kappa(K,S)S. \qquad (22)

For a two-dimensional Riemannian manifold $(M,g)$, there is only one sectional curvature at each point $p\in M$.

In summary, by recording the change of the swarm lattice formation in the external potential, an observer can quantify two variables: $S$, the separation vectors between neighbouring agents, and $W$, the second-order change of those separations in time. The geodesic deviation equation (22) relates $W$ and $S$ through the sectional curvature $\kappa$ of the external potential manifold.
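Equation (22) suggests a simple numerical recipe, sketched below under illustrative assumptions: on a surface of constant sectional curvature $\kappa=1/R^{2}$ (a sphere of radius $R$, not one of the paper's test potentials), an orthogonal Jacobi field along a unit-speed geodesic has the known scalar form $S(t)=R\sin(t/R)$, so an observer who samples the separation at three successive communication times can recover $\kappa$ from a second difference.

```python
import numpy as np

# Constant-curvature check of the scalar form of (22), S'' = -kappa S.
R = 2.0                         # illustrative sphere radius
kappa_true = 1.0 / R ** 2

def S(t):
    """Orthogonal Jacobi field with S(0) = 0, S'(0) = 1 on a sphere of radius R."""
    return R * np.sin(t / R)

# Three separation samples approximate the separation acceleration W
# by a second-order central difference.
dt, t = 1e-3, 1.0
W = (S(t + dt) - 2 * S(t) + S(t - dt)) / dt ** 2
kappa_est = -W / S(t)           # invert (22)
```

The estimate agrees with $1/R^{2}$ up to the $O(dt^{2})$ error of the finite difference; on the paper's non-constant-curvature potentials the same second difference yields a pointwise estimate instead.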

IV Formation control

To stay in lattice formation as the swarm travels through the external potential, each individual agent needs dynamic control of its trajectory. The trajectory of an individual agent can be considered as a dynamical system of the form Strogatz (1994)

\frac{d}{dt}\mathbf{x}(t)=\mathbf{F}(\mathbf{x}(t)), \qquad (23)

where $\mathbf{x}$ is the state of the system, that is, the position of the agent, and $\mathbf{F}$ is a vector field depending on the system. In practice, we consider the equivalent discrete-time dynamical system,

\mathbf{x}_{k+1}=\mathbf{F}(\mathbf{x}_{k}), \qquad (24)

where $\mathbf{x}_{k}$ is the sampling of the agent trajectory in (23) at discrete timesteps $\Delta t$. If we can assume the system dynamics are linear, then we can work with the form

\frac{d}{dt}\mathbf{x}=\mathbf{Ax}, \qquad (25)

which admits the closed-form solution

\mathbf{x}(t_{0}+t)=e^{\mathbf{A}t}\mathbf{x}(t_{0}). \qquad (26)

The entire system dynamics are characterized by the eigenvalues and eigenvectors of the matrix $\mathbf{A}$. Using spectral decomposition, one can simplify the dynamical system to

\frac{d}{dt}\mathbf{w}=\mathbf{\lambda}\mathbf{w}, \qquad (27)

with $\mathbf{A}=\mathbf{P\lambda P}^{-1}$ and $\mathbf{w}=\mathbf{P}^{-1}\mathbf{x}$. Using $\mathbf{A}$, we can also predict the agents' trajectories in time, thereby controlling the swarm to remain in its lattice formation as time evolves.
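As a minimal sketch of (25)–(27), the snippet below propagates a state through the decoupled coordinates $\mathbf{w}=\mathbf{P}^{-1}\mathbf{x}$; the $2\times 2$ matrix is an arbitrary illustrative choice, not taken from the paper.

```python
import numpy as np

# Spectral decomposition A = P Lambda P^{-1}, then the closed form (26):
# x(t0 + t) = P e^{Lambda t} P^{-1} x(t0).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # eigenvalues -1 and -2
lam, P = np.linalg.eig(A)

def propagate(x0, t):
    w0 = np.linalg.solve(P, x0)     # w = P^{-1} x, decoupled coordinates
    wt = np.exp(lam * t) * w0       # each mode obeys dw/dt = lambda w, as in (27)
    return (P @ wt).real

x0 = np.array([1.0, 0.0])
xt = propagate(x0, 1.0)
```

For this matrix the solution is known analytically, $x_{1}(t)=2e^{-t}-e^{-2t}$ and $x_{2}(t)=-2e^{-t}+2e^{-2t}$, which the eigendecomposition route reproduces.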

Even though it is desirable to work with linear dynamical systems, the curvature of the external potential manifold, and thus the trajectories of the agents, are essentially nonlinear. To analyze nonlinear dynamics with linear techniques, we utilize Koopman operator theory.

IV.1 Koopman operator theory and dynamic mode decomposition

Bernard O. Koopman proved the possibility of representing a nonlinear dynamical system in terms of an infinite-dimensional linear operator acting on a Hilbert space of measurement functions of the state of the system. The basic elements of Koopman spectral analysis are discussed below Brunton and Kutz (2019); Mauroy, Mezic, and Susuki (2020); Susuki et al. (2016); Mezić (2015).

Consider a real-valued measurement function $g:\mathbf{M}\longrightarrow\mathbb{R}$, known as an observable, which belongs to an infinite-dimensional Hilbert space. The Koopman operator $K_{t}$ is an infinite-dimensional linear operator that acts on the observable $g$ as

K_{t}g=g\circ\mathbf{F}_{t}, \qquad (28)

where $\mathbf{F}_{t}$ is the system dynamics and $\circ$ is the composition operator. For a discrete-time system with timestep $\Delta t$, it becomes

g(\mathbf{x}_{k+1})=K_{\Delta t}g(\mathbf{x}_{k}). \qquad (29)

Even though the Koopman operator is linear, it is also infinite-dimensional. Thus it is important to identify key measurement functions that evolve linearly with the flow of the dynamics. Eigen-decomposition of the Koopman operator can provide us with such a set of measurement functions, which captures the dynamics of the system and also behaves linearly in time. A discrete-time Koopman eigenfunction $\varphi(\mathbf{x})$ and its corresponding eigenvalue $\lambda$ satisfy

\varphi(\mathbf{x}_{k+1})=K_{\Delta t}\varphi(\mathbf{x}_{k})=\lambda\varphi(\mathbf{x}_{k}). \qquad (30)

Nonlinear dynamics become linear in these eigenfunction coordinates.

In a general dynamical system, the measurement functions can be arranged into a vector $\mathbf{g}$:

\mathbf{g}(\mathbf{x})=\begin{bmatrix}g_{1}(\mathbf{x})\\ g_{2}(\mathbf{x})\\ \vdots\\ g_{m}(\mathbf{x})\end{bmatrix}. \qquad (31)

Each measurement function may be expanded in terms of the eigenfunctions $\varphi_{j}(\mathbf{x})$, so the vector $\mathbf{g}$ can be written as:

\mathbf{g}(\mathbf{x})=\sum_{j=1}^{\infty}\varphi_{j}(\mathbf{x})\mathbf{v}_{j}, \qquad (32)

where $\mathbf{v}_{j}$ is the $j$-th Koopman mode associated with the eigenfunction $\varphi_{j}$. Given this decomposition, we can represent the dynamics of the system in terms of the measurement function $\mathbf{g}$ as

\begin{split}\mathbf{g}(\mathbf{x}_{k})&=K^{k}_{\Delta t}\mathbf{g}(\mathbf{x}_{0})\\ &=K^{k}_{\Delta t}\sum_{j=1}^{\infty}\varphi_{j}(\mathbf{x}_{0})\mathbf{v}_{j}\\ &=\sum_{j=1}^{\infty}K^{k}_{\Delta t}\varphi_{j}(\mathbf{x}_{0})\mathbf{v}_{j}\\ &=\sum_{j=1}^{\infty}\lambda^{k}_{j}\varphi_{j}(\mathbf{x}_{0})\mathbf{v}_{j}.\end{split} \qquad (33)

The sequence of triples $\{(\lambda_{j},\varphi_{j},\mathbf{v}_{j})\}^{\infty}_{j=1}$ is the Koopman mode decomposition.

Finding such Koopman modes is extremely difficult, even for systems with known governing equations. For systems with unknown governing equations, such as in our situation, the dynamic mode decomposition (DMD) algorithm is adopted. As a modal decomposition technique, DMD was first introduced in the fluid dynamics community to analyze high-dimensional fluid states. It provides information about the dynamics of a flow as a superposition of modes (similar to eigenmodes), even if the flow is nonlinear. DMD analysis can be considered an approximation to Koopman spectral analysis, and it also provides a numerical method for computing Koopman modes. In this paper, the formation control algorithm is inspired by the online DMD framework of Zhang et al. Zhang et al. (2019).

We consider a sequential set of data vectors $\{\mathbf{x}_{0},\mathbf{x}_{1},\dots,\mathbf{x}_{m}\}$, where each $\mathbf{x}_{k}\in\mathbb{R}^{n}$ is the system state at time $t_{k}$, $k=0,\dots,m$. These data can be arranged into two matrices

\mathbf{X}=\begin{bmatrix}|&|&&|\\ \mathbf{x}_{0}&\mathbf{x}_{1}&\dots&\mathbf{x}_{m-1}\\ |&|&&|\end{bmatrix}, \qquad (34)
\mathbf{Y}=\begin{bmatrix}|&|&&|\\ \mathbf{x}_{1}&\mathbf{x}_{2}&\dots&\mathbf{x}_{m}\\ |&|&&|\end{bmatrix}. \qquad (35)

We assume there exists an operator $\mathbf{A}$ that approximates the nonlinear dynamics of the system as

\mathbf{x}_{k+1}\approx\mathbf{A}\mathbf{x}_{k}. \qquad (36)

Then the best-fit operator $\mathbf{A}$ is defined as

\mathbf{A}=\mathop{\arg\min}_{\mathbf{A}}\left\lVert\mathbf{Y}-\mathbf{AX}\right\rVert_{F}, \qquad (37)

where $\left\lVert\cdot\right\rVert_{F}$ is the Frobenius norm. The unique solution to this least-squares problem is given by

\mathbf{A}=\mathbf{YX}^{\dagger}, \qquad (38)

where $\mathbf{X}^{\dagger}$ denotes the Moore-Penrose pseudoinverse of $\mathbf{X}$.
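A minimal numerical check of (34)–(38): snapshots of a known linear map (a plane rotation, an illustrative stand-in for agent trajectory data) are stacked into $\mathbf{X}$ and $\mathbf{Y}$, and the best-fit operator is recovered through the pseudoinverse.

```python
import numpy as np

theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])

# Snapshot matrices (34)-(35): columns are consecutive states x_0 ... x_m.
m = 50
X = np.empty((2, m))
Y = np.empty((2, m))
x = np.array([1.0, 0.0])
for k in range(m):
    X[:, k] = x
    x = A_true @ x
    Y[:, k] = x

A_dmd = Y @ np.linalg.pinv(X)   # least-squares solution (38), A = Y X^+
x_next = A_dmd @ Y[:, -1]       # one-step prediction of x_{m+1}
```

With noiseless snapshots of a rank-covering trajectory, `A_dmd` recovers the true map to numerical precision; with measured data it is the Frobenius-optimal linear fit of (37).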

Having $\mathbf{A}$, computed from data from $t_{0}$ up to $t_{m}$, we can predict $\mathbf{x}_{m+1}$ of the system, that is, control the agent's trajectory. Since new data are fed into the system as time progresses, $\mathbf{A}$ also changes. Unlike the standard DMD algorithm, the online DMD algorithm updates the operator $\mathbf{A}$ using the new system data, providing a more reliable operator $\mathbf{A}$ for the prediction of future system states. The algorithm does not recompute the least-squares problem for the whole system once new data arrive. Instead, it computes $\mathbf{A}_{k+1}$ given $\mathbf{A}_{k}$ and the new pair of data $(\mathbf{x}_{k+1},\mathbf{y}_{k+1})$, on the assumption that $\mathbf{A}_{k+1}$ is close to $\mathbf{A}_{k}$ in some sense.

Using (38), we have

\mathbf{A}_{k}=\mathbf{Y}_{k}\mathbf{X}_{k}^{T}(\mathbf{X}_{k}\mathbf{X}_{k}^{T})^{-1}. \qquad (39)

We define two new matrices $\mathbf{P}_{k}$ and $\mathbf{Q}_{k}$,

\mathbf{Q}_{k}=\mathbf{Y}_{k}\mathbf{X}_{k}^{T}, \qquad (40)
\mathbf{P}_{k}=(\mathbf{X}_{k}\mathbf{X}_{k}^{T})^{-1}, \qquad (41)

so that $\mathbf{A}_{k}=\mathbf{Q}_{k}\mathbf{P}_{k}$. $\mathbf{P}_{k}$ is well-defined if we ensure that $\mathbf{X}_{k}\mathbf{X}_{k}^{T}$ is invertible. The operator $\mathbf{A}$ at time $t_{k+1}$ is related to $\mathbf{A}_{k}$ as

\begin{split}\mathbf{A}_{k+1}&=\mathbf{Q}_{k+1}\mathbf{P}_{k+1}\\ &=(\mathbf{Q}_{k}+\mathbf{y}_{k+1}\mathbf{x}_{k+1}^{T})(\mathbf{P}_{k}^{-1}+\mathbf{x}_{k+1}\mathbf{x}_{k+1}^{T})^{-1}.\end{split} \qquad (42)

Using the Sherman-Morrison formula, we can express $\mathbf{P}_{k+1}$ as

\begin{split}\mathbf{P}_{k+1}&=(\mathbf{P}_{k}^{-1}+\mathbf{x}_{k+1}\mathbf{x}_{k+1}^{T})^{-1}\\ &=\mathbf{P}_{k}-\frac{\mathbf{P}_{k}\mathbf{x}_{k+1}\mathbf{x}_{k+1}^{T}\mathbf{P}_{k}}{1+\mathbf{x}_{k+1}^{T}\mathbf{P}_{k}\mathbf{x}_{k+1}}.\end{split} \qquad (43)

Thus $\mathbf{P}_{k+1}$ can be updated more efficiently, without the computation of inverses. Combining (43) and (42), we obtain the formula

\mathbf{A}_{k+1}=\mathbf{A}_{k}+\frac{(\mathbf{y}_{k+1}-\mathbf{A}_{k}\mathbf{x}_{k+1})\mathbf{x}_{k+1}^{T}\mathbf{P}_{k}}{1+\mathbf{x}_{k+1}^{T}\mathbf{P}_{k}\mathbf{x}_{k+1}}. \qquad (44)

$\mathbf{A}_{k+1}$ is computed using $\mathbf{A}_{k}$ and the new data pair $\{\mathbf{x}_{k+1},\mathbf{y}_{k+1}\}$.
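The rank-one update pair (43)–(44) can be checked against the batch solution (39) on synthetic noiseless data; the dynamics matrix and batch size below are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([[0.9, 0.2],
                   [-0.1, 0.8]])

# Initialise from a small batch so that X X^T is invertible.
X = rng.standard_normal((2, 8))
Y = A_true @ X
P = np.linalg.inv(X @ X.T)          # P_k of (41)
A = Y @ X.T @ P                     # A_k of (39)

for _ in range(20):
    x = rng.standard_normal(2)
    y = A_true @ x                  # new data pair (x_{k+1}, y_{k+1})
    gamma = 1.0 / (1.0 + x @ P @ x)
    A = A + gamma * np.outer(y - A @ x, x) @ P   # update (44)
    P = P - gamma * np.outer(P @ x, x @ P)       # Sherman-Morrison step (43)
    X = np.column_stack([X, x])
    Y = np.column_stack([Y, y])

A_batch = Y @ X.T @ np.linalg.inv(X @ X.T)       # recompute (39) from scratch
```

Each online step costs only matrix-vector products, yet the result matches the full least-squares recomputation, which is the efficiency argument behind (43).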

IV.2 Formation control algorithm

The aforementioned online DMD algorithm is a data-driven algorithm which predicts nonlinear dynamics. Therefore, even without knowledge of the external energy potential, it is still possible to control the lattice formation of the swarm. The general scheme of the control algorithm is shown in Fig. 2.

Figure 2: Dynamic control algorithm scheme. White box indicates raw system data obtained from measurements; black box indicates algorithm-calculated data; striped box indicates corrected data.

Notice that the predicted agent trajectory is not equivalent to the ideal trajectory of the lattice formation; a correction term needs to be added to the prediction. The details of the online DMD algorithm, modeled specifically to control the lattice formation of the swarm, are shown next.

Algorithm 1 (Online DMD).

1. Collect system data as they evolve in time. Arrange the data into two matrices
$\mathbf{X}\equiv[\mathbf{x}_{0}\,\,\mathbf{x}_{1}\,\,\dots\,\,\mathbf{x}_{k-1}]$, $\mathbf{Y}\equiv[\mathbf{x}_{1}\,\,\mathbf{x}_{2}\,\,\dots\,\,\mathbf{x}_{k}]$.

2. Compute $\mathbf{A}_{k}$ and $\mathbf{P}_{k}$ from (38) and (41).

3. Predict $\mathbf{y}_{k+1}$ from $\mathbf{y}_{k+1}=\mathbf{A}_{k}\mathbf{x}_{k+1}=\mathbf{A}_{k}\mathbf{y}_{k}$.

4. Correct $\mathbf{y}_{k+1}$ using the agent's measurements.

5. Update $\mathbf{A}_{k}$ and $\mathbf{P}_{k}$ using the corrected data pair $(\mathbf{x}_{k+1},\mathbf{y}_{k+1})$, according to (42) and (43).

This algorithm is scalable to the rest of the following agents in the swarm, so the whole swarm can be dynamically controlled to stay in lattice formation.
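The five steps of Algorithm 1 can be sketched for a single following agent. The dynamics matrix, the measurement model, and the noise level below are illustrative assumptions rather than the paper's simulation setup; the "correction" of step 4 is modeled as a noisy measurement of the true next state.

```python
import numpy as np

rng = np.random.default_rng(2)
A_true = np.array([[0.95, 0.10],
                   [0.00, 0.95]])   # stand-in for the unknown agent dynamics

# Step 1: collect an initial batch and arrange it into X and Y.
X = rng.standard_normal((2, 6))
Y = A_true @ X
# Step 2: compute A_k and P_k from (38) and (41).
P = np.linalg.inv(X @ X.T)
A = Y @ X.T @ P

x = rng.standard_normal(2)
errors = []
for _ in range(30):
    y_pred = A @ x                                   # Step 3: predict y_{k+1}
    y = A_true @ x + 1e-3 * rng.standard_normal(2)   # Step 4: correct by measurement
    errors.append(np.linalg.norm(y_pred - y))
    gamma = 1.0 / (1.0 + x @ P @ x)                  # Step 5: update A_k and P_k
    A = A + gamma * np.outer(y - A @ x, x) @ P
    P = P - gamma * np.outer(P @ x, x @ P)
    x = y
```

Under these assumptions the one-step prediction error stays at the measurement-noise level, which is the behaviour the correction step in Fig. 2 relies on.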

V Simulation and discussion

V.1 Formation analysis algorithm

The formation analysis algorithm of section III is performed on swarm lattice formations in three different external potentials:

1. elliptic paraboloid: $\frac{x_{1}^{2}+x_{2}^{2}}{a}$,

2. hyperbolic paraboloid: $\frac{x_{1}^{2}-x_{2}^{2}}{a}$,

3. sinusoid and cosinusoid: $\sin(\frac{x_{1}}{a})+\cos(\frac{x_{2}}{a})$,

parametrized by $a$. All simulations used 100 following agents, with traveling time $t=10$.

Figure 3: Percentage error for sectional curvature estimation. All simulations used the same initial conditions for the head agent and inter-agent distance constraint $d=0.1$; communication frequencies $t_{s1}=0.1$, $t_{s2}=0.5$. The square represents the mean error.

Figure 4: Percentage error for sectional curvature estimation. All simulations used the same initial conditions for the head agent and communication frequency $t_{s}=0.1$. The square represents the mean error.

The algorithm is able to estimate the sectional curvature of all three external potential manifolds with some degree of accuracy. Overall, the estimation accuracy increases as the curvature decreases, while the frequency of inter-agent communications and the inter-agent distance constraint do not significantly affect the algorithm accuracy, as shown in Fig. 3 and Fig. 4. However, the inter-agent distance directly affects the area over which the swarm is sensing. The accuracy of the algorithm is linked to the following two features of the sensing area. For the external potential $F(x_{1},x_{2})=\sin(\frac{x_{1}}{a})+\cos(\frac{x_{2}}{a})$, both features are present, depending on the swarm sensing area.

V.1.1 Jacobi field conjugate points

For an external potential manifold with non-negative sectional curvature at each point, such as the elliptic paraboloid potential, there exist conjugate points in the vector field $K$. Consider $p,q\in M$ to be two points connected by a geodesic $\gamma$. $p,q$ are conjugate points along $\gamma$ if there exists a non-zero Jacobi field along $\gamma$ that vanishes at $p$ and $q$. Conjugate points are where the geodesic fails, locally, to be the minimum-length curve connecting two points. Thus, our geodesic-deviation-based algorithm also fails there. A visual example of a conjugate point is shown in Fig. 5. Additional agent protocols need to be installed to identify and bypass conjugate points.

Figure 5: Visual example of a conjugate point. External potential $F(x_{1},x_{2})=\sin(\frac{x_{1}}{a})+\cos(\frac{x_{2}}{a})$, $a=2$, head agent initial condition $[-5,-2,\cos(4/\pi),\sin(4/\pi)]$. (a) Two-dimensional view of the swarm trajectory. Head agent trajectory in red, following agents' trajectories in magenta. (b) Enlargement of (a) around the conjugate point.

V.1.2 Striction curve

An external potential two-dimensional manifold with non-positive sectional curvature, such as the hyperbolic paraboloid, is a saddle surface and thus a ruled surface. A noncylindrical ruled surface always admits a parameterization of the form

𝐫(u,v)=𝝈(u)+v𝜹(u),\displaystyle\mathbf{r}(u,v)=\bm{\sigma}(u)+v\bm{\delta}(u), (45)

where \bm{\sigma} is called the striction curve. In particular, the hyperbolic paraboloid is a doubly ruled surface with two striction curves. In our simulation of the hyperbolic paraboloid potential, the two striction curves are [x_{1},\pm\,x_{2}]. As shown in Fig. 6, if the head agent travels on a striction curve, the swarm is unable to estimate the sectional curvature: the acceleration of the separation vector field is zero, which is equivalent to a flat space (zero curvature).
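For the hyperbolic paraboloid potential F(x_{1},x_{2})=(x_{1}^{2}-x_{2}^{2})/a used in our simulations, a ruled parameterization of the form (45) can be written down explicitly and checked numerically. The sketch below takes σ(u) = (u, u, 0) along the ruling x_1 = x_2 and director δ(u) = (1, -1, 4u/a); these particular choices are illustrative, not drawn from the paper:

```python
def hyperbolic_paraboloid(x1, x2, a=20.0):
    """The external potential surface z = (x1^2 - x2^2) / a."""
    return (x1**2 - x2**2) / a

def ruled_point(u, v, a=20.0):
    """Point r(u, v) = sigma(u) + v * delta(u) on the ruled parameterization,
    with sigma(u) = (u, u, 0) and delta(u) = (1, -1, 4u/a).
    Then z = ((u+v)^2 - (u-v)^2)/a = 4uv/a, matching the third component."""
    return (u + v, u - v, 4.0 * u * v / a)
```

Every point of the ruled parameterization lies exactly on the surface, confirming that straight lines (the rulings) lie in this curved potential manifold.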

Figure 6: Sectional curvature estimation and two-dimensional swarm trajectories. External potential F(x_{1},x_{2})=\frac{x_{1}^{2}-x_{2}^{2}}{a}, a=20. Head agent traveling on the striction curve [x_{1},x_{2}]. (a) Sectional curvature estimation. Estimated curvature in red; calculated curvature in blue. (b) Two-dimensional view of the swarm trajectories. Head agent trajectory in red; following agents' trajectories in magenta.

With the same number of agents in the swarm, the communication frequency and the inter-agent distance affect the sensing area. For an external potential manifold with feature one, a smaller sensing area is more likely to avoid conjugate points, while for a manifold with feature two, a larger sensing area provides more information about the manifold. In Fig. 6, the overall shape of the swarm reveals that the external potential is not flat, even though the curvature estimate is zero.

V.2 Formation control algorithm

In the simulation of the formation control algorithm in Section IV, we assume that each agent can measure its relative distance to its neighbours. The head agent follows its own course of trajectory and is therefore not affected by the other agents in the swarm.

Figure 7: Velocity trajectory of the first following agent. Ideal lattice-formation velocity in blue, original-data DMD-predicted velocity in red, approximate-data DMD-predicted velocity in yellow. External potential F(x_{1},x_{2})=\frac{x_{1}^{2}+x_{2}^{2}}{a}, a=20. Head agent initial condition [0,0,1,0] (left), [-5,0,\cos(6/\pi),\sin(6/\pi)] (right).

Each following agent takes its right-hand neighbour (the one closer to the head agent) as a beacon to measure its own relative position. In the ideal lattice formation, the following agents travel in parallel and at a fixed distance to their right-hand neighbours, so this measurement is feasible in theory.

Fig. 7 shows the velocity trajectory of the following agent next to the head agent. In this simulation, we use a timestep \Delta t=0.1, data size k=3, and take the observables (\mathbf{x}_{k},\mathbf{y}_{k}) to be the velocity of the agent. In practice, the data \mathbf{X},\mathbf{Y} can only be collected by measuring the deviation of k+1 agents traveling on nearby geodesics. Therefore, without any information about the external potential, the initial velocities of these agents can only be approximated in the Euclidean metric, not in the metric of the external potential itself. Fig. 7 shows that, with a reliable correction step, the algorithm gives reliable predictions of the agent's dynamics even when it uses data approximated in the Euclidean metric.
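The prediction step can be illustrated with a minimal scalar DMD sketch: from consecutive snapshot pairs, fit the best linear map A in the least-squares sense and iterate it forward. This is a toy one-dimensional version of the DMD prediction used in Section IV; the geometric test data are illustrative:

```python
def dmd_fit_predict(snapshots, steps=1):
    """Scalar DMD sketch: fit y = A x over consecutive snapshot pairs
    (X, Y) by least squares, then roll the map forward from the last
    snapshot to predict future states."""
    X, Y = snapshots[:-1], snapshots[1:]
    A = sum(x * y for x, y in zip(X, Y)) / sum(x * x for x in X)
    preds, state = [], snapshots[-1]
    for _ in range(steps):
        state = A * state
        preds.append(state)
    return A, preds

# Toy data: a decaying velocity component v_k = 2 * 0.9**k.
A, preds = dmd_fit_predict([2.0 * 0.9**k for k in range(4)], steps=2)
```

On this exactly linear toy sequence the fitted map recovers A = 0.9 and the predictions continue the geometric decay; the full algorithm works on vector-valued velocity observables.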

However, when the agents have nearly linear dynamics (constant velocity), the algorithm decreases in accuracy and eventually fails. This is expected, as the algorithm is based on a linear regression method. Therefore, an additional control protocol needs to be implemented for when the agents have linear dynamics, and the agents should decide autonomously when to switch between the dynamic control protocols.
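A simple autonomous switching criterion could, for illustration, flag a window of nearly constant velocity samples and hand control to a fallback protocol. The windowing scheme and tolerance below are hypothetical choices, not part of the paper's algorithm:

```python
def nearly_linear(velocities, tol=1e-3):
    """Return True when every velocity sample in the window deviates from
    the window mean by less than tol in each component, i.e. the agent's
    dynamics look linear and the DMD-based prediction becomes unreliable."""
    n = len(velocities)
    dim = len(velocities[0])
    mean = [sum(v[i] for v in velocities) / n for i in range(dim)]
    return all(abs(v[i] - mean[i]) < tol
               for v in velocities for i in range(dim))
```

An agent could evaluate this on its most recent velocity window at each communication step and switch protocols when the flag is raised.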

V.3 Margin of algorithm

In the formation analysis simulations, we have discussed how, for the same number of agents, changing the communication frequency and the inter-agent distance constraint alters the sensing area of the swarm, which is beneficial in some cases while in others it degrades the accuracy of the formation analysis. Moreover, for the same sensing area, the number of agents is positively correlated with the accuracy of the formation analysis. The size of the swarm also plays a role in swarm formation control. In the formation control simulations, the control algorithm requires the agents to communicate their relative distances and angles in terms of the metric of the external potential manifold. In practice, this type of sensing is extremely difficult. An approximation of the inter-agent distance can instead be made in the Euclidean metric by on-board agent sensors. We notice that when the same number of agents is used while either the inter-agent distance or the communication frequency is increased to enlarge the sensing area, the error of the Euclidean-metric approximation increases.

In summary, a smaller swarm is more controllable but has lower accuracy in estimating the external potential, and vice versa. This conflict arises because, for formation analysis, we exploit the nonlinearity in the agents' trajectories to estimate a nonlinear property, namely the curvature of the external potential manifold, while in formation control we rely on a linear approximation in the Euclidean metric to correct the predictions of the agents' trajectories. Therefore, the balance between the number of agents, the communication frequency, and the inter-agent distance is crucial in optimizing our approach to an energy-efficient WSN algorithm.

VI Conclusion

We introduced a new approach for designing energy-efficient WSN algorithms inspired by swarm intelligence. In this approach, we identify a clustering WSN as a swarm and the external potentials as manifolds. By observing the change of the swarm lattice formation in the external potential, we are able to estimate the curvature of the manifold, which is valuable information about the external potential. To maintain the lattice formation in the external potential, the formation control algorithm uses DMD to predict the optimal agent velocity in time, thus guiding the trajectories of the following agents.

Acknowledgements.
Y.W. is supported by the Japanese Government MEXT Scholarship Program. Y.W. thanks V. Putkaradze for advice and encouragement. T.H. acknowledges I. Mezic and Y. Susuki for helpful discussions.

References