
Neuromimetic Control — A Linear Model Paradigm

John Baillieul & Zexin Sun
Abstract

Stylized models of the neurodynamics that underpin sensory motor control in animals are proposed and studied. The voluntary motions of animals are typically initiated by high level intentions created in the primary cortex through a combination of perceptions of the current state of the environment along with memories of past reactions to similar states. Muscle movements are produced as a result of neural processes in which the parallel activity of large multiplicities of neurons generates signals that collectively lead to desired actions. Essential to coordinated muscle movement are intentionality, prediction, regions of the cortex dealing with misperceptions of sensory cues, and a significant level of resilience with respect to disruptions in the neural pathways through which signals must propagate. While linear models of feedback control systems have been well studied over many decades, this paper proposes and analyzes a class of models whose aim is to capture some of the essential features of neural control of movement. Whereas most linear models of feedback systems entail a state component whose dimension is higher than the number of inputs or outputs, the work that follows will treat models in which the numbers of input and output channels greatly exceed the state dimension. While we begin by considering continuous-time systems governed by differential equations, the aim will be to treat systems whose evolution involves classes of inputs that emulate neural spike trains. Within the proposed class of models, the paper will study resilience to channel dropouts, the ways in which noise and uncertainty can be mitigated by an appropriate notion of consensus among noisy inputs, and finally, a simple model in which binary activations of a multiplicity of input channels produce a dynamic response that closely approximates the dynamics of a prescribed linear system whose inputs are continuous functions of time.

John Baillieul is with the Departments of Mechanical Engineering, Electrical and Computer Engineering, and the Division of Systems Engineering at Boston University, Boston, MA 02115. Zexin Sun is with the Division of Systems Engineering at Boston University. The authors may be reached at {johnb, zxsun}@bu.edu.
Support from various sources including the Office of Naval Research grants N00014-10-1-0952, N00014-17-1-2075, and N00014-19-1-2571 is gratefully acknowledged.
A condensed version of this paper has been submitted to the 60th IEEE Conference on Decision and Control.

Keywords: Neuromimetic control, parallel quantized actuation, channel intermittency, neural emulation

I Introduction

The past two decades have seen dramatic increases in research devoted to information processing in networked control systems, [1]. At the same time, rapidly advancing technologies have spurred correspondingly expanded research activity in cyber-physical systems and autonomous machines ranging from assistive robots to self-driving automobiles, [2]. Against this backdrop, a new focus of research that seeks connections between control systems and neuroscience has emerged ([3, 6, 4]). The aim is to understand how neurobiology should inform the design of control systems that can not only react in real-time to environmental stimuli but also display predictive and adaptive capabilities, [5]. More fundamentally, however, the work described below seeks to understand how these capabilities might emerge from the parallel activity of sets of simple inputs that play a role analogous to sets of neurons in biological systems. What follows are preliminary results treating linear control systems in which simple inputs steer the system through collective activity. Principles of resilience with respect to channel dropouts and intermittency of channel availability are explored, and the stage is set for further study of prediction and learning. The paper is organized as follows. Section II introduces problems in designing resilient feedback-stabilized motions of linear systems with (very) large numbers of inputs. (Resilience here means that the designs still achieve the design objective if a number of input channels become unavailable.) Section III introduces an approach to resilient design that we call parameter lifting. Section IV briefly discusses how adding input channels that transmit noisy signals can have the net effect of reducing uncertainty in achieving a control objective. Section V takes up the problem of control with quantized inputs, and Section VI concludes with a discussion of ongoing work that is aimed at making further connections with problems in neurobiology.

Figure 1: The defining characteristics of neuromimetic linear systems are large numbers of input and output channels, all of which carry simple signals comprising possibly only a single bit of information and which collectively steer the system to achieve system goals.

II Linear Systems with Large Numbers of Inputs

The models to be studied have the simple form

\[
\begin{array}{l}
\dot{x}(t)=Ax(t)+Bu(t),\quad x\in\mathbb{R}^{n},\ u\in\mathbb{R}^{m},\ \text{and}\\[4pt]
y(t)=Cx(t),\quad\ \ y\in\mathbb{R}^{q}.
\end{array}\tag{1}
\]

As in [9, 10], we shall be interested in the evolution and output of (1) in which a portion of the input or output channels may or may not be available over any given subinterval of time. Among cases of interest, channels may intermittently switch in or out of operation. In all cases, we explicitly assume that $m,q>n$. In [9] we studied the way the structure of a system of this form might be affected by random unavailability of input channels. The work in [10] showed the advantages of having large numbers of input channels (as measured by input energy costs as a function of the number of active input channels and resilience to channel drop-outs). In what follows we shall show that having large numbers of parallel input channels provides corresponding advantages in dealing with noise and uncertainty.

To further explore the advantages of large numbers of inputs in terms of robustness to model and operating uncertainty and resilience with respect to input channel dropouts, we shall examine control laws for systems of the form (1) in which groups of control primitives are chosen from dictionaries and aggregated on the fly to achieve desired ends. The ultimate goal is to understand how carefully aggregated finitely quantized inputs can be used to steer (1) as desired. To lay the foundation for this inquiry, we begin by focusing on dictionaries of continuous control primitives that we shall call set point stabilizing.

II-A Resilient eigenvalue assignment for systems with large numbers of inputs

Briefly introduced in [10], our dictionaries will be comprised of set-point stabilizing control primitives of the form

\[
u_{j}(t)=v_{j}+k_{j1}x_{1}+\dots+k_{jn}x_{n},\quad j=1,\dots,m,
\]

where the gains $k_{ji}$ are chosen to make the matrix $A+BK$ Hurwitz, and the $v_{j}$'s are then chosen to make the desired goal point $x_{g}\in\mathbb{R}^{n}$ an equilibrium of (1). Thus, given $x_{g}$ and a desired gain matrix $K$, the vector $v$ can be chosen so as to satisfy the equation

\[
(A+BK)x_{g}+Bv=0.\tag{2}
\]
Proposition 1.

Let $B$ have full rank $n$, and let the $m\times n$ matrix $K$ be such that the eigenvalues of $A+BK$ are in the open left half plane. If $v$ is any vector satisfying (2), then the $m$ control inputs

\[
u_{j}(t)=v_{j}+k_{j1}x_{1}(t)+\cdots+k_{jn}x_{n}(t)\tag{3}
\]

steer (1) toward the goal $x_{g}$.
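To make Proposition 1 concrete, the following sketch (Python with NumPy; the matrices and the goal point $x_g$ are illustrative choices, not values prescribed in the text) computes a minimum-norm offset $v$ satisfying (2) and checks that $x_g$ is an equilibrium of the resulting closed loop.

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 1., 1.], [1., 0., 1.]])          # n = 2 states, m = 3 input channels
K = np.array([[0., -1.], [-1., 0.], [-0.5, -0.5]])  # a gain making A + BK Hurwitz
x_g = np.array([1.0, 0.0])                          # illustrative goal point

assert np.all(np.linalg.eigvals(A + B @ K).real < 0)

# Solve (A + BK) x_g + B v = 0 for v; since m > n the solution is not unique,
# so we take the minimum-norm solution via the pseudoinverse of B.
v = -np.linalg.pinv(B) @ (A + B @ K) @ x_g

# x_g is then an equilibrium of the closed loop x_dot = (A + BK) x + B v.
print(np.allclose((A + B @ K) @ x_g + B @ v, np.zeros(2)))   # True
```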

Since $m>n$, the problem of finding values $k_{ij}$ making $A+BK$ Hurwitz and $v$ satisfying (2) is underdetermined. We can thus carry out a parametric exploration of ranges of values $K$ and $v$ that make the system (1) resilient in the sense that the loss of one (or more) input channels will not prevent the system from reaching the desired goal point. To examine the effect of channel unavailability and channel intermittency, let $P$ be an $m\times m$ diagonal matrix whose diagonal entries are $k$ 1's and $m-k$ 0's. For each of the $2^{m}$ possible projection matrices of this form, we shall be interested in cases where $(A,BP)$ is a controllable pair. We have the following:

Definition 1.

Let $P$ be such a projection matrix with $k$ 1's on the main diagonal. The system (1) is said to be $k$-channel controllable with respect to $P$ if for all $T>0$, the matrix

\[
W_{P}(0,T)=\int_{0}^{T}e^{A(T-s)}BPB^{T}e^{A^{T}(T-s)}\,ds
\]

is nonsingular.

Remark 1.

For the system $\dot{x}=Ax+Bu$, being $k$-channel controllable with respect to a canonical projection $P$ having $k$ ones on the diagonal is equivalent to $(A,BP)$ being a controllable pair.

Example 1.

Consider the three input system

\[
\left(\begin{array}{c}\dot{x}_{1}\\ \dot{x}_{2}\end{array}\right)=\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right)\left(\begin{array}{c}x_{1}\\ x_{2}\end{array}\right)+\left(\begin{array}{ccc}0&1&1\\ 1&0&1\end{array}\right)u.\tag{4}
\]

Adopting the notation

\[
P[i,j,k]=\left(\begin{array}{ccc}i&0&0\\ 0&j&0\\ 0&0&k\end{array}\right),
\]

the system (4) is 3-channel controllable with respect to $P[1,1,1]$; it is 2-channel controllable with respect to $P[1,1,0]$, $P[1,0,1]$, and $P[0,1,1]$. It is 1-channel controllable with respect to $P[1,0,0]$ and $P[0,0,1]$, but it fails to be 1-channel controllable with respect to $P[0,1,0]$.
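These claims can be checked numerically. The sketch below (Python with NumPy, using the rank test of Remark 1) is one way to do so; the enumeration over diagonal 0-1 patterns and the helper function are implementation conveniences, not anything prescribed by the text.

```python
import numpy as np
from itertools import product

A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 1., 1.], [1., 0., 1.]])
n = A.shape[0]

def k_channel_controllable(diag):
    """True if (A, BP) is a controllable pair for P = diag(diag)."""
    BP = B @ np.diag(np.array(diag, dtype=float))
    ctrb = np.hstack([BP, A @ BP])      # controllability matrix for n = 2
    return np.linalg.matrix_rank(ctrb) == n

for diag in product([0, 1], repeat=3):
    if any(diag):
        print(f"P{list(diag)}: {k_channel_controllable(diag)}")
# Only the single-channel case P[0,1,0] fails, as claimed in Example 1.
```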

Within classes of system (1) that are $k$-channel controllable, for $1\leq k\leq m$, we wish to characterize control designs that achieve set point goals despite different sets of $j$ control channels ($1\leq j\leq m-k$) being either intermittently or perhaps even entirely unavailable. When $m>n$, the problem of finding $K$ and $v$ such that $A+BK$ is Hurwitz and (2) is satisfied leaves considerable room to explore solutions $(K,v)$ such that $A+BK$ is Hurwitz and $(A+BPK)x_{g}+BPv=0$ for various coordinate projections of the type considered in Definition 1.

To take a deeper dive into the theory of LTI systems with large numbers of input channels, we introduce some notation. Fix integers $0\leq k\leq m$, and let $[m]:=\{1,\dots,m\}$. Let $\binom{[m]}{k}$ be the set of $k$-element subsets of $[m]$. In Example 1, for instance, $\binom{[3]}{2}=\left\{\{1,2\},\{1,3\},\{2,3\}\right\}$. Extending the $2\times 3$ example, we consider matrix pairs $(A,B)$ where $A$ is $n\times n$ and $B$ is $n\times m$. We consider the following:

Problem A. Find an $m\times n$ gain matrix $K$ such that $A+BP_{I}K$ is Hurwitz for projections $P_{I}$ onto the coordinates corresponding to the index set $I$ for all $I\in\binom{[m]}{j}$ with $n\leq j\leq m$.

Problem B. Find an $m\times n$ gain matrix $K$ that assigns eigenvalues of $A+BP_{I}K$ to specific values for each $I\in\binom{[m]}{n}$.


Both of these problems can be solved if $m=n+1$, but when $m>n+1$, there are more equations than unknowns, making the problems overconstrained and thus generally not solvable. We consider the following:

Definition 2.

Given a system (1) where $A$ and $B$ are respectively $n\times n$ and $n\times m$ matrices with $n\leq m$, for each $I\in\binom{[m]}{n}$, the $I$-th principal subsystem is given by the state evolution

\[
\dot{x}(t)=Ax(t)+BP_{I}u(t).
\]

\square

Problem B requires solving $n$ equations for each $I\in\binom{[m]}{n}$, and if we seek an $m\times n$ gain matrix $K$ which places the eigenvalues of $A+BP_{I}K$ for every $I\in\binom{[m]}{n}$, then a total of $n\binom{m}{n}$ simultaneous equations must be solved to determine the $nm$ entries in $K$. Noting that $nm\leq n\binom{m}{n}$, with the inequality being strict if $m>n+1$, we see that Problem B cannot be solved in general. Problem A is less exacting in that it only requires eigenvalues to lie in the open left half plane, but it carries the further requirement that a single gain $K$ places all the closed-loop eigenvalues of all $I$-th subsystems in the open left half plane for $I\in\binom{[m]}{k}$ and all $k$ in the range $n\leq k\leq m$. The following example shows that solutions to Problem B are not necessarily solutions to Problem A as well and illustrates the complexity of the resilient eigenvalue placement problem.

Example 2.

For the system (4) of Example 1, we consider the three 2-channel controllable pairs

\[
(A,BP[110]),\quad(A,BP[101]),\quad(A,BP[011]).
\]

We look for a $3\times 2$ gain matrix $K$ such that $A+BPK$ has specified eigenvalues for each of these $P$'s. Thus we seek to assign eigenvalues to the three matrices

\[
\left(\begin{array}{cc}k_{21}&1+k_{22}\\ k_{11}&k_{12}\end{array}\right),\quad\left(\begin{array}{cc}k_{31}&1+k_{32}\\ k_{11}+k_{31}&k_{12}+k_{32}\end{array}\right),\quad\left(\begin{array}{cc}k_{21}+k_{31}&1+k_{22}+k_{32}\\ k_{31}&k_{32}\end{array}\right)
\]

respectively. For any choice of LHP eigenvalues, this requires solving six equations in six unknowns. For all three matrices to have eigenvalues at $(-1,-1)$, the values $k_{11}=0,\ k_{12}=-1,\ k_{21}=-1,\ k_{22}=0,\ k_{31}=-1/2,\ k_{32}=-1/2$ place these closed loop eigenvalues as desired. For this choice of $K$, the closed loop eigenvalues of $A+BK$ are $-3/2\pm i/2$. It is not possible to assign all eigenvalues by state feedback independently for the four controllable pairs $(A,B)$, $(A,BP[110])$, $(A,BP[101])$, and $(A,BP[011])$. Moreover, it is coincidental that the eigenvalues of $A+BK$ are in the left half plane. This is seen in the following example. (Consider $k_{11}=-2,\ k_{12}=-3,\ k_{21}=-1,\ k_{22}=-2.4,\ k_{31}=0,\ k_{32}=-1/2$. The closed-loop eigenvalues are: for $A+BP[110]K$: $(-3.95,-0.05)$; for $A+BP[101]K$: $(-3.19,-0.31)$; and for $A+BP[011]K$: $(-1,-0.5)$; but the eigenvalues of $A+BK$ are $(-4.57,0.066)$.)
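The eigenvalue placements quoted in this example can be reproduced numerically; the following sketch does so for both gain choices (the helper function and variable names are ours, not the paper's).

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 1., 1.], [1., 0., 1.]])

def eigs(K, diag):
    # Eigenvalues of A + B P K for the projection P = diag(diag).
    P = np.diag(np.array(diag, dtype=float))
    return np.sort_complex(np.linalg.eigvals(A + B @ P @ K))

# Gains placing all three 2-channel closed loops at (-1, -1):
K1 = np.array([[0., -1.], [-1., 0.], [-0.5, -0.5]])
for d in ([1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]):
    print(d, eigs(K1, d))   # (-1, -1) three times; -3/2 +/- i/2 for the full system

# The second gain set: stable 2-channel subsystems, but A + BK has an unstable eigenvalue.
K2 = np.array([[-2., -3.], [-1., -2.4], [0., -0.5]])
for d in ([1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]):
    print(d, eigs(K2, d))   # full-system eigenvalues approximately (-4.57, 0.066)
```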

Resilience requires an approach that applies simultaneously to all pairs $(A,BP_{I})$ where $P_{I}$ ranges over the lattice of projections $P_{I}$, $I\in\binom{[m]}{j}$, $j=n,\dots,m$. In view of these examples, it is of interest to explore conditions under which the stability of feedback designs will be preserved as control input channels become intermittently or even permanently unavailable.

III Lifted Parameters

To exploit the advantages of a large number of control input channels, we turn our attention to using the extra degrees of freedom in specifying the control offsets $v_{1},\dots,v_{m}$ so as to make (1) resilient in the face of channels being intermittently unavailable. As noted in [10], we seek an offset vector $v$ that satisfies $B\left[(\hat{A}+K)x_{g}+v\right]=0$, where $\hat{A}$ is any $m\times n$ matrix satisfying $B\hat{A}=A$. If $K$ is chosen such that $A+BK$ is Hurwitz, the feedback law $u=Kx+v$ satisfies (2) and by Proposition 1 steers the state of (1) asymptotically toward the goal $x_{g}$. Under the assumption that $B$ has full rank $n$, such matrix solutions can be found, although such an $\hat{A}$ will not be unique. Once $\hat{A}$ and the gain matrix $K$ have been chosen, the offset vector $v$ is determined by the equation

\[
(\hat{A}+K)x_{g}+v=0.\tag{5}
\]

Conditions under which $\hat{A}$ may be chosen to make (1) resilient to channel dropouts are given in the following theorem.

Theorem 1.

([10]) Consider the linear system (1) in which the number of control inputs, $m$, is strictly larger than the dimension of the state, $n$, and in which ${\rm rank}\,B=n$. Let the gain $K$ be chosen such that $A+BK$ is Hurwitz, and assume that
(a) $P$ is a projection of the form considered in Definition 1 and (1) is $\ell$-channel controllable with respect to $P$;
(b) $A+BPK$ is Hurwitz;
(c) the solution $\hat{A}$ of $B\hat{A}=A$ is invariant under $P$, i.e., $P\hat{A}=\hat{A}$; and
(d) $BP$ has rank $n$.
Then the control inputs defined by (3) steer (1) toward the goal point $x_{g}$ whether or not the $m-\ell$ input channels that are mapped to zero by $P$ are available.

The next two lemmas review simple facts about the input matrices $B$ under consideration.

Lemma 1.

Let $B$ be an $n\times m$ matrix with $m\geq n$. If all principal minors of $B$ are nonzero, then $B$ has full rank.

Proof.

The matrix $B$ has at least as many columns as rows. Its rank is thus the number of linearly independent rows. If there were a nontrivial vanishing linear combination of the rows of $B$, this combination would be inherited by all $n\times n$ submatrices, and thus all principal minors would be zero. ∎

Lemma 2.

Let $B$ be an $n\times m$ matrix with $m\geq n$ and having full rank $n$. Then $BB^{T}$ is positive definite.

Proof.

If $B$ has full rank, the $n$ rows are linearly independent. Hence if $x^{T}B$ is the zero vector in $\mathbb{R}^{m}$, then we must have $x=0$. Thus, for all $x\in\mathbb{R}^{n}$, $x^{T}BB^{T}x=\|B^{T}x\|^{2}\geq 0$ with equality holding if and only if $x=0$. ∎

Lemma 3.

Consider a linear function $B:\mathbb{R}^{m}\to\mathbb{R}^{n}$ given by an $n\times m$ rank $n$ matrix $B$ with $m\geq n$. This induces a linear function $\hat{\rm B}$ having rank $n^{2}$ from the space of $m\times n$ matrices, $\mathbb{R}^{m\times n}$, to the space of $n\times n$ matrices, $\mathbb{R}^{n\times n}$, which is given explicitly by $\hat{\rm B}(Y)=B\cdot Y$.

Proof.

Because $B$ has full rank $n$, it has a rank $n$ right inverse $U$. Given any $n\times n$ matrix $A$, the image of $UA\in\mathbb{R}^{m\times n}$ under $\hat{\rm B}$ is $A$, proving the lemma. ∎

This function lifting is depicted graphically by

\[
\begin{array}{clcc}
\hat{\rm B}:&\mathbb{R}^{m\times n}&\rightarrow&\mathbb{R}^{n\times n}\\
&\ \big|&&\big|\\
B:&\mathbb{R}^{m}&\rightarrow&\mathbb{R}^{n}.
\end{array}
\]
Lemma 4.

Given that the rank of the $n\times m$ matrix $B$ is $n$ with $n<m$, the dimension of the null space of $B$ is $m-n$. The dimension of the null space of $\hat{\rm B}$ is $n(m-n)$.

Proof.

Let the column vectors $\{\vec{n}_{1},\dots,\vec{n}_{m-n}\}$ be a basis of the nullspace of $B$. The $m\times n$ matrices having one of the $\vec{n}_{i}$'s in a single column and zeros in the remaining columns form a linearly independent set that spans the null space of $\hat{\rm B}$. There are $m-n$ independent choices of basis vector for each of the $n$ columns, giving $n(m-n)$ such matrices and proving the lemma. ∎

Lemma 5.

Let $A$ be an $n\times n$ matrix and $B$ be an $n\times m$ matrix with $m>n$ and $B$ having full rank $n$. Then there is an $n(m-n)$-parameter family of solutions $X=\hat{A}$ to the matrix equation $\hat{\rm B}(X)=A$.

Proof.

From Lemma 4, we may find a basis $\{N_{1},\dots,N_{n(m-n)}\}$ of the nullspace of $\hat{\rm B}$, which we denote ${\cal N}(\hat{\rm B})$. A particular solution of the matrix equation is $\hat{A}=B^{T}(BB^{T})^{-1}A$, and any other solution may be written as $B^{T}(BB^{T})^{-1}A+N$ where $N\in{\cal N}(\hat{\rm B})$. This proves the lemma. ∎
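A minimal numerical sketch of Lemma 5, assuming the $2\times 3$ matrices of Example 1: it forms the particular solution $\hat{A}=B^{T}(BB^{T})^{-1}A$, verifies $\hat{\rm B}(\hat{A})=A$, and perturbs it by an element of ${\cal N}(\hat{\rm B})$.

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 1., 1.], [1., 0., 1.]])
n, m = B.shape                                       # n = 2, m = 3

A_hat = B.T @ np.linalg.inv(B @ B.T) @ A             # particular solution of Lemma 5
print(np.allclose(B @ A_hat, A))                     # True: B_hat(A_hat) = A

# Any other solution differs by an m x n matrix whose columns lie in null(B);
# here null(B) is spanned by (1, 1, -1)^T, so dim N(B_hat) = n(m - n) = 2.
null_B = np.array([[1.], [1.], [-1.]])
N = null_B @ np.random.randn(1, n)                   # a random element of N(B_hat)
print(np.allclose(B @ (A_hat + N), A))               # True: still a solution
```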

Lemma 6.

Consider the LTI system $\dot{x}=Ax+Bu$, where $A$ is an $n\times n$ real matrix and $B$ is an $n\times m$ real matrix with $m>n$. Suppose that all principal minors of $B$ are nonzero, and further assume that $A=0$. Then, for any real number $\alpha>0$, the feedback gain $K=-\alpha B^{T}$ is stabilizing; i.e., $BK$ has all eigenvalues in the open left half plane. Moreover, for all $n\leq j\leq m$ and $I\in\binom{[m]}{j}$, all matrices $BP_{I}K$ have eigenvalues in the open left half plane.

Proof.

Let $\bar{B}=BP_{I}$. Noting that $\bar{B}$ has full rank $n$ and $\bar{B}\bar{B}^{T}=BP_{I}B^{T}$, the lemma follows from Lemma 2. ∎

Lemma 7.

Consider the LTI system $\dot{x}=Ax+Bu$, where $A$ is an $n\times n$ real matrix and $B$ is an $n\times m$ real matrix with $m>n$. Suppose that all principal minors of $B$ are nonzero, and further assume that $\hat{A}$ is a solution of $\hat{\rm B}(\hat{A})=A$. Then, for any real $\alpha>0$ and feedback gain

\[
K=-\alpha B^{T}-\hat{A},\tag{6}
\]

the closed loop system $\dot{x}=(A+BK)x$ is exponentially stable, i.e., the eigenvalues of $A+BK$ lie in the open left half plane.

Proof.

Substituting $u=Kx$ into $\dot{x}=Ax+Bu$, we find that the closed-loop system is given by $\dot{x}=-\alpha BB^{T}x$, and the conclusion follows from Lemma 6. ∎

Definition 3.

Let $I\in\binom{[m]}{j}$. The lattice of index supersets of $I$ in $[m]$ is given by ${\cal L}_{I}=\{L\subset[m]:I\subset L\}$.

Theorem 2.

Consider the LTI system $\dot{x}=Ax+Bu$, where $A$ is an $n\times n$ real matrix and $B$ is an $n\times m$ real matrix with $m>n$. Suppose that all principal minors of $B$ are nonzero, and further assume that $\hat{A}$ is a solution of $\hat{\rm B}(\hat{A})=A$ and that, for a given $I\in\binom{[m]}{j}$, $\hat{A}$ is invariant under $P_{I}$, i.e., $P_{I}\hat{A}=\hat{A}$. Then, for any real $\alpha>0$ and feedback gain $K=-\alpha B^{T}-\hat{A}$, the closed loop systems $\dot{x}=(A+BP_{L}K)x$ are exponentially stable for all $L\in{\cal L}_{I}$.

Proof.

The theorem follows from Lemma 6, with $BP_{L}$ and $P_{L}K$ here playing the roles of $B$ and $K$ in the lemma. ∎
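The following sketch illustrates Theorem 2 on the system of Example 1, using the lifted parameter $\hat{A}$ that appears later in Example 3; since the only nonzero row of $\hat{A}$ is row 2, $\hat{A}$ is invariant under $P_{I}$ with $I=\{1,2\}$, and the closed loop stays Hurwitz over the lattice ${\cal L}_{I}$.

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 1., 1.], [1., 0., 1.]])
A_hat = np.array([[0., 0.], [0., 1.], [0., 0.]])     # B @ A_hat = A; only row 2 is nonzero,
                                                     # so P_I A_hat = A_hat for I = {1, 2}
alpha = 2.0
K = -alpha * B.T - A_hat                             # the gain of eq. (6)

for L in ({1, 2}, {1, 2, 3}):                        # the lattice L_I for I = {1, 2}
    P = np.diag([1. if j + 1 in L else 0. for j in range(3)])
    print(sorted(L), np.linalg.eigvals(A + B @ P @ K))   # all real parts negative
```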

Within the class of resilient (invariant under projections) stabilizing feedback designs identified in Theorem 2, there is considerable freedom to vary parameters to meet system objectives. To examine parameter choices, let $P_{[k]}$ be a diagonal $m\times m$ projection matrix having $k$ 0's and $m-k$ 1's on the principal diagonal ($1\leq k\leq m-n$). For each such $P_{[k]}$, the $n(m-n)$-parameter family of solutions $\hat{A}$ to $\hat{\rm B}(\hat{A})=A$ is further constrained by the invariance $P_{[k]}\hat{A}=\hat{A}$, which imposes $kn$ further constraints. Hence, the family of $P_{[k]}$-invariant $\hat{A}$'s is $n(m-n-k)$-dimensional; see Fig. 2. Within this family, design choices may involve adjusting the overall strength of the feedback (the parameter $\alpha$) or differentially adjusting the influence of each input channel by scaling rows of $B^{T}$ in (6). Such weighting will be discussed in greater detail in Section V, where we note that to make the models reflective of the parallel actions of many simple inputs, it is necessary to normalize the weight $\alpha$ of $B^{T}$ in (6) so as to not let the influence of inputs depend on the total number of those available (as opposed to the total number acting according to the various projections $P$). In other words, we want to take care not to overly weight the influence of large groups versus small groups of channels.

Figure 2: Solutions to $\hat{\rm B}(\hat{A})=A$ and the lattice of channel dropouts. At the root, there is an $n(m-n)$-parameter family of solutions, and if any channel drops out, the solutions under the corresponding projection $P_{I}$ are an $n(m-n-1)$-parameter family. At the bottom leaf nodes of the lattice, where $m-n$ channels have become unavailable, there will typically be a single invariant $\hat{A}$. Note that there are $m(m-1)\cdots(m-n+1)$ distinct paths from the root to the $\binom{m}{n}$ leaves.
Example 3.

We end this section with an example of random unavailability of input channels. Considering still the system of Example 1, define the feedback gain $K=-\alpha B^{T}-\hat{A}$ as in (6), where

\[
\hat{A}=\left(\begin{array}{cc}0&0\\ 0&1\\ 0&0\end{array}\right),
\]

and $\alpha=2$. Consider channel intermittency in which each of the three channels is randomly available according to a two-state Markov process defined by

\[
\left(\begin{array}{c}\dot{p}_{u}(t)\\ \dot{p}_{a}(t)\end{array}\right)=\left(\begin{array}{cc}-\delta&\epsilon\\ \delta&-\epsilon\end{array}\right)\left(\begin{array}{c}p_{u}(t)\\ p_{a}(t)\end{array}\right),
\]

where $p_{u}$ is the probability of channel unavailability and $p_{a}=1-p_{u}$ is the probability of the channel being available. Assuming the availabilities of the channels are i.i.d. with this availability law, simulations show, for a range of parameters $\delta,\epsilon>0$, that

\[
\dot{x}=(A+BP(t)K)\,x\tag{7}
\]

is asymptotically stable with the time-dependent projection $P(t)$ being a diagonal matrix whose diagonal entries can take any combination of values 0 and 1. The special case $\delta=3,\ \epsilon=3$ is illustrated in Fig. 3. The hypotheses of Theorem 2 characterize a worst case scenario in terms of stability since there may be time intervals over which (7) does not satisfy the hypotheses because $P(t)$ does not leave $\hat{A}$ invariant.
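A simple Euler simulation in the spirit of Example 3 and Fig. 3 is sketched below; the step size, horizon, initial state, and random seed are arbitrary choices, and the switching rule is a discrete-time approximation of the two-state Markov availability process.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 1., 1.], [1., 0., 1.]])
A_hat = np.array([[0., 0.], [0., 1.], [0., 0.]])
alpha, delta, eps = 2.0, 3.0, 3.0
K = -alpha * B.T - A_hat

dt, T = 1e-3, 10.0
x = np.array([2.0, -1.0])                  # arbitrary initial state
avail = np.ones(3, dtype=bool)             # start with all channels available
for _ in range(int(T / dt)):
    # Switching: unavailable -> available at rate delta, available -> unavailable at rate eps.
    flip_on = (~avail) & (rng.random(3) < delta * dt)
    flip_off = avail & (rng.random(3) < eps * dt)
    avail = (avail | flip_on) & ~flip_off
    P = np.diag(avail.astype(float))
    x = x + dt * (A @ x + B @ P @ K @ x)   # Euler step of x_dot = (A + B P(t) K) x
print(np.linalg.norm(x))                   # the trajectory typically decays toward zero
```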


Figure 3: An example showing that when channels are intermittently available, the designed feedback can still maintain the asymptotic stability of the system, even if there are some short periods of instability, as exhibited in the black circle.

IV Uncertainty reduction from adding a channel

Suppose that instead of eq. (5), uncertainty in each input channel is taken into account and the actual system dynamics are governed by

\[
(\hat{A}+K)x_{g}+v+n_{\epsilon}=0,\tag{8}
\]

where $n_{\epsilon}$ is a Gaussian random vector whose entries are i.i.d. $N(0,1)$. Assume further that these channel-wise uncertainties are mutually independent. Then the asymptotic steady state limit $x_{\infty}$ will satisfy

\[
E[\|x_{\infty}-x_{g}\|^{2}]={\rm tr}[(A+BK)^{-1}B\Sigma_{\epsilon}B^{T}((A+BK)^{T})^{-1}],\tag{9}
\]

where $\Sigma_{\epsilon}=I$ is the $m\times m$ covariance of the channel perturbations. The question of how the steady state error is affected by the addition of a noisy channel is partially addressed as follows.

Theorem 3.

Suppose the system (1) is controllable, and that $K$ and $\hat{A}$ have been chosen as in Theorem 2 so that $A+BK$ is Hurwitz and the control input $u(t)=Kx(t)+v$ has been chosen to steer the state to $x_{g}$ using $v$ given by (5). Let $b\in\mathbb{R}^{n}$ and consider the augmented $n\times(m+1)$ matrix $\bar{B}=(B\,\vdots\,b)$. Then with the control input $\bar{u}=\bar{K}x+\bar{v}$, where $\bar{K}=\left(\begin{array}{c}K\\ k\end{array}\right)$, $k=-\alpha b^{T}$, and offset $\bar{v}=\left(\begin{array}{c}v\\ 0\end{array}\right)$, the system $\dot{x}=Ax+\bar{B}\bar{u}$ is steered such that the steady state error covariance is

\[
\Sigma_{\bar{B}}=M(B\,\vdots\,b)\Sigma_{\epsilon}(B\,\vdots\,b)^{T}M^{T},
\]

where

\[
M=\left(A+(B\,\vdots\,b)\left(\begin{array}{c}K\\ k\end{array}\right)\right)^{-1}.
\]

The corresponding steady state error covariance for x˙=Ax+Bu\dot{x}=Ax+Bu is

\[
\Sigma_{B}=(A+BK)^{-1}B\Sigma_{\epsilon}B^{T}((A+BK)^{T})^{-1},\ {\rm and}
\]

if the mean squared asymptotic errors under the two control laws are denoted by $err_{\bar{B}}$ and $err_{B}$, then $err_{\bar{B}}<err_{B}$.

Proof.

Let $M_{B}=(A+BK)^{-1}$ and $M_{\bar{B}}=\left(A+(B\,\vdots\,b)\left(\begin{array}{c}K\\ k\end{array}\right)\right)^{-1}$. We have $A+(B\,\vdots\,b)\left(\begin{array}{c}K\\ k\end{array}\right)=A+BK+bk=A+B(-\alpha B^{T}-\hat{A})-\alpha bb^{T}=-\alpha(BB^{T}+bb^{T})$. Hence $M_{\bar{B}}=-(1/\alpha)(BB^{T}+bb^{T})^{-1}$, whereas $M_{B}=-(1/\alpha)(BB^{T})^{-1}$.

Whence, in comparing mean square errors,

\[
\begin{array}{ccl}
err_{\bar{B}}&=&{\rm tr}\left\{M_{\bar{B}}\bar{B}\bar{B}^{T}M_{\bar{B}}^{T}\right\}\\[4pt]
&=&(1/\alpha^{2})\,{\rm tr}\left\{(BB^{T}+bb^{T})^{-1}(BB^{T}+bb^{T})(BB^{T}+bb^{T})^{-1}\right\}\\[4pt]
&=&(1/\alpha^{2})\,{\rm tr}\left\{(BB^{T}+bb^{T})^{-1}\right\};\\[8pt]
err_{B}&=&{\rm tr}\left\{M_{B}BB^{T}M_{B}^{T}\right\}\\[4pt]
&=&(1/\alpha^{2})\,{\rm tr}\left\{(BB^{T})^{-1}(BB^{T})(BB^{T})^{-1}\right\}\\[4pt]
&=&(1/\alpha^{2})\,{\rm tr}\left\{(BB^{T})^{-1}\right\}.
\end{array}
\]

Since $BB^{T}+bb^{T}>BB^{T}$ in the natural ordering of symmetric positive definite matrices, the conclusion of the theorem follows. ∎
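Using the closed-form expressions from the proof, the error reduction of Theorem 3 can be checked directly; in the sketch below the added column $b$ is an arbitrary unit vector, not one prescribed by the text.

```python
import numpy as np

B = np.array([[0., 1., 1.], [1., 0., 1.]])
alpha = 2.0
b = np.array([[1.0], [1.0]]) / np.sqrt(2.0)          # an illustrative added channel

err_B    = np.trace(np.linalg.inv(B @ B.T)) / alpha**2
err_Bbar = np.trace(np.linalg.inv(B @ B.T + b @ b.T)) / alpha**2
print(err_B, err_Bbar, err_Bbar < err_B)             # the augmented system has smaller error
```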

V The nuanced control authority of highly parallel quantized actuation

Technological advances have made it both possible and desirable to re-examine classical linear feedback designs using a new lens as described in the previous sections. In this section, we revisit the concepts of the previous sections in terms of quantized control along the lines of [1], [17], [18], [19], and [25]. The advantages of large numbers of input channels in terms of reducing energy ([10]) and uncertainty (Theorem 3 above) come at a cost that is not easily modeled in the context of linear time-invariant (LTI) systems. Indeed, as the number of control input channels increases, so does the amount of information that must be processed to implement a feedback design of the form of Theorem 2 in terms of the attention needed (measured in bits per second) to ensure stable motions. (See [16] for a discussion of attention in feedback control.) An equally subtle problem with the models of massively parallel input described in the preceding sections is the need for asymptotic scaling of inputs as the number of channels grows. If the number of input channels is large enough, then the feedback control $u=Kx$ for $K=-B^{T}$ will be stabilizing for any system (1). This is certainly plausible based on Gershgorin's Theorem and the observation that if $B_{1}$ has full rank $n$, and $B_{2}$ is obtained from $B_{1}$ by adding one (or more) columns, then $B_{2}B_{2}^{T}\geq B_{1}B_{1}^{T}$ in terms of the standard order relationship on positive definite matrices. A precise statement of how fast the matrix $BB^{T}$ grows from adding columns is given by the following.

Proposition 2.

Let $B$ be a $2\times m$ matrix with $m>2$ whose columns are unit vectors uniformly spaced on the unit circle $S^{1}$. Then the spectral radius of $BB^{T}$ is $m/2$.

Proof.

The proof is by direct calculation. There is no loss of generality in assuming the $m$ uniformly distributed vectors are $(\cos\frac{2k\pi}{m},\sin\frac{2k\pi}{m})$. $BB^{T}$ is then given explicitly by

\[
BB^{T}=\left(\begin{array}{ll}\sum_{k=1}^{m}\cos^{2}\frac{2k\pi}{m}&\sum_{k=1}^{m}\sin\frac{2k\pi}{m}\cos\frac{2k\pi}{m}\\[4pt] \sum_{k=1}^{m}\sin\frac{2k\pi}{m}\cos\frac{2k\pi}{m}&\sum_{k=1}^{m}\sin^{2}\frac{2k\pi}{m}\end{array}\right).
\]

Standard trig identities show the off-diagonal matrix entries are zero and the diagonal entries are $m/2$, proving the proposition. ∎
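Proposition 2 is easy to confirm numerically, as in the following sketch.

```python
import numpy as np

for m in (3, 5, 8, 100):
    theta = 2 * np.pi * np.arange(1, m + 1) / m
    B = np.vstack([np.cos(theta), np.sin(theta)])   # 2 x m matrix of unit columns
    # For uniformly spaced unit vectors, B B^T = (m/2) I, so the spectral radius is m/2.
    print(m, np.allclose(B @ B.T, (m / 2) * np.eye(2)))
```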

For $n>2$, the conclusion is similar but slightly more complex. We consider $n\times m$ matrices $B$ ($m>n$) whose columns are unit vectors in $\mathbb{R}^{n}$. Following standard constructions of Euler angles (e.g. http://www.baillieul.org/Robotics/740_L9.pdf), we define spherical coordinates (or generalized Euler angles) on the $(n-1)$-sphere $S^{n-1}$ whereby points $(x_{1},\dots,x_{n})\in S^{n-1}$ are given by

\[
\begin{array}{lll}
x_{1}&=&\sin\theta_{1}\sin\theta_{2}\cdots\sin\theta_{n-1}\\
x_{2}&=&\sin\theta_{1}\sin\theta_{2}\cdots\cos\theta_{n-1}\\
x_{3}&=&\sin\theta_{1}\sin\theta_{2}\cdots\cos\theta_{n-2}\\
&\vdots&\\
x_{n-1}&=&\sin\theta_{1}\cos\theta_{2}\\
x_{n}&=&\cos\theta_{1},
\end{array}
\]

where $0\leq\theta_{1}\leq\pi$ and $0\leq\theta_{j}<2\pi$ for $j=2,\dots,n-1$. We shall call a distribution of points on $S^{n-1}$ parametrically regular if it is given by $\theta_{1}=\frac{j\pi}{N_{1}}$ $(j=0,\dots,N_{1})$ and $\theta_{k}=\frac{2j\pi}{N_{k}}$ $(j=1,\dots,N_{k})$ for $2\leq k\leq n-1$. The following extends Proposition 2 to $n\times m$ matrices where $n>2$.

Theorem 4.

Let the $n\times m$ matrix $B$ comprise columns consisting of all unit $n$-vectors associated with the parametrically regular distribution $(N_{1},\dots,N_{n-1})$ with all $N_{j}>2$. $B$ then has $m=(N_{1}+1)N_{2}\cdots N_{n-1}$ columns; $BB^{T}$ is diagonal, and the largest diagonal entry (eigenvalue) is $\frac{N_{1}+2}{2}N_{2}\cdots N_{n-1}$.

Proof.

The matrices $B$ of the theorem have the form

\[
\left(\begin{array}{ccc}
&\sin(\frac{j_{1}\pi}{N_{1}})\sin(\frac{2j_{2}\pi}{N_{2}})\cdots\sin(\frac{2j_{n-1}\pi}{N_{n-1}})&\\
\cdots&\sin(\frac{j_{1}\pi}{N_{1}})\sin(\frac{2j_{2}\pi}{N_{2}})\cdots\cos(\frac{2j_{n-1}\pi}{N_{n-1}})&\cdots\\
\cdots&\vdots&\cdots\\
&\cos(\frac{j_{1}\pi}{N_{1}})&
\end{array}\right).
\]

The entries in the product $BB^{T}$ may be factored into products of sums of the form

\[
\begin{array}{ll}
\sum_{k=0}^{N_{1}}\cos^{2}(\frac{k\pi}{N_{1}})=\frac{N_{1}+2}{2},&\quad\sum_{k=1}^{N_{j}}\sin^{2}(\frac{2k\pi}{N_{j}})=\frac{N_{j}}{2},\\[6pt]
\sum_{k=1}^{N_{j}}\cos^{2}(\frac{2k\pi}{N_{j}})=\frac{N_{j}}{2},&\quad\sum_{k=1}^{N_{j}}\sin(\frac{2k\pi}{N_{j}})\cos(\frac{2k\pi}{N_{j}})=0,\quad(j=2,\dots,n-1),
\end{array}
\]

and from this the result follows. ∎
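The following sketch checks Theorem 4 for $n=3$ with an (arbitrarily chosen) parametrically regular distribution $(N_{1},N_{2})=(6,8)$.

```python
import numpy as np

N1, N2 = 6, 8
th1 = np.pi * np.arange(0, N1 + 1) / N1            # theta_1 = j*pi/N1, j = 0, ..., N1
th2 = 2 * np.pi * np.arange(1, N2 + 1) / N2        # theta_2 = 2*j*pi/N2, j = 1, ..., N2
T1, T2 = np.meshgrid(th1, th2, indexing="ij")
B = np.vstack([(np.sin(T1) * np.sin(T2)).ravel(),
               (np.sin(T1) * np.cos(T2)).ravel(),
               np.cos(T1).ravel()])                # 3 x (N1+1)N2 matrix of unit columns
G = B @ B.T
print(np.allclose(G - np.diag(np.diag(G)), 0))     # True: off-diagonal entries vanish
print(np.max(np.diag(G)), (N1 + 2) / 2 * N2)       # both equal 32.0 for this choice
```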

Remark 2.

Simulation experiments show that for matrices $B$ comprised of columns of random unit vectors in $\mathbb{R}^{n}$ that are approximately parametrically regular in their distribution, the matrix norm (the largest singular value) of $BB^{T}$ is ${\cal O}(m)$, in agreement with the theorem.

To keep the focus on channel intermittency and pursue the emulation problems of the following section, we shall assume the parameter $\alpha$ appearing in Theorem 2 is inversely related to the size, $m$, of the matrix $B$, and for the case of quantized inputs considered next, other bounds on input magnitude may apply as well.

We conclude by considering discrete-time descriptions of (1) in which the control inputs to each channel are selected from finite sets having two or more elements. To start, suppose (1) is steered by inputs that take continuous values in $\mathbb{R}^{m}$ but are piecewise constant between uniformly spaced instants of time: $t_{0}<t_{1}<\cdots$; $t_{k+1}-t_{k}=h$, and $u(t)=u(t_{k})$ for $t_{k}\leq t<t_{k+1}$. Then the state transition between sampling instants is given by

\[
x(t_{k+1})=F_{h}x(t_{k})+\Gamma_{h}u(t_{k})\tag{10}
\]

where

\[
F_{h}=e^{Ah}\quad{\rm and}\quad\Gamma_{h}=\int_{0}^{h}e^{A(h-s)}\,ds\cdot B.
\]

When $B$ (and hence $\Gamma_{h}$) has more columns than rows, there is a parameter lifting $\hat{\Gamma}:\mathbb{R}^{m\times n}\rightarrow\mathbb{R}^{n\times n}$, and it is possible to find $\hat{F}$ satisfying $\hat{\Gamma}(\hat{F})=F_{h}$. The relationship between $\hat{\rm B}$ and $\hat{\Gamma}$ and between the lifted parameters $\hat{A}$ and $\hat{F}$ is highly nonlinear and will not be explored further here. Rather, we consider first order approximations to $F_{h}$ and $\Gamma_{h}$ and rewrite (10) as

\[
\begin{array}{ccl}
x(t_{k+1})&=&(I+Ah)x(t_{k})+hBu(t_{k})+o(h)\\[3pt]
&=&x(t_{k})+h(Ax(t_{k})+Bu(t_{k}))+o(h).
\end{array}
\]

We may use lifted parameters of the first order terms and write the first order, discrete time approximation to (1) as

\[
x(k+1)=(I+h\hat{\rm B}(\hat{A}))x(k)+hBu(k).\tag{11}
\]
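For the running $2\times 3$ example, where $A^{2}=0$ gives $F_{h}$ and $\Gamma_{h}$ in closed form, the sketch below compares the exact sampled-data map (10) with the first-order approximation (11); the input and state values are arbitrary.

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 1., 1.], [1., 0., 1.]])
h = 0.01
u = np.array([1., -1., 1.])                        # an arbitrary binary input
x = np.array([1.0, 2.0])                           # an arbitrary state

F_h = np.eye(2) + h * A                            # e^{Ah}, exact here since A^2 = 0
Gamma_h = (h * np.eye(2) + 0.5 * h**2 * A) @ B     # int_0^h e^{A(h-s)} ds  B

x_exact = F_h @ x + Gamma_h @ u                    # eq. (10)
x_approx = (np.eye(2) + h * A) @ x + h * B @ u     # eq. (11), first order in h
print(np.linalg.norm(x_exact - x_approx))          # O(h^2): about 1e-4 for h = 0.01
```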

Having approximated the system in this way, we consider the problem of steering (11) by input vectors $u(k)$ having each entry selected from a finite set, say $\{-1,1\}$ in the simplest case. A variety of MEMS devices that operate utilizing very large arrays of binary actuators can be modeled in this way, and successful control designs in adaptive optics applications have been either open-loop or hybrid open-loop combined with modulation from optical wavefront sensors ([15],[20]). The approach being pursued in the present work aims to close feedback loops using real-time measurements of the state. For any state $x(k)$, binary choices must be made to determine the value of the input to each of the $m$ channels. Control design is thus equivalent to implementing a selection function along the lines of [16]. Some details of research on such designs are described in the next section, but complete results will appear elsewhere.

VI Conclusions and work in progress

It is easy to show that if inputs $u(k)$ to (11) can take a continuum of values in an open neighborhood of the origin in $\mathbb{R}^{m}$, then feedback laws based on sample-and-hold versions of (6) can be designed to asymptotically steer the state of (11) to the origin of the state space $\mathbb{R}^{n}$. Current work is aimed at extending the ideas we have described in the above sections of the paper to a theory of feedback control designs for (11) in which control inputs at each time step are selected from a finite set ${\cal U}$ of admissible control values. One goal is the design of both selection functions and modulation strategies whereby systems of the form (11) with finitely quantized feedback can emulate the motions of continuous time systems like the ones treated in the preceding sections. In the quest to find control theoretic abstractions of widely studied problems in neuroscience, the work is organized around three themes: neural coding (characterizing brain responses to stimuli by the collective electrical activity, spiking and bursting, of groups of neurons and the relationships among the electrical activities of the neurons in the group), neural modulation (real-time transformations of neural circuits induced by various means, including chemically or through neural input from projection neurons) wherein connections within the anatomical connectome are made to specify the functional brain circuits that give rise to behavior [23], and neural emulation (finding patterns of neural activity that associate potential actions to the outcomes those actions are expected to produce, [21]).

The simplest coding and emulation problems in this context involve finding state-dependent binary vectors $u[k]\in\{-1,1\}^{m}$ that elicit the desired state transitions of (11). For finite dimensional linear systems, it is natural to consider using inputs that are both spatially and temporally discrete to emulate continuous vector fields as follows. Let $H$ be an $n\times n$ Hurwitz matrix. A discretized Euler approximation to the solution of $\dot{x}=Hx$, $x(0)=x_{0}$, is

\[
x(k+1)=(I+hH)x(k);\quad x(0)=x_{0}.
\]

Two specific problems of emulating this system by (11) with binary inputs are:

  • Find a partition of the state space $\{U_{i}\,:\,\cup U_{i}=\mathbb{R}^{n};\ U_{i}^{o}\cap U_{j}^{o}=\emptyset;\ U_{i}^{o}={\rm interior}\ U_{i}\}$ and a rule for assigning coordinates of $u(k)$ to be either $+1$ or $-1$, so that for each $x\in U_{i}$, $Ax+Bu(k)$ is as close as possible to $Hx$ (in an appropriate metric).

  • Find a partition and a rule that for each $U_{i}$ both assigns $\pm 1$ coordinate entries of $u(k)$ and determines a coordinate projection operation $P(k)$ specifying which channels are operative in the partition cell $U_{i}$, with the property that $Ax+BP(k)u(k)$ is as close as possible to $Hx$ (in an appropriate metric).

Figure 4: (a) Linear vector fields scale in magnitude with the distance from the origin. (b) The choices of vector directions for the matrix $B$ with binary $\pm 1$ inputs. (c) Choices of vector directions as shown in (b) and specified by $\pm 1$ inputs to the channels of $B$ to provide plausible approximations to the continuous vector field of (a), and (d) typical paths followed by applying the $\pm 1$ inputs as prescribed in (c).
Example 4.

Returning to the two-state, three-input examples considered in Section II, suppose $A=0$ and $B=\left(\begin{array}{ccc}0&1&1/\sqrt{2}\\ 1&0&1/\sqrt{2}\end{array}\right)$. We have normalized the third column of $B$ in order that all three channels are equally influential in steering with binary inputs chosen from $\{-1,1\}$. Consider $H=\left(\begin{array}{cc}0&1\\ -1&-2\end{array}\right)$, a Hurwitz matrix with both eigenvalues equal to $-1$. A sketch of the vector field and flow is shown in Fig. 4(a). It is noted that length scales of linear vector fields increase in proportion to their distance from the origin. This needs to be accounted for in interpolations using binary affine maps, in a way that may be similar to the way that neurons in the entorhinal cortex (grid cells in particular) encode information about length scales. In mammalian brains, grid cells show stable firing patterns that are correlated with hexagonal grids of locations that an animal visits as it moves around a range space. Grid cells appear to be organized in discrete modules arranged in order of increasing scale along the dorso-ventral axis of the medial entorhinal cortex [26, 22], with each module's neurons firing at module-specific scales.
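One plausible selection rule, offered only as a sketch of the many that the partition-based formulation above allows, is to choose at each step the binary input whose image under $B$ best matches the target field $Hx$; the step size, horizon, and initial state below are arbitrary.

```python
import numpy as np
from itertools import product

B = np.array([[0., 1., 1. / np.sqrt(2.)],
              [1., 0., 1. / np.sqrt(2.)]])
H = np.array([[0., 1.], [-1., -2.]])
h = 0.05
U = [np.array(u) for u in product([-1., 1.], repeat=3)]   # the 8 binary inputs

def select(x):
    # Selection function: the binary input whose direction best emulates Hx.
    return min(U, key=lambda u: np.linalg.norm(B @ u - H @ x))

x = np.array([3.0, 2.0])
for k in range(400):
    x = x + h * (B @ select(x))        # quantized dynamics (11) with A = 0
print(np.linalg.norm(x))               # the state typically ends near the origin,
                                       # chattering in a small neighborhood of it
```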

The mechanisms of neural coding and modulation are areas of active research in brain science, and it is in part for this reason that we have avoided being precise in specifying approximation metrics in this emulation problem. Both binary codings depicted in Fig. 4 steer the quantized system (11) toward the origin, but along qualitatively different paths. In observing animal movements in field and laboratory settings, it is found that they can exhibit a good deal of individuality in choosing paths from a common start point to a common goal ([28]). There is also evidence that among regions of the brain that guide movement, some exhibit neural activity that is more varied than others, such as the grid cell modules in the entorhinal cortex [27], which tend to be similar in neuroanatomy from one animal to the next. As future work will be focused on systems with large numbers of simple inputs along with learning strategies, we will be aiming to understand the emergence of multiple and diverse solutions that meet similar control objectives.

In treating neuro-inspired approaches to input modulation along the lines suggested by the second bullet above, we note that with enough control channels available, it is possible to pursue quantized control input designs in which a large enough and appropriately chosen group of binary inputs can simulate certain amplitude-modulated analog signals. How this approach scales with control task complexity and numbers of neurons needed for satisfactory execution remains ongoing work.

Finally, we note that our work exploring neuromimetic control has focused on linear systems only because they are the most widely studied and best understood. There is no reason that exploration of nonlinear control systems that model land and air vehicles cannot be approached in ways that are similar to what we have presented. For such models, connections between task geometry and time-to-execute can be studied. Deeper connections to neurobiological aspects of sensing and control in and of themselves will also call for nonlinear models. This is foreshadowed in the piecewise selection functions that define the quantized control actions in this section. Connections with established theories of threshold-linear networks ([29]) and with other models of competitive neural dynamics ([30]) remain under investigation. The neurobiology literature on the control of physical movement is very rich, and there appears to be much to explore.

References

  • [1] J. Baillieul and P.J. Antsaklis. “Control and communication challenges in networked real-time systems.” Proceedings of the IEEE, 95, no. 1 (2007): 9-28.
  • [2] J. Leonard, J. How, S. Teller, M. Berger, S. Campbell, G. Fiore, L. Fletcher, E. Frazzoli, A. Huang, S. Karaman, et al. “A perception-driven autonomous urban vehicle.” in J. of Field Robotics, 25(10):727–774, 2008.
  • [3] J. Huang, A. Isidori, L. Marconi, M. Mischiati, E. Sontag, W.M. Wonham, “Internal Models in Control, Biology and Neuroscience,” in Proceedings of the 2018 IEEE Conference on Decision and Control (CDC), Miami Beach, FL, pp. 5370 - 5390, doi:10.1109/CDC.2018.8619624.
  • [4] D. Muthirayan and P.P. Khargonekar, “Working Memory Augmentation for Improved Learning in Neural Adaptive Control,” in 58th Conference on Decision and Control (CDC), Nice, France, 2019, pp. 3819-3826. doi:10.1109/CDC40024.2019.9029549.
  • [5] G. Markkula, E. Boer, R. Romano, et al. “Sustained sensorimotor control as intermittent decisions about prediction errors: computational framework and application to ground vehicle steering,” Biol Cybern 112, 181–207 (2018). https://doi.org/10.1007/s00422-017-0743-9
  • [6] P. Gawthrop, I. Loram, M. Lakie, et al. “Intermittent control: a computational theory of human control,” Biol Cybern. 104, 31–51 (2011). https://doi.org/10.1007/s00422-010-0416-4
  • [7] J. Hu and W. X. Zheng, “Bipartite consensus for multi-agent systems on directed signed networks,” in 52nd IEEE Conference on Decision and Control, Florence, 2013, pp. 3451-3456. doi: 10.1109/CDC.2013.6760412
  • [8] C. Altafini, “Consensus problems on networks with antagonistic interactions”, in IEEE Trans. Autom. Control, vol. 58, no. 4, pp. 935-946, 2013, doi:10.1109/TAC.2012.2224251.
  • [9] J. Baillieul and Z. Kong, “Saliency based control in random feature networks,” in 53rd IEEE Conference on Decision and Control’, Los Angeles, CA, 2014, pp. 4210-4215. doi: 10.1109/CDC.2014.7040045
  • [10] J. Baillieul, “Perceptual Control with Large Feature and Actuator Networks,” in 58th Conference on Decision and Control (CDC), Nice, France, 2019, pp. 3819-3826. doi:10.1109/CDC40024.2019.9029615.
  • [11] B.K.P. Horn, Y. Fang, I. Masaki, “Hierarchical framework for direct gradient-based time-to-contact estimation,” in the 2009 IEEE Intelligent Vehicles Symposium. DOI: 10.1109/IVS.2009.5164489
  • [12] R.W. Brockett, Finite Dimensional Linear Systems, SIAM, 2015, xvi + 244 pages, ISBN 978-1-611973-87-7
  • [13] T.K. Ho, 1998. “The random subspace method for constructing decision forests,” IEEE Trans. Pattern Anal. Mach. Intell., 20(8), pp.832 - 844. doi:10.1109/34.709601
  • [14] J. Sivic, (April 2009). “Efficient visual search of videos cast as text retrieval”. In IEEE Transactions on Pattern Anal. and Mach. Intell., V.31,N.4, pp. 591–605, 10.1109/TPAMI.2008.111.
  • [15] J. Baillieul, 1999.“Feedback Designs for Controlling Device Arrays with Communication Channel Bandwidth Constraints,” Lecture Notes of the Fourth ARO Workshop on Smart Structures, Penn State University, August 16-18, 1999,
  • [16] J. Baillieul (2002). “Feedback Designs in Information-Based Control”. In: Pasik-Duncan B. (ed) Stochastic Theory and Control, Lecture Notes in Control and Information Sciences, vol 280. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48022-6_3
  • [17] K. Li and J. Baillieul, “Robust quantization for digital finite communication bandwidth (DFCB) control,” IEEE Transactions on Automatic Control, 49(9):1573-1584, 2004.
  • [18] J. Baillieul, 2004, “Data-rate problems in feedback stabilization of drift-free nonlinear control systems”. In Proceedings of the 2004 Symposium on the Mathematical Theory of Networks and Systems (MTNS) (pp. 5-9).
  • [19] K. Li and J. Baillieul. “Data-rate requirements for nonlinear feedback control” In NOLCOS 2004, IFAC Proceedings Volumes. 2004 Sep 1;37(13):997-1002.
  • [20] T. Bifano, 2010. “Shaping light: MOEMS deformable mirrors for microscopes and telescopes,” in Proc. SPIE 7595, MEMS Adaptive Optics IV, 759502 (18 February 2010); https://doi.org/10.1117/12.848221
  • [21] B. Colder, 2011. “Emulation as an integrating principle for cognition,” Frontiers in Human Neuroscience, May 27;5:54, https://doi.org/10.3389/fnhum.2011.00054
  • [22] D. Bush, C. Barry, D. Manson, N. Burgess, 2015. “Using grid cells for navigation,” Neuron, Aug 5; 87(3):507-20, https://doi.org/10.1016/j.neuron.2015.07.006.
  • [23] E. Marder, 2012. “Neuromodulation of neuronal circuits: back to the future,” Neuron, Oct 4;76(1):1-11, https://doi.org/10.1016/j.neuron.2012.09.010.
  • [24] G.N. Nair and J. Baillieul, 2006. “Time to failure of quantized control via a binary symmetric channel” In Proceedings of the 45th IEEE Conference on Decision and Control, pages 2883–2888.
  • [25] G.N. Nair, F. Fagnani, S. Zampieri, and R.J. Evans, 2007. “Feedback control under data rate constraints: An overview,” Proceedings of the IEEE, 95(1):108–137.
  • [26] H. Stensola, T. Stensola, T. Solstad, K. Frøland, M.B. Moser, and E.I. Moser, 2012. “The entorhinal grid map is discretized,” Nature 492, 72–78. https://doi.org/10.1038/nature11649
  • [27] H.W. Lee, S.M. Lee, and I. Lee, 2018. “Neural firing patterns are more schematic and less sensitive to changes in background visual scenes in the subiculum than in the hippocampus,” Journal of Neuroscience, Aug 22;38(34):7392-7408.
  • [28] Z. Kong, N.W. Fuller, S. Wang, K. Özcimder, E. Gillam, D. Theriault, M. Betke, and J. Baillieul, 2016. “Perceptual Modalities Guiding Bat Flight in a Native Habitat,” Scientific Reports, 6, Article number: 27252. http://www.nature.com/articles/srep27252
  • [29] C. Curto, C. Langdon, K. Morrison, 2020. “Combinatorial Geometry of Threshold-Linear Networks,” https://arxiv.org/abs/2008.01032
  • [30] O.W. Layton and B.R. Fajen, 2016. “Competitive dynamics in MSTd: A mechanism for robust heading perception based on optic flow,” PLoS Computational Biology, Jun 24;12(6):e1004942.