
Network analysis of memristive device circuits: dynamics, stability and correlations

F. Barrows Theoretical Division (T4), Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA    F. Sheldon London Institute for Mathematical Sciences, Royal Institution, 21 Albemarle St, London W1S 4BS, UK    F. Caravelli Theoretical Division (T4), Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
Abstract

Networks with memristive devices are a potential basis for the next generation of computing devices. They are also an important model system for basic science, from modeling nanoscale conductivity to providing insight into the information processing of neurons. The resistance in a memristive device depends on the history of the applied bias and thus displays a type of memory. The interplay of this memory with the dynamic properties of the network can give rise to new behavior, offering many fascinating theoretical challenges. But methods to analyze general memristive circuits are not well described in the literature. In this paper we develop a general circuit analysis for networks that combine memristive devices alongside resistors, capacitors and inductors and under various types of control. We derive equations of motion for the memory parameters of these circuits and describe the conditions under which a network should display properties characteristic of a resonator system. For the case of a purely memristive network, we derive Lyapunov functions, which can be used to study the stability of the network dynamics. Surprisingly, analysis of the Lyapunov functions shows that these circuits do not always have a stable equilibrium in the case of nonlinear resistance and window functions. The Lyapunov function allows us to study circuit invariances, wherein different circuits give rise to similar equations of motion, which manifest through a gauge freedom and node permutations. Finally, we identify the relation between the graph Laplacian and the operators governing the dynamics of memristor networks, and we use these tools to study the correlations between distant memristive devices through the effective resistance.


I Introduction

Once regarded as somewhat niche, memristance is now recognized as a ubiquitous feature of nature, especially at the nanoscale Pershin and Ventra (2011). As a result, memristive devices have been the subject of intense research over the past decade. In addition to the wide applicability of these components, ranging from scalable memory devices to machine learning and reservoir computing Traversa and et. al. (2014); Carbajal et al. (2015); Du and et. al. (2017); Milano et al. (2022); Fu et al. (2020); Xu, Wang, and Yan (2021), there is also interest in their fundamental dynamical properties Pershin, Slipko, and Di Ventra (2013); Diaz-Alvarez et al. (2019); Zhu et al. (2021); Caravelli, Traversa, and Di Ventra (2017); Riaza (2011). For example, it has long been known that the introduction of memristive devices in a circuit can lead to chaotic dynamics, and there are numerous papers studying the properties of the Chua attractors Matsumoto (1984); Madan (1993).

However, less is known for generic circuits with memory, and even fewer exact results have been obtained in this regard. While little theoretical progress has been made for the most general cases of electronic circuits with memory, purely memristive circuits are amenable to an analytical approach: the equations for networks of these devices can often be obtained analytically, unveiling the most bizarre and unexpected properties of these dynamical circuits Zegarac and Caravelli (2019); Tellini et al. (2021). For instance, it has been shown that the asymptotic states of certain memristive-resistive circuits can be well described by the Curie-Weiß model and its ferromagnetic-paramagnetic transition. Further, it has been argued that random circuits of memristive devices (analyzed through the lens of their Lyapunov function) exhibit a ferromagnetic-glass transition Caravelli and Sheldon (2020); Caravelli and Barucca (2018).

Obtaining a full dynamic picture of this behavior is difficult because the nonlinearity of the memristive devices makes the equations of motion impossible to solve analytically. This paper provides tools to understand the properties of generic, first-order memristive devices with window functions in networks. In particular, we derive dynamical equations for generic circuits with memristive devices, and we derive Lyapunov functions for the infinite family of first-order memristive circuits with sharp boundaries (e.g., discontinuous window functions).

We also generalize techniques used for network circuit analysis to the case of memristive device networks. Using these, we demonstrate that these circuits frequently give rise to analytically nontrivial or intractable dynamical equations. In doing so, we demonstrate mathematical techniques, including correlation analysis and Lyapunov functions, which can provide information on the dynamics, evolution, and stability of a network of memristive devices. Further, we discuss how different circuits give rise to similar dynamics through gauge freedom and node permutation in the governing equations.

This work builds upon previous studies of the dynamics of memristor networks, most of which use the graph Laplacian or projection operators Caravelli, Traversa, and Di Ventra (2017); Stern, Liu, and Balasubramanian (2024); Xiang, Ren, and Tan (2022); Dhivakaran, Gowrisankar, and Vinodkumar (2024). We identify the relation between the graph Laplacian and the projection operators. Finally, we use this correspondence to study the effective resistance in memristor networks and construct effective circuits that allow us to study the correlations between distant memristors.

II Graph theory for circuit dynamics

In network analysis of electrical circuits, an electrical circuit is mapped to a graph $\mathcal{G}=(V,E)$. A graph $\mathcal{G}$ consists of a set of $n$ vertices/nodes $V$ and a set of $m$ edges $E$. An undirected edge may be denoted $\{k,l\}$, but in circuit analysis, each edge is assigned an arbitrary direction from $k$ to $l$, denoted $(k,l)$. The graph allows no self-edges and no duplicate edges. As a simple example, let us work with the triangle graph defined by

$V=\{1,2,3\},\quad E=\{(1,2),(2,3),(1,3)\}$ (1)

A representation of this graph is shown in Figure 1.

Figure 1: A simple graph with three edges connecting three nodes without an oriented cycle.

Note that the edges of the triangle do not form an oriented cycle. A graph representation of a circuit generally consists of edges, corresponding to circuit elements, and nodes, corresponding to junctions in the circuit. Currents and voltages are assigned in the graph to satisfy Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL). A circuit is solved when it satisfies these constraints given a circuit topology; thus, network analysis of a circuit consists of solving for valid current and voltage configurations.

A current configuration $i=\{i_e,\; e\in E\}$ is a set of numbers associated with the edges which satisfy KCL. KCL can be enforced on each vertex by building an $n\times m$ incidence matrix $B$, where each row represents a node and each column an edge; the matrix entries take values of $\pm 1$ corresponding to a directed edge oriented towards or away from each node, and $0$ when an edge is not incident on a node. For our triangle graph, this is

$B=\bordermatrix{ & e_{12} & e_{23} & e_{13} \cr n_1 & 1 & 0 & 1 \cr n_2 & -1 & 1 & 0 \cr n_3 & 0 & -1 & -1 }$ (2)

$e_{ij}$ are edges linking nodes $i$ and $j$ (edges are indexed with superscripts, $e^k$), and $n_i$ are indexed nodes. For each vertex $k$ and edge $e$, let $b_{k,e}=1$ if $k$ is the start, $-1$ if $k$ is the finish, and $0$ if the edge does not include $k$. KCL is enforced such that a current vector $\vec{i}$ satisfies

$\sum_e b_{k,e}\, i_e = 0.$ (3)

If $\vec{i}$ is an allowed current configuration, then

$B\vec{i}=0.$ (4)

The operator $B$ may be interpreted as a type of divergence on the graph, and the relationship $B\vec{i}=0$ is equivalent to saying that the current configurations must be divergence-less. In this sense, they are similar to magnetic fields.
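As a concrete check, the incidence matrix of equation (2) and the KCL condition of equation (4) can be verified numerically. The following is a minimal sketch using NumPy; the loop-current values are illustrative.

```python
import numpy as np

# Incidence matrix of the triangle graph (rows: nodes n1, n2, n3;
# columns: edges e12, e23, e13), following equation (2).
B = np.array([[ 1,  0,  1],
              [-1,  1,  0],
              [ 0, -1, -1]])

# A loop current around 1 -> 2 -> 3 -> 1 gives the edge currents
# (i_12, i_23, i_13) = (1, 1, -1), which must satisfy KCL.
i = np.array([1, 1, -1])
assert np.all(B @ i == 0)  # divergence-free: KCL holds at every node
```

Any edge-current vector violating KCL (e.g., a unit current on a single edge) fails this check, which is exactly the content of equation (4).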

Similarly, a voltage configuration $v=\{v_e,\; e\in E\}$ is a set of numbers associated with the edges that satisfy KVL. That is, we define a cycle $l$ on the graph as a sequence of nodes $(k_1,\dots,k_j)$ such that $\{k_i,k_{i+1}\}\in E$ (in either direction) and $\{k_j,k_1\}\in E$. We can find the oriented edge set of the cycle, $E_l$, wherein the orientation matches the direction traversed in $l$ (these orientations do not necessarily agree with those in $E$). We construct a cycle matrix $A$; each row represents a unique cycle in the graph, and each column an edge in $E$. The matrix entries take values of $\pm 1$ when the edge is part of the cycle and $0$ otherwise; the sign indicates whether the orientation of the edge in $E$ aligns with the direction of the cycle ($+1$) or is opposite to it ($-1$). The triangle graph has two possible cycles, $1\to 2\to 3\to 1$ or $1\to 3\to 2\to 1$. We can include both for now,

$A=\bordermatrix{ & e_{12} & e_{23} & e_{13} \cr \text{cycle}_1 & 1 & 1 & -1 \cr \text{cycle}_2 & -1 & -1 & 1 }$ (5)

These are not independent, and we must eliminate one cycle to solve for a voltage configuration, generating a reduced cycle matrix. KVL is enforced on a voltage vector $\vec{v}$ for each cycle in the graph; let $a_{l,e}$ be $1$ if the orientation matches that in the graph, $-1$ if it is opposite, and $0$ if $e\notin E_l$. In this case, we can write

$\sum_e a_{l,e}\, v_e = 0.$ (6)

To solve for a voltage configuration we will have to do some extra work later to determine which cycles we include. If $\vec{v}$ is a voltage configuration, then

$A\vec{v}=0.$ (7)

$A$ may be interpreted as a type of curl on the graph that gives the circulation of a vector. In this sense, the relation $A\vec{v}=0$ is analogous to saying that the voltage drops are curl-less and thus analogous to electric fields.
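The curl/divergence picture can be checked numerically: the rows of the cycle matrix are orthogonal to the rows of the incidence matrix, and any voltage configuration derived from node potentials automatically satisfies KVL. A minimal sketch with NumPy (the potential values are illustrative):

```python
import numpy as np

B = np.array([[1, 0, 1], [-1, 1, 0], [0, -1, -1]])   # equation (2)
A = np.array([[1, 1, -1], [-1, -1, 1]])              # equation (5)

# The cycle space and the cut space are orthogonal: B A^t = 0.
assert np.all(B @ A.T == 0)

# A voltage configuration built from node potentials p via v = B^t p
# satisfies KVL, equation (7), automatically.
p = np.array([3.0, 1.0, 0.0])
v = B.T @ p
assert np.allclose(A @ v, 0)
```

This orthogonality is the discrete analogue of the statement that gradients are curl-free, and it underlies the duality of current and voltage configurations discussed below.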

The current and voltage configurations belong to the sets of all current and voltage configurations, respectively. In the supplementary material, we prove that these sets are dual. Thus, we have two valid representations of the electrical properties of the circuit. Here, we define two ways of approaching the solution of a circuit: the node method and the loop method.

In the node method, the voltage configuration is calculated by finding the voltage potentials of the nodes in the circuit. One potential value is set as the ground, and the $n-1$ remaining potential values are considered as the unknowns. For the $n-1$ non-root nodes, we write down KCL and solve for the voltage potentials in succession via Gaussian elimination.

In the loop method, we must have a way of choosing cycles. To do this, we introduce a spanning tree $T$: a minimal set of edges that connects all nodes. Each remaining edge (a chord) can be added to the spanning tree; when an individual chord is added to the spanning tree it gives a unique cycle, whose orientation we assign by that of the included edge. This is called the fundamental cycle of the edge, also called a fundamental loop. From the reduced cycle matrix, we write down KVL. A current $\vec{i}$ may then be expressed by the fundamental loop currents $j_e$ as a weighted sum of the fundamental cycles. Writing a reduced cycle matrix $\tilde{A}$ in terms of the fundamental loops, we have

$\vec{i}=\tilde{A}^t \vec{j}.$ (8)

Given a spanning tree $T$, we can find a unique path from an arbitrary initial node, the root, to any other node $k$. We can sum the voltages along this path; if an edge is oriented along the walk from root to node, the voltage value of that edge is added; if the walk and edge are unaligned, the voltage is subtracted. We define this walk $p_k$, and the voltage configuration can be written in terms of this fundamental walk:

$\vec{v}=B^t\vec{p},$ (9)

which reproduces the relation between the voltage and potential above. These identities will be useful to derive the results below.
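A small sketch of the loop-method construction for the triangle graph, assuming the tree $T=\{e_{12},e_{23}\}$ with chord $e_{13}$; it verifies the fundamental-cycle relation (derived later as equation (38)) and that $\vec{i}=\tilde{A}^t\vec{j}$ satisfies KCL.

```python
import numpy as np

# Reduced incidence matrix of the triangle graph (ground node n3 dropped),
# edges ordered tree-first: T = {e12, e23}, chord C = {e13}.
B_T = np.array([[1.0, 0.0], [-1.0, 1.0]])
B_C = np.array([[1.0], [0.0]])

# A_T^t = -B_T^{-1} B_C gives the tree part of the fundamental cycle.
A_T_t = -np.linalg.solve(B_T, B_C)
A_tilde = np.hstack([A_T_t.T, np.eye(1)])   # chord block A_C is the identity

# Any chord current j generates a KCL-consistent configuration i = A~^t j.
j = np.array([2.0])
i = A_tilde.T @ j
B_tilde = np.hstack([B_T, B_C])
assert np.allclose(B_tilde @ i, 0)
```

Here the single fundamental cycle comes out as $(-1,-1,1)$, matching cycle$_2$ of equation (5), i.e., the loop traversed in the direction of the chord $e_{13}$.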

II.1 Resistors and memristive devices

A memristive device can be treated as a resistor with variable resistance; its resistance can take continuous values between a low and a high limit. The state of the resistance between the two limiting values can be parameterized by a variable $x$, which is constrained by some window function such that $0\leq x\leq 1$. This can be thought of as the first-order term in a polynomial expansion for the resistance in terms of an adimensional parameter. Thus, the resistance varies linearly with $x$; later, we will generalize this to include higher-order terms of $x$ to describe nonlinear memristance. In typical conditions, the resistance of a memristive device is bounded between two fixed high and low resistance states, corresponding to the 'off' and 'on' states. The resistance of these states is given by the values $R_{\text{off}}\gg R_{\text{on}}>0$. As the resistance state depends on the history of the applied voltage or bias, there is a memory stored in the resistance value, and we will refer to $x$ as the internal memory parameter. For the case of widely used metal oxide memristive devices, a simple toy model for the evolution of the resistance is the following Caravelli, Traversa, and Di Ventra (2017):

$R(x) = R_{\text{on}}x + R_{\text{off}}(1-x)$, the direct parametrization, (10)
$\frac{d}{dt}x(t) = \frac{R_{\text{on}}}{\beta}i(t) - \alpha x(t).$ (11)

In the direct parametrization, $R(1)=R_{\text{on}}$ and $R(0)=R_{\text{off}}$. In equation (11), $i(t)$ is the current flowing in the device at time $t$, $\alpha$ is a dampening parameter with units of inverse time, and $\beta$ can be thought of as the inverse learning rate, with units of time per voltage. These constants control the decay and reinforcement timescales. We define a scaling variable $\xi=\frac{R_{\text{off}}-R_{\text{on}}}{R_{\text{on}}}$. The parameters $\alpha$, $\beta$, and $\xi$ can be measured experimentally. Alternatively, a similar formulation of the resistance dynamical equation is given by

$R(x) = R_{\text{on}}(1-x) + R_{\text{off}}x$, the flipped parametrization, (12)
$\frac{d}{dt}x(t) = \alpha x(t) - \frac{R_{\text{on}}}{\beta}i(t).$ (13)

In the flipped parametrization, $R(1)=R_{\text{off}}$ and $R(0)=R_{\text{on}}$. Here we define the scaling factor $\chi=\frac{R_{\text{off}}-R_{\text{on}}}{R_{\text{off}}}$, which is bounded, $\chi<1$. These two parametrizations are inequivalent models of the memristive device dynamics. From the point of view of the dynamics of the single memristive device, the two parametrizations are related by the following change of variables: $\alpha\leftrightarrow-\alpha$, $\beta\leftrightarrow-\beta$ and $\xi\leftrightarrow-\chi$, i.e., $\xi\left(\frac{-R_{\text{off}}}{R_{\text{on}}}\right)=\chi$.
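A minimal simulation sketch of a single device in the direct parametrization, equations (10) and (11), under a sinusoidal current drive. All parameter values are illustrative, and the hard clipping of $x$ stands in for a window function.

```python
import numpy as np

# Forward-Euler integration of equations (10)-(11):
# R(x) = Ron*x + Roff*(1-x), dx/dt = (Ron/beta)*i(t) - alpha*x.
# Parameter values are illustrative, not taken from the paper.
Ron, Roff = 1.0, 100.0
alpha, beta = 0.1, 1.0
dt, T = 1e-3, 20.0

x = 0.5
for t in np.arange(0.0, T, dt):
    i = np.sin(2 * np.pi * 0.5 * t)        # sinusoidal current drive
    x += dt * ((Ron / beta) * i - alpha * x)
    x = min(max(x, 0.0), 1.0)              # hard window: 0 <= x <= 1

R = Ron * x + Roff * (1 - x)
assert 0.0 <= x <= 1.0 and Ron <= R <= Roff
```

Because the drive is periodic and the decay term pulls $x$ toward zero, the memory parameter settles into a bounded oscillation, and the resistance stays pinned between the two limiting values.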

II.2 Resistance characterization

The model given above, which has been previously studied Caravelli, Traversa, and Di Ventra (2017); Caravelli (2019), has a simple time dependency:

$x(t)=e^{\pm\alpha(t-t_0)}x_0 \mp \frac{1}{\beta'}\int^t e^{\mp\alpha(s-t)} i(s)\, ds$ (14)

This is valid to a first approximation, with $+\frac{1}{\beta'}$ in the direct parameterization and $-\frac{1}{\beta'}$ in the flipped parameterization. This is the simplest description of a device that saturates between two limiting values, $R_{\text{on}}$ and $R_{\text{off}}$, and it can be generalized. Here, we focus on how to parametrize higher nonlinearities. This approach can also be extended to systems in which nonlinear Schottky barriers are present, by modifying the effective equation for the resistance with an exponential prefactor and parametrizing the resistance with a voltage-dependent term of the form $R'_n(x)=e^{\alpha V}R_n(x)$.

Consider a memristive system described by a single internal variable $x\in[0,1]$, with the property that $R(x)\in[R_{\text{on}},R_{\text{off}}]$. Then, if we consider a set of resistive variables $R_0\leq\cdots\leq R_k\leq\cdots\leq R_n$, we can write without loss of generality

$R_n(x)=\sum_{k=0}^n R_k B_{k,n}(x)$ (15)

where $B_{k,n}(x)$ are Bernstein polynomials, $B_{k,n}(x)=\binom{n}{k}x^k(1-x)^{n-k}$. In the limit $n\rightarrow\infty$, $R_n(x)\rightarrow R(x)$ pointwise. Now assume $\alpha=0$; in a controlled experiment with sinusoidal inputs, and under the assumption of symmetric memristive devices, we have

$\tilde{R}(t)=\frac{V(t)}{I(t)}=R\left(x(t)\right)=\sum_{k=0}^n R_k B_{k,n}\left(x(t)\right)$ (16)

and we can write

$\begin{pmatrix}\tilde{R}(t_0)\\ \tilde{R}(t_1)\\ \vdots\\ \tilde{R}(t_n)\end{pmatrix} = \begin{pmatrix} B_{0,n}(x(t_0)) & B_{1,n}(x(t_0)) & \cdots & B_{n,n}(x(t_0))\\ B_{0,n}(x(t_1)) & B_{1,n}(x(t_1)) & \cdots & B_{n,n}(x(t_1))\\ \vdots & \vdots & & \vdots\\ B_{0,n}(x(t_n)) & B_{1,n}(x(t_n)) & \cdots & B_{n,n}(x(t_n)) \end{pmatrix} \begin{pmatrix}R_0\\ R_1\\ \vdots\\ R_n\end{pmatrix}$ (29)

from which we can infer the Bernstein parameters from the acquired data,

$\vec{R}=B^{-1}\tilde{R}.$ (30)

Note that because the Bernstein polynomials are a partition of unity, if $\tilde{R}=\bar{R}\vec{1}$, then necessarily $\vec{R}=\bar{R}\vec{1}$ for some constant resistance $\bar{R}$.
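Equation (30) can be tested on synthetic data: build the Bernstein collocation matrix at sampled memory states, generate noiseless resistance "measurements", and invert. The coefficient values below are illustrative.

```python
import numpy as np
from math import comb

# Recover the Bernstein coefficients R_k of equation (15) from sampled
# resistance values, inverting the collocation matrix of equation (29).
n = 3
R_true = np.array([1.0, 5.0, 20.0, 100.0])   # R_0 <= ... <= R_n (illustrative)
xs = np.linspace(0.05, 0.95, n + 1)          # sampled memory states x(t_k)

Bmat = np.array([[comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1)]
                 for x in xs])
R_tilde = Bmat @ R_true                       # noiseless "measurements"
R_fit = np.linalg.solve(Bmat, R_tilde)        # equation (30)
assert np.allclose(R_fit, R_true)

# Partition of unity: every row of the Bernstein matrix sums to one, so a
# constant R_tilde can only come from a constant coefficient vector.
assert np.allclose(Bmat.sum(axis=1), 1.0)
```

With noisy measurements one would replace the exact inverse by a least-squares fit, which is the situation sketched in Figure 2.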

The Bernstein polynomial approximation is valid when $\tilde{R}$ is a smooth function, such that $R$ is smooth in terms of $x$. Thus, it is possible to learn a nonlinear function $R(x(t))$ given $\vec{R}$ and real-valued resistance data $\tilde{R}(t)$. In the case of nonlinear resistance, the resistance in time-correlated real-valued samples can be expanded in terms of $\{X^{(m)}(t)\}_{m=1\ldots M}$. As the samples will be noisy in real data, unlike in the simulated ideal memristive device case, we may find deviations even from the functional form of equation (29), as shown schematically in Figure 2. In this case, the task will be to learn the correct function $f(X)$, where $X=\text{diag}(x)$ is an $m\times m$ diagonal matrix of memory parameters.

Figure 2: A schematic of an IV-hysteresis loop generated by a memristive device. The deviations from a smooth loop are due to experimental noise that is not modeled by the Bernstein polynomial approximation of the resistance. The correct function f(X)f(X), without these deviations, could be determined from experimentally collected data.

It is interesting to note that a device with this property will still satisfy

$R(x) = \sum_{k=0}^n R_k B_{k,n}\left(x(t)\right)$ (31)
$\frac{d}{dt}x(t) = \alpha x(t) - \frac{1}{\beta}\frac{V(t)}{R\left(x(t)\right)},$ (32)

with $R_0=R_{\text{on}}$ and $R_n=R_{\text{off}}$, and will reduce to a trivial memristive device for $n=1$. In the generic case we have a nontrivial time dependency,

$x(t)=e^{\pm\alpha(t-t_0)}x_0 \mp \frac{1}{\beta}\int^t e^{\mp\alpha(s-t)}\frac{V(s)}{B(x(s))R}\, ds.$ (33)

A pure memristor has $x(t)\propto q(t)$ or $x(t)\propto\phi(t)$, the charge and flux, respectively.

II.3 Different control schemes

In this section, we derive various control schemes for the resistance in a memristive device network. The various ways in which the memristive circuit can be controlled can be cast in terms of a generic vector $\vec{Y}$:

$\frac{d}{dt}\vec{x}(t)=\alpha\vec{x}(t)-\frac{1}{\beta}\left(I+\xi\Omega_A X(t)\right)^{-1}\vec{Y},$ (34)

where, as we show below, we have

$\vec{Y}=\begin{cases}\Omega_A\vec{S} & \text{voltage sources in series}\\ A(A^tA)^{-1}\vec{S}_{ext} & \text{voltage sources at nodes}\\ \Omega_B\vec{j} & \text{current sources in parallel}\\ B^t(BB^t)^{-1}\vec{j}_{ext} & \text{current sources at nodes,}\end{cases}$ (35)

which is useful in various proofs concerning the control of reservoirs using memristive devices Sheldon, Kolchinsky, and Caravelli (2022); Baccetti et al. (2023). Here $\Omega_A$ and $\Omega_B$ are the orthogonal projection operators $\Omega_A=A^t(AA^t)^{-1}A$ and $\Omega_B=B^t(BB^t)^{-1}B$, discussed further below. Further, in the supplementary material, we show that it is possible to write similar differential equations for the resistance without referring to $x$.
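A quick numerical sketch of the projectors $\Omega_A$ and $\Omega_B$ for the triangle graph, using the reduced (full-row-rank) cycle and incidence matrices so that the inverses in their definitions exist.

```python
import numpy as np

# Orthogonal projectors Omega_A and Omega_B for the triangle graph.
A = np.array([[1.0, 1.0, -1.0]])                 # one independent cycle
B = np.array([[1.0, 0.0, 1.0],
              [-1.0, 1.0, 0.0]])                 # ground node dropped

Omega_A = A.T @ np.linalg.inv(A @ A.T) @ A
Omega_B = B.T @ np.linalg.inv(B @ B.T) @ B

assert np.allclose(Omega_A @ Omega_A, Omega_A)   # idempotent
assert np.allclose(Omega_B @ Omega_B, Omega_B)
# Cycle space and cut space are orthogonal complements of R^m:
assert np.allclose(Omega_A + Omega_B, np.eye(3))
```

The identity $\Omega_A+\Omega_B=I$ reflects the duality of current and voltage configurations: every edge vector splits uniquely into a cycle-space part and a cut-space part.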

II.3.1 Voltages at Nodes and in Series

Here, we consider a network of resistors characterized by an incidence matrix $B$, cycle matrix $A$, and resistance values $r\in\mathbb{R}^m$. We also consider a set of voltage sources $s\in\mathbb{R}^m$ attached to edges in a consistent way (not violating KVL). For each edge $e\in E$, we then have

$v_e=r_e i_e+s_e,\quad \vec{v}=R\vec{i}+\vec{s}$ (36)

where $R=\mathrm{diag}(r)$ is a diagonal matrix of resistances. Our goal is to solve for the current configuration $\vec{i}$.

Define $\tilde{B}$ as the reduced incidence matrix obtained by omitting the last row of $B$, which renders it nonsingular. We then introduce a spanning tree $T$ and reorder the edges into those belonging to the tree and those belonging to the chords,

$\tilde{B}=(B_T,B_C),\quad \tilde{A}=(A_T,A_C)$ (37)

With this construction, $A_C$ is the identity. We thus have

$\tilde{B}\tilde{A}^t=B_T A_T^t+B_C=0 \;\to\; A_T^t=-B_T^{-1}B_C$ (38)

as $B_T$ is invertible by our construction. Thus,

$B\vec{i}=0=B_T i_T+B_C i_C \;\to\; i_T=A_T^t i_C \;\to\; \vec{i}=\tilde{A}^t i_C$ (39)

which is just the expression in terms of the fundamental loop currents we gave previously. Then, imposing KVL,

$\tilde{A}\vec{v}=0=\tilde{A}R\vec{i}+\tilde{A}\vec{s}=\tilde{A}R\tilde{A}^t i_C+\tilde{A}\vec{s}.$ (40)

This gives the full solution,

$\vec{i}=-\tilde{A}^t(\tilde{A}R\tilde{A}^t)^{-1}\tilde{A}\vec{s}.$ (41)

Since we have seen that for voltages in series we have $\vec{s}=A(A^tA)^{-1}\vec{p}$, it follows that for voltages at nodes we have $\vec{Y}=A(A^tA)^{-1}\vec{S}_{ext}$.
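Equation (41) can be checked on the triangle circuit, where the single fundamental loop reduces the solution to Ohm's law; the resistances and source value below are illustrative.

```python
import numpy as np

# Loop-method solution, equation (41): i = -A~^t (A~ R A~^t)^{-1} A~ s.
A_tilde = np.array([[1.0, 1.0, -1.0]])   # one fundamental loop
R = np.diag([1.0, 2.0, 3.0])             # edge resistances (ohms)
s = np.array([6.0, 0.0, 0.0])            # 6 V source on edge e12

i = -A_tilde.T @ np.linalg.solve(A_tilde @ R @ A_tilde.T, A_tilde @ s)

# A single loop of total resistance 6 ohms driven by 6 V carries 1 A.
assert np.allclose(np.abs(i), 1.0)
```

The same current magnitude flows through every edge, with signs fixed by the edge orientations, exactly as a series loop should behave.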

Let us now derive the memristive device network equation for voltages in series, which was previously derived in Caravelli, Traversa, and Di Ventra (2017). Here, we provide a faster derivation. We use the following form of the Woodbury matrix identity,

$(P+Q)^{-1}=\sum_{k=0}^\infty(-P^{-1}Q)^k P^{-1}$ (42)

where we assume that the matrix $P$ is invertible.

Then, let us define $P=\tilde{A}\tilde{A}^t$ and $Q=\tilde{A}Z\tilde{A}^t$. We have

$\tilde{A}^t(\tilde{A}\tilde{A}^t+\tilde{A}Z\tilde{A}^t)^{-1}\tilde{A} = \sum_{k=0}^\infty(-1)^k\tilde{A}^t((\tilde{A}\tilde{A}^t)^{-1}\tilde{A}Z\tilde{A}^t)^k(\tilde{A}\tilde{A}^t)^{-1}\tilde{A} = \sum_{k=0}^\infty(-1)^k(\Omega_{\tilde{A}}Z)^k\Omega_{\tilde{A}} = \Omega_{\tilde{A}}(I+Z)^{-1}\Omega_{\tilde{A}}=(I+\Omega_{\tilde{A}}Z)^{-1}\Omega_{\tilde{A}}=\Omega_{\tilde{A}}\left(I+Z\Omega_{\tilde{A}}\right)^{-1}$ (43)

Here $\Omega_{\tilde{A}}$ is the non-orthogonal projection operator $\Omega_{\tilde{A}}=\tilde{A}^t\left(\tilde{A}\tilde{A}^t\right)^{-1}\tilde{A}$. In the direct parameterization we can assume $R=R_{\text{off}}-(R_{\text{off}}-R_{\text{on}})G(X)=R_{\text{off}}(I-\chi G(X))$, where we have used from the flipped parameterization $\chi=\frac{R_{\text{off}}-R_{\text{on}}}{R_{\text{off}}}$ (which satisfies $0<\chi<1$), with $G(x)$ some function of $x$ such that $G(0)=0$ and $G(1)=1$. It follows that we can write

$\vec{i} = -R_{\text{off}}^{-1}(I-\Omega_{\tilde{A}}(R_{\text{off}}-R_{\text{on}})G(X))^{-1}\Omega_{\tilde{A}}\vec{s}$ (44a)
$\phantom{\vec{i}} = -R_{\text{off}}^{-1}(I-\chi\Omega_{\tilde{A}}G(X))^{-1}\Omega_{\tilde{A}}\vec{s}$ (44b)

Then, let us use the equation for each memristive device,

$\frac{dx_k}{dt}=\frac{R_{\text{off}}}{\beta'}i_k-\alpha x_k,$ (45)

then

$\frac{d\vec{x}}{dt}=-\frac{1}{\beta'}(I-\chi\Omega_{\tilde{A}}G(X))^{-1}\Omega_{\tilde{A}}\vec{s}-\alpha\vec{x},$ (46)

and if we define the voltage generators with the negative terminal on the side of the memristive device, then we indicate the voltage generators with $\vec{S}$, and we have

$\frac{d\vec{x}}{dt}=\frac{1}{\beta'}(I-\chi\Omega_{\tilde{A}}G(X))^{-1}\Omega_{\tilde{A}}\vec{S}-\alpha\vec{x}.$ (47)

If instead in the flipped parameterization we assume $R=R_{\text{on}}+(R_{\text{off}}-R_{\text{on}})G(X)=R_{\text{on}}(I+\xi G(X))$, again with $G(0)=0$ and $G(1)=1$, and

$\frac{dx_k}{dt}=-\frac{R_{\text{on}}}{\beta}i_k+\alpha x_k,$ (48)

we have

$\frac{d\vec{x}}{dt}=-\frac{1}{\beta}(I+\xi\Omega_{\tilde{A}}G(X))^{-1}\Omega_{\tilde{A}}\vec{S}+\alpha\vec{x},$ (49)

where $\xi=\frac{R_{\text{off}}-R_{\text{on}}}{R_{\text{on}}}>0$. Note that we have defined the activation voltages via $\frac{\beta'}{R_{\text{off}}}=\frac{\beta}{R_{\text{on}}}$.
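A minimal integration sketch of equation (49) with $G(X)=X$ for the single independent loop of the triangle circuit, using forward Euler and clipping $x$ to $[0,1]$ as a hard window; all parameter values are illustrative.

```python
import numpy as np

# Forward-Euler integration of equation (49), flipped parametrization,
# G(X) = X, for the triangle circuit's single fundamental loop.
A = np.array([[1.0, 1.0, -1.0]])
Omega = A.T @ np.linalg.inv(A @ A.T) @ A      # projector Omega_A~

alpha, beta, xi = 0.1, 1.0, 99.0              # xi = (Roff - Ron)/Ron
S = np.array([1.0, 0.0, 0.0])                 # source vector (illustrative)
x = np.full(3, 0.5)
dt = 1e-3

for _ in range(5000):
    M = np.eye(3) + xi * Omega @ np.diag(x)
    x += dt * (-np.linalg.solve(M, Omega @ S) / beta + alpha * x)
    x = np.clip(x, 0.0, 1.0)                  # window: 0 <= x <= 1

assert np.all((x >= 0.0) & (x <= 1.0))
```

The matrix $I+\xi\Omega X$ stays invertible for $x\geq 0$, so the implicit coupling between devices can be resolved at every step with a single linear solve.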

II.3.2 Currents in Parallel and at nodes

In analogy to the previous section, we can solve for the voltage configuration $v$, given a network of resistors and a set of current sources on edges $j\in\mathbb{R}^m$, where the current is added in parallel to the corresponding memristive device such that the total current through the two links is $i=Gv+j$, with $G$ the conductance matrix. In this case, we have

$Bi=0=BGv+Bj=BGB^t p+Bj \;\to\; p=-(BGB^t)^{-1}Bj$ (50)

The matrix $BGB^t$ is $(n-1)\times(n-1)$ and $B$ has rank $n-1$. We can thus invert it to get

$v=-B^t(BGB^t)^{-1}Bj.$ (51)

If we drive a current $i^{ext}\in\mathbb{R}^n$ at the $n$ nodes, current conservation demands that $\sum_j i^{ext}_j=0$. We eliminate one node to obtain the $(n-1)$-dimensional $\vec{j}_{\text{ext}}$. This must satisfy

$Bi=\vec{j}_{\text{ext}}=BGv=BGB^t p$ (52)

giving

$v=B^t p=B^t(BGB^t)^{-1}\vec{j}_{ext}.$ (53)

Note the difference in sign. This is because we have usually defined currents exiting nodes as being positive, and here, we have considered jextj_{ext} as positive when entering the nodes.
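The node-method solution, equation (53), takes a few lines of NumPy for the triangle graph; the conductances and injected current below are illustrative.

```python
import numpy as np

# Node method, equation (53): v = B^t (B G B^t)^{-1} j_ext for external
# currents injected at the (non-ground) nodes.
B = np.array([[1.0, 0.0, 1.0],
              [-1.0, 1.0, 0.0]])      # reduced incidence matrix
G = np.diag([1.0, 0.5, 0.25])         # edge conductances (siemens)
j_ext = np.array([1.0, 0.0])          # 1 A into node 1, out at the ground

p = np.linalg.solve(B @ G @ B.T, j_ext)   # node potentials
v = B.T @ p

# The resulting edge currents satisfy KCL at the driven nodes.
assert np.allclose(B @ (G @ v), j_ext)
```

The matrix $BGB^t$ is the weighted graph Laplacian with the ground row and column removed, which is why it is invertible whenever the graph is connected.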

Let us thus consider a diagonal matrix of homogeneous resistances:

$R = R_{\text{on}}\,\text{diag}\left(\frac{R(x_i(t))}{R_{\text{on}}}\right) = R_{\text{on}}\left(I+\text{diag}\left(\frac{R(x_i(t))}{R_{\text{on}}}-1\right)\right)$ (54)

In terms of the Bernstein polynomials, for $n=1$, $\frac{R(x_i(t))}{R_{\text{on}}}-1$ simply reduces to $\xi X$, where $\xi=\frac{R_{\text{off}}-R_{\text{on}}}{R_{\text{on}}}$ as previously derived (we could similarly work in the direct parametrization). We note that $\xi X$ interpolates linearly between $0$ and $\xi$. For $n>1$, we define this function as $\xi f_n(x_i)$, which interpolates (nonlinearly in $x_i$) between $0$ and $\xi$ by the properties of the Bernstein polynomials. Thus, $f_n(x_i)$ interpolates between $0$ and $1$ and is a generalization of $x_i$ for $n=1$. Throughout the manuscript, it is therefore only necessary to replace $\xi X$ with $\xi f_n(X)$ to obtain the generalized device equations, where $f_n(X)$ is a positive matrix, bounded from above by $1$, for arbitrary $n$.

III Memristive circuit motifs

Here, we generalize dynamical equations for standard passive circuit elements to incorporate memristive devices and discuss the eigenvalues of such circuit motifs. Figure 3 depicts resistive, RC, and RLC devices.

Figure 3: Schematic representations of circuit motifs. (a) Memristive device and voltage source motif. (b) The RC motif with a memristive device and voltage source in parallel with a capacitor. (c) The RLC motif with a memristive device and voltage source in parallel with an inductor and capacitor.

We examine these circuit elements as they can form fundamental motifs of more complicated circuits, going beyond purely memristive circuits. We derive equations for the dynamics of RC and RLC motifs that incorporate memristive devices. Further, we derive dynamical equations for RLC memristive device networks and describe some simple circuit structures, e.g., all circuit elements in series, which make analyzing the circuit dynamics more tractable.

Let us now try to understand the eigenvalue properties of a resistive circuit and how they depend on the device. As shown in equation (41), we know that the equation for the currents is of the form Caravelli, Traversa, and Di Ventra (2017)

$\vec{i}=-A^t(ARA^t)^{-1}A\vec{S}_{\text{in}},$ (55)

where $\vec{S}_{\text{in}}$ is the voltage applied on each edge. From equation (44b) (with $G(x)$ a function of $x$), we can rewrite this as

$\vec{i}=-R_{\text{off}}^{-1}(I-\chi\Omega_{\tilde{A}}G(X))^{-1}\Omega_{\tilde{A}}\vec{s}.$ (56)

From equation (55), we can obtain the voltage across each edge,

$S_{\text{out}}=R\vec{i}=-RA^t(ARA^t)^{-1}A\vec{S}_{\text{in}},$ (57)

which can be written as

$S_{\text{out}}=-\Omega_R S_{\text{in}}$ (58)

where $\Omega_R$ is the non-orthogonal projector $\Omega_R=RA^t(ARA^t)^{-1}A$. Thus, a resistive circuit acts simply as a linear filter for the voltages. We note $\Omega_R^2=\Omega_R$; thus, the eigenvalues are simply zeros and ones.
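The claim that $\Omega_R$ is idempotent with eigenvalues zero and one can be verified directly; the resistance values below are arbitrary positive numbers.

```python
import numpy as np

# The non-orthogonal projector Omega_R = R A^t (A R A^t)^{-1} A of
# equation (58) is idempotent, so its spectrum is exactly {0, 1}.
A = np.array([[1.0, 1.0, -1.0]])          # reduced cycle matrix (one loop)
R = np.diag([1.0, 2.0, 3.0])              # arbitrary positive resistances

Omega_R = R @ A.T @ np.linalg.inv(A @ R @ A.T) @ A
assert np.allclose(Omega_R @ Omega_R, Omega_R)

eig = np.sort(np.linalg.eigvals(Omega_R).real)
assert np.allclose(eig, [0.0, 0.0, 1.0])
```

Changing the resistances tilts the projector (it is no longer symmetric) but cannot move its eigenvalues, which is the precise sense in which a purely resistive network is a fixed linear filter.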

Following equation (57), the memristor and source motif can be rewritten in the direct parametrization as:

Sout\displaystyle S_{\text{out}} =\displaystyle= (IχX(t))At(A(IχX(t))At)1ASin\displaystyle-\left(I-\chi X(t)\right)A^{t}\left(A\left(I-\chi X(t)\right)A^{t}\right)^{-1}AS_{\text{in}} (59)
=\displaystyle= (IχX(t))(IχΩAX(t))1ΩASin\displaystyle-\left(I-\chi X(t)\right)\left(I-\chi\Omega_{A}X(t)\right)^{-1}\Omega_{A}S_{\text{in}}

From this we can note a stability condition of SoutS_{\text{out}} with respect to SinS_{\text{in}},

(IχX(t))1Sout=(IχΩAX(t))1ΩASin.\displaystyle\left(I-\chi X(t)\right)^{-1}S_{\text{out}}=-\left(I-\chi\Omega_{A}X(t)\right)^{-1}\Omega_{A}S_{\text{in}}. (60)

Thus S_{\text{out}}=-S_{\text{in}}, i.e., the voltage across each edge equals the voltage applied at each edge, when (I-\chi X(t))^{-1}=(I-\chi\Omega_{A}X(t))^{-1}\Omega_{A}. This occurs when \Omega_{A}=I, the identity matrix, corresponding to the case in which the cycle space matrix AA^{t} is invertible and the cycles are linearly independent.

III.1 RC Networks

Let us now show how this framework can be used to study standard RC circuits. Changing our network design to edges consisting of a capacitor in parallel with a resistor and voltage generator, shown in Figure 3 (b), we have at each edge,

ie\displaystyle i_{e} =\displaystyle= ir,e+ic,e\displaystyle i_{r,e}+i_{c,e} (61)
v_{r,e}=-r_{e}i_{r,e}+s_{e}=v_{c,e}=-\frac{q_{e}}{c_{e}}=v_{e} (62)
ie\displaystyle i_{e} =\displaystyle= severecedvedt,\displaystyle\frac{s_{e}-v_{e}}{r_{e}}-c_{e}\frac{dv_{e}}{dt}, (63)

here subscripts rr and cc correspond to the resistor and capacitor edges, respectively. Imposing Kirchhoff’s current law and using v=Btp\vec{v}=B^{t}\vec{p} for a potential vector p\vec{p}

Bi\displaystyle B\vec{i} =\displaystyle= 0=BR1(vs)+BCdvdt\displaystyle 0=BR^{-1}(\vec{v}-\vec{s})+BC\frac{d\vec{v}}{dt} (64)
BCBtdpdt\displaystyle BCB^{t}\frac{d\vec{p}}{dt} =\displaystyle= BR1(vs)\displaystyle-BR^{-1}(\vec{v}-\vec{s}) (65)
dvdt\displaystyle\frac{d\vec{v}}{dt} =\displaystyle= Bt(BCBt)1BR1(vs)\displaystyle-B^{t}(BCB^{t})^{-1}BR^{-1}(\vec{v}-\vec{s}) (66)
=\displaystyle= ΩB/C(RC)1v+ΩB/C(RC)1s\displaystyle-\Omega_{B/C}(RC)^{-1}\vec{v}+\Omega_{B/C}(RC)^{-1}\vec{s}

where \Omega_{B/C}=B^{t}(BCB^{t})^{-1}BC is a non-orthogonal projector and R,\,C are diagonal matrices containing the resistances and capacitances. Notably, \Omega_{B/C} is asymmetric. Allowable voltage vectors satisfy \vec{v}=B^{t}\vec{p} and thus lie entirely within the row space of B. The resulting dynamics are linear but do satisfy the fading memory property Sheldon, Kolchinsky, and Caravelli (2022). Considering the trajectories of two voltage configurations subject to the same source \vec{s}(t), for \Delta\vec{v}=\vec{v}_{1}-\vec{v}_{2} we have

dΔvdt=ΩB/C(RC)1Δv.\displaystyle\frac{d\Delta\vec{v}}{dt}=-\Omega_{B/C}(RC)^{-1}\Delta\vec{v}. (67)

As ΩB/C\Omega_{B/C} has eigenvalue 11 in the space of allowable voltages and Rii,Cii>0R_{ii},C_{ii}>0, this gives exponential convergence of the trajectories.
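This exponential convergence can be sketched numerically. The matrix B and the diagonal R, C below are an assumed example (a triangle graph with one node grounded, arbitrary element values), not values from the text.

```python
import numpy as np

# Assumed example: triangle graph, reduced incidence matrix B (2 x 3 edges)
B = np.array([[1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
R = np.diag([1.0, 2.0, 0.5])   # illustrative resistances
C = np.diag([1.0, 1.0, 1.0])   # illustrative capacitances

Omega_BC = B.T @ np.linalg.inv(B @ C @ B.T) @ B @ C
M = Omega_BC @ np.linalg.inv(R @ C)

# An allowable voltage difference lies in the row space of B^t: dv = B^t p
dv = B.T @ np.array([1.0, -0.5])
n0 = np.linalg.norm(dv)

dt = 1e-3
for _ in range(10000):          # forward-Euler integration of eq. (67)
    dv = dv - dt * (M @ dv)

# trajectories subject to the same source converge exponentially
assert np.linalg.norm(dv) < 1e-2 * n0
```

Since the derivative lies in the range of \Omega_{B/C}, the difference vector remains in the row space of B throughout the integration.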

The solution of equation (66) is known. Setting \tau=rc (taking, for simplicity, uniform resistances and capacitances, so that \Omega_{B/C} reduces to the orthogonal projector \Omega_{B}), we have

\vec{v}(t)=e^{-\Omega_{B}\frac{t-t_{0}}{\tau}}\vec{v}(t_{0})+\int_{t_{0}}^{t}e^{-\Omega_{B}\frac{t-s}{\tau}}\Omega_{B}(RC)^{-1}\vec{s}(s)\,ds. (68)

Note that the operator eΩBtτe^{-\Omega_{B}\frac{t}{\tau}} can be written in a simpler form considering that ΩB\Omega_{B} is a projector. In fact,

eΩBtτ=k=0(1)ktkτkk!ΩBk=I(1etτ)ΩBe^{-\Omega_{B}\frac{t}{\tau}}=\sum_{k=0}^{\infty}(-1)^{k}\frac{t^{k}}{\tau^{k}k!}\Omega_{B}^{k}=I-(1-e^{-\frac{t}{\tau}})\Omega_{B} (69)

and thus

e^{-\Omega_{B}\frac{q}{\tau}}\Omega_{B}=e^{-\frac{q}{\tau}}\Omega_{B}. (70)

Using the formula above, we can simply write the solution as

\vec{v}(t)=\vec{v}(t_{0})+\Omega_{B}\left(\left(e^{-\frac{t-t_{0}}{\tau}}-1\right)\vec{v}(t_{0})+\frac{1}{\tau}\int_{t_{0}}^{t}e^{-\frac{t-s}{\tau}}\vec{s}(s)\,ds\right) (71)
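The projector identity of equation (69) is easy to verify numerically. Here \Omega_{B} is built from an assumed reduced incidence matrix of a triangle graph, and the matrix exponential is evaluated by its power series so the check stays self-contained.

```python
import numpy as np

# Assumed example: Omega_B from the reduced incidence matrix of a triangle graph
B = np.array([[1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
P = B.T @ np.linalg.inv(B @ B.T) @ B        # orthogonal projector Omega_B
t, tau = 0.7, 2.0                           # illustrative time and time constant

# power series for exp(-P t / tau)
lhs, term = np.eye(3), np.eye(3)
for k in range(1, 40):
    term = term @ (-P * t / tau) / k
    lhs = lhs + term

# eq. (69): exp(-Omega_B t/tau) = I - (1 - e^{-t/tau}) Omega_B
rhs = np.eye(3) - (1.0 - np.exp(-t / tau)) * P
assert np.allclose(lhs, rhs)
```

The identity follows because P^{k}=P for all k\geq 1, so the series collapses onto the projector.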

Let us now derive a memristive device version of the RC motif by incorporating the memristive device resistance. Plugging in the resistance from the direct parameterization, we obtain an equation of motion for the voltage:

dvdt\displaystyle\frac{d\vec{v}}{dt} =\displaystyle= ΩB/C(RC)1v+ΩB/C(RC)1s\displaystyle-\Omega_{B/C}(RC)^{-1}\vec{v}+\Omega_{B/C}(RC)^{-1}\vec{s} (72)
=\displaystyle= Roff1Bt(BCBt)1B(IχX(t))1v(t)+Roff1Bt(BCBt)1B(IχX(t))1s(t)\displaystyle-R_{\text{off}}^{-1}B^{t}(BCB^{t})^{-1}B\left(I-\chi X(t)\right)^{-1}v(t)+R_{\text{off}}^{-1}B^{t}(BCB^{t})^{-1}B\left(I-\chi X(t)\right)^{-1}\vec{s}(t)
=\displaystyle= ΩB/CC1(i(t)+Cdv(t)dt)(s(t)v(t))1(v(t)s(t))\displaystyle-\Omega_{B/C}C^{-1}\left(\vec{i}(t)+C\frac{d\vec{v}(t)}{dt}\right)\left(\vec{s}(t)-\vec{v}(t)\right)^{-1}\left(\vec{v}(t)-\vec{s}(t)\right)

Here, we used equation (63) to substitute for the resistance. This can be rewritten,

(IΩB/C)dvdt\displaystyle\left(I-\Omega_{B/C}\right)\frac{d\vec{v}}{dt} =\displaystyle= ΩB/CC1i(t)\displaystyle\Omega_{B/C}C^{-1}\vec{i}(t) (73)
=\displaystyle= ΩB/CC1β(αX(t)dXdt).\displaystyle\Omega_{B/C}C^{-1}\beta^{\prime}\left(\alpha X(t)-\frac{dX}{dt}\right).

Here, we have a coupled nonlinear differential equation relating voltage and the internal memory parameter. Solving for v(t)\vec{v}(t) and X(t)X(t) needs to be done in a self-consistent way, as X˙\dot{X} depends on both applied current and voltage sources, as well as the current and resistance in the RC motif, as determined by equation (63). Interestingly, the voltage across the capacitor will depend on the time integral of X(t)X(t); thus, the voltage value has a memory of the memristive device resistance.
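The self-consistent integration described above can be sketched for a single motif. The following is a hedged toy model, not the full network scheme: one RC motif with a DC source, the direct parameterization R(x)=R_{\text{off}}(1-\chi x), and the memory update \dot{x}=\alpha x-i/\beta^{\prime} read off the current relation in equation (73). All constants are illustrative, and a hard clip of x stands in for a window function.

```python
import numpy as np

# Illustrative constants (assumed, not from the text)
R_off, chi, alpha, beta = 1.0, 0.5, 0.1, 1.0
cap, s = 1.0, 1.0               # capacitance and DC source value
v, x = 0.0, 0.2                 # initial capacitor voltage and memory parameter
dt = 1e-3

for _ in range(20000):
    R = R_off * (1.0 - chi * x)  # memristive resistance, direct parameterization
    i_r = (s - v) / R            # current through the memristive device, eq. (63)
    v += dt * i_r / cap          # capacitor charges through the device
    x += dt * (alpha * x - i_r / beta)
    x = min(max(x, 0.0), 1.0)    # hard clip standing in for a window function

assert abs(v - s) < 1e-2         # capacitor relaxes to the source voltage
assert 0.0 <= x <= 1.0           # memory parameter stays in its physical range
```

Both state variables are advanced together at each step, which is the self-consistency requirement: the current entering the memory update depends on the instantaneous voltage, and vice versa.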

III.2 RLC Networks

The system above gives us a reservoir with real eigenvalues and some control over their distribution through the choice of r_{e} and c_{e}. However, the only stable reservoirs we are aware of that give extensive memories are the so-called 'resonator' systems studied previously by White, Lee, and Sompolinsky (2004). In discrete time, these consist of eigenvalues spread around a circle in the complex plane with radius \eta<1. In continuous time, the appropriate reservoir spectrum is given by the z-transform of these, giving eigenvalues with fixed real part -\sigma and imaginary parts spread on the interval [-i\pi,i\pi]. An inductance is required to achieve imaginary eigenvalues in an electric circuit (the matrix \Omega_{B/C}C^{-1} is symmetric and thus gives real eigenvalues). We begin with a sketch of the single-edge case and use it as a guide for the network case.

We first consider a single edge isolated from the network, which we split into two parallel edges in order to characterize the memristive device resistance. Our motif consists of two edges in parallel: one with an inductor and capacitor, and the other with a resistor and voltage generator. The voltages across the two edges are equal,

ve\displaystyle v_{e} =qeceledicl,edt\displaystyle=-\frac{q_{e}}{c_{e}}-l_{e}\frac{di_{cl,e}}{dt} (74a)
=ir,ere+se\displaystyle=-i_{r,e}r_{e}+s_{e} (74b)

Here, lel_{e} and cec_{e} are the inductance and capacitance of the edge, shown in Figure 3 (c). The current through both edges is ie=ir,e+icl,ei_{e}=i_{r,e}+i_{cl,e} and must vanish by current conservation when no current is injected across our motif. Writing these equations in terms of the voltage is inconvenient as the result will depend on s˙e\dot{s}_{e}, making comparisons with other types of reservoirs more difficult. Using the connection between the capacitive charge and current, icl,e=qe˙i_{cl,e}=\dot{q_{e}} we can write everything in terms of qeq_{e},

0=qece+led2qedt2+req˙e+se.\displaystyle 0=\frac{q_{e}}{c_{e}}+l_{e}\frac{d^{2}q_{e}}{dt^{2}}+r_{e}\dot{q}_{e}+s_{e}. (75)

This can be transformed into a first-order system,

(qe˙q¨e)=(011lecerele)(qeq˙e)+(0sele)\displaystyle\begin{pmatrix}\dot{q_{e}}\\ \ddot{q}_{e}\end{pmatrix}=\begin{pmatrix}0&1\\ -\frac{1}{l_{e}c_{e}}&-\frac{r_{e}}{l_{e}}\end{pmatrix}\begin{pmatrix}q_{e}\\ \dot{q}_{e}\end{pmatrix}+\begin{pmatrix}0\\ -\frac{s_{e}}{l_{e}}\end{pmatrix} (76)

This gives eigenvalues λ±=r2l±r24l21lc\lambda_{\pm}=-\frac{r}{2l}\pm\sqrt{\frac{r^{2}}{4l^{2}}-\frac{1}{lc}}. We will obtain imaginary eigenvalues for the underdamped case, r2cl<1\frac{r}{2}\sqrt{\frac{c}{l}}<1.
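For illustrative single-edge values (assumed here), the companion matrix of equation (76) reproduces the eigenvalue formula and the underdamped condition.

```python
import numpy as np

# Illustrative values satisfying (r/2) sqrt(c/l) < 1, i.e., underdamped
l, c, r = 1.0, 1.0, 0.5
A2 = np.array([[0.0, 1.0],
               [-1.0 / (l * c), -r / l]])   # companion matrix of eq. (76)

lam = np.linalg.eigvals(A2)
disc = r**2 / (4 * l**2) - 1.0 / (l * c)    # discriminant; negative here

assert disc < 0                                            # underdamped regime
assert np.allclose(np.sort(lam.real), [-r / (2 * l)] * 2)  # Re(lambda) = -r/(2l)
assert np.allclose(np.sort(lam.imag),
                   [-np.sqrt(-disc), np.sqrt(-disc)])      # complex pair
```

Increasing r until (r/2)\sqrt{c/l}\geq 1 collapses the imaginary parts to zero, recovering the overdamped case.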

Moving to an RLC network, we can build a network of RLC motifs composed as above, i.e., with memristive elements and a source parallel to an LC element; these motifs are connected by resistive edges referred to as network edges. We construct a minimum spanning tree such that the memristive edges lie on the tree and the LC components lie on the cycle edges. The current can be separated into the current in the resistive elements, i_{r}, the current in the LC edges, i_{c}, and the current in the edges connecting the RLC motifs, which we call the network current i_{N}; thus \vec{i}=i_{c}+i_{r}+i_{N}. Examining the cycles in the graph, KVL gives A\vec{v}=0, and the cycles within each motif equate the voltages across the parallel memristive and LC edges; thus equations (74a) and (74b) remain valid in the network case. In addition, some cycles are a linear sum of memristive edges and the passive resistive network edges. We treat the network resistance (outside the RLC motifs) as constant; the R(x) matrix is now larger, incorporating these constant resistive network edges, which do not depend on an internal memory parameter x. Examining the cycles in the network, we obtain a model of the network circuit dynamics,

Av\displaystyle A\vec{v} =\displaystyle= 0\displaystyle 0 (77)
=\displaystyle= AR(x)ir+AR(x)iNAvcAs\displaystyle AR(x)i_{r}+AR(x)i_{N}-Av_{c}-A\vec{s}
=\displaystyle= AR(x)AticAR(x)ic+AvcAs\displaystyle AR(x)A^{t}i_{c}-AR(x)i_{c}+Av_{c}-A\vec{s}
Atic\displaystyle-A^{t}i_{c} =\displaystyle= ΩA/R(x)R(x)ic+ΩA/R(x)vcΩA/R(x)s\displaystyle-\Omega_{A/R(x)}R(x)i_{c}+\Omega_{A/R(x)}v_{c}-\Omega_{A/R(x)}\vec{s} (78)

We have used i=Atic\vec{i}=A^{t}i_{c} from equation (39). We have

ΩA/R(x)vcΩA/R(x)s\displaystyle\Omega_{A/R(x)}v_{c}-\Omega_{A/R(x)}\vec{s} =\displaystyle= ΩA/R(x)R(x)ir\displaystyle-\Omega_{A/R(x)}R(x)i_{r} (79)
=\displaystyle= -A^{t}(AR(x)A^{t})^{-1}AR(x)i_{r}
=\displaystyle= -A^{t}(AR(x)A^{t})^{-1}\left(-AR(x)i_{N}-Av_{c}+A\vec{s}\right),

from which we can rewrite equation (78)

Atic\displaystyle-A^{t}i_{c} =\displaystyle= ΩA/R(x)R(x)ic+ΩA/R(x)R(x)iNΩA/R(x)vcΩA/R(x)s\displaystyle-\Omega_{A/R(x)}R(x)i_{c}+\Omega_{A/R(x)}R(x)i_{N}-\Omega_{A/R(x)}v_{c}-\Omega_{A/R(x)}\vec{s} (80)
0\displaystyle 0 =\displaystyle= (ΩA/R(x)R(x)+I)(iN+ic)(ΩA/R(x)R(x)I)(R(x)1vc+R(x)1s)\displaystyle\left(\Omega_{A/R(x)}R(x)+I\right)\left(i_{N}+i_{c}\right)-\left(\Omega_{A/R(x)}R(x)-I\right)\left(-R(x)^{-1}v_{c}+R(x)^{-1}\vec{s}\right) (81)
=\displaystyle= (ΩA/R(x)R(x)+I)(iN+q˙)(ΩA/R(x)R(x)I)(R(x)1Lq¨+(CR(x))1q+R(x)1s)\displaystyle\left(\Omega_{A/R(x)}R(x)+I\right)\left(i_{N}+\dot{\vec{q}}\right)-\left(\Omega_{A/R(x)}R(x)-I\right)\left(R(x)^{-1}L\ddot{\vec{q}}+(CR(x))^{-1}\vec{q}+R(x)^{-1}\vec{s}\right)

Equation (81) is a governing equation for the RLC network. We can now examine some specific cases. In the case with very high resistance in the network edges connecting RLC motifs, limRNiN0\lim_{R_{N}\rightarrow\infty}i_{N}\rightarrow 0, we can simplify equation (81),

0 = \Omega_{A/R(x)}R(x)\left(R(x)\dot{\vec{q}}-L\ddot{\vec{q}}-C^{-1}\vec{q}-\vec{s}\right)+\left(\dot{\vec{q}}+R(x)^{-1}L\ddot{\vec{q}}+R(x)^{-1}C^{-1}\vec{q}+R(x)^{-1}\vec{s}\right). (82)

The first term on the right-hand side enforces KVL within the RLC motifs with an additional constraint that the linear sum of potential across the memristive device edges, (R(x)q˙s)(R(x)\dot{q}-\vec{s}), equals zero. These are the linear sums remaining from the cycles spanning the network edges. The second term is the standard RLC dynamics in equation (76) for isolated RLC motifs.

When the resistance in the network edges is very low, as when the connections are of negligibly small resistance compared to the RLC elements, e.g., wires, we can approximate this as limRN0\lim_{R_{N}\rightarrow 0}. In this case, we have

0 = \Omega_{A/R(x)}R(x)\left(R(x)\dot{\vec{q}}-L\ddot{\vec{q}}-C^{-1}\vec{q}-\vec{s}\right)+\left(i_{N}+\dot{\vec{q}}+R(x)^{-1}L\ddot{\vec{q}}+R(x)^{-1}C^{-1}\vec{q}+R(x)^{-1}\vec{s}\right). (83)

The first term on the right-hand side is a voltage constraint enforcing KVL, similar to the previous case. The second term is a dynamical equation incorporating the network current iNi_{N}.

We examine some network topologies that simplify the circuit analysis and produce more tractable dynamical equations. If we have a network of RLC motifs in series each linked by a single network edge such that we form a ring of RLC motifs, we can again assign the LC edges to the cycle edges and the memristive edges to the tree edges. In the case of constant resistance in all the network edges, rNr_{N}, then we have a single cycle that spans all the memristive edges, R(x)R(x) here, and the network edges,

0\displaystyle 0 =\displaystyle= rNiN+R(x)ir\displaystyle r_{N}i_{N}+R(x)i_{r} (84)
iN\displaystyle i_{N} =\displaystyle= rN1(Lq¨+C1q+s)\displaystyle r_{N}^{-1}\left(L\ddot{\vec{q}}+C^{-1}q+\vec{s}\right) (85)

We can now impose Bi=0B\vec{i}=0 using equations (85),

0\displaystyle 0 =\displaystyle= B(ir+iN+ic)\displaystyle B(i_{r}+i_{N}+i_{c}) (86)
=\displaystyle= B(R(x)1+rN1)(s+Lq¨+C1q)+Bq˙\displaystyle B(R(x)^{-1}+r_{N}^{-1})\left(\vec{s}+L\ddot{\vec{q}}+C^{-1}\vec{q}\right)+B\dot{\vec{q}}
=\displaystyle= B(R)1(s+Lq¨+C1q)+Bq˙\displaystyle B(R^{\prime})^{-1}\left(\vec{s}+L\ddot{\vec{q}}+C^{-1}\vec{q}\right)+B\dot{\vec{q}}

Here we write icl=q˙i_{cl}=\dot{\vec{q}} as above and have an equivalent resistance R=(R(x)1+rN1)1R^{\prime}=(R(x)^{-1}+r_{N}^{-1})^{-1}. To solve for q¨\ddot{\vec{q}} we use Lq¨+C1q=v=Btp=RirsL\ddot{\vec{q}}+C^{-1}\vec{q}=-\vec{v}=-B^{t}\vec{p}=Ri_{r}-\vec{s}, where p\vec{p} is the n1n-1 dimensional potential vector defined on the nodes, and from which

ir=R1[C1q+Lq¨+s].i_{r}=R^{-1}[C^{-1}\vec{q}+L\ddot{\vec{q}}+\vec{s}]. (87)

We can now write equation (86) as

0=(B(R)1Bt)1Bq˙p+(B(R)1Bt)1B(R)1s.0=(B(R^{\prime})^{-1}B^{t})^{-1}B\dot{\vec{q}}-\vec{p}+(B(R^{\prime})^{-1}B^{t})^{-1}B(R^{\prime})^{-1}\vec{s}. (88)

If we now multiply the equation above by BtB^{t} on the left, we obtain

0\displaystyle 0 =\displaystyle= Bt(B(R)1Bt)1Bq˙Btp+Bt(B(R)1Bt)1B(R)1s\displaystyle B^{t}(B(R^{\prime})^{-1}B^{t})^{-1}B\dot{\vec{q}}-B^{t}\vec{p}+B^{t}(B(R^{\prime})^{-1}B^{t})^{-1}B(R^{\prime})^{-1}\vec{s} (89)
=\displaystyle= \Omega_{B/(R^{\prime})^{-1}}R^{\prime}\dot{\vec{q}}-B^{t}\vec{p}+\Omega_{B/(R^{\prime})^{-1}}\vec{s}

where we defined Bt(B(R)1Bt)1B(R)1ΩB/(R)1B^{t}(B(R^{\prime})^{-1}B^{t})^{-1}B(R^{\prime})^{-1}\equiv\Omega_{B/(R^{\prime})^{-1}}.

With this,

0\displaystyle 0 =\displaystyle= B(R)1(Btp)+Bq˙+B(R)1s\displaystyle-B(R^{\prime})^{-1}(B^{t}\vec{p})+B\dot{\vec{q}}+B(R^{\prime})^{-1}\vec{s} (90)
=\displaystyle= p+(B(R)1Bt)1Bq˙+(B(R)1Bt)1B(R)1s\displaystyle-\vec{p}+(B(R^{\prime})^{-1}B^{t})^{-1}B\dot{\vec{q}}+(B(R^{\prime})^{-1}B^{t})^{-1}B(R^{\prime})^{-1}\vec{s}
=\displaystyle= Lq¨+C1q+(ΩB/(R)1R)q˙+ΩB/(R)1s.\displaystyle L\ddot{\vec{q}}+C^{-1}\vec{q}+(\Omega_{B/(R^{\prime})^{-1}}R^{\prime})\dot{\vec{q}}+\Omega_{B/(R^{\prime})^{-1}}\vec{s}.

This can be written as a first-order linear system of equations,

(q˙q¨)=(0I(LC)1L1(ΩB/(R)1R))(qq˙)+(0L1ΩB/(R)1s).\displaystyle\begin{pmatrix}\dot{\vec{q}}\\ \ddot{\vec{q}}\end{pmatrix}=\begin{pmatrix}0&I\\ -(LC)^{-1}&-L^{-1}(\Omega_{B/(R^{\prime})^{-1}}R^{\prime})\end{pmatrix}\begin{pmatrix}\vec{q}\\ \dot{\vec{q}}\end{pmatrix}+\begin{pmatrix}0\\ -L^{-1}\Omega_{B/(R^{\prime})^{-1}}\vec{s}\end{pmatrix}. (91)

This gives eigenvalues of λ±=ΩB/(R)1R2L±(ΩB/(R)1R)24L21LC\lambda_{\pm}=\frac{-\Omega_{B/(R^{\prime})^{-1}}R^{\prime}}{2L}\pm\sqrt{\frac{\left(\Omega_{B/(R^{\prime})^{-1}}R^{\prime}\right)^{2}}{4L^{2}}-\frac{1}{LC}}. We again obtain imaginary eigenvalues for the underdamped case, where ΩB/(R)1R2CL<1\frac{\Omega_{B/(R^{\prime})^{-1}}R^{\prime}}{2}\sqrt{\frac{C}{L}}<1.

We can also examine the case of a network of RLC motifs in series, eventually without internal sources; for now, we work in the general case with nonzero sources. We incorporate the resistance from the network edges by introducing an equivalent impedance, a resistance R_{c}, on the LC edges. We assign the LC edges to the tree edges and the memristive edges to the cycle edges. The RLC motifs satisfy KVL,

Lq¨+C1q+Rcic=Rirs.\displaystyle L\ddot{\vec{q}}+C^{-1}\vec{q}+R_{c}i_{c}=Ri_{r}-\vec{s}. (92)

Proceeding as in the previous case, we can remove the network edges and link the RLC motifs directly in series. The total current comprises only the current on the memristive and LC edges, i=ir+ic\vec{i}=i_{r}+i_{c}. Now with B(ir+ic)=0B(i_{r}+i_{c})=0, we can again solve for q¨\ddot{\vec{q}} using Lq¨+C1q+Rcq˙=v=Btp=Rirs,L\ddot{\vec{q}}+C^{-1}\vec{q}+R_{c}\dot{\vec{q}}=-\vec{v}=-B^{t}\vec{p}=Ri_{r}-\vec{s}, where p\vec{p} is the n1n-1 dimensional potential vector defined on the nodes, and from which

ir=R1[C1q+Lq¨+Rcq˙+s].i_{r}=R^{-1}[C^{-1}\vec{q}+L\ddot{\vec{q}}+R_{c}\dot{\vec{q}}+\vec{s}]. (93)

With this we have,

0\displaystyle 0 =\displaystyle= BR1(Btp)+Bq˙+BR1s\displaystyle-BR^{-1}(B^{t}\vec{p})+B\dot{\vec{q}}+BR^{-1}\vec{s} (94)
=\displaystyle= p+(BR1Bt)1Bq˙+(BR1Bt)1BR1s\displaystyle-\vec{p}+(BR^{-1}B^{t})^{-1}B\dot{\vec{q}}+(BR^{-1}B^{t})^{-1}BR^{-1}\vec{s}
=\displaystyle= Lq¨+C1q+(ΩB/R1R+Rc)q˙+ΩB/R1s.\displaystyle L\ddot{\vec{q}}+C^{-1}\vec{q}+(\Omega_{B/R^{-1}}R+R_{c})\dot{\vec{q}}+\Omega_{B/R^{-1}}\vec{s}.

This may be cast as a first-order linear system of equations,

(q˙q¨)=(0I(LC)1L1(ΩB/R1R+Rc))(qq˙)+(0L1ΩB/R1s).\displaystyle\begin{pmatrix}\dot{\vec{q}}\\ \ddot{\vec{q}}\end{pmatrix}=\begin{pmatrix}0&I\\ -(LC)^{-1}&-L^{-1}(\Omega_{B/R^{-1}}R+R_{c})\end{pmatrix}\begin{pmatrix}\vec{q}\\ \dot{\vec{q}}\end{pmatrix}+\begin{pmatrix}0\\ -L^{-1}\Omega_{B/R^{-1}}\vec{s}\end{pmatrix}. (95)

We can rewrite the RLC circuit motif by incorporating memristive devices into the resistor elements. We note that R_{c} now depends on the resistance of the memristive device elements in a complicated way; thus, we write R_{c}(X(t)). Here, we use the flipped parameterization and recast the system of equations with memristive device resistance R(x).

(q˙q¨)=(0I(LC)1L1(Ron(ΩA+ΩB(I+ξX(t))1)1ΩB+Rc(X(t))))(qq˙)\displaystyle\begin{pmatrix}\dot{\vec{q}}\\ \ddot{\vec{q}}\end{pmatrix}=\begin{pmatrix}0&I\\ -(LC)^{-1}&-L^{-1}(R_{\text{on}}\left(\Omega_{A}+\Omega_{B}\left(I+\xi X(t)\right)^{-1}\right)^{-1}\Omega_{B}+R_{c}(X(t)))\end{pmatrix}\begin{pmatrix}\vec{q}\\ \dot{\vec{q}}\end{pmatrix}
+(0L1(ΩA+ΩB(I+ξX(t))1)1ΩB(I+ξX(t))1s)\displaystyle+\begin{pmatrix}0\\ -L^{-1}\left(\Omega_{A}+\Omega_{B}\left(I+\xi X(t)\right)^{-1}\right)^{-1}\Omega_{B}\left(I+\xi X(t)\right)^{-1}\vec{s}\end{pmatrix} (96)

If the resistance between RLC motifs is small compared to the LC impedance, then R_{c} can be neglected; this is what we expect in most cases where good conductors, e.g., wires, connect the memristive device elements and inductors. In the case of large resistance between RLC motifs, we now examine the network conductance when the sources within the RLC motifs are zero, i.e., s_{e}=0. Then R_{c}(X(t)) reduces to -R(x(t)), where R(x(t)) is the resistance of the memristive elements. This allows us to simplify equation (96),

(q˙q¨)\displaystyle\begin{pmatrix}\dot{\vec{q}}\\ \ddot{\vec{q}}\end{pmatrix} =\displaystyle= (0I(LC)1L1(ΩB/R1I)R(X))(qq˙)\displaystyle\begin{pmatrix}0&I\\ -(LC)^{-1}&-L^{-1}(\Omega_{B/R^{-1}}-I)R(X)\end{pmatrix}\begin{pmatrix}\vec{q}\\ \dot{\vec{q}}\end{pmatrix} (97)
=\displaystyle= (0I(LC)1L1(ΩA+RonΩBR(X)1)1ΩAR(X))(qq˙)\displaystyle\begin{pmatrix}0&I\\ -(LC)^{-1}&L^{-1}(\Omega_{A}+R_{\text{on}}\Omega_{B}R(X)^{-1})^{-1}\Omega_{A}R(X)\end{pmatrix}\begin{pmatrix}\vec{q}\\ \dot{\vec{q}}\end{pmatrix}

In the supplementary material, we prove that (\Omega_{B/R^{-1}}-I) can be written in terms of the cycle projection matrix \Omega_{A}. In this case, the resistance is projected onto the cycles via \Omega_{A}. Compared with the case without memristive devices, equation (91), there is an additional positive L^{-1}R(x) term in the fourth element of the matrix. Thus, the change in the time constant \tau=LR^{-1} in the cycles is a driving contribution to the current. We expect the current not to propagate beyond individual RLC cycles in this network. This can be seen by noting that \ddot{\vec{q}} is proportional to (\Omega_{B/R^{-1}}-I)R(x)\dot{\vec{q}}. The orthogonal complement of \Omega_{B/R^{-1}} is related to the cycle projection matrix \Omega_{A}; without any voltage bias within the RLC motif, the current will dissipate. The eigenvalues of this system are \lambda_{\pm}=-\frac{(\Omega_{B/R^{-1}}-I)R(x)}{2L}\pm\sqrt{\frac{\left((\Omega_{B/R^{-1}}-I)R(x)\right)^{2}}{4L^{2}}-\frac{1}{LC}}. We have imaginary eigenvalues in the underdamped case \frac{(\Omega_{B/R^{-1}}-I)R(x)}{2}\sqrt{\frac{C}{L}}<1, or equivalently \frac{(\Omega_{A}+R_{\text{on}}\Omega_{B}R(X)^{-1})^{-1}\Omega_{A}R(X)}{2}\sqrt{\frac{C}{L}}<1.

The techniques employed here can be generalized to other RLC circuit motifs with different arrangements of circuit elements.

IV Lyapunov functions for purely memristive circuit motifs

Lyapunov functions provide a method to analyze the stability of memristive device networks at equilibrium while offering insight into control parameters that can be adjusted to enforce stability. Lyapunov functions are scalar positive-definite functions, and we construct ones that capture the dynamics of the memristive device networks at first order in the time derivative. The conditions ensuring that the time derivative of the Lyapunov function is negative provide insight into the stable equilibrium conditions of memristive device networks. We explore the case of a dynamical equation in which R is linearly proportional to x, the case of asymmetric resistance, and the case in which R is proportional to a nonlinear function G_{n}(x). In addition, we study a few specific examples, including the nonlinear case G_{n}(x)=\tanh(x), akin to activation functions in neural networks, and explicitly examine the role of a window function in the resistance.

IV.1 Lyapunov functions for resistance of the form R(x)=a+bx

Let us discuss the Lyapunov function for the set of dynamical equations for memristive device networks derived in earlier papers Caravelli (2019); Caravelli, Sheldon, and Traversa (2021), which will serve as an introduction to the generalized Lyapunov functions. We begin with

ddtx=αx1β(I+ξΩX)1Ωs,\displaystyle\frac{d}{dt}\vec{x}=\alpha\vec{x}-\frac{1}{\beta}(I+\xi\Omega X)^{-1}\Omega\vec{s}, (98)

where XX is a diagonal matrix of the memory parameters in the network. We will call Ωs=y\Omega\vec{s}=\vec{y} in the following. For the equation of motion above, we have

(I+ξΩX)x˙=αx+αξΩXx1βy(I+\xi\Omega X)\dot{\vec{x}}=\alpha\vec{x}+\alpha\xi\Omega X\vec{x}-\frac{1}{\beta}\vec{y} (99)

Consider

L=13xtXxξ4xtXΩXx+12αβxtXy.L=-\frac{1}{3}\vec{x}\ ^{t}X\vec{x}-\frac{\xi}{4}\vec{x}\ ^{t}X\Omega X\vec{x}+\frac{1}{2\alpha\beta}\vec{x}\ ^{t}X\vec{y}. (100)

In this case, we have

dLdt\displaystyle\frac{dL}{dt} =x˙t(XxξXΩXx+1αβXy)\displaystyle=\dot{\vec{x}}\ ^{t}\left(-X\vec{x}-\xi X\Omega X\vec{x}+\frac{1}{\alpha\beta}X\vec{y}\right)
=1αx˙tX(αx+αξΩXx1βy)\displaystyle=-\frac{1}{\alpha}\dot{\vec{x}}\ ^{t}X\big{(}\alpha\vec{x}+\alpha\xi\Omega X\vec{x}-\frac{1}{\beta}\vec{y}\big{)}
=1αx˙t(X+ξXΩX)x˙\displaystyle=-\frac{1}{\alpha}\dot{\vec{x}}\ ^{t}(X+\xi X\Omega X)\dot{\vec{x}}
=1αx˙tX(I+ξXΩX)Xx˙\displaystyle=-\frac{1}{\alpha}\dot{\vec{x}}^{t}\sqrt{X}(I+\xi\sqrt{X}\Omega\sqrt{X})\sqrt{X}\dot{\vec{x}}
=1αXx˙(I+ξXΩX)2\displaystyle=-\frac{1}{\alpha}||\sqrt{X}\dot{\vec{x}}||^{2}_{(I+\xi\sqrt{X}\Omega\sqrt{X})} (101)

and we have that dLdt0\frac{dL}{dt}\leq 0. We can move x˙\dot{x} to the left, as all the terms in LL are symmetric here. Any positive semi-definite symmetric matrix, e.g., M=I+ξXΩXM=I+\xi\sqrt{X}\Omega\sqrt{X}, can be decomposed into the product of a matrix and its transpose such that M=QtQM=Q^{t}Q. Thus, we can rewrite the matrix norm of the Lyapunov function,

dLdt\displaystyle\frac{dL}{dt} =1αx˙tXQtQXx˙\displaystyle=-\frac{1}{\alpha}\dot{\vec{x}}^{t}\sqrt{X}Q^{t}Q\sqrt{X}\dot{\vec{x}} (102)
=1αXx˙QtQ2\displaystyle=-\frac{1}{\alpha}||\sqrt{X}\dot{\vec{x}}||^{2}_{Q^{t}Q}
=1αQXx˙2\displaystyle=-\frac{1}{\alpha}||Q\sqrt{X}\dot{\vec{x}}||^{2}
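The sign of dL/dt hinges on the matrix X+\xi X\Omega X being positive semidefinite when \Omega is a symmetric projector and x_{i}\geq 0. A quick randomized spot-check, with an assumed projector built from a small incidence matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: symmetric projector Omega from a small incidence matrix
B = np.array([[1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
Omega = B.T @ np.linalg.inv(B @ B.T) @ B
xi = 0.7                                    # any xi > 0

for _ in range(200):
    X = np.diag(rng.uniform(0.0, 1.0, 3))   # memory parameters x_i in [0, 1]
    M = X + xi * X @ Omega @ X              # matrix appearing in eq. (101)
    # M is symmetric PSD, so dL/dt = -(1/alpha) ||sqrt(X) xdot||_M^2 <= 0
    assert np.min(np.linalg.eigvalsh(M)) >= -1e-12
```

Both summands are positive semidefinite separately (X is a nonnegative diagonal, and X\Omega X is a congruence of the projector), which is why the check never fails.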

Asymptotically, in this case we have x_{i}(\infty)\in\{0,1\}, and the Lyapunov function takes the form

L=-\frac{\xi}{4}\vec{x}^{\,t}\underline{\Omega}\vec{x}+\vec{x}^{\,t}\left(\frac{1}{2\alpha\beta}\vec{y}-\frac{\xi}{4}\vec{\Omega}-\frac{1}{3}\vec{1}\right). (103)

where Ω¯\underline{\Omega} is the matrix Ω\Omega with diagonal elements removed and Ω\vec{\Omega} is the vector of diagonal elements.

Let us now consider the Lyapunov function in the standard parameterization. We have

ddtx=1β(IχΩX)1yαx,\displaystyle\frac{d}{dt}\vec{x}=\frac{1}{\beta}(I-\chi\Omega X)^{-1}\vec{y}-\alpha\vec{x}, (104)

or

(IχΩX)ddtx=1βyα(IχΩX)x.\displaystyle(I-\chi\Omega X)\frac{d}{dt}\vec{x}=\frac{1}{\beta}\vec{y}-\alpha(I-\chi\Omega X)\vec{x}. (105)

Because of the symmetry between the two differential equations, αβ\alpha\beta remains constant and ξχ\xi\rightarrow-\chi. We thus attempt to write a Lyapunov function of the form:

L=(13xtXxχ4xtXΩXx12αβxtXy).L^{\prime}=\big{(}\frac{1}{3}\vec{x}\ ^{t}X\vec{x}-\frac{\chi}{4}\vec{x}\ ^{t}X\Omega X\vec{x}-\frac{1}{2\alpha\beta}\vec{x}\ ^{t}X\vec{y}\big{)}. (106)

Taking a time derivative, we get

dLdt\displaystyle\frac{dL^{\prime}}{dt} =x˙t(XxχXΩXx1αβXy)\displaystyle=\dot{\vec{x}}\ ^{t}\left(X\vec{x}-\chi X\Omega X\vec{x}-\frac{1}{\alpha\beta}X\vec{y}\right)
=1αx˙tX(1βyα(IχΩX)x)\displaystyle=-\frac{1}{\alpha}\dot{\vec{x}}\ ^{t}\ X\left(\frac{1}{\beta}\vec{y}-\alpha(I-\chi\Omega X)\vec{x}\right)
=1αx˙t(XχXΩX)x˙\displaystyle=-\frac{1}{\alpha}\dot{\vec{x}}\ ^{t}(X-\chi X\Omega X)\dot{\vec{x}}
=1αx˙tX(IχXΩX)Xx˙\displaystyle=-\frac{1}{\alpha}\dot{\vec{x}}^{t}\sqrt{X}(I-\chi\sqrt{X}\Omega\sqrt{X})\sqrt{X}\dot{\vec{x}}
=1αXx˙(IχXΩX)2\displaystyle=-\frac{1}{\alpha}||\sqrt{X}\dot{\vec{x}}||^{2}_{(I-\chi\sqrt{X}\Omega\sqrt{X})} (107)

As with the other dynamics, we see that if I-\chi\sqrt{X}\Omega\sqrt{X}\succ 0, the Lyapunov function always has a negative derivative. It is not hard to see that \sqrt{X}\Omega\sqrt{X}\preceq I when x_{i}\in[0,1], and since 0<\chi<1, the Lyapunov property also applies in this case. Thus, the dynamics are passive and asymptotically stable for both types of circuits.
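The spectral bound \sqrt{X}\Omega\sqrt{X}\preceq I used here can be spot-checked numerically for a symmetric projector \Omega and random x_{i}\in[0,1]; the matrix B below is an assumed example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example: Omega is the orthogonal projector onto the row space of B
B = np.array([[1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
Omega = B.T @ np.linalg.inv(B @ B.T) @ B

for _ in range(200):
    sX = np.diag(np.sqrt(rng.uniform(0.0, 1.0, 3)))   # sqrt(X) with x_i in [0, 1]
    M = sX @ Omega @ sX
    # ||sqrt(X) Omega sqrt(X)|| <= 1, so I - chi*M is positive definite for chi < 1
    assert np.max(np.linalg.eigvalsh(M)) <= 1.0 + 1e-12
```

The bound follows from submultiplicativity: the projector has unit norm and each diagonal factor has norm at most one.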

IV.2 Proof that this works only for linear functions

The procedure we employed above for deriving the Lyapunov function only applies if the resistance R(x)R(x) is linear in the internal memory parameter xx. In order to see this, we begin again with the equations of motion,

1α(I+ξΩGn(X))dxdt=x+ξΩGn(X)x1αβy.\frac{1}{\alpha}(I+\xi\Omega G_{n}(X))\frac{d\vec{x}}{dt}=\vec{x}+\xi\Omega G_{n}(X)\vec{x}-\frac{1}{\alpha\beta}\vec{y}. (108)

Here Gn(X)G_{n}(X) is a nonlinear function for the resistance RR, and throughout this section y\vec{y} is a control vector of the memristance from equation (35), e.g., ΩS\Omega\vec{S}. This is equivalent to

1α(Gn(X)+ξGn(X)ΩGn(X))dxdt=Gn(X)x1αβGn(X)y+ξGn(X)ΩGn(X)x\displaystyle\begin{split}\frac{1}{\alpha}(G_{n}(X)+\xi G_{n}(X)\Omega G_{n}(X))\frac{d\vec{x}}{dt}&=G_{n}(X)\vec{x}-\frac{1}{\alpha\beta}G_{n}(X)\vec{y}+\xi G_{n}(X)\Omega G_{n}(X)\vec{x}\end{split} (109)

We now consider a generic Lyapunov function of the form

L(X)=axtF(X)x+bxtG(X)ΩG(X)xcxtQ(X)yαβL(X)=a\vec{x}^{t}F(X)\vec{x}+b\vec{x}^{t}G(X)\Omega G(X)\vec{x}-c\vec{x}^{t}Q(X)\frac{\vec{y}}{\alpha\beta} (110)

with F(X), G(X), and Q(X) diagonal matrices

dLdt\displaystyle\frac{dL}{dt} =ixiL(x)txi\displaystyle=\sum_{i}\partial_{x_{i}}L(x)\partial_{t}x_{i}
=idxidt(a(xi2F(xi)+2F(xi)xi)\displaystyle=\sum_{i}\frac{dx_{i}}{dt}\Big{(}a\big{(}x_{i}^{2}F^{\prime}(x_{i})+2F(x_{i})x_{i}\big{)}
+bj((xiG(xi)+G(xi))ΩijG(xj)xj+xjG(xj)Ωji(G(xi)+xiG(xi)))c(Q(xi)+xiQ(xi))yiαβ)\displaystyle\qquad\qquad\qquad+b\sum_{j}\big{(}(x_{i}G^{\prime}(x_{i})+G(x_{i}))\Omega_{ij}G(x_{j})x_{j}+x_{j}G(x_{j})\Omega_{ji}(G(x_{i})+x_{i}G^{\prime}(x_{i}))\big{)}-c(Q(x_{i})+x_{i}Q^{\prime}(x_{i}))\frac{y_{i}}{\alpha\beta}\Big{)}
=idxidt(a(xiF(xi)+2F(xi))xi+2bj((xiG(xi)+G(xi))ΩijG(xj)xj)c(Q(xi)+xiQ(xi))yiαβ)\displaystyle=\sum_{i}\frac{dx_{i}}{dt}\Big{(}a\big{(}x_{i}F^{\prime}(x_{i})+2F(x_{i})\big{)}x_{i}+2b\sum_{j}\big{(}(x_{i}G^{\prime}(x_{i})+G(x_{i}))\Omega_{ij}G(x_{j})x_{j}\big{)}-c(Q(x_{i})+x_{i}Q^{\prime}(x_{i}))\frac{y_{i}}{\alpha\beta}\Big{)} (111)

Now, we will show that dLdt\frac{dL}{dt} is not guaranteed to be negative by examining the case of the external field and a quadratic diagonal term. Note that for the quadratic diagonal term, we must have

a(xiF(xi)+2F(xi))=Gn(xi),a\left(x_{i}F^{\prime}(x_{i})+2F(x_{i})\right)=G_{n}(x_{i}), (112)

while for the external field term

c(xiQ(xi)+Q(xi))=Gn(xi).c\left(x_{i}Q^{\prime}(x_{i})+Q(x_{i})\right)=G_{n}(x_{i}). (113)

We temporarily set (a,c)=(1,1). In the supplementary material, we discuss solving the quadratic and external field terms. It follows that the full solution can be written, in terms of the free parameter a_{0}, as

F(z)\displaystyle F(z) =\displaystyle= 1z2(a0+zqGn(q)𝑑q)\displaystyle\frac{1}{z^{2}}(a_{0}+\int^{z}qG_{n}(q)dq) (114)
Q(z)\displaystyle Q(z) =\displaystyle= 1z(a0+zGn(q)𝑑q)\displaystyle\frac{1}{z}(a_{0}+\int^{z}G_{n}(q)dq) (115)

which is the solution to the original problem. Let us assume that Gn(z)=a2zG_{n}(z)=a_{2}z. Then we have

F(z)=1z2(a0+a23z3)F(z)=\frac{1}{z^{2}}\left(a_{0}+\frac{a_{2}}{3}z^{3}\right) (116)

If we require that F(0)=0 we obtain a_{0}=0; if we require F(1)=1, then a_{2}=3, which leaves us with the solution F(z)=z. For Q, enforcing Q(0)=0 and Q(1)=1 requires a_{0}=0 and a_{2}=2. Thus there is a disagreement in the value of a_{2}, which can be resolved by rescaling the coefficients so that 3a=2c, e.g., (a,c)=(\frac{1}{3},\frac{1}{2}).
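The resolution can be verified by direct substitution: with G_{n}(z)=z and F(z)=Q(z)=z, conditions (112) and (113) hold for (a,c)=(1/3,1/2). A quick numeric check on an assumed grid:

```python
import numpy as np

a, c = 1.0 / 3.0, 1.0 / 2.0
z = np.linspace(0.01, 1.0, 50)    # illustrative grid on (0, 1]
F, Fp = z, np.ones_like(z)        # F(z) = z and its derivative
Q, Qp = z, np.ones_like(z)        # Q(z) = z and its derivative
Gn = z                            # linear case G_n(z) = z

assert np.allclose(a * (z * Fp + 2 * F), Gn)   # eq. (112)
assert np.allclose(c * (z * Qp + Q), Gn)       # eq. (113)
```
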

We now note that the off-diagonal quadratic term is problematic: we need a function G for which this term becomes symmetric, so that we can move \frac{dx}{dt} to the left. If we require a function G(x_{i}) such that

2b(xiG(xi)+G(xi))=G(xi),2b\left(x_{i}G^{\prime}(x_{i})+G(x_{i})\right)=G(x_{i}), (117)

which means

\partial_{z}\log G(z)=\frac{(2b)^{-1}-1}{z}\rightarrow G(z)=a_{0}z^{(2b)^{-1}-1}. (118)

For this reason, a symmetric quadratic term in the Lyapunov function works only in the simple case where G(x) is proportional to a power of x; the simplest linear memristive device (G(x) linearly proportional to x) corresponds to b=\frac{1}{4}. Therefore, the Lyapunov functions found above apply only if the resistance is linear in the internal memory parameter x.
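The power-law solution of equation (118) can be checked by substitution; for b=1/4 the exponent equals 1, recovering the linear device. The prefactor a_{0} and the values of b below are illustrative.

```python
import numpy as np

a0 = 1.3                          # illustrative prefactor
z = np.linspace(0.01, 1.0, 50)    # illustrative grid on (0, 1]

for b in (0.25, 0.5, 1.0):        # illustrative values of the free parameter b
    p = 1.0 / (2 * b) - 1.0       # exponent (2b)^{-1} - 1
    G = a0 * z**p
    Gp = a0 * p * z**(p - 1)
    # G solves 2b (z G' + G) = G, eq. (117)-(118)
    assert np.allclose(2 * b * (z * Gp + G), G)

# b = 1/4 gives exponent 1: the simplest linear memristive device G(z) = a0*z
assert np.allclose(a0 * z**(1.0 / (2 * 0.25) - 1.0), a0 * z)
```
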

IV.3 Asymmetric EOMs and extension to nonlinear components

As we will see below, there is another way of obtaining a Lyapunov function for the case of generic memristive devices by relaxing the requirement that the matrix M be symmetric. We consider the generic equations of motion, in which g_{n}(X) is the Bernstein polynomial approximation introduced earlier. We have

1αZ(X)(I+ξΩgn(X))dxdt\displaystyle\frac{1}{\alpha}Z(X)(I+\xi\Omega g_{n}(X))\frac{d\vec{x}}{dt} =\displaystyle= Z(X)x+ξZ(X)Ωgn(X)x1αβZ(X)y\displaystyle Z(X)\vec{x}+\xi Z(X)\Omega g_{n}(X)\vec{x}-\frac{1}{\alpha\beta}Z(X)\vec{y} (119)

for an arbitrary nonzero diagonal matrix Z(X)Z(X). We consider again a generic Lyapunov function of the form

L(X)=axtF(X)x+bxtG(X)ΩG(X)xcxtQ(X)yαβL(X)=a\vec{x}^{t}F(X)\vec{x}+b\vec{x}^{t}G(X)\Omega G(X)\vec{x}-c\vec{x}^{t}Q(X)\frac{\vec{y}}{\alpha\beta} (120)

with F(X),Q(X)F(X),Q(X) and G(X)G(X) diagonal matrices

dLdt\displaystyle\frac{dL}{dt} =\displaystyle= ixiL(x)txi\displaystyle\sum_{i}\partial_{x_{i}}L(x)\partial_{t}x_{i} (121)
=\displaystyle= idxidt(a(xiF(xi)+2F(xi))xi+2bj((xiG(xi)+G(xi))ΩijG(xj)xj)c(Q(xi)+xiQ(xi))yiαβ)\displaystyle\sum_{i}\frac{dx_{i}}{dt}\Big{(}a\big{(}x_{i}F^{\prime}(x_{i})+2F(x_{i})\big{)}x_{i}+2b\sum_{j}\big{(}(x_{i}G^{\prime}(x_{i})+G(x_{i}))\Omega_{ij}G(x_{j})x_{j}\big{)}-c(Q(x_{i})+x_{i}Q^{\prime}(x_{i}))\frac{y_{i}}{\alpha\beta}\Big{)}

We need to solve for the linear and diagonal quadratic terms, i.e., the equations

x_{i}F^{\prime}(x_{i})+2F(x_{i}) = \frac{1}{a}Z(x_{i}) (122)
x_{i}Q^{\prime}(x_{i})+Q(x_{i}) = \frac{1}{c}Z(x_{i}). (123)

For the asymmetric term we need to be careful: we must in fact satisfy

x_{i}G^{\prime}(x_{i})+G(x_{i}) = \frac{1}{2b}Z(x_{i}). (124)

Since we now have the freedom of choosing Z(x_{i}), we choose

Z(x_{i}) = 2b(x_{i}g_{n}^{\prime}(x_{i})+g_{n}(x_{i})) (125)
G(x_{i}) = g_{n}(x_{i}), (126)

so that the quadratic coupling term is immediately satisfied. Thus, we need to solve

x_{i}F^{\prime}(x_{i})+2F(x_{i}) = \frac{2b}{a}\left(x_{i}g_{n}^{\prime}(x_{i})+g_{n}(x_{i})\right) (127)
x_{i}Q^{\prime}(x_{i})+Q(x_{i}) = \frac{2b}{c}\left(x_{i}g_{n}^{\prime}(x_{i})+g_{n}(x_{i})\right) (128)

which imposes Q(x_{i})=\frac{2b}{c}g_{n}(x_{i}). When (b,c)=\pm(\frac{1}{4},\frac{1}{2}), then Q(x_{i})=g_{n}(x_{i}). On the other hand, we have

F(z)=\frac{1}{z^{2}}\left(a_{0}+\frac{2b}{a}\int^{z}q(qg_{n}^{\prime}(q)+g_{n}(q))\,dq\right). (129)

Choosing a=-\frac{1}{3}, b=-\frac{1}{4} and c=-\frac{1}{2}, we obtain

L(X)=-\frac{1}{3}\vec{x}^{t}F(X)\vec{x}-\frac{1}{4}\vec{x}^{t}g_{n}(X)\Omega g_{n}(X)\vec{x}+\frac{1}{2}\vec{x}^{t}g_{n}(X)\frac{\vec{y}}{\alpha\beta} (130)

and, from equation (119),

\frac{d}{dt}L(X) = -\frac{1}{\alpha}\left\|\frac{d\vec{x}}{dt}\right\|^{2}_{Z(X)+\xi Z(X)\Omega g_{n}(X)} (131)
= -\frac{1}{\alpha}\left\|\frac{d\vec{x}}{dt}\right\|^{2}_{Z(X)+\frac{\xi}{2}(Z(X)\Omega g_{n}(X)+g_{n}(X)^{t}\Omega Z(X))}

We arrive at the second form in equation (131) by noting that the inner product on a real vector space is symmetric, so we can convert the matrix norm to its symmetric form. We have Z(X)=2b\left(g_{n}(X)+Xg_{n}^{\prime}(X)\right), thus our matrix is

M = Z(X)+\xi Z(X)\Omega g_{n}(X) = 2b\left(g_{n}(X)+Xg_{n}^{\prime}(X)+\xi g_{n}(X)\Omega g_{n}(X)+\xi Xg_{n}^{\prime}(X)\Omega g_{n}(X)\right) (132)

which is symmetric if g_{n}(X) is linear in X.

In general, we can treat g_{n}(X) as a nonlinear function akin to the Bernstein polynomial expansion studied earlier, e.g., g_{n}=x^{n}(1-x^{n}). Here, we examine the case g_{n}(X)=\tanh(X) explicitly, as it has the right properties to give analytical results. We have Z(X)=2b\left(X\,\text{sech}^{2}(X)+\tanh(X)\right). The function F(z) is a little complicated:

F(z) = \frac{1}{z^{2}}\Big(a_{0}+\frac{2b}{a}\big(\tfrac{1}{2}\left(\text{Li}_{2}\left(-e^{-2z}\right)-z\left(z+2\log\left(e^{-2z}+1\right)\right)\right)+z^{2}\tanh(z)\big)\Big) (133)

where \text{Li}_{s}(z)=\sum_{k=1}^{\infty}\frac{z^{k}}{k^{s}} is the polylogarithm, also called Jonquière's function. We now want to impose F(0)=0 and F(1)=1. Then we need to choose a_{0}=\frac{\pi^{2}b}{12a} and a normalization \mathcal{N}=\frac{a}{b} given by

\mathcal{N}=\left(\text{Li}_{2}\left(-\frac{1}{e^{2}}\right)-1-2\log\left(1+\frac{1}{e^{2}}\right)\right)+\frac{\pi^{2}}{12}+2\tanh(1) (134)

The result is

F(z)=\frac{1}{\mathcal{N}}\frac{\left(\text{Li}_{2}\left(-e^{-2z}\right)-z\left(z+2\log\left(e^{-2z}+1\right)\right)\right)+2z^{2}\tanh(z)+\frac{\pi^{2}}{12}}{z^{2}} (135)
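The boundary conditions F(0)=0 and F(1)=1 for equation (135) can be checked numerically. This is my own sketch, using a truncated series for the dilogarithm rather than a library special function:

```python
import math

# Numerical check of eq. (135): F(1) = 1 and F(z) -> 0 as z -> 0 for
# g_n(x) = tanh(x). Li2(x) = sum_{k>=1} x**k / k**2 converges for |x| <= 1,
# which covers the argument -exp(-2z) for z >= 0.

def li2(x, terms=4000):
    return sum(x**k / k**2 for k in range(1, terms + 1))

def F(z):
    norm = (li2(-math.exp(-2)) - 1 - 2 * math.log(1 + math.exp(-2))
            + math.pi**2 / 12 + 2 * math.tanh(1))          # eq. (134)
    num = (li2(-math.exp(-2 * z))
           - z * (z + 2 * math.log(math.exp(-2 * z) + 1))
           + 2 * z**2 * math.tanh(z) + math.pi**2 / 12)
    return num / (norm * z**2)

assert abs(F(1.0) - 1.0) < 1e-9   # boundary condition F(1) = 1
assert abs(F(1e-3)) < 1e-2        # F(z) vanishes as z -> 0
```

Near z = 0 the numerator vanishes faster than z^2 (the constant \pi^2/12 cancels \text{Li}_2(-1)=-\pi^2/12), which is what makes the imposed condition F(0)=0 consistent.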

Thus, the Lyapunov function depends on the eigenvalues of the matrix

M=X\,\text{sech}^{2}(X)+\tanh(X)+\xi(X\,\text{sech}^{2}(X)+\tanh(X))\Omega\tanh(X). (136)

Note that we can separate M into symmetric and asymmetric components, M=M_{s}+M_{a}, where

M_{s} = X\,\text{sech}^{2}(X)+\tanh(X)+\xi\tanh(X)\Omega\tanh(X) (137)
M_{a} = \frac{\xi}{2}(X\,\text{sech}^{2}(X)\Omega\tanh(X)+\tanh(X)\Omega X\,\text{sech}^{2}(X)) (138)

Now, it is clear that M_{s} is positive semi-definite. We note that on x_{i}\in[0,1] we have

\rho(x_{i})=\frac{x_{i}\,\text{sech}^{2}(x_{i})}{\tanh(x_{i})}\leq 1 (139)

and thus we can write

M_{a}=\frac{\xi}{2}\tanh(X)(\rho(X)\Omega+\Omega\rho(X))\tanh(X) (140)

The eigenvalues obtained are always positive, and this should guarantee that the L(X) constructed above is a Lyapunov function.
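The positivity of the spectrum of M can be spot-checked numerically. The construction below is my own (a random projector \Omega built from a random full-row-rank matrix A, random memory values x_i), not the paper's code:

```python
import numpy as np

# Spot check: for Omega = A^T (A A^T)^{-1} A, x_i in (0, 1], and xi > 0,
# the matrix M of eq. (136) has eigenvalues with positive real part, and
# rho(x) = x*sech(x)**2 / tanh(x) <= 1 on (0, 1] (eq. 139).

rng = np.random.default_rng(0)
n, m, xi = 8, 3, 0.5
A = rng.standard_normal((m, n))
Omega = A.T @ np.linalg.solve(A @ A.T, A)      # projection operator
x = rng.uniform(0.05, 1.0, n)

Z = np.diag(x / np.cosh(x)**2 + np.tanh(x))    # X sech^2(X) + tanh(X)
T = np.diag(np.tanh(x))
M = Z + xi * Z @ Omega @ T                     # eq. (136)

assert np.all(np.linalg.eigvals(M).real > 0)   # positive spectrum

rho = (x / np.cosh(x)**2) / np.tanh(x)
assert np.all(rho <= 1.0)                      # eq. (139)
```

The positivity here is in fact provable for this instance: M = Z(I + \xi\Omega T) is similar to the symmetric positive-definite matrix Z^{1/2}(I+\xi\Omega T)Z^{1/2}.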

IV.4 Case with window function

Let us now try to further generalize the Lyapunov function for memristive equations of motion with a window function Joglekar and Wolf (2009); Ascoli et al. (2016). In this case, we have

\frac{d\vec{x}}{dt}=\mathcal{X}(X)\left(\alpha\vec{X}-\frac{1}{\beta}(I+\xi\Omega g_{n}(X))^{-1}\vec{y}\right) (141)

where \mathcal{X}(X) is the window function. We can rewrite equation (141) as

\frac{1}{\alpha}Z(X)\Big(I+\xi\Omega g_{n}(X)\Big)\mathcal{X}^{-1}(X)\frac{d\vec{x}}{dt}=Z(X)\vec{X}+\xi Z(X)\Omega g_{n}(X)\vec{X}-Z(X)\frac{\vec{y}}{\alpha\beta}. (142)

We see that the problem is the same as before, but now, for the same Lyapunov function, we have

\frac{dL}{dt}=-\frac{1}{\alpha}\left\|\frac{d\vec{x}}{dt}\right\|^{2}_{Z(X)\mathcal{X}^{-1}(X)+\xi Z(X)\Omega g_{n}(X)\mathcal{X}^{-1}(X)} (143)

which has to be symmetrized. Note that Z(X) and \mathcal{X}^{-1}(X) are both diagonal, and thus they commute and are symmetric. Thus we have

M = Z(X)\mathcal{X}^{-1}(X)+\frac{\xi}{2}\Big(Z(X)\Omega g_{n}(X)\mathcal{X}^{-1}(X)+\mathcal{X}^{-1}(X)g_{n}(X)\Omega Z(X)\Big). (144)

Note that the spectral properties of the operator above are rather different. To see this, we can expand Z(X) and write it as Z(X)=g_{n}(X)+Xg_{n}^{\prime}(X). Then

M = Z(X)\mathcal{X}^{-1}(X)+\frac{\xi}{2}\Big(g_{n}(X)\Omega g_{n}(X)\mathcal{X}^{-1}(X)+\mathcal{X}^{-1}(X)g_{n}(X)\Omega g_{n}(X)\Big)
+\frac{\xi}{2}\Big(Xg_{n}^{\prime}(X)\Omega g_{n}(X)\mathcal{X}^{-1}(X)+\mathcal{X}^{-1}(X)g_{n}(X)\Omega g_{n}^{\prime}(X)X\Big). (145)

The third term on the right-hand side contains an operator that is not symmetric, as in the previous case. However, the second term on the right-hand side, which was symmetric and positive before, is no longer symmetric due to the window function. This implies that we can have negative eigenvalues of order \xi, and, in the presence of window functions, the Lyapunov function we have defined is not of general applicability. However, case-by-case analysis can identify when the window function is compatible with the Lyapunov function. We note that \mathcal{X}^{-1} is diagonal, so the second row of equation (145) has symmetry restrictions similar to those of equation (132).

V Invariances

Due to the projection operator \Omega, numerous circuit configurations give rise to similar properties. This is apparent by noting that we can identify components of the circuit that are orthogonal to the projection operator; these components are identified with the orthogonal projection operator \Omega^{\perp}, e.g., for \Omega_{A} and \Omega_{B} as described in the supplementary material. This produces various local gauge symmetries, wherein the electrical properties do not vary with the components of the circuit that span the orthogonal complement. For example, for an arbitrary matrix G(x) or voltage source \vec{s}, under the projection operator there is freedom in an orthogonal component, \Omega_{\perp}\tilde{G}(x) and \Omega_{\perp}\vec{\gamma}, respectively:

\Omega\left(G(x)+\Omega_{\perp}\tilde{G}(x)\right) = \Omega\left(G(x)+\left(I-\Omega\right)\tilde{G}(x)\right) (146)
= \Omega G(x)
\Omega\left(\vec{s}+\Omega_{\perp}\vec{\gamma}\right) = \vec{y} (147)

Equivalently, there is gauge invariance in any arbitrary vector or diagonal matrix Y that is a projection from a ground truth \mathcal{Y}, such that Y=\Omega\mathcal{Y}:

\Omega\cdot Y=\Omega\cdot\Omega\mathcal{Y}=\Omega\cdot\Omega^{n}Y.

The projection operator may have extensive gauge freedom in any circuit, but examining the circuit properties and memristive dynamics can restrict this freedom. For example, consider \Omega Y=\Omega(Y\pm k\Omega^{\perp}\Gamma), with diagonal matrices Y and \Gamma. Note that \Gamma here and \vec{\gamma} are not necessarily related. We can devise functional forms of Y that produce equivalent orthogonally projected vectors,

Y\rightarrow\left(Y+\sum k_{n}\Omega^{\perp}\Gamma\right)=f(\Omega^{\perp},Y,\Gamma) (148)

where f is a nonlinear function in \Omega^{\perp}, Y and \Gamma; consider, for instance, the nonlinear function f(\Omega^{\perp},Y,\Gamma)=\exp\left(\Omega^{\perp}\Gamma\right)Y. As \Omega is a non-orthogonal projection operator, it is necessary to track the action of the projection operators to ensure gauge freedom for an arbitrary nonlinear function. For example, consider the following two functions:

\Omega f_{1}(\Omega^{\perp},Y,\Gamma) = \Omega\exp\left(\Omega^{\perp}\Gamma\right)Y (149)
= \Omega Y
\Omega f_{2}(\Omega^{\perp},Y,\Gamma) = \Omega Y\exp\left(\Omega^{\perp}\Gamma\right) (150)
= \Omega\sum_{n=0}^{\infty}\frac{Y}{n!}\left(\Omega^{\perp}\Gamma\right)^{n}\neq\Omega Y

Thus, the specific network properties need to be examined on a case-by-case basis to ensure invariance, as they could otherwise contribute to anomalous behaviors in the network properties.
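The ordering sensitivity of equations (149) and (150) is easy to demonstrate numerically. The construction below is my own illustration (random projector, random diagonal Y and \Gamma, matrix exponential via truncated series):

```python
import numpy as np

# Omega * exp(Operp @ Gamma) @ Y == Omega @ Y (every series term of order
# >= 1 starts with Operp, which Omega annihilates), while right-multiplying
# Y by the exponential generically breaks the invariance.

def expm_series(M, terms=40):
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(1)
n, m = 6, 2
A = rng.standard_normal((m, n))
Omega = A.T @ np.linalg.solve(A @ A.T, A)
Operp = np.eye(n) - Omega
Y = np.diag(rng.uniform(0.5, 1.5, n))
Gamma = np.diag(rng.uniform(-1.0, 1.0, n))

f1 = Omega @ expm_series(Operp @ Gamma) @ Y   # eq. (149)
f2 = Omega @ Y @ expm_series(Operp @ Gamma)   # eq. (150)

assert np.allclose(f1, Omega @ Y)                    # gauge invariant
assert np.linalg.norm(f2 - Omega @ Y) > 1e-8         # invariance broken
```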

Finally, there is also invariance within circuit configurations: a transformation between equivalent circuits preserves the eigenspectrum of \Omega. The projection operators can be transformed via orthogonal matrices, \Omega_{1}=O^{t}\Omega_{2}O; when \Omega is the cycle matrix, different matrices O correspond to different loop-equivalent circuits. Since permutations are a subgroup of orthogonal transformations, they also leave the spectrum of \Omega invariant (and can be interpreted as edge relabelings).

We examine how such a transformation affects the dynamics of the memristive network. For example, we can rewrite the equations of motion with an orthogonally similar projection operator,

\frac{d\vec{x}}{dt}=\frac{1}{\beta}(I+\xi O^{t}\Omega_{2}OG(X))^{-1}O^{t}\Omega_{2}O\vec{S}-\alpha\vec{x}. (151)

We can see that while we have freedom in choosing the orthogonal matrices O, restrictions on O or nodal permutations may be required to preserve the network dynamics. Such restrictions will provide insight into the form of equivalent circuits. We investigate this below.

Here, we explore the effect of three invariances on the equations of motion of x, the current \vec{i}, and the Lyapunov functions. We examine invariant functions of the types Y+\Omega_{\perp}\vec{\gamma}, \Omega Y and O^{t}\Omega O. We work with a voltage source in series, \vec{s}, in the flipped parameterization:

\vec{i} = -R_{\text{on}}^{-1}(I+\xi\Omega_{\tilde{A}}G(X))^{-1}\Omega_{\tilde{A}}\vec{s} (152)

The results from this case generalize to the direct parameterization and to other sources, e.g., current sources.
There is a more general invariance lurking in the background, which generalizes the one we just discussed. The analysis above can be used to understand the relaxational dynamics of memristors near equilibria. To see this, we consider a network of current-driven memristive devices, whose internal memory obeys

\frac{d\vec{x}}{dt}=-\alpha\vec{x}+\frac{R_{\text{on}}}{\beta}\vec{i}, (153)

which leads again to equation (98) replacing equation (56) for G(X)=X. As we have seen, the transformation \vec{s}\rightarrow\vec{s}+(I-\Omega_{\tilde{A}})\vec{\gamma} for arbitrary vectors \vec{\gamma} leaves the dynamics invariant, since \Omega_{\tilde{A}}(I-\Omega_{\tilde{A}})=0. This invariance does not take into account the fact that we have memristive devices. However, for memristive devices described by equation (98), this is a subset of a larger invariance set, which is more generally a transformation of the form

(\vec{x},\vec{s})\rightarrow(\vec{x}^{\,\prime},\vec{s}^{\,\prime})=(\vec{x}+\delta\vec{x},\vec{s}+\delta\vec{s}) (154)

which preserves the dynamics, such that the time derivative is invariant, i.e., \frac{d\vec{x}}{dt}=\frac{d\vec{x}^{\,\prime}}{dt}. To see this, let us equate

(I-\chi\Omega_{\tilde{A}}X)^{-1}\Omega_{\tilde{A}}\frac{\vec{s}}{\beta}-\alpha\vec{x} = (I-\chi\Omega_{\tilde{A}}(X+\delta X))^{-1}\Omega_{\tilde{A}}\frac{(\vec{s}+\delta\vec{s})}{\beta}-\alpha(\vec{x}+\delta\vec{x}) (155)

After a few algebraic steps, we obtain the following generalized relationship:

(I-\Omega_{\tilde{A}})\delta\vec{x} = 0 (156)
\delta\vec{s} = \alpha\beta\big(I-\chi(X+\delta X)\big)\delta\vec{x}-\chi\delta X(I-\chi\Omega_{\tilde{A}}X)^{-1}\Omega_{\tilde{A}}\vec{s}+(I-\Omega_{\tilde{A}})\vec{\gamma} (157)

for any vector \vec{\gamma}. To see the origin of this generalized invariance, consider for instance resistances R_{1},\cdots,R_{n} in series with k generators s_{i}. We have

\frac{\sum_{j=1}^{k}s_{j}}{\sum_{j=1}^{n}R_{j}}=i. (158)

To preserve the current flowing in the mesh, we can change R_{j} and s_{j} in such a way that the ratio is preserved. For networks of memristors, equation (157) generalizes this simple invariance. Since it is valid for arbitrary derivatives, it must also hold in the particular case \frac{d\vec{x}}{dt}=0. As a result, this invariance maps equilibrium points into one another as a function of a change in external control.

V.1 Orthogonal vector component

Let us examine the case where voltage sources with an orthogonal component are applied in series, such that we can write our source as \vec{S}+\Omega_{A}^{\perp}\vec{\gamma}. We examine the orthogonal component's role in the time evolution of the resistance,

\frac{d}{dt}\vec{x}(t)=\alpha\vec{x}(t)-\frac{1}{\beta}\left(I+\xi\Omega_{A}X(t)\right)^{-1}\Omega_{A}\left(\vec{S}+\Omega_{A}^{\perp}\vec{\gamma}\right)=\alpha\vec{x}(t)-\frac{1}{\beta}\left(I+\xi\Omega_{A}X(t)\right)^{-1}\Omega_{A}\vec{S} (159)

The resistance in the network is therefore independent of the orthogonal component of the source. We now examine the Lyapunov functions of linear memristive device elements in the presence of an orthogonal contribution to the source. Consider the Lyapunov function

L=-\frac{1}{3}x^{t}Xx-\frac{\xi}{4}x^{t}X\Omega_{A}Xx+\frac{1}{2\alpha\beta}x^{t}X\Omega_{A}\left(\vec{S}+\Omega_{A}^{\perp}\vec{\gamma}\right), (160)
\frac{dL}{dt}=\frac{-1}{\alpha}\left\|\sqrt{X}\dot{x}\right\|_{\left(I+\xi\sqrt{X}\Omega_{A}\sqrt{X}\right)}. (161)

Again, we obtain that if I+\xi\sqrt{X}\Omega_{A}\sqrt{X}\succ 0, the Lyapunov function always has a negative derivative. As \sqrt{X}\Omega_{A}\sqrt{X}\geq 0 in the domain x\in\left[0,1\right] and \xi>0, the Lyapunov function has a negative derivative and the same passive dynamics and stability conditions as the system without orthogonal sources.

Similarly, we can examine the case where there is a component of the resistance that is orthogonal to \Omega_{A}, x(t)\rightarrow x^{\prime}(t)+\Omega^{\perp}_{A}\vec{\gamma} and, likewise, X(t)\rightarrow X^{\prime}(t)+\Omega^{\perp}_{A}\vec{\Gamma}. We examine the orthogonal component's role in the equations of motion,

\dot{x}^{\prime}(t)+\Omega_{A}^{\perp}\dot{\vec{\gamma}} = \alpha x^{\prime}(t)+\alpha\Omega_{A}^{\perp}\vec{\gamma}-\frac{1}{\beta}\left(I+\xi\Omega_{A}\left(X^{\prime}(t)+\Omega_{A}^{\perp}\vec{\Gamma}\right)\right)^{-1}\Omega_{A}\vec{s} (162)
\dot{x}^{\prime}(t) = \alpha x^{\prime}(t)+\Omega_{A}^{\perp}\left(\alpha\vec{\gamma}-\dot{\vec{\gamma}}\right)-\frac{1}{\beta}\left(I+\xi\Omega_{A}X^{\prime}(t)\right)^{-1}\Omega_{A}\vec{s} (163)

The time derivative of x has an additional component projected into \Omega_{A}^{\perp}. Here, we note that, without loss of generality, x^{\prime}(t) has no orthogonal component, as any orthogonal component can be absorbed into \vec{\gamma}; therefore, we can split equation (163) into projected and orthogonally projected components. The orthogonal component obeys the dissipative dynamics of the memristive device elements,

\dot{\vec{\gamma}}=\alpha\vec{\gamma}. (164)

The dissipative dynamics are short-term dynamics in the orthogonally projected space, as there could be a contribution to the dynamics of \vec{\gamma} from the non-orthogonal component. Thus, equation (164) does not fix the long-term dynamics. Treating the dynamics of the orthogonal and non-orthogonal components as separable, we see that \vec{\gamma} does not change the electrical current in the network. We rewrite the current using an expansion similar to equation (56) for the flipped parameterization,

\vec{i} = -R_{\text{on}}^{-1}A^{t}\left(AA^{t}+\xi A\left(x^{\prime}(t)+\Omega_{A}^{\perp}\vec{\gamma}\right)A^{t}\right)^{-1}A\vec{S}_{\text{in}} = -R_{\text{on}}^{-1}\left(I+\xi\Omega_{A}\left(x^{\prime}(t)+\Omega_{A}^{\perp}\vec{\gamma}\right)\right)^{-1}\Omega_{A}\vec{S}_{\text{in}} (165)

The orthogonal term \Omega_{A}^{\perp}\vec{\gamma} does not contribute to the electrical current. Next, we examine the Lyapunov functions in the presence of an orthogonal component of the circuit. We rewrite the equations of motion, equation (163), as

\left(I+\xi\Omega_{A}X^{\prime}(t)\right)\dot{x}^{\prime}(t)-\Omega_{A}^{\perp}\left(\alpha\vec{\gamma}-\dot{\vec{\gamma}}\right) =\alpha\left(x^{\prime}(t)+\xi\Omega_{A}X^{\prime}(t)x^{\prime}(t)-\frac{1}{\alpha\beta}\Omega_{A}\vec{s}\right). (166)

We work with the transformed Lyapunov function,

L =-\frac{1}{3}(x^{\prime}+\Omega_{A}^{\perp}\vec{\gamma})^{t}(X^{\prime}+\Omega_{A}^{\perp}\vec{\Gamma})(x^{\prime}+\Omega_{A}^{\perp}\vec{\gamma})-\frac{\xi}{4}(x^{\prime}+\Omega_{A}^{\perp}\vec{\gamma})^{t}(X^{\prime}+\Omega_{A}^{\perp}\vec{\Gamma})\Omega_{A}(X^{\prime}+\Omega_{A}^{\perp}\vec{\Gamma})(x^{\prime}+\Omega_{A}^{\perp}\vec{\gamma})+\frac{1}{2\alpha\beta}(x^{\prime}+\Omega_{A}^{\perp}\vec{\gamma})^{t}(X^{\prime}+\Omega_{A}^{\perp}\vec{\Gamma})\Omega_{A}\vec{s} (167)

We continue to work in the regime where x^{\prime}(t) has no component orthogonal to \Omega_{A} and is thus perpendicular to \Omega_{A}^{\perp}\vec{\gamma}. We can separate \dot{x}^{\prime} and \dot{\vec{\gamma}} in equation (166), and we have

\frac{dL}{dt} = -\dot{x}^{\prime\,t}X^{\prime}\left(x^{\prime}+\xi\Omega_{A}X^{\prime}x^{\prime}-\frac{1}{\alpha\beta}\Omega_{A}\vec{s}\right)-\dot{\vec{\gamma}}^{t}\Omega_{A}^{\perp}\Gamma\Omega_{A}^{\perp}\vec{\gamma} (168)
= -\frac{1}{\alpha}\dot{x}^{\prime\,t}X^{\prime}\left(I+\xi\Omega_{A}X^{\prime}\right)\dot{x}^{\prime}-\frac{1}{\alpha}\dot{\vec{\gamma}}^{t}\Omega_{A}^{\perp}\Gamma\Omega_{A}^{\perp}\dot{\vec{\gamma}}
= -\frac{1}{\alpha}\left\|\sqrt{X^{\prime}}\dot{x}^{\prime}+\sqrt{\Gamma}\Omega_{A}^{\perp}\dot{\vec{\gamma}}\right\|_{I+\xi\sqrt{X^{\prime}}\Omega_{A}\sqrt{X^{\prime}}}

where we have used the dissipative dynamics of \vec{\gamma} from equation (164). The Lyapunov function again has a negative semi-definite time derivative and a form similar to the Lyapunov functions without an orthogonal source term. Thus, memristive device networks with distinct orthogonal sources have the same passive dynamics and stability conditions.

As x^{\prime} and \vec{\gamma} are separable, we can split these into two Lyapunov functions for the orthogonal and non-orthogonal components,

\frac{dL}{dt} = -\frac{1}{\alpha}\left\|\sqrt{X^{\prime}}\dot{x}^{\prime}\right\|_{I+\xi\sqrt{X^{\prime}}\Omega_{A}\sqrt{X^{\prime}}} (169)
\frac{dL^{\perp}}{dt} = -\frac{1}{\alpha}\left\|\dot{\vec{\gamma}}\right\|_{\Omega_{A}^{\perp}\Gamma\Omega_{A}^{\perp}} (170)

These two Lyapunov functions follow directly from equation (163) by applying \Omega_{A} and \Omega_{A}^{\perp}. In this case, we have two equations:

\Omega_{A}\dot{x}^{\prime}(t) = \Omega_{A}\alpha x^{\prime}(t)-\frac{\Omega_{A}}{\beta}\left(I+\xi\Omega_{A}X^{\prime}(t)\right)^{-1}\Omega_{A}\vec{s} (171)
\Omega_{A}^{\perp}\dot{\vec{\gamma}} = \Omega_{A}^{\perp}\alpha\vec{\gamma}. (172)

Equation (171) reduces to the normal Lyapunov functions studied above, reproducing equation (169). We can also define a Lyapunov function for the orthogonal component,

L = -\frac{\alpha}{2}\vec{\gamma}^{t}\Omega_{A}^{\perp}\vec{\gamma} (173)
\frac{dL}{dt} = -\alpha\dot{\vec{\gamma}}^{t}\Omega_{A}^{\perp}\vec{\gamma} (174)
= -\left\|\dot{\vec{\gamma}}\right\|_{\Omega_{A}^{\perp}}

where we have used the dissipative dynamics of \vec{\gamma}. Again, we see that the orthogonal component has passive dynamics.

V.2 Projected vectors

Here, we examine the case where the voltage sources applied in series are the projection of a larger space, such that we can write our source vector as \vec{y}=\Omega\mathcal{Y}. This is another form of the previous case, wherein \mathcal{Y} has some orthogonal component; however, now we do not know a priori a trivial separation into orthogonal and non-orthogonal components. We examine the equations of motion in the case of an orthogonal component in the source term,

\frac{d}{dt}\vec{x}(t) = \alpha\vec{x}(t)-\frac{1}{\beta}\left(I+\xi\Omega_{A}X(t)\right)^{-1}\Omega\mathcal{Y} = \alpha\vec{x}(t)-\frac{1}{\beta}\left(I+\xi\Omega_{A}X(t)\right)^{-1}\Omega\Omega\mathcal{Y} = \alpha\vec{x}(t)-\frac{1}{\beta}\left(I+\xi\Omega_{A}X(t)\right)^{-1}\Omega\vec{y} (175)

The time derivative of x does not depend on the orthogonal components of \mathcal{Y}. Next, we examine the time derivative of x(t) when there is a component of the memristive devices that is orthogonal to the projection operator, x(t)\rightarrow\Omega_{A}x^{\prime}(t), using a modified form of equation (159),

\Omega_{A}\dot{x}^{\prime}(t)=\alpha\Omega_{A}x^{\prime}(t)-\frac{1}{\beta}\left(I+\xi\Omega_{A}\Omega_{A}X^{\prime}(t)\right)^{-1}\Omega_{A}\vec{s} (176)
\dot{x}(t)=\Omega_{A}\left(\alpha x^{\prime}(t)-\frac{1}{\beta}\left(I+\xi\Omega_{A}X^{\prime}(t)\right)^{-1}\Omega_{A}\vec{s}\right). (177)

The time derivative of x(t) is just the projection of \dot{x}^{\prime}(t); there is no contribution from the orthogonal component. Similarly, we can see that the electrical current in the network depends only on the projected component. The electrical current can be rewritten using an expansion similar to equation (56) and equation (43),

\vec{i} = -R_{\text{on}}^{-1}A^{t}\left(AA^{t}+\xi Ax^{\prime}(t)A^{t}\right)^{-1}A\vec{S}_{\text{in}} = -R_{\text{on}}^{-1}A^{t}\left(A\left(I+\xi\Omega_{A}x^{\prime}(t)\right)A^{t}\right)^{-1}A\vec{S}_{\text{in}} = -R_{\text{on}}^{-1}\left(I+\xi\Omega_{A}\Omega_{A}x^{\prime}(t)\right)^{-1}\Omega_{A}\vec{S}_{\text{in}} = -R_{\text{on}}^{-1}\left(I+\xi\Omega_{A}x^{\prime}(t)\right)^{-1}\Omega_{A}\vec{S}_{\text{in}} (178)

Thus, the current in the network depends only upon the projection of x onto \Omega_{A}. This implies that when the resistance is linear in x, then R(X)\rightarrow\Omega_{A}R(X), and the current is invariant under R\rightarrow\Omega_{A}R^{\prime}. We verify this:

\vec{i} = -A^{t}(A\Omega_{A}R^{\prime}A^{t})^{-1}A\vec{S}_{\text{in}} (179)
= -R_{\text{on}}^{-1}A^{t}(A\Omega_{A}A^{t}+A\Omega_{A}\xi x^{\prime}A^{t})^{-1}A\vec{S}_{\text{in}}
= -R_{\text{on}}^{-1}\sum_{k}\left(-A^{t}(A\Omega_{A}A^{t})^{-1}A\Omega_{A}\xi x^{\prime}\right)^{k}A^{t}(A\Omega_{A}A^{t})^{-1}A\vec{S}_{\text{in}}
= -R_{\text{on}}^{-1}(I+\Omega_{A}\xi x^{\prime})^{-1}\Omega_{A}\vec{S}_{\text{in}}
= -A^{t}(ARA^{t})^{-1}A\vec{S}_{\text{in}}
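The resummation underlying equation (178), namely A^{t}(A(I+\xi X)A^{t})^{-1}A=(I+\xi\Omega_{A}X)^{-1}\Omega_{A}, is an exact linear algebra identity and can be checked numerically. The instance below is my own construction (random A, random memory values):

```python
import numpy as np

# Check: A^T (A (I + xi*X) A^T)^{-1} A == (I + xi*Omega_A X)^{-1} Omega_A
# with Omega_A = A^T (A A^T)^{-1} A, so the branch currents depend on the
# memory state only through its coupling to the projector.

rng = np.random.default_rng(2)
n, m, xi = 7, 3, 0.8
A = rng.standard_normal((m, n))
Omega = A.T @ np.linalg.solve(A @ A.T, A)
X = np.diag(rng.uniform(0.0, 1.0, n))

lhs = A.T @ np.linalg.solve(A @ (np.eye(n) + xi * X) @ A.T, A)
rhs = np.linalg.solve(np.eye(n) + xi * Omega @ X, Omega)
assert np.allclose(lhs, rhs)

# The same currents follow for any source vector S_in:
S = rng.standard_normal(n)
assert np.allclose(lhs @ S, rhs @ S)
```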

We now examine the Lyapunov functions of linear memristive devices as a function of x^{\prime}. We have the equations of motion

(I+\xi\Omega_{A}X^{\prime}(t))\Omega_{A}\dot{x}^{\prime}(t)=\alpha\left(\Omega_{A}x^{\prime}(t)+\xi\Omega_{A}X^{\prime}(t)\Omega_{A}x^{\prime}(t)-\frac{1}{\alpha\beta}\Omega_{A}\vec{s}\right) (180)

We study the time derivative of a suitable Lyapunov function:

L=-\frac{1}{3}x^{\prime\,t}\Omega_{A}X^{\prime}\Omega_{A}x^{\prime}-\frac{\xi}{4}x^{\prime\,t}\Omega_{A}X^{\prime}\Omega_{A}X^{\prime}\Omega_{A}x^{\prime}+\frac{1}{2\alpha\beta}x^{\prime\,t}\Omega_{A}X^{\prime}\Omega_{A}\vec{s} (181)
\frac{dL}{dt} = -\dot{x}^{\prime\,t}\Omega_{A}X^{\prime}\left(\Omega_{A}x^{\prime}+\xi\Omega_{A}X^{\prime}\Omega_{A}x^{\prime}-\frac{1}{\alpha\beta}\Omega_{A}\vec{s}\right) = \frac{-1}{\alpha}\dot{x}^{\prime\,t}\Omega_{A}X^{\prime}\left(I+\xi\Omega_{A}X^{\prime}\right)\Omega_{A}\dot{x}^{\prime} = \frac{-1}{\alpha}\left\|\sqrt{X^{\prime}}\Omega_{A}\dot{x}^{\prime}\right\|_{(I+\xi\sqrt{X^{\prime}}\Omega_{A}\sqrt{X^{\prime}})} (182)

Thus, the Lyapunov functions are projections through \Omega_{A}. As in the case with an orthogonal source, we are guaranteed that \frac{dL}{dt}\leq 0, but the stability conditions depend on the projection of the resistance given by \Omega_{A}\dot{x}^{\prime}.

V.3 The equivalence class of spectrally invariant projectors

It was shown in Caravelli (2023) that an orthogonal transformation on the matrix \Omega_{A} can be interpreted as an edge rewiring that leaves the cycle content of the graph invariant. Since the graph-theoretical content is contained in the projector matrix, networks that are orthogonally related have orthogonally similar projection operators. We can interrelate these projection operators by transformation via orthogonal matrices O. A simple example is shown here for a cycle matrix A:

A = \begin{pmatrix}1&1&0&-1&0\\ 0&0&1&1&-1\end{pmatrix} (183)
O = \begin{pmatrix}1&0&0&0&0\\ 0&0&0&0&-1\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&1&0&0&0\end{pmatrix} (184)
(OA^{t})^{t} = \begin{pmatrix}1&0&0&-1&1\\ 0&1&1&1&0\end{pmatrix} (185)
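This example can be verified directly. The check below is my own numerics: it rebuilds the rewired cycle matrix of equation (185), confirms the two projectors are orthogonally similar, and confirms the spectrum is unchanged:

```python
import numpy as np

# Eqs. (183)-(185): A2 = (O A^T)^T is the rewired cycle matrix. Its
# projector is orthogonally similar to that of A, so the spectrum (and
# hence the cycle content) is preserved.

A = np.array([[1, 1, 0, -1, 0],
              [0, 0, 1, 1, -1]], dtype=float)
O = np.array([[1, 0, 0, 0, 0],
              [0, 0, 0, 0, -1],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 1, 0, 0, 0]], dtype=float)

def projector(B):
    return B.T @ np.linalg.solve(B @ B.T, B)

A2 = (O @ A.T).T                                           # eq. (185)
assert np.allclose(A2, [[1, 0, 0, -1, 1], [0, 1, 1, 1, 0]])
# orthogonal similarity of the two projectors
assert np.allclose(O.T @ projector(A2) @ O, projector(A))
# identical eigenvalues (both are rank-2 projectors on 5 edges)
assert np.allclose(np.sort(np.linalg.eigvalsh(projector(A2))),
                   np.sort(np.linalg.eigvalsh(projector(A))))
```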

This transformed network has an orthogonally similar projection matrix, \Omega_{A}\rightarrow O^{t}\Omega_{A}O. We now investigate the memristive device properties of networks with orthogonally similar projection matrices, \Omega_{A} and O^{t}\Omega_{A}O. We rewrite the equations of motion of x(t) with this transformed projection operator for a voltage source \vec{S} in series,

x˙(t)=αx(t)1β(I+ξOtΩAOx(t))1OtΩAOS.\displaystyle\dot{x}(t)=\alpha x(t)-\frac{1}{\beta}\left(I+\xi O^{t}\Omega_{A}Ox(t)\right)^{-1}O^{t}\Omega_{A}O\vec{S}. (186)

Obviously x˙(t)\dot{x}(t) is invariant when OtΩAOO^{t}\Omega_{A}O respects symmetries in xx and Y\vec{Y} such that OtΩAOx(t)=ΩAx(t)O^{t}\Omega_{A}Ox(t)=\Omega_{A}x(t), OtΩAOS=ΩASO^{t}\Omega_{A}O\vec{S}=\Omega_{A}\vec{S}, this is true if ΩA\Omega_{A} respects the orthonormal basis of OO and OO acts as a permutation operator. We gain further insight by examining the electrical current in the network, using the Woodbury matrix identity, equation (42), discussed above,

i\displaystyle\vec{i} =\displaystyle= Ron1k=0(1)k(OtΩAOξx(t))kOtΩAOSin\displaystyle-R_{\text{on}}^{-1}\sum_{k=0}^{\infty}(-1)^{k}\left(O^{t}\Omega_{A}O\xi x(t)\right)^{k}O^{t}\Omega_{A}O\vec{S}_{\text{in}} (187)
=\displaystyle= Ron1Otk=0(1)k(ΩAξOx(t)Ot)kΩAOSin\displaystyle-R_{\text{on}}^{-1}O^{t}\sum_{k=0}^{\infty}(-1)^{k}\left(\Omega_{A}\xi Ox(t)O^{t}\right)^{k}\Omega_{A}O\vec{S}_{\text{in}}

Thus, the transformation ΩAOtΩAO\Omega_{A}\rightarrow O^{t}\Omega_{A}O is equivalent to a dual transformation:

ΩAOtΩAOi(x(t),Sin)i(x(t),OSin)x(t)x(t)=Ox(t)Ot\displaystyle\Omega_{A}\rightarrow O^{t}\Omega_{A}O\Leftrightarrow\begin{matrix}\vec{i}(x(t),\vec{S}_{\text{in}})\rightarrow\vec{i}(x^{\prime}(t),O\vec{S}_{\text{in}})\\ x(t)\rightarrow x^{\prime}(t)=Ox(t)O^{t}\end{matrix} (188)

To examine the role of this dual transformation on the time evolution of x(t)x(t), we rewrite equation (13),

x˙(t)\displaystyle\dot{x}(t) =\displaystyle= αx(t)RonβOtA(AtOROtA)1AtOSin.\displaystyle\alpha x(t)-\frac{R_{\text{on}}}{\beta}O^{t}A\left(A^{t}ORO^{t}A\right)^{-1}A^{t}O\vec{S}_{\text{in}}. (189)

Thus, the transformation ΩAOtΩAO\Omega_{A}\rightarrow O^{t}\Omega_{A}O corresponds to a transformation of the orthogonal projectors with an orthogonally similar resistance matrix,

ΩROtΩOROtO,\displaystyle\Omega_{R}\rightarrow O^{t}\Omega_{ORO^{t}}O, (190)

therefore x˙(t)\dot{x}(t) is invariant under this orthogonal transformation if ΩR=OtΩOROtO\Omega_{R}=O^{t}\Omega_{ORO^{t}}O.

Next, we examine the Lyapunov function of linear memristive devices under an orthogonal transformation. We rearrange equation (189) to rewrite the equations of motion,

(I+ξOtΩAOX)x˙=α(x+ξOtΩAOXx1αβOtΩAOSin)\displaystyle(I+\xi O^{t}\Omega_{A}OX)\dot{x}=\alpha\left(x+\xi O^{t}\Omega_{A}OXx-\frac{1}{\alpha\beta}O^{t}\Omega_{A}O\vec{S}_{\text{in}}\right) (191)

We find an appropriate Lyapunov function and examine the time derivative; we have

L\displaystyle L =\displaystyle= 13xtXxξ14xtXOtΩAOXx+12αβxtXOtΩAOSin\displaystyle-\frac{1}{3}x^{t}Xx-\xi\frac{1}{4}x^{t}XO^{t}\Omega_{A}OXx+\frac{1}{2\alpha\beta}x^{t}XO^{t}\Omega_{A}O\vec{S}_{\text{in}} (192)
dLdt\displaystyle\frac{dL}{dt} =\displaystyle= x˙tX(x+ξOtΩAOXx1αβOtΩAOSin)\displaystyle-\dot{x}^{t}X\left(x+\xi O^{t}\Omega_{A}OXx-\frac{1}{\alpha\beta}O^{t}\Omega_{A}O\vec{S}_{\text{in}}\right) (193)
=\displaystyle= 1αXx˙(I+ξXOtΩAOX)\displaystyle\frac{-1}{\alpha}||\sqrt{X}\dot{x}||_{\left(I+\xi\sqrt{X}O^{t}\Omega_{A}O\sqrt{X}\right)}

Therefore, under this transformation, the Lyapunov functions are equivalent to the original case (without orthogonal transformations) when OX(t)=X(t)O\sqrt{X(t)}=\sqrt{X(t)}, a restriction similar to requiring A=AOtA=AO^{t}; this holds when O=IO=I or when OO respects the symmetries of the circuit such that a permutation leaves the circuit unchanged. Thus, the Lyapunov functions of the orthogonally transformed system are identical only if OtΩAO=ΩAO^{t}\Omega_{A}O=\Omega_{A}. Here, the Lyapunov function has a negative derivative if I+ξXOtΩAOX0I+\xi\sqrt{X}O^{t}\Omega_{A}O\sqrt{X}\succ 0.
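The relation between equations (192) and (193) can be spot-checked numerically for O=IO=I. The sketch below (assuming numpy; the projector, state, and source are random illustrative values, not taken from any particular circuit) compares the analytic dL/dtdL/dt against a finite difference of LL along x˙\dot{x}, and checks that the derivative is negative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, beta, xi = 6, 1.0, 1.0, 0.5

# Random symmetric projector Omega (orthogonal projector onto a 3d subspace)
M = rng.standard_normal((n, 3))
Omega = M @ np.linalg.pinv(M)
x = rng.uniform(0.1, 0.9, n)           # memory parameters in (0, 1)
S = rng.standard_normal(n)             # source vector
X = np.diag(x)

# Equation of motion with O = I: xdot = alpha*x - (I + xi*Omega*X)^{-1} Omega*S / beta
xdot = alpha * x - np.linalg.solve(np.eye(n) + xi * Omega @ X, Omega @ S) / beta

def L(x):
    # Lyapunov function of eq. (192) with O = I
    X = np.diag(x)
    return (-x @ X @ x / 3 - xi * (x @ X @ Omega @ X @ x) / 4
            + x @ X @ Omega @ S / (2 * alpha * beta))

# Analytic dL/dt, eq. (193), versus a central finite difference along xdot
dL = -xdot @ X @ (x + xi * Omega @ X @ x - Omega @ S / (alpha * beta))
eps = 1e-6
dL_fd = (L(x + eps * xdot) - L(x - eps * xdot)) / (2 * eps)
assert abs(dL - dL_fd) < 1e-6
assert dL < 0   # I + xi*sqrt(X) Omega sqrt(X) is positive definite here
```

Since the identity uses only the symmetry and idempotence of the projector, the same check passes with Ω\Omega replaced by OtΩOO^{t}\Omega O for any orthogonal OO.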

We can move the orthogonal matrix from the matrix norm to better compare the dynamics to the original case. We substitute OXOtXOXO^{t}\rightarrow X^{\prime}, and we can rewrite the time derivative of the Lyapunov function as

dLdt\displaystyle\frac{dL}{dt} =\displaystyle= 1αXOx˙(I+ξXΩAX).\displaystyle\frac{-1}{\alpha}||\sqrt{X^{\prime}}O\dot{x}||_{\left(I+\xi\sqrt{X^{\prime}}\Omega_{A}\sqrt{X^{\prime}}\right)}. (194)

Thus, networks related by an orthogonal transformation, OO, will similarly have a Lyapunov function with a negative derivative. Since the inner product is invariant under orthogonal transformations, XOx˙=Xx˙||\sqrt{X^{\prime}}O\dot{x}||=||\sqrt{X}\dot{x}||, and it is apparent that the stability conditions are similar to those of the non-transformed case.

VI Laplacian matrix and the projection operator

We note a deep connection between the graph Laplacian matrix and the projection operators studied above. First, we prove that the graph Laplacian LL can be constructed from the incidence matrix, BB, where GG is the (m×m)(m\times m) diagonal matrix of conductivity values.

Theorem 1.

The graph Laplacian can be written as L=BGBtL=BGB^{t}

Proof.

We examine the Laplacian matrix element-wise, noting that GG is a diagonal matrix:

Li,j\displaystyle L_{i,j} =\displaystyle= kBi,kBj,kGk,k\displaystyle\sum_{k}B_{i,k}B_{j,k}G_{k,k} (195)
=\displaystyle= kei,kej,kGk,k.\displaystyle\sum_{k}e^{k}_{i,}e^{k}_{j,}G_{k,k}.

Here ei,ke^{k}_{i,} indicates the incidence of edge eke^{k} on node nin_{i}. As each edge is incident upon only two nodes, then if iji\neq j, the product ei,kej,ke^{k}_{i,}e^{k}_{j,} is nonzero only if eke^{k} connects nodes nin_{i} and njn_{j}. If i=ji=j, then the sum is over all edges connected to node nin_{i}. Therefore, we can find the elements of LL:

Li,j\displaystyle L_{i,j} =\displaystyle= {Gk,k|ei,jk{E},ij0|ei,jk{E},ijkGk,k|ei,k{E},i=j\displaystyle\begin{cases}-G_{k,k}\;|\;e^{k}_{i,j}\in\{E\},\,i\neq j\\ 0\;|\;e^{k}_{i,j}\notin\{E\},\,i\neq j\\ \sum_{k}G_{k,k}\;|\;\forall e^{k}_{i,}\in\{E\},\,i=j\end{cases}, (196)

this reproduces the graph Laplacian, completing the proof. ∎
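The theorem can be checked on a concrete example. The sketch below (assuming numpy; the path graph and conductance values are illustrative) builds L=BGBtL=BGB^{t} for a three-node path and compares it to the weighted graph Laplacian written out by hand:

```python
import numpy as np

# Incidence matrix B (nodes x edges) for the path n0 -e0- n1 -e1- n2
B = np.array([[ 1,  0],
              [-1,  1],
              [ 0, -1]], dtype=float)
G = np.diag([2.0, 3.0])   # edge conductances

L = B @ G @ B.T
expected = np.array([[ 2, -2,  0],
                     [-2,  5, -3],
                     [ 0, -3,  3]], dtype=float)
assert np.allclose(L, expected)       # weighted graph Laplacian
assert np.allclose(L.sum(axis=0), 0)  # rows and columns sum to zero
```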

We can immediately see that the pseudoinverse of the graph Laplacian, (BGBt)+(BGB^{t})^{+} is related to the projector operator, ΩB/R1\Omega_{B/R^{-1}}:

ΩB/R1\displaystyle\Omega_{B/R^{-1}} =\displaystyle= Bt(BGBt)+BG\displaystyle B^{t}(BGB^{t})^{+}BG (197)
=\displaystyle= BtL+BG.\displaystyle B^{t}L^{+}BG.

This retains the projection operator properties of Ω\Omega,

ΩB/R1ΩB/R1\displaystyle\Omega_{B/R^{-1}}\Omega_{B/R^{-1}} =\displaystyle= ΩB/R1\displaystyle\Omega_{B/R^{-1}} (198)
=\displaystyle= BtL+BGBtL+BG=BtL+BG.\displaystyle B^{t}L^{+}BGB^{t}L^{+}BG=B^{t}L^{+}BG.
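The idempotence of equation (198) can be verified on a small circuit with a cycle; a sketch assuming numpy, with illustrative conductance values:

```python
import numpy as np

# Triangle circuit: 3 nodes, 3 oriented edges
B = np.array([[ 1,  0, -1],
              [-1,  1,  0],
              [ 0, -1,  1]], dtype=float)
G = np.diag([1.0, 2.0, 4.0])          # edge conductances G = R^{-1}

L = B @ G @ B.T                        # graph Laplacian (singular)
Omega = B.T @ np.linalg.pinv(L) @ B @ G

# Non-orthogonal projector: idempotent, eq. (198)
assert np.allclose(Omega @ Omega, Omega)
```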

We can similarly relate ΩB/R1\Omega_{B/R^{-1}} to the Laplacian matrix,

BGΩB/R1Bt\displaystyle BG\Omega_{B/R^{-1}}B^{t} =\displaystyle= LL+L\displaystyle LL^{+}L (199)
=\displaystyle= L,\displaystyle L,

and by multiplying on the left by (BG)+(BG)^{+} and on the right by Bt+B^{t\,+} we can equate,

ΩB/R1\displaystyle\Omega_{B/R^{-1}} =(BG)+LBt+\displaystyle=(BG)^{+}LB^{t\,+} (200a)
=BtL+BG\displaystyle=B^{t}L^{+}BG (200b)

where we have used equation (197) to arrive at equation (200b). Equating the right hand sides of equations (200a) and (200b), we see that ΩB/R1\Omega_{B/R^{-1}} is an involutory matrix within the non-orthogonal projector operator subspace. In the case of the unweighted Laplacian matrix, we can follow the derivation above using ΩB\Omega_{B} and substituting the identity matrix for GG.

Similarly, we can relate the Laplacian matrix and ΩB/R1\Omega_{B/R^{-1}} in the case of the circuit with current injected at the nodes. Using equation (53) we can equate the inner products,

jexttL+jext\displaystyle j_{\text{ext}}^{t}L^{+}j_{\text{ext}} =\displaystyle= itΩB/R1v,\displaystyle\vec{i}^{t}\Omega_{B/R^{-1}}\vec{v}, (201)

here jextj_{\text{ext}} is the current injected at the nodes, i\vec{i} is the current configuration and v\vec{v} is a voltage drop across the circuit edges. By dimensional analysis we can see that this equates the system’s power due to the injected current with the current and voltage configurations.
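Equation (201) can be verified on a small resistor network; a sketch assuming numpy, injecting a unit current through a triangle of illustrative conductances:

```python
import numpy as np

# Triangle of conductances; inject 1 A at node 0, extract at node 2
B = np.array([[ 1,  0, -1],
              [-1,  1,  0],
              [ 0, -1,  1]], dtype=float)
G = np.diag([1.0, 2.0, 4.0])
L = B @ G @ B.T
Lp = np.linalg.pinv(L)
Omega = B.T @ Lp @ B @ G               # Omega_{B/R^{-1}}, eq. (197)

j_ext = np.array([1.0, 0.0, -1.0])     # charge-balanced injection
phi = Lp @ j_ext                       # node potentials
v = B.T @ phi                          # voltage drops on edges
i = G @ v                              # edge currents

# Injected power equals the edge-level inner product, eq. (201)
assert np.allclose(j_ext @ Lp @ j_ext, i @ Omega @ v)
```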

Finally, we can follow a similar procedure to generate a matrix based on the cycle matrix AA related to the cycle projection operators ΩA\Omega_{A}. Using an arbitrary diagonal matrix, GG, denoting the properties of the edges in the circuit, we define a matrix

K\displaystyle K =\displaystyle= AGAt\displaystyle AGA^{t} (202)
Ki,j\displaystyle K_{i,j} =\displaystyle= kAi,kAj,kGk,k.\displaystyle\sum_{k}A_{i,k}A_{j,k}G_{k,k}. (203)

Examining KK element-wise, when iji\neq j, Ki,jK_{i,j} is the sum of elements Gk,kG_{k,k} corresponding to all edges eke^{k} which are part of both cycles cic_{i} and cjc_{j}; the sign of Gk,kG_{k,k} in the sum indicates whether the orientations of cycles cic_{i} and cjc_{j} are aligned on the edge eke^{k}, (+)(+) when the cycles are aligned, ()(-) when they are not aligned. Since we can separate our fundamental cycle matrix A~\tilde{A} into ATA_{T} and ACA_{C} sub-matrices, wherein ACA_{C} is an identity matrix, a KK generated from A~\tilde{A} will have no component of the cycle edges in the off-diagonal elements, as the cycle edges are never in more than one cycle. The diagonal elements, i=ji=j, of KK consist of a sum of all elements Gk,kG_{k,k} which correspond to the edges eke^{k} that form the cycle cic_{i}. Therefore we can define KK as

Ki,j\displaystyle K_{i,j} =\displaystyle= {±Gk,k|ekcicj,ij0|cicj=,ijkGk,k|ek{ci},i=j\displaystyle\begin{cases}\sum\pm G_{k,k}\;|\;\forall\,e^{k}\in c_{i}\cap c_{j},\,i\neq j\\ 0\;|\;c_{i}\cap c_{j}=\emptyset,\,i\neq j\\ \sum_{k}G_{k,k}\;|\;\forall e^{k}\in\{c_{i}\},\,i=j\end{cases}, (204)

which is related to a projection operator ΩA/G\Omega_{A/G} as

ΩA/G=GAtK+A.\displaystyle\Omega_{A/G}=GA^{t}K^{+}A. (205)

In the supplementary material, we discuss how this matrix can be used to calculate the power in the cycles of a circuit.
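The structure of KK in equation (204) can be checked against the two-cycle matrix of equation (183); a sketch assuming numpy, with unit edge weights:

```python
import numpy as np

# Cycle matrix of eq. (183): the two cycles share only edge e3,
# which they traverse with opposite orientation
A = np.array([[1, 1, 0, -1, 0],
              [0, 0, 1, 1, -1]], dtype=float)
G = np.eye(5)                      # unit edge weights

K = A @ G @ A.T
# Diagonal: sum of weights over each cycle's three edges;
# off-diagonal: the shared edge enters with sign -1 (anti-aligned cycles)
assert np.allclose(K, [[3, -1], [-1, 3]])

# Omega_{A/G} = G A^t K^+ A is idempotent, eq. (205)
Om = G @ A.T @ np.linalg.pinv(K) @ A
assert np.allclose(Om @ Om, Om)
```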

VII Effective resistance and memristive device correlations

In experimental studies of memristor networks it is often necessary to characterize the network's resistance without accessing all of the individual memristors. Instead, one is often restricted to two-point or four-point probe measurements, in which case the effective resistance between contacts is measured. Thus it is important to understand how the current and resistance in distant but accessible edges are related. This understanding could enable new methods to control the resistance between input and output edges.

Here, we use the effective resistance to study correlations between memristive device elements. The effective resistance can be calculated between individual edges, e.g., memristive devices, in the network. In the supplementary material, we develop a correlation analysis in the case where charge moves through a circuit via a random walk mechanism. Here, we generalize the correlation analysis to circuits under applied bias. By finding effective resistances between nodes in a network, we can examine the correlation of electrical current in two distant, non-adjacent, edges by constructing an effective Wheatstone bridge circuit.

VII.0.1 Two-point effective resistance

We begin by finding the effective resistance between nodes. The effective resistance, ReffR^{\text{eff}}, between nodes nan_{a} and nbn_{b} can be found by examining the (a,b)(a,b) walks and removing the closed cycle walks which return to nan_{a}, (a,a)(a,a), and similarly examining the reverse walk (b,a)(b,a):

R(a,b)eff\displaystyle R^{\text{eff}}_{(a,b)} =\displaystyle= L+(a,b)L+(b,a)+L+(a,a)+L+(b,b).\displaystyle-L^{+}(a,b)-L^{+}(b,a)+L^{+}(a,a)+L^{+}(b,b). (206)

The effective resistance can be considered the resistance of a hypothetical edge connecting two distant nodes: it is the resistance such that, if we apply a single volt between nan_{a} and nbn_{b}, the measured current ia,bi_{a,b} at our generator would be 1R(a,b)eff\frac{1}{R^{\text{eff}}_{(a,b)}}. This is the definition of the two-point effective resistance, which is motivated in the supplementary material by a Markov process for particle movement in a circuit. We rederive the two-point effective resistance below. We can similarly find the effective resistance by constructing an incidence matrix for the hypothetical edge we are interested in. We construct a vector, ia,b\vec{i}_{a,b}, that represents sending a single particle from nan_{a} to nbn_{b} and is charge balanced otherwise. We define

Bia,b=𝔟(a,b),\displaystyle B\vec{i}_{a,b}={\mathfrak{b}}_{(a,b)}, (207)

here 𝔟(a,b){\mathfrak{b}}_{(a,b)} is an nn-vector that is 11 in aa, 1-1 in bb, and zero otherwise. We can write

R(a,b)eff=𝔟(a,b)tL+𝔟(a,b)\displaystyle R^{\text{eff}}_{(a,b)}={\mathfrak{b}}^{t}_{(a,b)}L^{+}{\mathfrak{b}}_{(a,b)} (208)

From this, we can see that the node projection operator reproduces the two-point effective resistance,

ΩB/GG1\displaystyle\Omega_{B/G}G^{-1} =\displaystyle= BtL+B\displaystyle B^{t}L^{+}B (209)
Ra,beff\displaystyle R^{\text{eff}}_{a,b} =\displaystyle= ia,btΩB/GG1ia,b.\displaystyle\vec{i}_{a,b}^{t}\Omega_{B/G}G^{-1}\vec{i}_{a,b}. (210)

Thus the effective resistance is related to the projection operators through the single-particle current vectors. As ΩB/GG1\Omega_{B/G}G^{-1} is symmetric, the two-point effective resistance is the positive semi-definite inner product of the single-particle current vectors.
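Equations (206) and (210) can be checked on a network whose effective resistance is known in closed form; a sketch assuming numpy, using a triangle of unit resistors (1 Ω in parallel with the 2 Ω path around the triangle gives 2/3 Ω):

```python
import numpy as np

# Triangle of three 1-ohm resistors
B = np.array([[ 1,  0, -1],
              [-1,  1,  0],
              [ 0, -1,  1]], dtype=float)
G = np.eye(3)                          # unit conductances
L = B @ G @ B.T
Lp = np.linalg.pinv(L)

# Eq. (206): effective resistance between nodes 0 and 1
b = np.array([1.0, -1.0, 0.0])
R_eff = b @ Lp @ b
assert np.isclose(R_eff, 2.0 / 3.0)

# Eq. (210): same value from a single-particle current vector
i01 = np.array([1.0, 0.0, 0.0])        # unit current along edge e0
assert np.allclose(B @ i01, b)         # eq. (207)
assert np.isclose(i01 @ B.T @ Lp @ B @ i01, R_eff)
```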

VII.0.2 Effective resistive circuit

If ea,be_{a,b} and ec,de_{c,d} are edges in our network (e0e^{0} and e1e^{1} respectively), with corresponding currents i0i_{0} and i1i_{1}, then we can define a correlation between i0i_{0} and i1i_{1}. We define an effective circuit with effective resistors between the four nodes that define the edges e0e^{0} and e1e^{1}, Ra,cR_{a,c}, Ra,dR_{a,d}, Rb,cR_{b,c}, and Rb,dR_{b,d}; here we drop the eff{}_{\text{eff}} superscript. This allows us to construct a simple graph consisting of the edges ea,be_{a,b} and ec,de_{c,d} (e0e^{0} and e1e^{1}) with their effective resistance R0effR^{\text{eff}}_{0} and R1effR^{\text{eff}}_{1} (distinct from the true resistance values R0R_{0} and R1R_{1}) and the effective edges ea,ce_{a,c}, ea,de_{a,d}, eb,ce_{b,c}, and eb,de_{b,d} with their calculated effective resistance. A schematic of this graph is shown in Figure 4.

Refer to caption
Figure 4: (a) The effective circuit linking nan_{a} and nbn_{b} to ncn_{c} and ndn_{d} with six effective edges with effective resistances. (b) and (c) two representations of the unbalanced Wheatstone bridge circuit with sources ΔV1\Delta V_{1} and ΔV0\Delta V_{0}.

To find the effective resistance of each edge in our effective circuit, we need to calculate, for each effective edge, the resistance that preserves the power dissipation in the network. This is a generalization of the Star-Delta transformation of resistors. We can find such a transformation to an effective circuit by considering a partition function ZZ defined by a Gaussian model of the total power in the circuit. Here ϕ\phi is the potential on the nodes of our full network, and the power in the system is defined as P=i,j(ϕiϕj)2/RijP=\sum_{i,j}(\phi_{i}-\phi_{j})^{2}/R_{ij}. We define a partition function for the power of our circuit and integrate out the electric potential from all but the four nodes that remain in our effective circuit, {na,nb,nc,nd}\{n_{a},n_{b},n_{c},n_{d}\}.

We define,

Z\displaystyle Z =\displaystyle= 𝑑ϕ0𝑑ϕn4exp(β2i,j(ϕiϕj)2Ri,j)\displaystyle\int d\phi_{0}\cdots d\phi_{n-4}\exp{\left(\frac{-\beta}{2}\sum_{i,j}\frac{(\phi_{i}-\phi_{j})^{2}}{R_{i,j}}\right)} (211)
=\displaystyle= 𝑑ϕBexp(β2ϕtLϕ)\displaystyle\int d\phi_{B}\exp{\left(\frac{-\beta}{2}\vec{\phi}^{t}L\vec{\phi}\right)}
=\displaystyle= 𝑑ϕBexp(β2(ϕBtLBBϕB+ϕStLSSϕS+ϕStLSBϕB+ϕBtLBSϕS))\displaystyle\int d\phi_{B}\exp{\left(\frac{-\beta}{2}\left(\phi_{B}^{t}L_{BB}\phi_{B}+\phi_{S}^{t}L_{SS}\phi_{S}+\phi_{S}^{t}L_{SB}\phi_{B}+\phi_{B}^{t}L_{BS}\phi_{S}\right)\right)}

where ϕB\phi_{B} is ϕ0n4\vec{\phi}_{0\cdots n-4}, the vector of electric potentials excluding the four nodes in the effective circuit, i.e., the bulk of the network, and ϕS=ϕn3n\phi_{S}=\vec{\phi}_{n-3\cdots n}, i.e., the surface of the network. We have written the power in terms of the Laplacian matrix for the circuit, LL. Here we have rearranged our Laplacian and ϕ\vec{\phi} such that {na,nb,nc,nd}\{n_{a},n_{b},n_{c},n_{d}\} are the last nodes in our vector ϕ\vec{\phi}. The matrices LBBL_{BB}, LBSL_{BS}, LSB=LBStL_{SB}=L_{BS}^{t}, and LSSL_{SS} are the block elements of the Laplacian matrix:

L=(LBBLBSLBStLSS).\displaystyle L=\begin{pmatrix}L_{BB}&L_{BS}\\ L_{BS}^{t}&L_{SS}\end{pmatrix}. (212)

LBBL_{BB} is a (n4)×(n4)(n-4)\times(n-4) symmetric submatrix, LBSL_{BS} is a 4×(n4)4\times(n-4) submatrix, and LSSL_{SS} is 4×44\times 4 submatrix. Performing the integral,

Z=(2π)n4βn4det(LBB)exp(β2ϕSt(LBStLBB1LBSLSS)ϕS),\displaystyle Z=\sqrt{\frac{(2\pi)^{n-4}}{\beta^{n-4}\det(L_{BB})}}\exp{\left(\frac{\beta}{2}\phi_{S}^{t}\left(L_{BS}^{t}L_{BB}^{-1}L_{BS}-L_{SS}\right)\phi_{S}\right)}, (213)

here LBBL_{BB} must be a real positive-definite matrix; although LL is positive semi-definite and singular, its principal submatrix LBBL_{BB} is positive definite for a connected network. We can rewrite our Gaussian integral in terms of the bottom 4×44\times 4 block, i.e., the S×SS\times S block, of the inverse Laplacian matrix111Note that this result is the same as the bottom corner of a block-wise matrix inversion (Schur inverse), (ABCD)1=(A1+A1B(DCA1B)1CA1A1B(DCA1B)1(DCA1B)1CA1(DCA1B)1)\displaystyle\begin{pmatrix}A&B\\ C&D\end{pmatrix}^{-1}=\begin{pmatrix}A^{-1}+A^{-1}B\left(D-CA^{-1}B\right)^{-1}CA^{-1}&-A^{-1}B\left(D-CA^{-1}B\right)^{-1}\\ -\left(D-CA^{-1}B\right)^{-1}CA^{-1}&\left(D-CA^{-1}B\right)^{-1}\end{pmatrix} (214) :

Zexp(β2ϕSt(L4×4+)+ϕS).\displaystyle Z\propto\exp{\left(\frac{-\beta}{2}\phi_{S}^{t}(L^{+}_{4\times 4})^{+}\phi_{S}\right)}. (215)

Here (L4×4+)+(L^{+}_{4\times 4})^{+} is the effective Laplacian for our effective circuit, LeffL^{\text{eff}}; it is symmetric, but in general its rows and columns do not sum to 0 and it is not necessarily a singular matrix. We find the exact effective resistance matrix,

Leff\displaystyle L^{\text{eff}} =\displaystyle= BGeffBt\displaystyle BG^{\text{eff}}B^{t} (216)
L4×4+\displaystyle L^{+}_{4\times 4} =\displaystyle= (BGeffBt)+\displaystyle\left(BG^{\text{eff}}B^{t}\right)^{+} (217)
Geff+\displaystyle G^{\text{eff}+} =\displaystyle= BtL4×4+B\displaystyle B^{t}L^{+}_{4\times 4}B (218)
=\displaystyle= R~eff,\displaystyle\tilde{R}^{\text{eff}},

where BB is the incidence matrix for our effective circuit, which we define

B=e0eadeacebdebce1na( 111000) nb100110nc001011nd010101.\displaystyle B=\bordermatrix{&e^{0}&e_{ad}&e_{ac}&e_{bd}&e_{bc}&e^{1}\cr n_{a}&1&-1&1&0&0&0\cr n_{b}&-1&0&0&-1&1&0\cr n_{c}&0&0&-1&0&-1&1\cr n_{d}&0&1&0&1&0&-1\cr}. (219)

We can write the elements of R~eff\tilde{R}^{\text{eff}} in terms of the elements of the L4×4+L^{+}_{4\times 4} matrix:

R~i,jeff=𝔟itL4×4+𝔟j,\displaystyle\tilde{R}^{\text{eff}}_{i,j}={\mathfrak{b}}_{i}^{t}L^{+}_{4\times 4}{\mathfrak{b}}_{j}, (220)

where 𝔟i{\mathfrak{b}}_{i} is the ii-th column of BB; in particular, the diagonal elements reduce to the two-point effective resistances of equation (206).

The power dissipation of this system is now P=ΔϕStR~eff+ΔϕSP=\Delta\phi_{S}^{t}\tilde{R}^{\text{eff}+}\Delta\phi_{S}, where ΔϕS\Delta\phi_{S} is the electric potential difference, ΔϕS=BtϕS\Delta\phi_{S}=B^{t}\phi_{S}. Importantly, the diagonal elements are the effective resistances between pairs of nodes, equation (206); the off-diagonal terms are cross talk, which defines a power loss between currents in distinct edges, i.e., Pi,j=iiR~i,jeffijP_{i,j}=\vec{i}_{i}\tilde{R}^{\text{eff}}_{i,j}\vec{i}_{j}. Due to the presence of these off-diagonal terms in R~eff\tilde{R}^{\text{eff}}, it is more convenient to determine the effective conductance, and thus resistance, from the off-diagonal terms of LeffL^{\text{eff}}, i.e., Rab=(La,beff)1R_{ab}=\left(L^{\text{eff}}_{a,b}\right)^{-1}. In the supplementary material, we discuss a mean-field approximation of the power dissipation in our effective circuit using the effective resistance R~eff\tilde{R}^{\text{eff}}.
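The reduction behind equation (213) can be spot-checked numerically via the Schur complement over the bulk block (Kron reduction), which preserves the pairwise effective resistances among the retained surface nodes. A sketch on a random resistor network, assuming numpy (the network size and conductance range are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
# Random connected resistor network: complete graph, random conductances
W = rng.uniform(0.5, 2.0, (n, n))
W = np.triu(W, 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W

# Keep the last 4 nodes (surface S), integrate out the bulk (first n-4)
LBB, LBS, LSS = L[:n-4, :n-4], L[:n-4, n-4:], L[n-4:, n-4:]
L_eff = LSS - LBS.T @ np.linalg.inv(LBB) @ LBS   # Schur complement

# Effective resistance between two surface nodes agrees with the full network
Lp, Lp_eff = np.linalg.pinv(L), np.linalg.pinv(L_eff)
b_full = np.zeros(n); b_full[n-4] = 1.0; b_full[n-1] = -1.0
b_eff = np.array([1.0, 0.0, 0.0, -1.0])
assert np.isclose(b_full @ Lp @ b_full, b_eff @ Lp_eff @ b_eff)
```

This confirms that the effective four-node circuit reproduces the power dissipated between its terminals for any applied surface potentials.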

The above methods can be extended to reproduce the effective resistance between two nodes. We can rewrite equation (213) as

Z\displaystyle Z =\displaystyle= (2π)n4βn4det(LBB)𝑑ϕa𝑑ϕbexp(β2[ϕa,ϕb,ϕc,ϕd](L4×4+)+[ϕa,ϕb,ϕc,ϕd]t)\displaystyle\sqrt{\frac{(2\pi)^{n-4}}{\beta^{n-4}\det(L_{BB})}}\int d\phi_{a}d\phi_{b}\exp{\left(-\frac{\beta}{2}[\phi_{a},\phi_{b},\phi_{c},\phi_{d}](L^{+}_{4\times 4})^{+}[\phi_{a},\phi_{b},\phi_{c},\phi_{d}]^{t}\right)} (221)
\displaystyle\propto exp(β2[ϕc,ϕd](((L4×4+)+)2×2+)+[ϕc,ϕd]t),\displaystyle\exp{\left(-\frac{\beta}{2}[\phi_{c},\phi_{d}](((L^{+}_{4\times 4})^{+})^{+}_{2\times 2})^{+}[\phi_{c},\phi_{d}]^{t}\right)},

the matrix in the exponent can be simplified (((L4×4+)+)2×2+)+=(L2×2+)+(((L^{+}_{4\times 4})^{+})^{+}_{2\times 2})^{+}=(L^{+}_{2\times 2})^{+}. Using a two-point incidence matrix, BB, the two-point effective resistance is written in terms of L2×2+L^{+}_{2\times 2} as:

R~2×2eff=Lcc++Ldd+Lcd+Ldc+,\displaystyle\tilde{R}^{\text{eff}}_{2\times 2}=L^{+}_{cc}+L^{+}_{dd}-L^{+}_{cd}-L^{+}_{dc}, (222)

reproducing the two-point effective resistance of equation (206).

VII.0.3 Current correlations

From these exact effective resistance values, we can solve for the current correlations in our effective circuit. The relation between the current in the edges can be determined via standard circuit graph techniques described above and by examining the sum of spanning trees that link nan_{a} and nbn_{b} while spanning e1e^{1} (see for instance Biggs (1974)). Here, we provide a different method to find correlations between currents without knowing the full network connectivity. We note that our effective circuit, which contains all possible paths between edges e0e^{0} and e1e^{1}, can be redrawn as an unbalanced Wheatstone bridge circuit, with voltage sources of ΔV0=VaVb\Delta V_{0}=V_{a}-V_{b} and ΔV1=VcVd\Delta V_{1}=V_{c}-V_{d}, respectively, as shown in Figure 4 (b) and (c).

We define a correlation for a unit of current in e0e^{0},

i1,i0=Ra,dRc,bRa,cRb,d(Ra,d+Rb,d)(Ra,c+Rc,b)R1+Ra,dRa,cRb,c+Ra,cRd,bRb,c+Ra,dRa,cRd,b+Ra,dRd,bRb,cR0R1effR1i02.\displaystyle\langle i_{1},i_{0}\rangle=\frac{R_{a,d}R_{c,b}-R_{a,c}R_{b,d}}{(R_{a,d}+R_{b,d})(R_{a,c}+R_{c,b})R_{1}+R_{a,d}R_{a,c}R_{b,c}+R_{a,c}R_{d,b}R_{b,c}+R_{a,d}R_{a,c}R_{d,b}+R_{a,d}R_{d,b}R_{b,c}}R_{0}\frac{R^{\text{eff}}_{1}}{R_{1}}i_{0}^{2}. (223)

Here Ra,bR_{a,b} are effective resistance values obtained from La,beffL^{\text{eff}}_{a,b} and Lb,aeffL^{\text{eff}}_{b,a}, e.g., by averaging.

We arrive at this correlation by noting that we can construct multiple subgraphs using the six effective resistances. We construct two unbalanced Wheatstone bridge circuits, and from these circuits we find a mapping that relates the voltages of the circuit nodes. If we know VcV_{c} and VdV_{d} we can find VaV_{a} and VbV_{b}, i.e., (Vc,Vd)(Va,Vb)(V_{c},V_{d})\rightarrow(V_{a},V_{b}). We find a mapping from ΔV0\Delta V_{0} to ΔV1\Delta V_{1}, the forward map, and the inverse map from ΔV1\Delta V_{1} to ΔV0\Delta V_{0}. We do this by removing the edge outside the Wheatstone bridge and solving for the voltage configurations; this is detailed in the supplementary material, where we explicitly show the two linear transformations for each unbalanced Wheatstone bridge configuration. The two linear mappings are,

(VcVd)\displaystyle\begin{pmatrix}V_{c}\\ V_{d}\end{pmatrix} =\displaystyle= Q(VaVb)\displaystyle Q\begin{pmatrix}V_{a}\\ V_{b}\end{pmatrix} (224)
(VaVb)\displaystyle\begin{pmatrix}V_{a}\\ V_{b}\end{pmatrix} =\displaystyle= M(VcVd),\displaystyle M\begin{pmatrix}V_{c}\\ V_{d}\end{pmatrix}, (225)

we provide exact expressions for the matrix elements using the relationship M=Q1M=Q^{-1} in the supplementary material.

From these two linear mappings, we find a scaling relation between i0i_{0} and i1i_{1}. There exist linear mappings for the currents:

f(QR0R1)\displaystyle f\left(Q\frac{R_{0}}{R_{1}}\right) :\displaystyle: i0i1\displaystyle i_{0}\rightarrow i_{1} (226)
f(Q1R1R0)\displaystyle f\left(Q^{-1}\frac{R_{1}}{R_{0}}\right) :\displaystyle: i1i0\displaystyle i_{1}\rightarrow i_{0} (227)

The determinants of QQ and Q1Q^{-1} give a scaling relation for the current transformation; this is detailed in the supplementary material. We have

detQR0R1\displaystyle\det{Q\frac{R_{0}}{R_{1}}} =Ra,dRc,bRa,cRb,d(Ra,d+Rb,d)(Ra,c+Rc,b)R1eff+Ra,dRa,cRb,c+Ra,cRd,bRb,c+Ra,dRa,cRd,b+Ra,dRd,bRb,cR0R1effR1\displaystyle=\frac{R_{a,d}R_{c,b}-R_{a,c}R_{b,d}}{(R_{a,d}+R_{b,d})(R_{a,c}+R_{c,b})R^{\text{eff}}_{1}+R_{a,d}R_{a,c}R_{b,c}+R_{a,c}R_{d,b}R_{b,c}+R_{a,d}R_{a,c}R_{d,b}+R_{a,d}R_{d,b}R_{b,c}}R_{0}\frac{R^{\text{eff}}_{1}}{R_{1}} (228)
detQ1R1R0\displaystyle\det{Q^{-1}\frac{R_{1}}{R_{0}}} =(Ra,d+Rb,d)(Ra,c+Rc,b)R1eff+Ra,dRa,cRb,c+Ra,cRd,bRb,c+Ra,dRa,cRd,b+Ra,dRd,bRb,cRa,dRc,bRa,cRb,d1R0R1R1eff,\displaystyle=\frac{(R_{a,d}+R_{b,d})(R_{a,c}+R_{c,b})R^{\text{eff}}_{1}+R_{a,d}R_{a,c}R_{b,c}+R_{a,c}R_{d,b}R_{b,c}+R_{a,d}R_{a,c}R_{d,b}+R_{a,d}R_{d,b}R_{b,c}}{R_{a,d}R_{c,b}-R_{a,c}R_{b,d}}\frac{1}{R_{0}}\frac{R_{1}}{R^{\text{eff}}_{1}}, (229)

where R0R_{0} and R1R_{1} are the true resistances in edges e0e^{0} and e1e^{1}, while R1effR^{\text{eff}}_{1} is the effective resistance for e1e^{1} from R~eff\tilde{R}^{\text{eff}} and in QQ. This completes the derivation of equation (223), since

i1=detQR0R1i0.\displaystyle i_{1}=\det Q\frac{R_{0}}{R_{1}}i_{0}. (230)

The two-point effective resistance obeys the triangle inequality when all the effective resistance values in our effective circuit are positive, R(i,j)eff+R(j,k)effR(i,k)effR^{\text{eff}}_{(i,j)}+R^{\text{eff}}_{(j,k)}\geq R^{\text{eff}}_{(i,k)}, e.g., Ra,d+Rb,dR0effR_{a,d}+R_{b,d}\geq R^{\text{eff}}_{0}. When R1effR^{\text{eff}}_{1} is much greater than the other effective resistance values, we can simplify equation (228),

detQR0R1Ra,dRc,bRa,cRb,dR0R1.\det{Q\frac{R_{0}}{R_{1}}}\leq\frac{R_{a,d}R_{c,b}-R_{a,c}R_{b,d}}{R_{0}R_{1}}. (231)

We expect R1effR^{\text{eff}}_{1} to be large when there are few parallel paths between ncn_{c} and ndn_{d}, as when it is an input or output edge for an otherwise densely connected network.

It is experimentally more realizable to work with the two-point effective resistance for each of the six edges in our effective circuit, i.e., the diagonal elements of R~eff\tilde{R}^{\text{eff}}. It is possible to estimate the current correlations using the two-point effective resistances in equations (228) and (229). We can determine the error in approximating the conductivity La,beffL^{\text{eff}}_{a,b} by the two-point effective resistance. Given that R~eff\tilde{R}^{\text{eff}} is not singular, we have

Leff\displaystyle L^{\text{eff}} =\displaystyle= B(R~eff)1Bt\displaystyle B\left(\tilde{R}^{\text{eff}}\right)^{-1}B^{t} (232)
=\displaystyle= B(Rd+R¯)1Bt\displaystyle B\left(R_{d}+\underline{R}\right)^{-1}B^{t}
=\displaystyle= BRd1BtBRd1R¯(Rd+R¯)1Bt\displaystyle BR_{d}^{-1}B^{t}-BR_{d}^{-1}\underline{R}\left(R_{d}+\underline{R}\right)^{-1}B^{t}

where RdR_{d} and R¯\underline{R} are the diagonal and off-diagonal elements of R~eff\tilde{R}^{\text{eff}}. We arrive at the last line using the Woodbury matrix identity. We note, in general, the matrix norm of R¯(Rd+R¯)1\underline{R}\left(R_{d}+\underline{R}\right)^{-1} is not restricted to values less than 11, and thus the contribution from the off-diagonal terms can be significant. It remains to be determined under what network conditions the matrix norm is less than 11. The resistance for an edge ea,be_{a,b} can be written

Ra,b\displaystyle R_{a,b} =\displaystyle= (La,beff)1\displaystyle\left(L^{\text{eff}}_{a,b}\right)^{-1} (233)
=\displaystyle= 1(BRd1Bt)a,b(BRd1R¯(Rd+R¯)1Bt)a,b\displaystyle\frac{1}{\left(BR_{d}^{-1}B^{t}\right)_{a,b}-\left(BR_{d}^{-1}\underline{R}\left(R_{d}+\underline{R}\right)^{-1}B^{t}\right)_{a,b}}
=\displaystyle= 1(BRd1Bt)a,b+(BRd1R¯(Rd+R¯)1Bt)a,b(BRd1Bt)a,b((BRd1Bt)a,b(BRd1R¯(Rd+R¯)1Bt)a,b)\displaystyle\frac{1}{\left(BR_{d}^{-1}B^{t}\right)_{a,b}}+\frac{\left(BR_{d}^{-1}\underline{R}\left(R_{d}+\underline{R}\right)^{-1}B^{t}\right)_{a,b}}{\left(BR_{d}^{-1}B^{t}\right)_{a,b}\left(\left(BR_{d}^{-1}B^{t}\right)_{a,b}-\left(BR_{d}^{-1}\underline{R}\left(R_{d}+\underline{R}\right)^{-1}B^{t}\right)_{a,b}\right)}

the first term on the right-hand side is the two-point effective resistance for ea,be_{a,b}; thus, the second term on the right is the error in approximating Ra,bR_{a,b} with the two-point effective resistance. The percent error will be proportional to the inverse of the two-point effective resistance, ((BRd1Bt)a,b)1\left(\left(BR_{d}^{-1}B^{t}\right)_{a,b}\right)^{-1}; thus, we expect this approximation to be valid for networks with high-resistance edges.
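The rearrangement in equation (232) rests on the matrix identity (Rd+R¯)1=Rd1Rd1R¯(Rd+R¯)1(R_{d}+\underline{R})^{-1}=R_{d}^{-1}-R_{d}^{-1}\underline{R}(R_{d}+\underline{R})^{-1}, which can be spot-checked numerically; a sketch assuming numpy, with a random invertible stand-in for R~eff\tilde{R}^{\text{eff}}:

```python
import numpy as np

rng = np.random.default_rng(2)
# Random symmetric stand-in for the effective resistance matrix,
# split as R_eff = R_d + R_bar (diagonal plus off-diagonal parts)
M = rng.uniform(0.5, 1.0, (6, 6))
R_eff = (M + M.T) / 2 + 5 * np.eye(6)    # diagonally dominant, invertible
R_d = np.diag(np.diag(R_eff))
R_bar = R_eff - R_d

lhs = np.linalg.inv(R_eff)
rhs = np.linalg.inv(R_d) - np.linalg.inv(R_d) @ R_bar @ np.linalg.inv(R_eff)
assert np.allclose(lhs, rhs)             # identity behind eq. (232)
```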

We can write the equations of motion of the memory parameter, xx, under bias, and calculate the divergence of the resistance values from an initial homogeneous resistance state given the full effective resistance. In the direct parameterization, we have,

x˙a(t0)x˙b(t0)=Ronβ(detQRbRa1)ib(t0).\displaystyle\dot{x}_{a}(t_{0})-\dot{x}_{b}(t_{0})=\frac{R_{\text{on}}}{\beta}\left(\det Q\frac{R_{b}}{R_{a}}-1\right){i}_{b}(t_{0}). (234)

Similarly we can estimate the trajectory of Δxa,b\Delta x_{a,b} as a function of time:

xa(t)xb(t)=t0tα(xa(s)xb(s))+Ronβ(detQ(s)Rb(s)Ra(s)1)ib(s)ds.\displaystyle x_{a}(t)-x_{b}(t)=\int^{t}_{t_{0}}-\alpha\left(x_{a}(s)-x_{b}(s)\right)+\frac{R_{\text{on}}}{\beta}\left(\det Q(s)\frac{R_{b}(s)}{R_{a}(s)}-1\right){i}_{b}(s)ds. (235)

In general, the appropriate dynamics of RR, QQ, and i\vec{i} will depend on the memory parameters xx, and thus solving for the trajectory Δxa,b\Delta x_{a,b} must be done self-consistently. This approach is well suited for passive resistive or memristive circuits, wherein there are no sources between the edges of interest, as the contributions of voltage or current sources are not accounted for in this current correlation. This is the case within the bulk of most memristive device networks.

VIII Discussion

In conclusion, the work presented here addresses many fundamental questions about the performance of memristive networks. The dynamics of memristor networks are nontrivial, and relating these dynamics to network structure is even more challenging. The dynamics of arbitrary memristor circuits with other two-terminal circuit elements are still more challenging to understand, but are important as they produce even richer dynamics. In general, it is not known whether a network will have stable dynamics, or when different circuits will have similar dynamics. In the work presented above we demonstrate techniques for answering many of these questions for a rich class of memristor networks.

Our research presents a comprehensive framework for analyzing memristor circuits, offering valuable insights into their dynamic behavior. We introduced techniques such as the direct and flipped parameterizations, extending them to encompass resistance variations dependent on polynomial expansions of the internal memory parameters, xx, through Bernstein polynomials. Moreover, we elucidated diverse control schemes to deduce equations of motion, providing a thorough understanding of network dynamics. We generalized the non-orthogonal projection operators to identify the equations of motion of circuits with nonlinear two-terminal circuit elements. From these equations, the eigenvalues give insight into the network properties; in particular, RLC networks display resonator behavior in the underdamped case, and memristors enhance system damping.

From the equations of motion for xx, we derived Lyapunov functions and demonstrated that for memristors linear in xx the dynamics are passive and the equilibrium states are stable. Importantly, this is not guaranteed for memristors nonlinear in xx. In the case where g(X)=tanh(X)g(X)=\tanh(X), e.g., a suitable activation function in neural networks, we could guarantee that the memristive device network has passive dynamics. The Lyapunov functions provide a powerful way to study the complicated and often chaotic dynamics of memristive device networks.

Additionally, our study delved into the role of invariances, shedding light on the extensive gauge freedom present in circuit components orthogonal to the projector matrices. We found that gauge transformations do not alter the short-time dynamics or the Lyapunov-function stability for linear memristors. Spectral invariances implemented through an orthogonal transformation had similarly passive Lyapunov functions; however, they significantly impact the time derivative of these functions, highlighting the complex interplay between network properties and orthogonal components.

Moreover, we showed it is possible to devise an effective circuit to study the correlations across a network. We established a direct correspondence between the graph Laplacian and the non-orthogonal node projection operators, facilitating a deeper understanding of network properties. Leveraging these techniques, we constructed effective circuits between distant edges, from which it is possible to estimate the divergence of the internal memory parameter in distant edges. This provides a method to understand how the resistance evolves across a network.

These techniques have direct application in experiments in which the full network conductivity cannot be accessed. Experiments are often restricted to four-point-probe measurements that characterize the effective resistance between contacts. Under these conditions, effective circuits can serve as a powerful tool to relate the electrical properties of distant edges. The effective circuits we presented can be generalized to study the relation between input and output edges in a neuromorphic network.

These results demonstrate the rich dynamics within memristive device networks. The strength of network analysis is that it yields many useful governing equations and relations for a complicated circuit. Further work is needed to generalize the Lyapunov functions and correlation analysis to networks with capacitors, inductors, sources, and sinks. Such a generalization would provide a set of powerful techniques to describe memristive device networks across different complex network structures. This approach offers valuable insights into resistance evolution within networks, paving the way for advancements in circuit analysis and design.

IX Acknowledgements

Ongoing work by Samip Karki identified the Schottky barrier parameterization. The work of F. Caravelli and F. Barrows was carried out under the auspices of the NNSA of the U.S. DoE at LANL under Contract No. DE-AC52-06NA25396. F. Caravelli was financed via DOE LDRD grant 20240245ER, while F. Barrows was supported by a Director's Fellowship. F. Barrows gratefully acknowledges support from the Center for Nonlinear Studies at LANL.


References

  • Pershin and Ventra (2011) Y. V. Pershin and M. D. Ventra, “Memory effects in complex materials and nanoscale systems,” Advances in Physics 60, 145–227 (2011).
  • Traversa and et. al. (2014) F. L. Traversa and et. al., “Dynamic computing random access memory,” Nanotechnology 25, 285201 (2014).
  • Carbajal et al. (2015) J. P. Carbajal, J. Dambre, M. Hermans, and B. Schrauwen, “Memristor models for machine learning,” Neural Computation 27 (2015), 10.1162/NECO_a_00694, arXiv:1406.2210.
  • Du and et. al. (2017) C. Du and et. al., “Reservoir computing using dynamic memristors for temporal information processing,” Nature Comm. 8 (2017), 10.1038/s41467-017-02337-y.
  • Milano et al. (2022) G. Milano, G. Pedretti, K. Montano, et al., “In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire networks,” Nat. Mater., 195–202 (2022).
  • Fu et al. (2020) K. Fu, R. Zhu, A. Loeffler, et al., “Reservoir computing with neuromemristive nanowire networks,” Proceedings of the International Joint Conference on Neural Networks (IJCNN) 20006228 (2020).
  • Xu, Wang, and Yan (2021) W. Xu, J. Wang,  and X. Yan, “Advances in memristor-based neural networks,” Frontiers in Nanotechnology 3 (2021), 10.3389/fnano.2021.645995.
  • Pershin, Slipko, and Di Ventra (2013) Y. V. Pershin, V. A. Slipko,  and M. Di Ventra, “Complex dynamics and scale invariance of one-dimensional memristive networks,” Physical Review E 87, 022116 (2013).
  • Diaz-Alvarez et al. (2019) A. Diaz-Alvarez, R. Higuchi, P. Sanz-Leon, I. Marcus, Y. Shingaya, A. Z. Stieg, J. K. Gimzewski, Z. Kuncic,  and T. Nakayama, “Emergent dynamics of neuromorphic nanowire networks,” Scientific Reports 9, 14920 (2019).
  • Zhu et al. (2021) R. Zhu, J. Hochstetter, A. Loeffler, A. Diaz-Alvarez, T. Nakayama, J. T. Lizier,  and Z. Kuncic, “Information dynamics in neuromorphic nanowire networks,” Scientific Reports 11, 13047 (2021).
  • Caravelli, Traversa, and Di Ventra (2017) F. Caravelli, F. L. Traversa,  and M. Di Ventra, “Complex dynamics of memristive circuits: Analytical results and universal slow relaxation,” Physical Review E 95, 022140 (2017).
  • Riaza (2011) R. Riaza, “Dynamical properties of electrical circuits with fully nonlinear memristors,” Nonlinear Analysis: Real World Applications 12, 3674–3686 (2011).
  • Matsumoto (1984) T. Matsumoto, “A chaotic attractor from chua’s circuit,” IEEE Transactions on Circuits and Systems 31, 1055–1058 (1984).
  • Madan (1993) R. N. Madan, Chua’s Circuit: A Paradigm for Chaos (WORLD SCIENTIFIC, 1993) https://www.worldscientific.com/doi/pdf/10.1142/1997 .
  • Zegarac and Caravelli (2019) A. Zegarac and F. Caravelli, “Memristive networks: From graph theory to statistical physics,” EPL (Europhysics Letters) 125, 10001 (2019).
  • Tellini et al. (2021) B. Tellini, M. Bologna, K. J. Chandía, and M. Macucci, “Revisiting the memristor concept within basic circuit theory,” International Journal of Circuit Theory and Applications 49, 3488–3506 (2021), https://onlinelibrary.wiley.com/doi/pdf/10.1002/cta.3111.
  • Caravelli and Sheldon (2020) F. Caravelli and F. C. Sheldon, “Phases of memristive circuits via an interacting disorder approach,” https://arxiv.org/abs/2009.00114  (2020).
  • Caravelli and Barucca (2018) F. Caravelli and P. Barucca, “A mean-field model of memristive circuit interaction,” EPL (Europhysics Letters) 122, 40008 (2018).
  • Stern, Liu, and Balasubramanian (2024) M. Stern, A. J. Liu,  and V. Balasubramanian, “Physical effects of learning,” Physical Review E 109 (2024), 10.1103/physreve.109.024311.
  • Xiang, Ren, and Tan (2022) J. Xiang, J. Ren,  and M. Tan, “Stability analysis for memristor-based stochastic multi-layer neural networks with coupling disturbance,” Chaos, Solitons & Fractals 165, 112771 (2022).
  • Dhivakaran, Gowrisankar, and Vinodkumar (2024) P. B. Dhivakaran, M. Gowrisankar,  and A. Vinodkumar, “Bipartite synchronization of fractional order multiple memristor coupled delayed neural networks with event triggered pinning control,” Neural Processing Letters 56, 50 (2024).
  • Caravelli (2019) F. Caravelli, “Asymptotic behavior of memristive circuits,” Entropy 21, 789 (2019).
  • Sheldon, Kolchinsky, and Caravelli (2022) F. C. Sheldon, A. Kolchinsky,  and F. Caravelli, “The computational capacity of lrc, memristive and hybrid reservoirs,” Phys. Rev. E 106 (2022).
  • Baccetti et al. (2023) V. Baccetti, R. Zhu, Z. Kuncic,  and F. Caravelli, “Ergodicity, lack thereof, and the performance of reservoir computing with memristive networks,”  (2023), arXiv:2310.09530 [cond-mat.dis-nn] .
  • White, Lee, and Sompolinsky (2004) O. L. White, D. D. Lee,  and H. Sompolinsky, “Short-term memory in orthogonal neural networks,” Physical review letters 92, 148102 (2004).
  • Caravelli, Sheldon, and Traversa (2021) F. Caravelli, F. Sheldon,  and F. L. Traversa, “Global minimization via classical tunneling assisted by collective force field formation,” Science Advances 7, 1542 (2021)https://www.science.org/doi/pdf/10.1126/sciadv.abh1542 .
  • Joglekar and Wolf (2009) Y. N. Joglekar and S. J. Wolf, “The elusive memristor: properties of basic electrical circuits,” Eur. J. of Phys. 30, 661–675 (2009).
  • Ascoli et al. (2016) A. Ascoli, R. Tetzlaff, L. O. Chua, J. P. Strachan,  and R. S. Williams, “History erase effect in a non-volatile memristor,” IEEE Transactions on Circuits and Systems I: Regular Papers 63, 389–400 (2016).
  • Caravelli (2023) F. Caravelli, “Cycle equivalence classes, orthogonal weingarten calculus, and the mean field theory of memristive systems,” arXiv:2304.14890  (2023).
  • Note (1) Note that this result is the same as the bottom corner of a block-wise matrix inversion (Schur complement),
    \displaystyle\begin{pmatrix}A&B\\ C&D\end{pmatrix}^{-1}=\begin{pmatrix}A^{-1}+A^{-1}B\left(D-CA^{-1}B\right)^{-1}CA^{-1}&-A^{-1}B\left(D-CA^{-1}B\right)^{-1}\\ -\left(D-CA^{-1}B\right)^{-1}CA^{-1}&\left(D-CA^{-1}B\right)^{-1}\end{pmatrix} (236)
  • Biggs (1974) N. Biggs, Algebraic Graph Theory, 2nd ed., Cambridge Mathematical Library (Cambridge University Press, 1974).
  • Doyle and Snell (2000) P. G. Doyle and J. L. Snell, “Random walks and electric networks,”  (2000), arXiv:math/0001057 [math.PR] .

XI Supplementary

XI.1 Space of Circuit Variables

Here, we prove that current and voltage configurations are dual and that there is a correspondence between the two representations of a circuit: the cycle space, corresponding to the loops of the cycle matrix A, and the vertex space, corresponding to the vertices of the incidence matrix B. We define four spaces:

  • CURR-SP (current space): The set of all current configurations (i.e., {\cal N}(B), the null space of B).

  • VOLT-SP (voltage space): The set of all voltage configurations (i.e., {\cal N}(A), the null space of A).

  • CYCLE-SP (cycle space): The set of all linear combinations of cycles (i.e., {\cal R}(A), the row space of A).

  • VERT-SP (vertex space): The set of all linear combinations of rows of B (i.e., {\cal R}(B), the row space of B).

We note that every row of A is a valid current configuration: for every vertex k (row of B) and cycle l (row of A), if k\notin l then it does not contribute, and otherwise, one edge of the cycle is directed towards k while another is directed away, canceling. Thus BA^{t}=0 and, as a consequence, {\cal R}(A)\subset{\cal N}(B), thus:

\text{{\bf CYCLE-SP}}\subset\text{{\bf CURR-SP}}. (237)

Taking the transpose of the above matrix equation, AB^{t}=0 and, as a consequence, {\cal R}(B)\subset{\cal N}(A), thus:

\text{{\bf VERT-SP}}\subset\text{{\bf VOLT-SP}}. (238)

The next step will be to show that the opposite inclusions hold, and that any current (voltage) configuration may be written as A^{t}y (B^{t}y).

XI.1.1 Spanning Trees: \text{{\bf VOLT-SP}}=\text{{\bf VERT-SP}}

Assume the graph is connected with n vertices and m edges; as such, m\geq n-1. A tree is a connected graph with no cycles. A spanning tree of \mathcal{G} contains all vertices, T=(V,E_{T}), where E_{T} are the tree branches. The remaining edges are chords. Every spanning tree has n-1 branches.

Consider a spanning tree of a graph \mathcal{G} and choose one node of this tree as the root or ground node. There is a unique path from the root to any node k. Define the orientation along this path from the root to k. Now, for a given voltage configuration, define p_{k} as the sum of voltage elements along the path with the appropriate orientation (added if the orientation of the tree and \mathcal{G} agree and subtracted if not). With this construction, for any edge (k,l) the voltage is v_{e}=p_{k}-p_{l} and we can write,

v=B^{t}p,\quad\text{{\bf VOLT-SP}}\subset\text{{\bf VERT-SP}} (239)

and we have thus shown \text{{\bf VOLT-SP}}=\text{{\bf VERT-SP}}. As every voltage can be defined by the n-1 numbers in the potential, we have

\text{dim}\;\text{{\bf VOLT-SP}}=n-1 (240)

The relationship v=B^{t}p is also analogous to writing a voltage configuration as a sum of Green's functions given by the rows of B, as each row corresponds to the potential configuration on the nodes p_{k}=\delta_{ki}, such that the potential is 1 for node i and zero elsewhere.
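The orthogonality BA^{t}=0 and the construction v=B^{t}p are easy to verify numerically; the sketch below uses an illustrative triangle graph of our own choosing, not an example from the text.

```python
import numpy as np

# Illustrative triangle graph (an assumption for this check): nodes {0,1,2},
# directed edges e0: 0->1, e1: 1->2, e2: 0->2.
B = np.array([[ 1,  0,  1],
              [-1,  1,  0],
              [ 0, -1, -1]])        # node-edge incidence matrix
A = np.array([[1, 1, -1]])          # single independent cycle 0->1->2->0

assert np.all(B @ A.T == 0)         # rows of A and rows of B are orthogonal

# Any node potential p yields a valid voltage configuration v = B^t p,
# i.e. v lies in the null space of A (the loop voltages vanish).
rng = np.random.default_rng(1)
p = rng.normal(size=3)
v = B.T @ p
assert np.allclose(A @ v, 0)
```

The same check works for any connected graph once B and a cycle basis A are written down.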

XI.1.2 Algebraic Methods: \text{{\bf CURR-SP}}=\text{{\bf CYCLE-SP}}

As a consequence of the above argument,

i^{t}v=i^{t}B^{t}p=(Bi)^{t}p=0 (241)

which is known as Tellegen's theorem. So, the current configurations live in a space orthogonal to the voltage configurations. We need to do a bit of work to make use of this.

First, pick a maximal set of linearly independent voltage configurations v_{1}\dots v_{n-1} by using, for example, n-1 rows of B. Now pick a maximal set of linearly independent columns of A, a_{1}\dots a_{r}, such that a_{i}=Au_{i} where u_{i} is a unit vector. We claim that u_{1}\dots u_{r},v_{1},\dots v_{n-1} are linearly independent. Suppose there are \lambda_{i},\mu_{j} such that,

\sum_{i}\lambda_{i}u_{i}+\sum_{j}\mu_{j}v_{j}=0. (242)

Acting with A, as Av_{j}=0, we have \lambda_{i}=0 by the linear independence of the a_{i}=Au_{i}, and then \mu_{j}=0 by the linear independence of the v_{j}. For an arbitrary m-vector z, we can write

Az=\sum_{i}\lambda_{i}a_{i}=Ay,\quad y=\sum_{i}\lambda_{i}u_{i} (243)

where we have used the linearly independent set of columns and unit vectors from above. Thus, writing z=(z-y)+y, we decompose the vector into a piece z-y\in\text{{\bf VOLT-SP}} and another which can be written in terms of the r unit vectors, proving,

m=n-1+r. (244)

We can thus form a linearly independent set of the n-1 voltage vectors and r linearly independent rows of A, v_{1}\dots v_{n-1},c_{1}\dots c_{r}, with which we can decompose an arbitrary m-vector.

So, writing i=c+v as a shorthand for the decomposition, we must have i^{t}v=0=c^{t}v+v^{t}v. The first term must vanish as c^{t}v=c^{t}B^{t}p=(Bc)^{t}p=0, and thus v=0. So, i may be written as a linear combination of the rows of A, and we have shown the opposite inclusion, giving

\text{{\bf CURR-SP}}=\text{{\bf CYCLE-SP}}. (245)

Additionally, we have

\text{dim}\;\text{{\bf CURR-SP}}=m-n+1 (246)

XI.2 Differential equation for the resistance

We note that it is possible to write a differential equation for the physically accessible resistive state without referring to the internal parameters x. In the case in which R(x) is linear, this is rather straightforward and we have

ddtR(t)=R~(αRRonR~1β(I+ξΩRRonR~)1ΩS)\frac{d}{dt}\vec{R}(t)=\tilde{R}\left(\alpha\frac{\vec{R}-R_{\text{on}}}{\tilde{R}}-\frac{1}{\beta}\left(I+\xi\Omega\frac{R-R_{\text{on}}}{\tilde{R}}\right)^{-1}\Omega\vec{S}\right) (247)

where \tilde{R}=R_{\text{off}}-R_{\text{on}}.
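For intuition, equation (247) can be integrated directly with a forward-Euler step. The sketch below is illustrative only: the projector \Omega, the constants \alpha, \beta, \xi, and the source vector S are hypothetical stand-ins rather than values from the text, and a hard clipping window keeps R within [R_{\text{on}}, R_{\text{off}}].

```python
import numpy as np

# Forward-Euler sketch of eq. (247); network, constants and sources are
# hypothetical stand-ins, not values from the paper.
rng = np.random.default_rng(0)
n = 6                                       # number of memristive edges
M = rng.normal(size=(3, n))                 # stand-in for a cycle matrix
Omega = M.T @ np.linalg.solve(M @ M.T, M)   # orthogonal projector

R_on, R_off = 1.0, 10.0
R_tilde = R_off - R_on
alpha, beta = 0.1, 1.0
xi = (R_off - R_on) / R_on                  # one common choice of xi
S = rng.normal(size=n)                      # applied sources

R = np.full(n, 0.5 * (R_on + R_off))        # start mid-range
dt = 1e-3
for _ in range(2000):
    x = (R - R_on) / R_tilde                # internal memory parameter
    lhs = np.eye(n) + xi * Omega @ np.diag(x)
    dR = R_tilde * (alpha * x - np.linalg.solve(lhs, Omega @ S) / beta)
    R = np.clip(R + dt * dR, R_on, R_off)   # hard window keeps R physical
print(R)
```

A stiff solver or an adaptive step would be preferable near the window boundaries; the explicit step is shown only to make the structure of the equation concrete.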

On the other hand, if R(x) is not linear the equation is slightly more involved. We use

ddtR(x(t))=xR(x)ddtx(t)\frac{d}{dt}R\left(x(t)\right)=\partial_{x}R(x)\frac{d}{dt}x(t) (248)

and thus

|xR(x)|1ddtR(x(t))=ddtx(t)|\partial_{x}R(x)|^{-1}\frac{d}{dt}R\left(x(t)\right)=\frac{d}{dt}x(t) (249)

To write the equation as a function of R(t), we must invoke the invertibility of R(x) in x. Let us define R=f(x) and x=f^{-1}(R). If f^{-1} exists, e.g., if f is an invertible function, then we can write the differential equation in terms of R only. The function f is invertible if, for instance, R(x) is a monotonic function R:[0,1]\rightarrow[R_{\text{on}},R_{\text{off}}], as we expect in passive memristive devices. We now use the inverse function theorem. Since we assume that f(x) is monotonic in x, we have

1xR(x)=Rf1(R).\frac{1}{\partial_{x}R(x)}=\partial_{R}f^{-1}\left(R\right). (250)

We can thus write, also in the nonlinear case, for a certain function g(R),

ddtR(t)=g(R)(αG(R)1β(I+ξΩG(R))1ΩS)\frac{d}{dt}\vec{R}(t)=g(R)\left(\alpha\vec{G}(R)-\frac{1}{\beta}(I+\xi\Omega G(R))^{-1}\Omega\vec{S}\right) (251)

where G, g, and f are generic functions, with the conditions that g=\partial_{R}G and G:[R_{\text{on}},R_{\text{off}}]\rightarrow[0,1].

XI.3 Differential equations for the nonlinear Lyapunov functions

The differential equations given by the quadratic and external field terms, equations (112) and (113), are first-order linear nonhomogeneous differential equations of the type

y^{\prime}+p(z)y=f(z) (252)

with f(z)=\frac{G_{n}(z)}{z} in both cases, while p(z)=\frac{2}{z} for the quadratic term and p(z)=\frac{1}{z} for the external field term. In this case, by writing y=uv we solve separately the two equations

v^{\prime}(z)+p(z)v(z)=0 (253)
u^{\prime}(z)=\frac{f(z)}{v(z)} (254)

The first equation is solved via

\frac{d}{dz}\log v(z)=-p(z)\rightarrow v(z)=v(1)e^{-\int^{z}p(q)dq} (255)

and since p(q)=\frac{k}{q}, with k=1,2, we have v(z)=\frac{v(1)}{z^{2}} for the function F and v(z)=\frac{v(1)}{z} for the external field. Thus we have

u(z)=zkv(1)f(z),u^{\prime}(z)=\frac{z^{k}}{v(1)}f(z), (256)

whose solution is

u(z)=u0+1v(1)zqkf(q)𝑑q.u(z)=u_{0}+\frac{1}{v(1)}\int^{z}q^{k}f(q)dq. (257)

These are used to arrive at the forms of F(z) and Q(z) used in the main text.
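The y=uv construction can be sanity-checked numerically for a concrete choice of f and k; the values below are arbitrary toy data, not those of equations (112) and (113).

```python
import numpy as np

# Toy check of the y = u*v construction: solve y' + (k/z) y = f(z) with
# k = 2 and f(z) = 1/z (i.e. G_n(z) = 1); these choices are arbitrary.
# Then v(z) = v1/z**2 and u(z) = u0 + z**2/(2*v1), so y = u0*v1/z**2 + 1/2.
u0, v1 = 0.7, 1.3
z = np.linspace(1.0, 3.0, 2001)
y = u0 * v1 / z**2 + 0.5

dy = np.gradient(y, z)                   # numerical derivative
residual = dy + 2.0 * y / z - 1.0 / z    # should vanish up to O(dz^2)
print(np.max(np.abs(residual[1:-1])))
```

The residual is limited only by the finite-difference error of `np.gradient`, confirming that the integrating-factor solution satisfies the ODE.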

XI.4 Explicit expressions for non-orthogonal projectors

We would like to derive the properties of the non-orthogonal projector \Omega_{B/R^{-1}}=B^{t}(BR^{-1}B^{t})^{-1}BR^{-1} in terms of the orthogonal projector \Omega_{B}=B^{t}(BB^{t})^{-1}B. Let us write

\Omega_{B/R^{-1}}R=B^{t}(BR^{-1}B^{t})^{-1}B=R_{\text{on}}B^{t}\left(B\left(\frac{R}{R_{\text{on}}}\right)^{-1}B^{t}\right)^{-1}B, (258)

and let us write \tilde{R}=\frac{R}{R_{\text{on}}}=I+\xi G(\vec{X}). We have

\Omega_{B/R^{-1}}R=R_{\text{on}}B^{t}\left(B(I+\xi G(X))^{-1}B^{t}\right)^{-1}B=R_{\text{on}}B^{t}\left(\sum_{k=0}^{\infty}(-1)^{k}\xi^{k}BG^{k}B^{t}\right)^{-1}B=R_{\text{on}}B^{t}\left(BB^{t}+BZB^{t}\right)^{-1}B (259)

where Z=\sum_{k=1}^{\infty}(-1)^{k}\xi^{k}G^{k}, as \xi is a constant. We now use the formula (P+Q)^{-1}=\sum_{k=0}^{\infty}(-1)^{k}(P^{-1}Q)^{k}P^{-1}. We obtain, for P=BB^{t} and Q=BZB^{t},

\Omega_{B/R^{-1}}R=R_{\text{on}}\sum_{k=0}^{\infty}(-1)^{k}B^{t}\left((BB^{t})^{-1}BZB^{t}\right)^{k}(BB^{t})^{-1}B=R_{\text{on}}\sum_{k=0}^{\infty}(-1)^{k}\left(B^{t}(BB^{t})^{-1}BZ\right)^{k}B^{t}(BB^{t})^{-1}B=R_{\text{on}}\sum_{k=0}^{\infty}(-1)^{k}(\Omega_{B}Z)^{k}\Omega_{B}=R_{\text{on}}(I+\Omega_{B}Z)^{-1}\Omega_{B} (260)

and thus

\Omega_{B/R^{-1}}=(I+\Omega_{B}(\tilde{R}^{-1}-I))^{-1}\Omega_{B}\tilde{R}^{-1}=(\Omega_{A}+\Omega_{B}\tilde{R}^{-1})^{-1}\Omega_{B}\tilde{R}^{-1} (261)

Here \Omega_{A} is the orthogonal complement of \Omega_{B}, such that \Omega_{B}+\Omega_{A}=I. Let us prove that this equation defines a projector operator. Let us define for simplicity q=\tilde{R}^{-1}-I. In order to prove that the equation above is an identity, it is sufficient to observe that

[ΩB(q+I)][(I+ΩBq)1]=ΩB.[\Omega_{B}(q+I)][(I+\Omega_{B}q)^{-1}]=\Omega_{B}. (262)

In fact, since \Omega_{B}^{2}=\Omega_{B}, we have

\Omega_{B}(q+I)=\Omega_{B}+\Omega_{B}^{2}q=\Omega_{B}+\Omega_{B}q=\Omega_{B}(I+\Omega_{B}q), (263)

which proves the equality. Then, we have

\Omega_{B/R^{-1}}^{2}=(I+\Omega_{B}q)^{-1}\Omega_{B}(q+I)(I+\Omega_{B}q)^{-1}\Omega_{B}(q+I)=(I+\Omega_{B}q)^{-1}\Omega_{B}\Omega_{B}(q+I)=(I+\Omega_{B}q)^{-1}\Omega_{B}(q+I)=\Omega_{B/R^{-1}}, (264)

which shows that the formula we derived defines a projector operator. Note that if R=I, the formula reduces to an orthogonal projector. Let us now construct the complementary projector, which is defined as

ΩB/R1+Ω=I.\Omega_{B/R^{-1}}+\Omega^{\prime}=I. (265)

We see that

\Omega^{\prime}=I-\Omega_{B/R^{-1}}=(I+\Omega_{B}(\tilde{R}^{-1}-I))^{-1}(I+\Omega_{B}(\tilde{R}^{-1}-I)-\Omega_{B}\tilde{R}^{-1})=(I+\Omega_{B}(\tilde{R}^{-1}-I))^{-1}(I-\Omega_{B})=(\Omega_{A}+\Omega_{B}\tilde{R}^{-1})^{-1}\Omega_{A}. (266)

We also see that for \tilde{R}=I, \Omega^{\prime}=\Omega_{A}, which is the right complementary projector to \Omega_{B}. In the memristive RLC dynamics, we will need explicit expressions both for \Omega_{B/R^{-1}} and I-\Omega_{B/R^{-1}}.
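The idempotency of \Omega_{B/R^{-1}} and the closed forms (261) and (266) can be checked numerically on a small example; the triangle graph and the resistance values below are illustrative assumptions.

```python
import numpy as np

# Numerical check of the projector identities (261) and (266) on a
# triangle graph with assumed resistance values (illustrative only).
B_full = np.array([[ 1,  0,  1],
                   [-1,  1,  0],
                   [ 0, -1, -1]])
B = B_full[:2]                      # reduced incidence matrix, full row rank

R_on = 1.0
R = np.diag([1.5, 2.0, 3.0])        # memristor resistances (arbitrary)
Rt_inv = np.linalg.inv(R / R_on)    # \tilde{R}^{-1}
R_inv = np.linalg.inv(R)

# Direct definition: Omega_{B/R^{-1}} = B^t (B R^{-1} B^t)^{-1} B R^{-1}.
Omega_BR = B.T @ np.linalg.solve(B @ R_inv @ B.T, B @ R_inv)
assert np.allclose(Omega_BR @ Omega_BR, Omega_BR)       # idempotent

# Closed form (261): (Omega_A + Omega_B Rt^{-1})^{-1} Omega_B Rt^{-1}.
Omega_B = B.T @ np.linalg.solve(B @ B.T, B)
Omega_A = np.eye(3) - Omega_B
closed = np.linalg.solve(Omega_A + Omega_B @ Rt_inv, Omega_B @ Rt_inv)
assert np.allclose(Omega_BR, closed)

# Complement (266): Omega' = (Omega_A + Omega_B Rt^{-1})^{-1} Omega_A.
Omega_p = np.linalg.solve(Omega_A + Omega_B @ Rt_inv, Omega_A)
assert np.allclose(Omega_p, np.eye(3) - Omega_BR)
```

The same assertions hold for any full-row-rank B and positive diagonal R, since \Omega_{B/R^{-1}} is invariant under a global rescaling of R.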

XI.5 Power in circuit cycles

The matrix K=ARA^{t} can be used to calculate the power within the cycles of a memristive circuit via a mesh analysis. In the mesh analysis, the memristive cycles form a planar sub-circuit that is partitioned into fundamental cycles, all with the same orientation. Cycles overlap in no more than one edge, and each of these fundamental cycles is assigned a unique current. In this case, P=\vec{i}^{t}_{m}K\vec{i}_{m}, where \vec{i}_{m} is the vector of mesh currents. The off-diagonal elements of K will be negative, and we can write the power in the mesh analysis as,

P=\vec{i}^{t}_{m}K\vec{i}_{m}=\sum_{c_{i}}\sum_{k}R^{c_{i}}_{k}i_{c_{i}}^{2}-2\sum_{c_{i}<c_{j}}R^{c_{i},c_{j}}i_{c_{i}}i_{c_{j}}=\sum_{c_{i}}R^{c_{i},c_{i}}i_{c_{i}}^{2}+\sum_{c_{i}<c_{j}}R^{c_{i},c_{j}}(i_{c_{i}}-i_{c_{j}})^{2}. (267)

Here, i_{c_{i}} is the current in a cycle c_{i} in the mesh analysis, \sum_{k}R^{c_{i}}_{k} is the sum of all k resistive elements in a cycle c_{i}, and R^{c_{i},c_{i}} and R^{c_{i},c_{j}} are resistive elements that exist exclusively in c_{i} or are shared between c_{i} and c_{j}, respectively. The off-diagonal elements of K are the R^{c_{i},c_{j}} resistors.

For example, for the simple mesh circuit consisting of two cycles and three resistors shown in Figure 5, we have

\vec{i}_{m}=\begin{pmatrix}i_{1}\\ i_{2}\end{pmatrix} (268)
K=\begin{pmatrix}R_{0}+R_{1}&-R_{1}\\ -R_{1}&R_{1}+R_{2}\end{pmatrix}, (269)

and the power in the cycles is

P=\vec{i}^{t}_{m}K\vec{i}_{m}=R_{0}i_{1}^{2}+R_{2}i_{2}^{2}+R_{1}(i_{1}-i_{2})^{2}, (270)

which is the power we would expect from mesh currents.

Figure 5: A small circuit consisting of two cycles with currents i_{1} and i_{2}, and three resistors. The mesh analysis cycles are shown with red arrows.
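The identity (270) is easy to confirm numerically; the resistor and current values below are arbitrary.

```python
import numpy as np

# Verify eq. (270) on the two-cycle circuit of Figure 5 (assumed values).
R0, R1, R2 = 1.0, 2.0, 3.0
K = np.array([[R0 + R1, -R1],
              [-R1, R1 + R2]])
i1, i2 = 0.4, -0.9
im = np.array([i1, i2])

P_quadratic = im @ K @ im                                 # i_m^t K i_m
P_by_edges = R0 * i1**2 + R2 * i2**2 + R1 * (i1 - i2)**2  # per-resistor sum
assert np.isclose(P_quadratic, P_by_edges)
```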

XI.6 Effective power loss

We can rewrite the power dissipation due to Joule heating in our effective circuit using \tilde{R}^{\text{eff}} from the main text,

P=\vec{i}^{t}\tilde{R}^{\text{eff}}\vec{i}=\sum_{i}\vec{i}_{i}^{2}\tilde{R}^{\text{eff}}_{i,i}+\sum_{i\neq j}\vec{i}_{i}\vec{i}_{j}\tilde{R}^{\text{eff}}_{i,j}. (271)

As \tilde{R}^{\text{eff}} has off-diagonal terms, here we present a mean-field treatment to calculate the power dissipation when the current in each edge can be treated as nearly uniform with a local fluctuation, i.e., \vec{i}_{i}=j_{0}+\delta\vec{i}_{i}, where j_{0} is the mean current and \delta\vec{i}_{i} is the small local fluctuation in current. We rewrite the power dissipation,

P=\sum_{i}j_{0}^{2}\left(\tilde{R}^{\text{eff}}_{ii}+\sum_{j}\tilde{R}^{\text{eff}}_{ij}\right)+2j_{0}\delta\vec{i}_{i}\left(\tilde{R}^{\text{eff}}_{ii}+\sum_{j}\tilde{R}^{\text{eff}}_{ij}\right)
=-\sum_{i}j_{0}^{2}\left(\tilde{R}^{\text{eff}}_{ii}+\sum_{j}\tilde{R}^{\text{eff}}_{ij}\right)+2j_{0}\vec{i}_{i}\left(\tilde{R}^{\text{eff}}_{ii}+\sum_{j}\tilde{R}^{\text{eff}}_{ij}\right)
=-\sum_{i}(j_{0}-\vec{i}_{i})^{2}\left(\tilde{R}^{\text{eff}}_{ii}+\sum_{j}\tilde{R}^{\text{eff}}_{ij}\right)+\vec{i}_{i}^{2}\left(\tilde{R}^{\text{eff}}_{ii}+\sum_{j}\tilde{R}^{\text{eff}}_{ij}\right)
=\sum_{i}\vec{i}_{i}^{2}\left(\tilde{R}^{\text{eff}}_{ii}+\sum_{j}\tilde{R}^{\text{eff}}_{ij}\right). (272)

In the first line we have neglected second-order fluctuation terms, and in the second line we have used \delta\vec{i}_{i}=\vec{i}_{i}-j_{0}. In the third line we have completed the square, and in the fourth line we have again neglected second-order fluctuation terms. Thus the effective resistance of edge i is R_{i}=\tilde{R}^{\text{eff}}_{ii}+\sum_{j}\tilde{R}^{\text{eff}}_{ij}, the sum of a row of \tilde{R}^{\text{eff}}. This approximation is valid only when the current fluctuations are small.
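A quick numerical check of this mean-field estimate, with a randomly generated symmetric \tilde{R}^{\text{eff}} and nearly uniform currents (both hypothetical toy data):

```python
import numpy as np

# Mean-field power estimate: all matrices and currents below are
# randomly generated toy data, not from the paper.
rng = np.random.default_rng(2)
n = 8
M = rng.uniform(0.1, 1.0, size=(n, n))
Reff = (M + M.T) / 2                   # symmetric effective-resistance matrix

j0, eps = 1.0, 1e-3
i_vec = j0 + eps * rng.normal(size=n)  # nearly uniform currents

P_exact = i_vec @ Reff @ i_vec
R_row = Reff.sum(axis=1)               # R_i = Reff_ii + sum over off-diagonal
P_mf = np.sum(i_vec**2 * R_row)        # mean-field estimate

rel_err = abs(P_exact - P_mf) / abs(P_exact)
print(rel_err)                         # second order in the fluctuations
```

As expected from the derivation, the discrepancy scales as the square of the fluctuation amplitude eps.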

XI.7 Correlation Analysis

XI.7.1 Two-point effective resistance by a Markov process

Before we find the effective resistance of a circuit, it is helpful to work with the nodes of the circuit. The movement of charge in a network can be studied via a Markov process, wherein charge jumps between nodes with a probability depending on the conductivity of the edges. We build a Markov matrix P whose elements encode the normalized probabilities that a charge moves between adjacent nodes; P(a,b) is the probability of moving from node n_{a} to node n_{b},

P(a,b)=\frac{G_{ab}}{\sum_{c}G_{ac}}=\bar{G}_{a,b} (273)

Here G_{ab} is the conductivity of the edge e_{ab}, and the sum over c runs over all edges incident on n_{a}. \bar{G}_{a,b} is the conductivity of e_{ab} normalized by the conductivity of all edges incident on n_{a}. The matrix P has a zero diagonal.

We gain insight into the current in a memristive device network by studying the paths charged particles can take between nodes. The probability of walks in this network (paths that the current could take) can be calculated from P. The total probability p_{w} of all k-step walks that start at n_{a} and reach a distant node n_{b} is

p_{w}=P^{k}_{a,b}. (274)

As the charge moves through the network on a fast time scale compared to the change in the memristive devices, we need to examine long walks. Given that the values of P(a,b) are small, we can write the sum of all the possible walks in the network, \sum_{k=0}^{\infty}P^{k}, as

\left(I-P\right)^{-1}=\bar{L}^{+} (275)

Here, \bar{L}=I-P is the normalized Laplacian matrix and \bar{L}^{+} its pseudoinverse. In the case of homogeneous resistors, the weights are equal to the inverse degree of each node, and \bar{L} reduces to the standard normalized Laplacian matrix. In the more general case, the probability is the normalized conductivity of all edges connected to a given node, and the diagonal elements are 1, as given in

\bar{L}=\text{diag}(BGB^{t})^{-1}\cdot(BGB^{t}), (276)

where \text{diag}(BGB^{t}) is the diagonal part of BGB^{t}, which normalizes the conductivity. We arrive at equation (276) using Theorem 1 and dividing each row by the corresponding diagonal element of the matrix, such that the off-diagonal elements of each row sum to -1 and each row sums to 0.

If we define \bar{L}^{+} to be the pseudoinverse of \bar{L}, then we have

\bar{L}^{+}=(BGB^{t})^{+}\cdot\text{diag}(BGB^{t}). (277)

The effective resistance is a resistance such that, if the entire network were removed, leaving only nodes n_{a} and n_{b} connected by an edge (a,b) with resistance R^{\text{eff}}, the potential difference and the corresponding electrical flows between these nodes would be invariant. As such, we can find an effective circuit in order to calculate the correlation between electrical currents in non-adjacent edges. The effective resistance R^{\text{eff}} between n_{a} and n_{b} can be found by examining the (a,b) walks, removing the closed walks which return to n_{a}, i.e., (a,a), and similarly examining the reverse walk (b,a):

p_{a,b}=-\bar{L}^{+}(a,b)-\bar{L}^{+}(b,a)+\bar{L}^{+}(a,a)+\bar{L}^{+}(b,b) (278)

R^{\text{eff}}(a,b)=-\frac{\bar{L}^{+}(a,b)}{(BGB^{t})(b,b)}-\frac{\bar{L}^{+}(b,a)}{(BGB^{t})(a,a)}+\frac{\bar{L}^{+}(a,a)}{(BGB^{t})(a,a)}+\frac{\bar{L}^{+}(b,b)}{(BGB^{t})(b,b)}=-L^{+}(a,b)-L^{+}(b,a)+L^{+}(a,a)+L^{+}(b,b) (279)

Here, L^{+} is the pseudoinverse of the unnormalized Laplacian; we have found the effective resistance by examining the walks in \bar{L}^{+} and dividing by the normalization factor to restore the resistance values.
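This formula reproduces textbook effective resistances. A minimal check, assuming a square of four unit resistors, where the 3-resistor path in parallel with the direct edge gives R^{\text{eff}}=(1\cdot 3)/(1+3)=3/4 between adjacent nodes:

```python
import numpy as np

# Effective resistance from the Laplacian pseudoinverse for a square of
# four unit resistors (an illustrative example, not from the paper).
B = np.array([[ 1,  0,  0, -1],
              [-1,  1,  0,  0],
              [ 0, -1,  1,  0],
              [ 0,  0, -1,  1]])        # node-edge incidence of the 4-cycle
G = np.eye(4)                           # unit conductances
L = B @ G @ B.T                         # graph Laplacian
Lp = np.linalg.pinv(L)

a, b = 0, 1                             # adjacent nodes
Reff = Lp[a, a] + Lp[b, b] - Lp[a, b] - Lp[b, a]
print(Reff)                             # 0.75
```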

XI.7.2 Current Correlation in a Random Walk

Here we examine a toy model wherein current moves through the memristive device network via a Markov process Doyle and Snell (2000). As we are interested in studying the correlation of currents in edges, we build a Markov matrix P wherein charge jumps between edges. In effect, we transform the network into its line graph, wherein edges in our original network become nodes. Edges that are adjacent in our network become adjacent nodes in the new graph; these new nodes are connected by asymmetric edges. The Markov matrix P encodes the probability of a stochastic process, e.g., a particle flowing through the network randomly without external bias, diffusing on the line graph. The probability depends on the conductivity of the edges:

P(a,b)=\frac{G_{ab}}{\sum_{c}G_{ac}}=\bar{G}_{a,b} (280)

Here G_{ab} is the conductivity of edge e^{b} in our original network, which is adjacent to edge e^{a}, and G_{ba} is the conductivity of edge e^{a} (adjacent to e^{b}). The sum over c in equation (280) runs over all edges e^{c} adjacent to e^{a} in our original network. \bar{G}_{a,b} is the conductivity of e^{b} normalized by the conductivity of all edges adjacent to e^{a}. The matrix P has a zero diagonal as above, but now P is asymmetric.

As our random walk occurs via all-or-nothing jumps, we obtain a bound on the correlation of current in our network by underestimating the probability that current will propagate through the network. In our model, a particle can move in either direction on a given edge without accounting for the conductivity of the edge, i.e., a particle is equally likely to hop from either end of a given edge. Thus we overestimate the probability that the charge will, in effect, be reflected by an edge.

The probability h_{(a,b)} is the probability that a particle starting at edge e^{a} reaches a distant edge e^{b} along the shortest path; now e^{b} is not necessarily adjacent to e^{a}. The probability h_{(a,b)} can be calculated from P,

h_{(a,b)}=\left(P^{k}\right)_{a,b},\quad k=\min\{k^{\prime}:\left(P^{k^{\prime}}\right)_{a,b}\neq 0\}. (281)

In a network, charge will not flow in cycles but instead flows in a directed manner until reaching equilibrium. Thus, we are only interested in the shortest paths, in which the charge does not return to any edge it has traversed. For example, consider a simple square circuit with homogeneous resistance:

\begin{align}
P=\begin{pmatrix}0&\frac{1}{2}&0&\frac{1}{2}\\ \frac{1}{2}&0&\frac{1}{2}&0\\ 0&\frac{1}{2}&0&\frac{1}{2}\\ \frac{1}{2}&0&\frac{1}{2}&0\end{pmatrix} \tag{282}
\end{align}

in this case $h_{(a,b)}=2\left(\frac{1}{2}\right)^{2}$ for opposite edges. This random walk probability $h_{(a,b)}$ is a minimum probability that a particle injected in $e^{a}$ will travel through $e^{b}$. This can be seen by noting that for a unit charge injected in $e^{a}$, $P(a,b)$ is the probability the charge flows through an adjacent $e^{b}$, and $P(b,c)$ is then the probability that the charge moves from $e^{b}$ to an adjacent edge $e^{c}$. Note that in this treatment a charge is unbiased in the direction in which it flows; a charge injected into any edge moves to adjacent edges based solely on the conductivities of the adjacent edges.
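Continuing the square-circuit example, equation (281) can be evaluated by taking the first power of $P$ whose $(a,b)$ entry is nonzero (a minimal sketch; the function name and cutoff are our own):

```python
import numpy as np

# Line-graph Markov matrix of the homogeneous square circuit, eq. (282).
P = np.array([[0, .5, 0, .5],
              [.5, 0, .5, 0],
              [0, .5, 0, .5],
              [.5, 0, .5, 0]])

def h(P, a, b, k_max=10):
    """Shortest-path walk probability of eq. (281): the (a, b) entry
    of the first power P^k that is nonzero."""
    Pk = np.eye(len(P))
    for _ in range(k_max):
        Pk = Pk @ P
        if Pk[a, b] > 0:
            return Pk[a, b]
    return 0.0

# Opposite edges of the square: two length-2 paths of probability (1/2)^2.
print(h(P, 0, 2))  # 0.5
```

By symmetry the same value is obtained for adjacent edges here, since $P(0,1)=\frac{1}{2}$ is already nonzero at $k=1$.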

In the systems we study, we treat the current as quasistatic: it equilibrates between edges and traverses the network on a fast time scale compared to the change of resistance in the memristive devices. Thus $h_{(a,b)}$ is a correlation of the currents in edges $e^{a}$ and $e^{b}$ under random walk conditions, e.g., when the current flow is unbiased. Given a known current $i_{a}$ in $e^{a}$, we can bound the current $i_{b}$ in $e^{b}$:

\begin{align}
h_{(a,b)}\,i_{a}\leq i_{b}\leq\frac{i_{a}}{h_{(b,a)}} \tag{283}
\end{align}

For example, if $h_{(a,b)}=h_{(b,a)}=1$, the currents in edges $e^{a}$ and $e^{b}$ are equal, $i_{a}=i_{b}$; if $h_{(a,b)}=h_{(b,a)}=0$, the currents in the two edges are independent. The upper bound is found from the reverse walk, $h_{(b,a)}i_{b}\leq i_{a}$; as $P$ is asymmetric, in general $h_{(a,b)}\neq h_{(b,a)}$.
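For instance, in the homogeneous square circuit above, $h_{(a,b)}=h_{(b,a)}=\frac{1}{2}$ for opposite edges, and equation (283) then brackets $i_{b}$ within a factor of two of $i_{a}$ (a trivial numerical sketch with an arbitrary unit current):

```python
# Bound (283) for opposite edges of the homogeneous square circuit,
# where h_(a,b) = h_(b,a) = 0.5 (two length-2 paths of probability 1/4).
h_ab = h_ba = 0.5
i_a = 1.0  # known current in edge e^a (arbitrary units)

lower = h_ab * i_a   # lower bound on i_b: 0.5
upper = i_a / h_ba   # upper bound on i_b: 2.0
print(lower, upper)
```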

These are inequalities because the probability of moving between adjacent edges is smaller in the random walk case than in the directed walk case; in the random walk case, there is a probability of moving in either direction along an edge. In addition, here we only examine the probability that current injected at a single edge $e^{a}$ transmits through $e^{b}$; under normal conditions, multiple edges could contribute to the current of any individual edge.

This correlation analysis can offer insight into correlations between the resistance values of distant memristive devices. Given a network of memristive devices initialized with a homogeneous resistance, $X(t_{0})_{k,k}=x_{0}$, the rate at which the memory parameter $x$ diverges in distant edges depends on $h_{(a,b)}$. In the direct parameterization, we have

\begin{align}
\frac{R_{\text{on}}}{\beta}\left(h_{(a,b)}(t_{0})-1\right)i_{b}(t_{0})\leq\dot{x}_{a}(t_{0})-\dot{x}_{b}(t_{0})\leq\frac{R_{\text{on}}}{\beta}\left(h_{(b,a)}(t_{0})^{-1}-1\right)i_{b}(t_{0}). \tag{284}
\end{align}

This places bounds on the divergence of $\Delta x_{a,b}=x_{a}-x_{b}$. Similarly, we can bound the trajectories of $\Delta x_{a,b}$, which relate the difference of resistance in distant memristive devices:

\begin{align}
\int^{t}_{t_{0}}\left[-\alpha\left(x_{a}(s)-x_{b}(s)\right)+\frac{R_{\text{on}}}{\beta}\left(h_{(a,b)}(s)-1\right)i_{b}(s)\right]ds\leq\Delta x_{a,b}(t)\leq\int^{t}_{t_{0}}\left[-\alpha\left(x_{a}(s)-x_{b}(s)\right)+\frac{R_{\text{on}}}{\beta}\left(h_{(b,a)}(s)^{-1}-1\right)i_{b}(s)\right]ds \tag{285}
\end{align}

In general, the dynamics of $R$, $h$, and $\vec{i}$ depend on the memory parameters $x$, and thus the trajectory $\Delta x_{a,b}$ must be solved for self-consistently. Inherent in the analysis thus far is that the charged particles travel through the network nearly randomly; inhomogeneities in the current distributions are due solely to the network connectivity. In this treatment, the voltage in the network does not strongly bias the propagation of electrons. This is only valid under low voltage bias, such that the potential in the network can be considered locally flat; in effect, particles are injected into an edge, but there is no bias across the network. In the main text, we study the case of a circuit under inhomogeneous applied bias in a network with spatially varying resistance.

XI.7.3 Wheatstone bridges

We have two Wheatstone bridge circuits, which are equivalent representations of the circuit built from six effective resistances, as described in the main text. In one circuit, we treat the edge with a known current, $e^{0}$, as a current injector that connects the ends of the Wheatstone bridge, and the unknown edge, $e^{1}$, is the bridge. In the other circuit, the unknown edge $e^{1}$ is the source that connects the ends of the Wheatstone bridge, and the bridge is now $e^{0}$.

In order to solve for valid current configurations, we remove one edge of the circuit; in both cases, we remove the edge linking the ends of the Wheatstone bridge, the source edge. In the first case, we have $(V_{a},V_{b})$ at the top and bottom of the Wheatstone bridge, with $(V_{c},V_{d})$ at the ends of the bridge, as shown in Figure 4 (b) and (c) in the main text. We use an incidence matrix $B$ to solve for $(V_{a},V_{b})$ in terms of $(V_{c},V_{d})$. Similarly, in the second circuit the positions of $(V_{a},V_{b})$ and $(V_{c},V_{d})$ are reversed, and we solve for $(V_{c},V_{d})$ in terms of $(V_{a},V_{b})$. The following linear mappings are found:

\begin{align}
\begin{pmatrix}V_{a}\\ V_{b}\end{pmatrix} &= \begin{pmatrix}\frac{R_{1}R_{2}R_{3}+R_{1}R_{2}R_{4}+R_{1}R_{2}R_{5}+R_{1}R_{4}R_{5}}{R_{5}\left(R_{1}R_{4}-R_{2}R_{3}\right)}&\frac{-R_{1}R_{2}R_{3}-R_{1}R_{2}R_{4}-R_{1}R_{2}R_{5}-R_{2}R_{3}R_{5}}{R_{5}\left(R_{1}R_{4}-R_{2}R_{3}\right)}\\ \frac{-R_{1}R_{3}R_{4}-R_{2}R_{3}R_{4}-R_{2}R_{3}R_{5}-R_{3}R_{4}R_{5}}{R_{5}\left(R_{1}R_{4}-R_{2}R_{3}\right)}&\frac{R_{1}R_{3}R_{4}+R_{1}R_{4}R_{5}+R_{2}R_{3}R_{4}+R_{3}R_{4}R_{5}}{R_{5}\left(R_{1}R_{4}-R_{2}R_{3}\right)}\end{pmatrix}\begin{pmatrix}V_{c}\\ V_{d}\end{pmatrix} \tag{286}\\
\begin{pmatrix}V_{c}\\ V_{d}\end{pmatrix} &= \begin{pmatrix}\frac{R_{1}R_{3}R_{4}+R_{1}R_{4}R_{5}+R_{2}R_{3}R_{4}+R_{3}R_{4}R_{5}}{R_{1}R_{2}R_{3}+R_{1}R_{2}R_{4}+R_{1}R_{2}R_{5}+R_{1}R_{3}R_{4}+R_{1}R_{4}R_{5}+R_{2}R_{3}R_{4}+R_{2}R_{3}R_{5}+R_{3}R_{4}R_{5}}&\frac{R_{1}R_{2}R_{3}+R_{1}R_{2}R_{4}+R_{1}R_{2}R_{5}+R_{2}R_{3}R_{5}}{R_{1}R_{2}R_{3}+R_{1}R_{2}R_{4}+R_{1}R_{2}R_{5}+R_{1}R_{3}R_{4}+R_{1}R_{4}R_{5}+R_{2}R_{3}R_{4}+R_{2}R_{3}R_{5}+R_{3}R_{4}R_{5}}\\ \frac{R_{1}R_{3}R_{4}+R_{2}R_{3}R_{4}+R_{2}R_{3}R_{5}+R_{3}R_{4}R_{5}}{R_{1}R_{2}R_{3}+R_{1}R_{2}R_{4}+R_{1}R_{2}R_{5}+R_{1}R_{3}R_{4}+R_{1}R_{4}R_{5}+R_{2}R_{3}R_{4}+R_{2}R_{3}R_{5}+R_{3}R_{4}R_{5}}&\frac{R_{1}R_{2}R_{3}+R_{1}R_{2}R_{4}+R_{1}R_{2}R_{5}+R_{1}R_{4}R_{5}}{R_{1}R_{2}R_{3}+R_{1}R_{2}R_{4}+R_{1}R_{2}R_{5}+R_{1}R_{3}R_{4}+R_{1}R_{4}R_{5}+R_{2}R_{3}R_{4}+R_{2}R_{3}R_{5}+R_{3}R_{4}R_{5}}\end{pmatrix}\begin{pmatrix}V_{a}\\ V_{b}\end{pmatrix} \tag{287}
\end{align}

The effective resistances here are $R_{a,b}=R_{0}$, $R_{a,d}=R_{1}$, $R_{a,c}=R_{2}$, $R_{b,d}=R_{3}$, $R_{b,c}=R_{4}$, and $R_{c,d}=R_{5}$. These are the linear transformations we use in the main text, with the matrices in equations (286) and (287) corresponding to $M$ and $Q$, respectively. We note that $M=Q^{-1}$.
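As a numerical sanity check (a sketch with arbitrary, hypothetical resistance values), the matrices of equations (286) and (287) can be assembled from the formulas above and verified to be mutual inverses:

```python
import numpy as np

R1, R2, R3, R4, R5 = 1.0, 2.0, 3.0, 4.0, 5.0  # arbitrary test values

# Equation (286): M maps (Vc, Vd) to (Va, Vb); common denominator R5(R1R4 - R2R3).
d_M = R5 * (R1 * R4 - R2 * R3)
M = np.array([
    [R1*R2*R3 + R1*R2*R4 + R1*R2*R5 + R1*R4*R5,
     -R1*R2*R3 - R1*R2*R4 - R1*R2*R5 - R2*R3*R5],
    [-R1*R3*R4 - R2*R3*R4 - R2*R3*R5 - R3*R4*R5,
     R1*R3*R4 + R1*R4*R5 + R2*R3*R4 + R3*R4*R5],
]) / d_M

# Equation (287): Q maps (Va, Vb) to (Vc, Vd); all entries share one denominator.
d_Q = (R1*R2*R3 + R1*R2*R4 + R1*R2*R5 + R1*R3*R4
       + R1*R4*R5 + R2*R3*R4 + R2*R3*R5 + R3*R4*R5)
Q = np.array([
    [R1*R3*R4 + R1*R4*R5 + R2*R3*R4 + R3*R4*R5,
     R1*R2*R3 + R1*R2*R4 + R1*R2*R5 + R2*R3*R5],
    [R1*R3*R4 + R2*R3*R4 + R2*R3*R5 + R3*R4*R5,
     R1*R2*R3 + R1*R2*R4 + R1*R2*R5 + R1*R4*R5],
]) / d_Q

print(np.allclose(M @ Q, np.eye(2)))  # True: M = Q^{-1}
```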

XI.7.4 Linear mapping between voltages

Here we detail how the linear mappings between voltages in our unbalanced Wheatstone bridge circuit are used to determine a correlation between the currents $i_{0}$ and $i_{1}$, in edges $e^{0}$ and $e^{1}$, with resistances $R_{0}$ and $R_{1}$, respectively. From the main text we have

\begin{align}
\begin{pmatrix}V_{c}\\ V_{d}\end{pmatrix} &= Q\begin{pmatrix}V_{a}\\ V_{b}\end{pmatrix} \tag{288}\\
\begin{pmatrix}V_{a}\\ V_{b}\end{pmatrix} &= Q^{-1}\begin{pmatrix}V_{c}\\ V_{d}\end{pmatrix} \tag{289}
\end{align}

We can use these mappings to calculate the currents from the resistances $R_{0}$ and $R_{1}$:

\begin{align}
\begin{pmatrix}i_{1}\\ 0\end{pmatrix} &= \frac{1}{R_{1}}\begin{pmatrix}1&-1\\ 0&0\end{pmatrix}Q\begin{pmatrix}V_{a}\\ V_{b}\end{pmatrix} = \frac{1}{R_{1}}\begin{pmatrix}1&-1\\ 0&0\end{pmatrix}\begin{pmatrix}V_{c}\\ V_{d}\end{pmatrix} \tag{290}\\
\begin{pmatrix}i_{0}\\ 0\end{pmatrix} &= \frac{1}{R_{0}}\begin{pmatrix}1&-1\\ 0&0\end{pmatrix}Q^{-1}\begin{pmatrix}V_{c}\\ V_{d}\end{pmatrix} = \frac{1}{R_{0}}\begin{pmatrix}1&-1\\ 0&0\end{pmatrix}\begin{pmatrix}V_{a}\\ V_{b}\end{pmatrix} \tag{291}
\end{align}

We can rearrange these matrices to find a mapping between the electrical currents, which we use to find the ratio between $i_{0}$ and $i_{1}$:

\begin{align}
\begin{pmatrix}i_{0}\\ 0\end{pmatrix} &= \frac{R_{1}}{R_{0}}\begin{pmatrix}1&-1\\ 0&0\end{pmatrix}Q^{-1}\begin{pmatrix}1&-1\\ 0&0\end{pmatrix}^{+}\begin{pmatrix}i_{1}\\ 0\end{pmatrix} \tag{292}\\
i_{1} &= 2\,\frac{R_{0}}{R_{1}}\,\frac{Q(0,0)Q(1,1)-Q(0,1)Q(1,0)}{Q(0,0)+Q(0,1)+Q(1,0)+Q(1,1)}\,i_{0} = \frac{R_{0}}{R_{1}}\det(Q)\,i_{0} \tag{293}
\end{align}

The superscript $+$ denotes the pseudo-inverse. Note that the denominator, $Q(0,0)+Q(0,1)+Q(1,0)+Q(1,1)$, equals $2$ and thus cancels the factor of $2$. We can go through the same process to obtain the inverse mapping:

\begin{align}
i_{0} &= \frac{R_{1}}{R_{0}}\det(Q^{-1})\,i_{1}. \tag{294}
\end{align}
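A similar numerical check (again with arbitrary, hypothetical resistances and node voltages) confirms the two facts used above: the entries of $Q$ sum to $2$, and $(V_{c},V_{d})=Q(V_{a},V_{b})$ implies $V_{c}-V_{d}=\det(Q)\,(V_{a}-V_{b})$, which is the content of equations (293) and (294) up to the resistance prefactors:

```python
import numpy as np

R1, R2, R3, R4, R5 = 1.0, 2.0, 3.0, 4.0, 5.0  # arbitrary test values

# Q from equation (287); all four entries share a single denominator.
d_Q = (R1*R2*R3 + R1*R2*R4 + R1*R2*R5 + R1*R3*R4
       + R1*R4*R5 + R2*R3*R4 + R2*R3*R5 + R3*R4*R5)
Q = np.array([
    [R1*R3*R4 + R1*R4*R5 + R2*R3*R4 + R3*R4*R5,
     R1*R2*R3 + R1*R2*R4 + R1*R2*R5 + R2*R3*R5],
    [R1*R3*R4 + R2*R3*R4 + R2*R3*R5 + R3*R4*R5,
     R1*R2*R3 + R1*R2*R4 + R1*R2*R5 + R1*R4*R5],
]) / d_Q

# The entries of Q sum to 2, cancelling the prefactor in (293).
print(np.isclose(Q.sum(), 2.0))  # True

# The ratio of equation (293) therefore reduces to det(Q).
ratio = 2 * (Q[0, 0]*Q[1, 1] - Q[0, 1]*Q[1, 0]) / Q.sum()
print(np.isclose(ratio, np.linalg.det(Q)))  # True

# Voltage identity behind (293): (Vc, Vd) = Q (Va, Vb) implies
# Vc - Vd = det(Q) * (Va - Vb).
Va, Vb = 0.7, -0.3  # arbitrary node voltages
Vc, Vd = Q @ np.array([Va, Vb])
print(np.isclose(Vc - Vd, np.linalg.det(Q) * (Va - Vb)))  # True
```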

If we are interested in the correlation between the currents $i_{0}$ and $i_{1}$, we treat $i_{0}$ as a unit current, and equations (293) and (294) reproduce the current transformation from the main text. We can write

\begin{align}
i_{0}i_{1} &= \frac{R_{1}}{R_{0}}\det(Q^{-1})\,i_{1}^{2} = \frac{R_{0}}{R_{1}}\det(Q)\,i_{0}^{2}, \tag{295}
\end{align}

from which we define the dimensionless correlation $\langle i_{1},i_{0}\rangle$ in the main text for unit current in $e^{0}$.