Network analysis of memristive device circuits: dynamics, stability and correlations
Abstract
Networks with memristive devices are a potential basis for the next generation of computing devices. They are also an important model system for basic science, from modeling nanoscale conductivity to providing insight into the information processing of neurons. The resistance in a memristive device depends on the history of the applied bias and thus displays a type of memory. The interplay of this memory with the dynamic properties of the network can give rise to new behavior, offering many fascinating theoretical challenges. But methods to analyze general memristive circuits are not well described in the literature. In this paper we develop a general circuit analysis for networks that combine memristive devices alongside resistors, capacitors and inductors and under various types of control. We derive equations of motion for the memory parameters of these circuits and describe the conditions under which a network should display properties characteristic of a resonator system. For the case of a purely memristive network, we derive Lyapunov functions, which can be used to study the stability of the network dynamics. Surprisingly, analysis of the Lyapunov functions shows that these circuits do not always have a stable equilibrium in the case of nonlinear resistance and window functions. The Lyapunov function allows us to study circuit invariances, wherein different circuits give rise to similar equations of motion, which manifest through a gauge freedom and node permutations. Finally, we identify the relation between the graph Laplacian and the operators governing the dynamics of memristor networks, and we use these tools to study the correlations between distant memristive devices through the effective resistance.
I Introduction
Once regarded as somewhat niche, memristance is now recognized as a ubiquitous feature of nature, especially at the nanoscale Pershin and Ventra (2011). As a result, memristive devices have been the subject of intense research over the past decade. In addition to the wide applicability of these components, ranging from scalable memory devices to machine learning and reservoir computing Traversa et al. (2014); Carbajal et al. (2015); Du et al. (2017); Milano et al. (2022); Fu et al. (2020); Xu, Wang, and Yan (2021), there is also interest in their fundamental dynamical properties Pershin, Slipko, and Di Ventra (2013); Diaz-Alvarez et al. (2019); Zhu et al. (2021); Caravelli, Traversa, and Di Ventra (2017); Riaza (2011). For example, it has long been known that the introduction of memristive devices in a circuit can lead to chaotic dynamics, and there are numerous papers studying the properties of the Chua attractors Matsumoto (1984); Madan (1993).
However, less is known for generic circuits with memory, and even fewer exact results have been obtained in this regard. While little theoretical progress has been made for the most general cases of electronic circuits with memory, purely memristive circuits are amenable to an analytical approach: the equations for networks of these devices can often be obtained analytically, unveiling the most bizarre and unexpected properties of these dynamical circuits Zegarac and Caravelli (2019); Tellini et al. (2021). For instance, it has been shown that the asymptotic states of certain memristive-resistive circuits can be well described by the Curie-Weiss model and its ferromagnetic-paramagnetic transition. Further, it has been argued that random circuits of memristive devices (analyzed through the lens of their Lyapunov function) exhibit a ferromagnetic-glass transition Caravelli and Sheldon (2020); Caravelli and Barucca (2018).
Obtaining a full dynamic picture of this behavior is difficult because the nonlinearity of the memristive devices makes the equations of motion impossible to solve analytically. This paper provides tools to understand the properties of generic, first-order memristive devices with window functions in networks. In particular, we derive dynamical equations for generic circuits with memristive devices, and we derive Lyapunov functions for the infinite family of first-order memristive circuits with sharp boundaries (e.g., discontinuous window functions).
We also generalize techniques used for network circuit analysis to the case of memristive device networks. Using these, we demonstrate that these circuits frequently give rise to analytically nontrivial or intractable dynamical equations. In doing so, we demonstrate mathematical techniques, including correlation analysis and Lyapunov functions, which can provide information on the dynamics, evolution, and stability of a network of memristive devices. Further, we discuss how different circuits give rise to similar dynamics through gauge freedom and node permutation in the governing equations.
This work builds upon previous work studying the dynamics of memristor networks, most of which uses the graph Laplacian or projector operators Caravelli, Traversa, and Di Ventra (2017); Stern, Liu, and Balasubramanian (2024); Xiang, Ren, and Tan (2022); Dhivakaran, Gowrisankar, and Vinodkumar (2024). We identify the relation between the graph Laplacian and the projection operators. Finally, we use this correspondence to study the effective resistance in memristor networks and construct effective circuits which allow us to study the correlations between distant memristors.
II Graph theory for circuit dynamics
In network analysis of electrical circuits, an electrical circuit is mapped to a graph $G$. A graph $G = (V, E)$ consists of a set of vertices (nodes) $V$ and a set of edges $E$. An undirected edge may be denoted $\{u, v\}$, but in circuit analysis each edge is assigned an arbitrary direction from $u$ to $v$, denoted $(u, v)$. The graph allows no self-edges and no duplicate edges. As a simple example, let us work with the triangle graph defined by,

$V = \{1, 2, 3\}, \qquad E = \{(1,2),\ (2,3),\ (1,3)\}$ (1)
A representation of this graph is shown in Figure 1.
[Figure 1: The triangle graph, with directed edges (1,2), (2,3), and (1,3).]
Note that the edges do not form a directed cycle: the edge between 1 and 3 is oriented from 1 to 3. A graph representation of a circuit generally consists of edges corresponding to circuit elements and nodes, i.e., junctions in the circuit. Currents and voltages are assigned in the graph to satisfy Kirchhoff’s current law (KCL) and Kirchhoff’s voltage law (KVL). A circuit is solved when it satisfies these constraints given a circuit topology; thus, network analysis of a circuit consists of solving for valid current and voltage configurations.

A current configuration is a set of numbers associated with the edges which satisfy KCL. KCL can be enforced on each vertex by building an incidence matrix $A$, where each row represents a node and each column an edge; the matrix entries take values of $\pm 1$ for a directed edge oriented away from or towards each node, and $0$ when an edge is not incident on a node. For our triangle graph, this is,

$A = \begin{pmatrix} 1 & 0 & 1 \\ -1 & 1 & 0 \\ 0 & -1 & -1 \end{pmatrix}$ (2)

The columns correspond to edges $e^k = (u, v)$ linking nodes $u$ and $v$ (edges are indexed with superscripts, $e^1, e^2, e^3$), and the rows to the indexed nodes. For each vertex $v$ and edge $e$, $A_{ve} = 1$ if $v$ is the start, $A_{ve} = -1$ if $v$ is the finish, and $A_{ve} = 0$ if the edge does not include $v$. KCL is enforced such that a current vector $\vec{i}$ satisfies,

$\sum_{e \in E} A_{ve}\, i_e = 0 \quad \forall\, v \in V$ (3)

If $\vec{i}$ is an allowed current configuration, then

$A\,\vec{i} = 0$ (4)

The operator $A$ may be interpreted as a type of divergence on the graph, and the relationship $A\,\vec{i} = 0$ is equivalent to saying that the current configurations must be divergence-less. In this sense, they are similar to magnetic fields.
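As a concrete check, the construction above can be carried out numerically. The following minimal NumPy sketch builds the incidence matrix of the triangle graph (using the sign convention assumed in equation (2)) and verifies that a circulating current is divergence-less:

```python
import numpy as np

# Incidence matrix of the triangle graph: rows are nodes 1..3, columns
# are the directed edges (1,2), (2,3), (1,3); +1 marks the start of an
# edge, -1 its finish (the convention of equation (2)).
A = np.array([[ 1,  0,  1],
              [-1,  1,  0],
              [ 0, -1, -1]])

# A current circulating around the triangle: +1 A on (1,2) and (2,3),
# -1 A on (1,3), which is traversed against its orientation.
i = np.array([1, 1, -1])
assert np.all(A @ i == 0)   # KCL: divergence-less configuration
```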
Similarly, a voltage configuration is a set of numbers associated with the edges that satisfy KVL. That is, we define a cycle $c$ on the graph as a sequence of nodes $(v_1, v_2, \dots, v_k, v_1)$ such that each consecutive pair is connected by an edge (in either direction) and no node is visited twice. We can find the oriented edge set of the cycle, wherein the orientation matches the direction traversed in $c$ (these oriented edges do not necessarily lie in $E$). We construct a cycle matrix $B$; each row represents a unique cycle in the graph, and each column an edge in $E$. The matrix entries take values of $\pm 1$ when the edge is part of the cycle and $0$ otherwise; the sign indicates whether the orientation of the edge in $E$ aligns with the direction of the cycle or is opposite to it. The triangle graph has two possible cycles, $1 \to 2 \to 3 \to 1$ or $1 \to 3 \to 2 \to 1$. We can include both for now,

$B = \begin{pmatrix} 1 & 1 & -1 \\ -1 & -1 & 1 \end{pmatrix}$ (5)

These are not independent, and we must eliminate one cycle to solve for a voltage configuration, generating a reduced cycle matrix. KVL is enforced on a voltage vector $\vec{v}$ for each cycle in the graph; $B_{ce}$ is $1$ if the orientation of edge $e$ matches the direction of cycle $c$, $-1$ if it is opposite, and $0$ if $e \notin c$. In this case, we can write,

$\sum_{e \in E} B_{ce}\, v_e = 0 \quad \forall\, c$ (6)

To solve for a voltage configuration we will have to do some extra work later to determine which cycles we include. If $\vec{v}$ is a voltage configuration, then

$B\,\vec{v} = 0$ (7)

The operator $B$ may be interpreted as a type of curl on the graph that gives the circulation of a vector. In this sense, the relation is analogous to saying that the voltage drops are curl-less and thus analogous to electric fields.
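The voltage counterpart can be checked the same way. A short sketch (reusing the incidence matrix above, with arbitrarily chosen node potentials) verifies that any potential-derived voltage configuration satisfies KVL:

```python
import numpy as np

# Incidence matrix (equation (2)) and full cycle matrix (equation (5)).
A = np.array([[ 1,  0,  1],
              [-1,  1,  0],
              [ 0, -1, -1]])
B = np.array([[ 1,  1, -1],
              [-1, -1,  1]])

# Any voltage configuration derived from node potentials satisfies KVL.
p = np.array([0.0, 2.0, 5.0])   # arbitrary potentials, node 1 grounded
v = A.T @ p                     # voltage drop across each directed edge
assert np.allclose(B @ v, 0)
```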
The current and voltage configurations belong to the spaces of all current and voltage configurations, respectively. In the supplementary material, we prove that these two spaces are dual. Thus, we have two valid representations of the electrical properties of the circuit. Here, we define two ways of approaching the solution of a circuit: the node method and the loop method.
In the node method, the voltage configuration is calculated by finding voltage potentials of the nodes in the circuit. One potential value is set as the ground, and the remaining potential values are considered as the unknowns. For the non-root nodes, we write down KCL and solve for the voltage potential in succession via Gaussian elimination.
In the loop method, we must have a way of choosing cycles. To do this, we introduce a spanning tree $T$. A spanning tree is a minimal set of edges connecting all nodes. The remaining (chord) edges can be added to the spanning tree one at a time; when an individual chord edge is added to the spanning tree, it gives a unique cycle whose orientation we assign by that of the included edge. This is called the fundamental cycle of the edge, also called a fundamental loop. From the reduced cycle matrix, we write down KVL. A current may then be expressed by the fundamental loop currents as a weighted sum of the fundamental cycles. Writing a reduced cycle matrix $B$ in terms of the fundamental loops, we have

$\vec{i} = B^T\, \vec{i}_{\text{loop}}$ (8)

Given a spanning tree $T$, we can find a unique path between an arbitrary initial node (the root) and any other node $v$. We can sum the voltages along this path; if an edge is oriented along the walk from root to node, the voltage value of that edge is added; if the walk and edge are unaligned, the voltage is subtracted. We define this walk $w(v)$, and the voltage configuration can be written in terms of this fundamental walk:

$p_v = \sum_{e \in w(v)} \sigma_e\, v_e, \qquad \sigma_e = \pm 1$ (9)

which reproduces the relation between the voltage and potential above. These identities will be useful to derive the results below.
II.1 Resistors and memristive devices
A memristive device can be treated as a resistor with variable resistance; its resistance can take continuous values between a low and a high resistance. The state of the resistance between the two limiting values can be parameterized by a variable $x$, which is constrained by some window function such that $0 \le x \le 1$. This can be thought of as the first-order term in a polynomial expansion for the resistance in terms of an adimensional parameter. Thus, the resistance varies linearly with $x$; later, we will generalize this to include higher-order terms of $x$ to describe nonlinear memristance. In typical conditions, the resistance of a memristive device is bounded between two fixed high and low resistance states, corresponding to the ’off’ and ’on’ states. The resistance of these states is given by the values $R_{\text{off}}$ and $R_{\text{on}}$. As the resistance state depends on the history of the applied bias, there is a memory stored in the resistance value, and we will refer to $x$ as the internal memory parameter. For the case of widely used metal oxide memristive devices, a simple toy model for the evolution of the resistance is the following Caravelli, Traversa, and Di Ventra (2017):
$R(x) = R_{\text{on}}(1 - x) + R_{\text{off}}\, x = R_{\text{on}}(1 + \xi x)$ (10)

$\frac{dx}{dt} = \alpha x - \frac{R_{\text{on}}}{\beta}\, I$ (11)
In the direct parametrization, $R(0) = R_{\text{on}}$ and $R(1) = R_{\text{off}}$. In equation (11), $I$ is the current flowing in the device at time $t$, $\alpha$ is a damping parameter with units of inverse time, and $\beta$ can be thought of as the inverse learning rate with units of time per voltage. These constants control the decay and reinforcement timescales. We define a scaling variable $\xi = (R_{\text{off}} - R_{\text{on}})/R_{\text{on}}$. $\alpha$, $\beta$, and $\xi$ can be measured experimentally. Alternatively, a similar formulation of the resistance dynamical equation is given,
$R(x) = R_{\text{on}}\, x + R_{\text{off}}(1 - x) = R_{\text{off}}(1 - \chi x)$ (12)

$\frac{dx}{dt} = -\alpha x + \frac{R_{\text{off}}}{\beta}\, I$ (13)
In the flipped parametrization, $R(0) = R_{\text{off}}$ and $R(1) = R_{\text{on}}$. Here we define the scaling factor $\chi = (R_{\text{off}} - R_{\text{on}})/R_{\text{off}}$, which is bounded $0 \le \chi \le 1$. These two parameterizations are inequivalent models of the memristive device dynamics. From the point of view of the dynamics of the single memristive device, the two parametrizations are related by the change of variables $x \to 1 - x$, together with a redefinition of the constants $\alpha$ and $\beta$.
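To make the single-device model concrete, here is a toy forward-Euler integration of the direct parametrization, equations (10) and (11); the constants are illustrative placeholders, not fitted device values:

```python
import numpy as np

# Illustrative constants (not fitted to any device).
R_on, R_off = 100.0, 1e4   # limiting resistances, ohms
alpha, beta = 1.0, 1e3     # decay rate (1/s) and inverse learning rate
dt, T = 1e-3, 5.0          # Euler step and total time, seconds

x = 0.5                    # initial memory parameter
for step in range(int(T / dt)):
    t = step * dt
    V = np.sin(2 * np.pi * t)                   # sinusoidal voltage drive
    R = R_on * (1 - x) + R_off * x              # equation (10)
    I = V / R                                   # Ohm's law at this instant
    x += dt * (alpha * x - (R_on / beta) * I)   # equation (11)
    x = min(max(x, 0.0), 1.0)                   # hard window: x in [0, 1]
```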
II.2 Resistance characterization
The model given above, which has been previously studied Caravelli, Traversa, and Di Ventra (2017); Caravelli (2019), has a simple dependence on the internal parameter:

$R(x) = R_0\,(1 + g\, x)$ (14)

This is valid to a first approximation, with $R_0 = R_{\text{on}}$, $g = \xi$ in the direct parameterization and $R_0 = R_{\text{off}}$, $g = -\chi$ for the flipped parameterization. This is the simplest description of a device that saturates between two limiting values, $R_{\text{on}}$ and $R_{\text{off}}$, and it can be generalized. Here, we focus on how to parametrize higher nonlinearities. This approach can also be extended to systems in which nonlinear Schottky barriers are present, by modifying the effective equation for the resistance with an exponential prefactor and by parametrizing the resistance with a voltage-dependent term.
Consider a memristive system described by a single internal variable $x$, with the property that $0 \le x \le 1$. Then, if we consider a set of resistive values $r_k$, we can write without loss of generality

$R(x) = \sum_{k=0}^{n} r_k\, B_{k,n}(x)$ (15)

where $B_{k,n}(x) = \binom{n}{k}\, x^k (1-x)^{n-k}$ are Bernstein polynomials with $\sum_{k=0}^{n} B_{k,n}(x) = 1$. In the limit $n \to \infty$, the expansion converges pointwise to any continuous resistance function. Now, in a controlled experiment with sinusoidal inputs, and under the assumption of symmetric memristive devices, we have
(16)
and we can write
(29) |
from which we can infer the Bernstein parameters from the acquired data,
(30) |
Note that because the Bernstein polynomials are a partition of unity, $R(x)$ is a convex combination of the coefficients $r_k$: if $R_{\text{on}} \le r_k \le R_{\text{off}}$ for all $k$, then necessarily $R_{\text{on}} \le R(x) \le R_{\text{off}}$ for the vector of resistances $\vec{r}$.
The Bernstein polynomial approximation is valid when $R(x)$ is a smooth function of $x$. Thus, it is possible to learn a nonlinear function $R(x)$ given real-valued resistance data. In the case of nonlinear resistance, the resistance in time-correlated real-valued samples can be expanded in terms of the Bernstein basis. As the samples will be noisy in real data, and not noiseless as in the simulated ideal memristive device case, we may find deviations even from the functional form of equation (29), as shown schematically in Figure 2. In this case, the task will be to learn the correct function $R(X)$, where $X$ is a diagonal matrix of memory parameters.
[Figure 2: Schematic comparison of noisy measured resistance data with the ideal functional form of equation (29).]
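As a sketch of the inference step, the Bernstein coefficients can be estimated from noisy $(x, R)$ samples by linear least squares; the helper `bernstein_design` and the synthetic data below are constructed purely for illustration:

```python
import numpy as np
from math import comb

def bernstein_design(xs, n):
    """Columns are B_{k,n}(x) = C(n,k) x^k (1-x)^(n-k), k = 0..n."""
    xs = np.asarray(xs)
    return np.stack([comb(n, k) * xs**k * (1 - xs)**(n - k)
                     for k in range(n + 1)], axis=1)

n = 4
xs = np.linspace(0.0, 1.0, 200)                    # sampled memory states
R_true = 100.0 + 9900.0 * xs**2 / (1.0 + xs)       # some smooth R(x)
R_obs = R_true + 50.0 * np.random.randn(xs.size)   # noisy measurements

M = bernstein_design(xs, n)
r, *_ = np.linalg.lstsq(M, R_obs, rcond=None)      # Bernstein parameters
```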
It is interesting to note that a device with this property will still satisfy:
(31) | |||||
(32) |
with and , and will reduce to a trivial memristive device for . In the generic case we have a nontrivial time dependency,
(33) |
A pure memristor has $x = q$ or $x = \varphi$, the charge and flux, respectively.
II.3 Different control schemes
In this section, we derive various control schemes for the resistance in a memristive device network. The various ways in which the memristive circuit can be controlled can be cast in terms of a generic vector
(34) |
where, as we show below, we have
(35) |
which is useful in various proofs concerning the control of reservoirs using memristive devices Sheldon, Kolchinsky, and Caravelli (2022); Baccetti et al. (2023). Here $\Omega_A$ and $\Omega_B$ are orthogonal projection operators, and $\Omega_A + \Omega_B = I$. $\Omega_A$ and $\Omega_B$ are discussed further below. Further, in the supplementary material, we show that it is possible to write similar differential equations for the resistance without referring to the memory parameters directly.
II.3.1 Voltages at Nodes and in Series
Here, we consider a network of resistors characterized by an incidence matrix $A$, cycle matrix $B$, and resistance values $R$. We also consider a set of voltage sources $\vec{s}$ attached to edges in a consistent way (not violating KVL). For each edge $e$, we then have

$v_e = R_{ee}\, i_e - s_e$ (36)

where $R$ is a diagonal matrix of resistances. Our goal is to solve for the current configuration $\vec{i}$.
Define $\tilde{A}$ as the reduced incidence matrix, obtained by omitting the last row of $A$, which renders it full row-rank. We then introduce a spanning tree and reorder the edges into those belonging to the tree and those belonging to the chords,

$B = \begin{pmatrix} B_t & I \end{pmatrix}$ (37)

With this construction, the chord block of $B$ is the identity. We thus have

$\vec{i}_t = -\tilde{A}_t^{-1}\, \tilde{A}_c\, \vec{i}_c$ (38)

as $\tilde{A}_t$ is invertible by our construction. Thus,

$\vec{i} = B^T\, \vec{i}_c$ (39)
which is just the expression in terms of the fundamental loop currents we gave previously. Then, imposing KVL,

$B\, R\, B^T\, \vec{i}_c = B\, \vec{s}$ (40)

This gives the full solution,

$\vec{i} = B^T \left( B\, R\, B^T \right)^{-1} B\, \vec{s}$ (41)
Since we have seen that for voltages in series, we have , it follows that we have for the voltages at nodes that .
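A quick numerical check of the loop-method solution, equation (41), on the triangle graph (one fundamental cycle, a single series source on the first edge):

```python
import numpy as np

B = np.array([[1.0, 1.0, -1.0]])    # single fundamental cycle
R = np.diag([1.0, 2.0, 3.0])        # edge resistances, ohms
s = np.array([1.0, 0.0, 0.0])       # 1 V source in series on edge 1

i = B.T @ np.linalg.solve(B @ R @ B.T, B @ s)
# One loop, so every edge carries the loop current 1/(1+2+3) A,
# with the sign set by its orientation in the cycle.
assert np.allclose(i, np.array([1.0, 1.0, -1.0]) / 6.0)
```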
Let us now derive the memristive device network equation for voltages in series, which was previously derived Caravelli, Traversa, and Di Ventra (2017). Here, we provide a faster derivation. We use the following form of the Woodbury matrix identity

$U \left( I + V U \right)^{-1} = \left( I + U V \right)^{-1} U$ (42)

where we assume that the matrix $I + UV$ is invertible.

Then, let us define $\Omega = B^T (B B^T)^{-1} B$ and $X = \operatorname{diag}(\vec{x})$. We have

$\vec{i} = \frac{1}{R_{\text{on}}}\, B^T \left( B B^T + \xi\, B X B^T \right)^{-1} B\, \vec{s}$ (43)
Applying the identity (42), in the direct parameterization we can assume $R = R_{\text{on}}(I + \xi X)$, while in the flipped parameterization $R = R_{\text{off}}(I - \chi X)$ (which satisfies $R_{\text{on}} \le R_{ee} \le R_{\text{off}}$), with $X$ a diagonal matrix such that $0 \le x_e \le 1$. It follows that we can write
$\vec{i} = \frac{1}{R_{\text{on}}}\, \left( I + \xi\, \Omega X \right)^{-1} \Omega\, \vec{s}$ (44a)

$\vec{i} = \frac{1}{R_{\text{off}}}\, \left( I - \chi\, \Omega X \right)^{-1} \Omega\, \vec{s}$ (44b)
Then, let us use the equation for each memristive device,

$\frac{dx_e}{dt} = \alpha x_e - \frac{R_{\text{on}}}{\beta}\, i_e$ (45)

then

$\frac{d\vec{x}}{dt} = \alpha\, \vec{x} - \frac{R_{\text{on}}}{\beta}\, \vec{i}$ (46)

and if we define the voltage generators to have the negative pole on the side of the memristive device, then, indicating the voltage generators with $\vec{s}$, we have

$\frac{d\vec{x}}{dt} = \alpha\, \vec{x} - \frac{1}{\beta}\, \left( I + \xi\, \Omega X \right)^{-1} \Omega\, \vec{s}$ (47)
If instead in the flipped parameterization we assume $R = R_{\text{off}}(I - \chi X)$, again with $0 \le x_e \le 1$ and

$\frac{dx_e}{dt} = -\alpha x_e + \frac{R_{\text{off}}}{\beta}\, i_e$ (48)

we have

$\frac{d\vec{x}}{dt} = -\alpha\, \vec{x} + \frac{1}{\beta}\, \left( I - \chi\, \Omega X \right)^{-1} \Omega\, \vec{s}$ (49)
where . Note that we have defined the activation voltages as .
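A minimal integrator for equation (49) on the triangle graph can look as follows; the parameters $\alpha$, $\beta$, $\chi$ and the hard clipping of $\vec{x}$ to $[0,1]$ are illustrative assumptions:

```python
import numpy as np

def cycle_projector(B):
    """Omega = B^T (B B^T)^{-1} B, the projector on the cycle space."""
    return B.T @ np.linalg.solve(B @ B.T, B)

B = np.array([[1.0, 1.0, -1.0]])      # triangle, one fundamental cycle
Omega = cycle_projector(B)

alpha, beta, chi = 0.1, 10.0, 0.9     # illustrative constants
s = np.array([1.0, 0.0, 0.0])         # series voltage sources
x = np.full(3, 0.5)                   # initial memory parameters
dt = 1e-2

for _ in range(10_000):               # forward Euler on equation (49)
    drive = np.linalg.solve(np.eye(3) - chi * Omega @ np.diag(x),
                            Omega @ s)
    x = np.clip(x + dt * (-alpha * x + drive / beta), 0.0, 1.0)
```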
II.3.2 Currents in Parallel and at Nodes
In analogy to the previous section, we can solve for the voltage configuration $\vec{v}$, given a network of resistors and a set of current sources $\vec{j}$ on edges, where the current is added in parallel to the corresponding memristive device such that the total current through the two links is $G\vec{v} + \vec{j}$, with $G = R^{-1}$ the conductance matrix. In this case, we have

$A\, \left( G\, \vec{v} + \vec{j} \right) = 0$ (50)

The matrix $\tilde{A}\, G\, \tilde{A}^T$ is $(n-1) \times (n-1)$ and has rank $n-1$. We can thus invert it to get,

$\vec{p} = -\left( \tilde{A}\, G\, \tilde{A}^T \right)^{-1} \tilde{A}\, \vec{j}$ (51)
If we drive a current at the nodes, current conservation demands that the injected currents sum to zero. We eliminate one node to obtain the $(n-1)$-dimensional $\tilde{\vec{j}}_{\text{ext}}$. This must satisfy,

$\tilde{A}\, G\, \tilde{A}^T\, \vec{p} = \tilde{\vec{j}}_{\text{ext}}$ (52)

giving

$\vec{p} = \left( \tilde{A}\, G\, \tilde{A}^T \right)^{-1} \tilde{\vec{j}}_{\text{ext}}$ (53)
Note the difference in sign. This is because we have usually defined currents exiting nodes as being positive, and here, we have considered as positive when entering the nodes.
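The node method of this section amounts to solving the reduced weighted Laplacian; a short sketch (grounding the last node, injecting current at node 1 and extracting it at node 2):

```python
import numpy as np

A = np.array([[ 1,  0,  1],
              [-1,  1,  0],
              [ 0, -1, -1]], dtype=float)
G = np.diag(1.0 / np.array([1.0, 2.0, 3.0]))   # edge conductances

L = A @ G @ A.T              # weighted graph Laplacian
L_red = L[:-1, :-1]          # ground (eliminate) the last node

j = np.array([1.0, -1.0])    # inject 1 A at node 1, extract at node 2
p = np.zeros(3)
p[:-1] = np.linalg.solve(L_red, j)   # node potentials (grounded node: 0)
v = A.T @ p                          # voltage drops on the edges
```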
Let us thus consider a diagonal matrix of homogeneous resistances:
(54) | |||||
In terms of the Bernstein polynomials, for $n = 1$ the resistance simply reduces to the linear form previously derived (we could similarly work in the direct parametrization). We note that $X$ interpolates linearly between the 'on' and 'off' states. For $n > 1$, we define this function as $\mathcal{X}$, which interpolates (nonlinearly in $X$) between the same limits by the properties of the Bernstein polynomials. Thus, $\mathcal{X}$ interpolates between the 'on' and 'off' states and is a generalization of $X$ for $n > 1$. Thus, throughout the manuscript, it will simply be necessary to replace $X$ with $\mathcal{X}$ everywhere to obtain the generalized device equations, where $\mathcal{X}$ is a positive matrix bounded from above (by 1) for arbitrary $n$.
III Memristive circuit motifs
Here, we generalize dynamical equations for standard passive circuit elements to incorporate memristive devices and discuss the eigenvalues of such circuit motifs. Figure 3 depicts resistive, RC, and RLC devices.
[Figure 3: Circuit motifs: (a) a memristive device with a voltage source in series, (b) an RC motif with a capacitor in parallel with a resistor and voltage generator, and (c) an RLC motif with an inductor-capacitor edge in parallel.]
We examine these circuit elements as they can form fundamental motifs of more complicated circuits, going beyond purely memristive circuits. We derive equations for the dynamics of RC and RLC motifs that incorporate memristive devices. Further, we derive dynamical equations for RLC memristive device networks and describe some simple circuit structures, e.g., all circuit elements in series, which make analyzing the circuit dynamics more tractable.
Let us now try to understand the eigenvalue properties of a resistive circuit and how they depend on the device. As shown in equation (41), we know that the equation for the currents is of the form Caravelli, Traversa, and Di Ventra (2017)

$\vec{i} = B^T \left( B\, R\, B^T \right)^{-1} B\, \vec{S}$ (55)

where $\vec{S}$ is the voltage applied on each edge. From equation (44b) (with $R$ as a function of $X$), we can rewrite this as

$\vec{i} = \frac{1}{R_{\text{off}}}\, \left( I - \chi\, \Omega X \right)^{-1} \Omega\, \vec{S}$ (56)

From equation (55), we can obtain the voltage across each edge

$\vec{v} = R\, \vec{i} = R\, B^T \left( B\, R\, B^T \right)^{-1} B\, \vec{S}$ (57)

which can be written as

$\vec{v} = \Omega_R\, \vec{S}$ (58)

where $\Omega_R = R\, B^T (B\, R\, B^T)^{-1} B$ is the non-orthogonal projector. Thus, a resistive circuit acts simply as a filter for the voltages and is linear. We note $\Omega_R^2 = \Omega_R$; thus, the eigenvalues are simply zeros and ones.
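Idempotency of $\Omega_R$, and hence the zero/one spectrum, is easy to confirm numerically:

```python
import numpy as np

B = np.array([[1.0, 1.0, -1.0]])
R = np.diag([1.0, 2.0, 3.0])
Om_R = R @ B.T @ np.linalg.solve(B @ R @ B.T, B)

assert np.allclose(Om_R @ Om_R, Om_R)             # idempotent
print(np.round(np.linalg.eigvals(Om_R).real, 8))  # one 1, two 0s
```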
Following equation (57), the memristor and source motif can be rewritten in the direct parametrization as:
(59) | |||||
From this we can note a stability condition of with respect to ,
(60) |
Thus the voltage across each edge is equal to the voltage applied at each edge when $\Omega_R = I$, the identity matrix, corresponding to the case when the cycle space spans all edges and each cycle is linearly independent.
III.1 RC Networks
Let us now show how this framework can be used to study standard RC circuits. Changing our network design to edges consisting of a capacitor in parallel with a resistor and voltage generator, shown in Figure 3 (b), we have at each edge,
(61) | |||||
(62) | |||||
(63) |
here subscripts $R$ and $C$ correspond to the resistor and capacitor edges, respectively. Imposing Kirchhoff’s current law and using $\vec{v} = A^T \vec{p}$ for a potential vector $\vec{p}$,
(64) | |||||
(65) | |||||
(66) | |||||
where $\Omega$ is a non-orthogonal projector and $R$, $C$ are diagonal matrices containing the resistances and capacitances. Notably, the evolution operator is asymmetric. Allowable voltage vectors satisfy $\vec{v} = \Omega \vec{v}$ and are thus entirely within the row space of $\Omega$. The resulting dynamics are linear but do satisfy the fading memory property Sheldon, Kolchinsky, and Caravelli (2022). Considering the trajectories of two voltage configurations, subject to the same source, for the difference we have
(67) |
As the operator has negative eigenvalues in the space of allowable voltages, this gives exponential convergence of the trajectories.
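This convergence can be illustrated with a toy simulation; the dynamics below assume the schematic form $\dot{\vec{v}} = -\Omega(\vec{v} - \vec{s})/\tau$, which captures the projector structure discussed above but is not the paper's exact operator:

```python
import numpy as np

B = np.array([[1.0, 1.0, -1.0]])
Om = B.T @ np.linalg.solve(B @ B.T, B)   # orthogonal cycle projector
tau, dt = 1.0, 1e-3
s = np.array([1.0, 0.0, 0.0])

rng = np.random.default_rng(0)
v1 = Om @ rng.normal(size=3)             # two allowable initial states
v2 = Om @ rng.normal(size=3)
gap0 = np.linalg.norm(v1 - v2)

for _ in range(5000):                    # integrate to t = 5*tau
    v1 += dt / tau * (-Om @ (v1 - s))
    v2 += dt / tau * (-Om @ (v2 - s))

assert np.linalg.norm(v1 - v2) < gap0 * np.exp(-4.9)   # ~ e^{-t/tau}
```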
The solution of equation (66) is known. Let us call $\tau$ the characteristic $RC$ timescale; we have

(68)

Note that the operator $e^{-\frac{t}{\tau}\Omega}$ can be written in a simpler form considering that $\Omega$ is a projector. In fact,

$e^{-\frac{t}{\tau}\Omega} = I + \left( e^{-\frac{t}{\tau}} - 1 \right) \Omega$ (69)

and thus

(70)
Using the formula above, we can simply write the solution as
(71) |
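The projector identity in equation (69) holds for any idempotent matrix and can be verified against a generic matrix exponential (SciPy assumed):

```python
import numpy as np
from scipy.linalg import expm

B = np.array([[1.0, 1.0, -1.0]])
Om = B.T @ np.linalg.solve(B @ B.T, B)   # a projector: Om @ Om == Om
a = -0.7

assert np.allclose(expm(a * Om), np.eye(3) + (np.exp(a) - 1.0) * Om)
```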
Let us derive a memristive device version of an RC motif. We can rewrite the RC motif incorporating the memristive device resistance. Plugging in the resistance from the direct parameterization, we have an equation of motion for the voltage:
(72) | |||||
Here, we used equation (63) to substitute for the resistance. This can be rewritten,
(73) | |||||
Here, we have a coupled nonlinear differential equation relating the voltage and the internal memory parameter. Solving for the voltage and the memory parameter needs to be done in a self-consistent way, as the dynamics depends on both applied current and voltage sources, as well as the current and resistance in the RC motif, as determined by equation (63). Interestingly, the voltage across the capacitor will depend on the time integral of the resistance; thus, the voltage value has a memory of the memristive device resistance.
III.2 RLC Networks
The system above gives us a reservoir with real eigenvalues and some control over their distribution by choosing values of $R$ and $C$. However, the only stable reservoirs we are aware of that give extensive memories are the so-called ’resonator’ systems previously studied White, Lee, and Sompolinsky (2004). In discrete time, this consists of eigenvalues spread around a circle in the complex plane. In continuous time, the appropriate reservoir spectrum is given by the logarithmic transform of these, giving eigenvalues with fixed real part and imaginary parts spread over an interval. An inductance is required to achieve imaginary eigenvalues in an electric circuit (a symmetric evolution matrix gives real eigenvalues). We begin with a sketch of the single edge case and use it as a guide for the network case.
We first consider a single edge isolated from the network, which we split into parallel edges to characterize the memristive device resistance. Our motif will consist of two edges in parallel, one with an inductor and capacitor and the other with a resistor and voltage generator. The voltages across the two edges are equal,

$v = L\, \frac{di_{LC}}{dt} + \frac{q}{C}$ (74a)

$v = R\, i_R + s$ (74b)

Here, $L$ and $C$ are the inductance and capacitance of the edge, shown in Figure 3 (c). The total current through the two edges, $i_{LC} + i_R$, must vanish by current conservation when no current is injected across our motif. Writing these equations in terms of the voltage is inconvenient as the result will depend on the derivative of the source, making comparisons with other types of reservoirs more difficult. Using the connection between the capacitive charge and current, $i_{LC} = \dot{q}$, we can write everything in terms of $q$,
$L\, \ddot{q} + R\, \dot{q} + \frac{q}{C} = s$ (75)
This can be transformed into a first-order system,

$\frac{d}{dt} \begin{pmatrix} q \\ \dot{q} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\frac{1}{LC} & -\frac{R}{L} \end{pmatrix} \begin{pmatrix} q \\ \dot{q} \end{pmatrix} + \begin{pmatrix} 0 \\ \frac{s}{L} \end{pmatrix}$ (76)
This gives eigenvalues $\lambda_\pm = -\frac{R}{2L} \pm \sqrt{\left(\frac{R}{2L}\right)^2 - \frac{1}{LC}}$. We will obtain imaginary eigenvalues for the underdamped case, $R < 2\sqrt{L/C}$.
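The eigenvalue formula can be checked against the companion matrix of equation (76); the component values below are arbitrary but chosen to be underdamped:

```python
import numpy as np

L_ind, C, R = 1e-3, 1e-6, 10.0          # henry, farad, ohm
M = np.array([[0.0, 1.0],
              [-1.0 / (L_ind * C), -R / L_ind]])

lam = np.linalg.eigvals(M)
# Underdamped: R = 10 ohm < 2*sqrt(L/C) ~ 63.2 ohm, so the eigenvalues
# are a complex-conjugate pair with negative real part -R/(2L).
assert np.iscomplexobj(lam) and np.all(lam.real < 0)
```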
Moving to an RLC network, we can build a network consisting of RLC motifs composed as above, i.e., with memristive elements and a source parallel to an LC element; these motifs are connected by resistive edges referred to as network edges. We construct a minimum spanning tree such that the memristive edges exist on the tree and the LC components are on the cycle edges. The currents in the resistive elements and the LC edges can be separated into $\vec{i}_R$ and $\vec{i}_{LC}$, respectively. In addition, there is a current in the edges that connect the RLC motifs; we call this the network current. Examining the cycles in the graph, due to KVL, we will have cycles in each motif equating the voltage in the parallel memristive and LC edges; thus equations (74a) and (74b) remain valid in the network case. In addition, some cycles are a linear sum of memristive edges and the passive-resistive network edges. We treat the network resistance (outside the RLC motifs) as constant; the matrix $R$ is now larger, incorporating these constant resistive network edges that do not depend explicitly on an internal memory parameter. Examining the cycles in the network, we obtain a model of the network circuit dynamics,
(77) | |||||
(78) |
We have used from equation (39). We have
(79) | |||||
from which we can rewrite equation (78)
(80) | |||||
(81) | |||||
Equation (81) is a governing equation for the RLC network. We can now examine some specific cases. In the case with very high resistance in the network edges connecting RLC motifs, , we can simplify equation (81),
(82) |
The first term on the right-hand side enforces KVL within the RLC motifs with an additional constraint that the linear sum of potential across the memristive device edges, , equals zero. These are the linear sums remaining from the cycles spanning the network edges. The second term is the standard RLC dynamics in equation (76) for isolated RLC motifs.
When the resistance in the network edges is very low, as when the connections are of negligibly small resistance compared to the RLC elements, e.g., wires, we can approximate this as . In this case, we have
(83) |
The first term on the right-hand side is a voltage constraint enforcing KVL, similar to the previous case. The second term is a dynamical equation incorporating the network current .
We examine some network topologies that simplify the circuit analysis and produce more tractable dynamical equations. If we have a network of RLC motifs in series, each linked by a single network edge such that we form a ring of RLC motifs, we can again assign the LC edges to the cycle edges and the memristive edges to the tree edges. In the case of constant resistance in all the network edges, we then have a single cycle that spans all the memristive edges and the network edges,
(84) | |||||
(85) |
We can now impose using equations (85),
(86) | |||||
Here we write the equations as above with an equivalent resistance. To solve for the potential we use $\vec{v} = A^T \vec{p}$, where $\vec{p}$ is the $(n-1)$-dimensional potential vector defined on the nodes, from which
(87) |
We can now write equation (86) as
(88) |
If we now multiply the equation above by on the left, we obtain
(89) | |||||
where we defined .
With this,
(90) | |||||
This can be written as a first-order linear system of equations,
(91) |
This gives eigenvalues of . We again obtain imaginary eigenvalues for the underdamped case, where .
We can also examine the case of a network of RLC motifs in series and without internal sources. For now, we work in the general case with nonzero sources. We incorporate the resistance from the network edges by introducing an equivalent impedance, a resistance , on the LC edges. We assign the LC edges to the tree edges and the memristive edges to the cycle edges. The RLC motifs satisfy KVL,
(92) |
Proceeding as in the previous case, we can remove the network edges and link the RLC motifs directly in series. The total current comprises only the current on the memristive and LC edges. We can again solve for the potential using $\vec{v} = A^T \vec{p}$, where $\vec{p}$ is the $(n-1)$-dimensional potential vector defined on the nodes, from which
(93) |
With this we have,
(94) | |||||
This may be cast as a first-order linear system of equations,
(95) |
We can rewrite the RLC circuit motif by incorporating memristive devices into the resistor elements. We note that the equivalent resistance now depends on the resistance of the memristive device elements in a complicated way. Here, we use the flipped parameterization and recast the system of equations with the memristive device resistance $R = R_{\text{off}}(I - \chi X)$.
(96) |
If the resistance between RLC motifs is small compared to the LC impedance, then can be neglected; this is what we expect in most cases where good conductors, e.g., wires, connect the memristive device elements and inductors. In the case of large resistance between RLC motifs, we now examine the network conductance when the sources within the RLC motifs are zero, e.g., . Then reduces to , where is the resistance of the memristive elements. This allows us to simplify equation (96),
(97) | |||||
In the supplementary material, we prove this can be written in terms of the cycle projection matrix. In this case, the resistance is projected onto the cycles. Compared with the case without memristive devices, equation (91), there is an additional positive term in the fourth element of the matrix. Thus, the change in the time constant in the cycles is a driving contribution to the current. We expect the current not to propagate beyond individual RLC cycles in this network; this can be seen by noting that the coupling is proportional to the cycle projection. The orthogonal complement is related to the cycle projection matrix, and without any voltage bias within the RLC motif, the current will dissipate. Examining the eigenvalues of this system, we again have imaginary eigenvalues in the underdamped case.
The techniques employed here can be generalized to other RLC circuit motifs with different arrangements of circuit elements.
IV Lyapunov functions for purely memristive circuit motifs
The Lyapunov functions provide a method to analyze the stability of memristive device networks at equilibrium while offering insight into control parameters that can be adjusted to enforce stability. The Lyapunov functions are scalar positive-definite functions, and we construct Lyapunov functions that capture the dynamics of the memristive device networks in the first-order time derivative. The conditions that ensure the time derivative of the Lyapunov functions is negative provide insight into the stable equilibrium conditions of memristive device networks. We explore the case of a dynamical equation where is linearly proportional to , the case of asymmetric resistance, and when is proportional to a nonlinear function, . In addition, we study a few specific examples including the nonlinear case with , akin to activation functions in neural networks, and explicitly examine the role of a window function in the resistance.
IV.1 Lyapunov functions for resistance of the form
Let us discuss the Lyapunov function for the set of dynamical equations for memristive device networks derived in earlier papers Caravelli (2019); Caravelli, Sheldon, and Traversa (2021); this will serve as an introduction to the generalized Lyapunov functions. We begin with
$\frac{d\vec{x}}{dt} = -\alpha\, \vec{x} + \frac{1}{\beta}\, \left( I - \chi\, \Omega X \right)^{-1} \Omega\, \vec{s}$ (98)
where $X$ is a diagonal matrix of the memory parameters in the network, a notation we will use in the following. For the equation of motion above, we have
(99) |
Consider
(100) |
In this case, we have
(101) |
and we have that the derivative is non-positive. We can move the matrix to the left, as all the terms are symmetric here. Any positive semi-definite symmetric matrix $M$ can be decomposed into the product of a matrix and its transpose, $M = C C^T$. Thus, we can rewrite the matrix norm of the Lyapunov function,
(102) | |||||
As an asymptotic form, in this case, we have
(103) |
where is the matrix with diagonal elements removed and is the vector of diagonal elements.
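The factorization invoked above (any symmetric positive semi-definite matrix equals $C C^T$) can be obtained from an eigendecomposition even when the matrix is singular, as a projector is; a short sketch:

```python
import numpy as np

B = np.array([[1.0, 1.0, -1.0]])
Om = B.T @ np.linalg.solve(B @ B.T, B)   # symmetric PSD (a projector)

w, U = np.linalg.eigh(Om)
C = U @ np.diag(np.sqrt(np.clip(w, 0.0, None)))
assert np.allclose(C @ C.T, Om)          # Om = C C^T
```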
Let us now consider the Lyapunov function in the standard parameterization. We have
(104) |
or
(105) |
Because of the symmetry between the two differential equations, their sum remains constant. We thus attempt to write a Lyapunov function of the form:
(106) |
Taking a time derivative, we get
(107) |
As with the other dynamics, we see that if the sources are held constant, the Lyapunov function always has a negative derivative. It is not hard to see that the required positivity condition holds here as well, and thus the Lyapunov property also applies in this case. Thus, the dynamics are passive and asymptotically stable for both types of circuits.
IV.2 Proof that this works only for linear functions
The procedure we employed above for deriving the Lyapunov function only applies if the resistance is linear in the internal memory parameter . In order to see this, we begin again with the equations of motion,
(108) |
Here is a nonlinear function for the resistance , and throughout this section is a control vector of the memristance from equation (35), e.g., . This is equivalent to
(109) |
We now consider a generic Lyapunov function of the form
(110) |
with a diagonal matrix
(111) |
Now, we will show that is not guaranteed to be negative by examining the case of the external field and a quadratic diagonal term. Note that for the quadratic diagonal term, we must have
(112) |
while for the external field term
(113) |
We temporarily set . In the supplementary material, we discuss solving the quadratic and external field terms. It follows that the full solution can be written, in terms of the free parameter as
(114) | |||||
(115) |
which is the solution to the original problem. Let us assume that . Then we have
(116) |
If we require that we obtain , and if we require then , which leaves us with the solution . For , to enforce and we need to choose and . Thus there is a disagreement in the value of , which can be resolved by setting or .
We now note that the quadratic term is problematic: we need a function such that the quadratic term is symmetric, so that we can move it to the left. If we ask for a function such that
(117) |
which means
(118) |
For this reason, a symmetric quadratic term in the Lyapunov function works only in the simple case where the resistance is proportional to a power of $x$, and the simplest linear memristive device (resistance linearly proportional to $x$) occurs when that power is one. Therefore, the Lyapunov functions found above apply only if the resistance is linear in the internal memory parameter $x$.
IV.3 Asymmetric EOMs and extension to nonlinear components
As we will see below, there is another way of obtaining a Lyapunov function for the case of generic memristive devices by relaxing the requirement that has to be symmetric. We consider the generic equations of motion, in which is the Bernstein polynomial approximation we introduced earlier. We have
(119) |
for an arbitrary nonzero diagonal matrix . We consider again a generic Lyapunov function of the form
(120) |
with and diagonal matrices
(121) | |||||
We need to solve for the linear and diagonal quadratic terms, i.e., the equations
(122) | |||||
(123) |
For the asymmetric term, we need to be careful. We need in fact to satisfy
(124) |
Since we have now the freedom of choosing , we choose
(125) | |||||
(126) |
so that the quadratic coupling term is immediately satisfied. Thus, we need to solve
(127) | |||||
(128) |
which imposes . When then . On the other hand, we have
(129) |
Choosing , and , we obtain that
(130) |
from equation (119), we have
(131) | |||||
We arrive at the second form in equation (131) by noting that the inner product on a real vector space is symmetric, so we can convert the matrix norm to the symmetric form. Our matrix is thus
(132) |
which is symmetric if is linear in .
In general, we can treat the resistance as a nonlinear function akin to the Bernstein polynomial expansion studied earlier. Here, we examine one case explicitly, as it has the right properties to give analytical results. The resulting function is a little complicated:
(133) |
where is the polylogarithm, also called Jonquière’s function. We now want to impose and . Then we need to choose , and a normalization
(134) |
The result is
(135) |
Thus, the Lyapunov function depends on the eigenvalues of the matrix
(136) |
Note that we can separate into symmetric and asymmetric components, , where
(137) | |||||
(138) |
Now, it is clear that is positive semi-definite. We note that on we have
(139) |
thus we write
(140) |
We see that the obtained eigenvalues are always positive, and thus, this should guarantee that the constructed is a Lyapunov function.
IV.4 Case with window function
Let us now try to further generalize the Lyapunov function for memristive equations of motion with a window function Joglekar and Wolf (2009); Ascoli et al. (2016). In this case, we have
(141) |
where is the window function. We can rewrite equation (141) as
(142) |
We see that the problem is the same as before, but now, for the same Lyapunov function we have,
(143) |
which has to be symmetrized. Note that and are both diagonal; thus they commute and are symmetric. Thus we have
(144) |
Note that the spectral properties of the function above are rather different. In order to see this, we can expand and write it as . Then
(145) |
We see that in the third term on the right-hand side, we have an operator that is not symmetric, similar to the previous case. However, the second term on the right-hand side, which was symmetric and positive before, is no longer symmetric due to the window function. This implies that we can have negative eigenvalues, and in the case of window functions the Lyapunov function we have defined is not of general applicability. However, a case-by-case analysis can identify when the Lyapunov function applies in the presence of a window function. We note that the window-function term is diagonal, so the second row in equation (145) has similar symmetry restrictions as equation (132).
V Invariances
Due to the projection operators, , numerous circuit configurations give rise to similar properties. This is apparent by noting we can identify components of the circuit that are orthogonal to the projection operator; these orthogonal components are identified with the orthogonal projection operator , e.g., and as described in the supplementary material. This produces various local gauge symmetries, wherein the electrical properties do not vary with the components of the circuit that span the orthogonal component; this freedom is a gauge freedom. For example, for an arbitrary matrix or voltage source , under the projection operator there is freedom in an orthogonal component, and , respectively.
(146) | |||||
(147) |
Equivalently, there is gauge invariance in any arbitrary vectors or diagonal matrices that are a projection from a ground truth, such that:
The projection operator may have extensive gauge freedom in any circuit, but examining the circuit properties and memristive dynamics can restrict this freedom. For example, consider , with diagonal matrices and . Note here and are not necessarily related. We can devise functional forms of that produce equivalent orthogonally projected vectors,
(148) |
where is a nonlinear function in , and . Consider a nonlinear function . As is a non-orthogonal projector operator, it is necessary to track the action of the projection operators to ensure gauge freedom in an arbitrary nonlinear function. For example, consider the following two functions:
(149) | |||||
(150) | |||||
Thus, the specific network properties need to be examined on a case-by-case basis to ensure invariance, as they could contribute to otherwise anomalous behaviors in the network properties.
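The basic gauge freedom is easy to verify numerically: any component of the source lying in the orthogonal complement of the projector drops out of the projected drive:

```python
import numpy as np

B = np.array([[1.0, 1.0, -1.0]])
Om = B.T @ np.linalg.solve(B @ B.T, B)
Om_perp = np.eye(3) - Om                 # orthogonal complement

rng = np.random.default_rng(2)
s, y = rng.normal(size=3), rng.normal(size=3)
# The orthogonal component Om_perp @ y is invisible to the projection.
assert np.allclose(Om @ (s + Om_perp @ y), Om @ s)
```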
Finally, there is also invariance within circuit configurations; a transformation between equivalent circuits preserves the eigenspectrum of the projector. The projection operators can be transformed via orthogonal matrices; when the projector is constructed from the cycle matrix, different orthogonal matrices respect different loop-equivalent circuits. Since permutations are a subgroup of orthogonal transformations, they also leave the spectrum invariant (and are interpreted as edge relabelings).
We examine how such a transformation affects the dynamics of the memristive network. For example, we can rewrite the equations of motion with an orthogonally similar projection operator,
(151) |
We can see that while we have freedom in choosing the orthogonal matrices, , restrictions on or nodal permutations may be required to preserve the network dynamics. Such restrictions will provide insight into the form of equivalent circuits. We investigate this below.
Here, we explore the effect of three invariances on the equations of motion of , the current , and the Lyapunov functions. We examine invariant functions of the types , and . We work with a voltage source in series, , in the flipped parameterization:
(152) |
The results from this case are generalizable to the direct parameterization and other sources, e.g., current sources.
There is a more general invariance lurking in the background, which generalizes the one we just discussed. The analysis above can be used to understand the relaxational dynamics of memristors near equilibria. To see this, we consider a network of memristive current-driven devices, whose internal memory is
(153) |
which leads again to equation (98) replacing equation (56) for . As we have seen, the transformation for arbitrary vectors leaves the dynamics invariant, since . This invariance does not take into account the fact that we have memristive devices. However, in memristive devices described by equation (98) this is a subset of a larger invariance set, which is more generally a transformation of the form
(154) |
which preserves the dynamics, such that the time derivative is invariant, e.g., . To see this, let us equate
(155) |
After a few algebraic steps, we get the following generalized relationship:
(156) | |||||
(157) |
for any vector . To see the origin of this generalized invariance, consider for instance resistances in series with generators . We have
(158) |
To preserve the current flowing in the mesh, we can change the resistance and the generator in such a way that the ratio is preserved. For networks of memristors, equation (157) generalizes this simple invariance. Since this is valid for arbitrary derivatives, it must also be true for the particular case of equilibria, $\frac{d\vec{x}}{dt} = 0$. As a result, this invariance maps equilibrium points as a function of a change in external control.
V.1 Orthogonal vector component
Let us examine the case where we have voltage sources with an orthogonal component applied in series, such that we can write our source as . We examine the orthogonal components’ role in the time evolution of the resistance,
(159) |
The resistance in the network is independent of the orthogonal component of the source. We now examine the Lyapunov functions of linear memristive device elements in the presence of an orthogonal contribution to the resistance. Consider the Lyapunov function,
(160) |
(161) |
Again, we obtain that if , the Lyapunov function always has a negative derivative. As in the domain , and , the Lyapunov function has a negative derivative and the same passive dynamics and stability conditions as the system without orthogonal sources.
Similarly we can examine the case where there is a component of the resistance that is orthogonal to , , similarly . We examine the orthogonal components’ role in the equations of motion,
(162) |
(163) |
The time derivative of has an additional component projected into . Here, we note that without loss of generality, has no orthogonal component as any orthogonal component can be in ; therefore, we can split equation (163) into projected and orthogonally projected components. The orthogonal component has dissipative dynamics of the memristive device elements,
(164) |
The dissipative dynamics are short-term dynamics in the orthogonally projected space, as there could be a contribution to the dynamics from the non-orthogonal component. Thus, equation (164) does not fix the long-term dynamics. Treating the dynamics of the orthogonal and non-orthogonal components as separable, we see that the orthogonal component does not change the electrical current in the network. We rewrite the current using an expansion similar to equation (56) for the flipped parameterization,
(165) | |||||
The orthogonal term does not contribute to the electrical current. Next, we examine the Lyapunov functions in the presence of an orthogonal component of the circuit. We rewrite the equations of motion, equation (163),
(166) |
We work with the transformed Lyapunov function,
(167) |
We continue to work in the regime where does not have a component orthogonal to and thus is perpendicular to . We can separate and in equation (166), and we have
(168) | |||||
Where we have used the dissipative dynamics of from equation (164). The Lyapunov function again has a semi-definite negative time derivative. It has a similar form as the Lyapunov functions without an orthogonal source term. Thus, the memristive device networks with distinct orthogonal sources have the same passive dynamics and stability conditions.
As and are separable we can separate these into two Lyapunov functions for the orthogonal and non-orthogonal components,
(169) | |||||
(170) |
These two Lyapunov functions follow trivially from equation (163) by applying and . In this case, we have two equations:
(171) |
(172) |
Equation (171) reduces to the normal Lyapunov functions studied above, reproducing equation (169). We can define a Lyapunov function for the orthogonal component.
(173) | |||||
(174) | |||||
Where we have used the dissipative dynamics of . Again, we see the orthogonal component has passive dynamics.
V.2 Projected vectors
Here, we examine the case where we have some voltage sources applied in series that are a projection from a larger space, such that we can write our source vector as a projected vector. This is another form of the previous case, wherein the source has some orthogonal component; however, now we do not know a priori a trivial separation into orthogonal and non-orthogonal components. We examine the equations of motion in the case of an orthogonal component in a source term,
(175) | |||||
The time derivative of does not depend on the orthogonal components in . Next, we examine the time derivative of when there is a component of memristive devices that are orthogonal to the projection operator, , using a modified form of equation (159),
(176) | |||
(177) |
The time derivative of is just the projection of ; there is no contribution from the orthogonal component. Similarly, we can see the electrical current in the network depends only on the projected component. The electrical current can be rewritten using the expansion similar to equation (56) and equation (43),
(178) |
Thus, the current in the network depends only upon the projection of the source onto the cycle space. This implies that when the resistance is linear in the memory parameter, the current is invariant to the orthogonal component. We verify this,
(179) | |||||
We now examine the Lyapunov functions of linear memristive devices as a function of . We have the equations of motion
(180) |
We study the time derivative of a suitable Lyapunov function:
(181) |
(182) |
Thus, the Lyapunov functions are projections of . As in the case with an orthogonal source, we are guaranteed that , but the stability conditions depend on the projection of the resistance given by .
V.3 The equivalence class of spectrally invariant projectors
It was shown in Caravelli (2023) that an orthogonal transformation on the matrix can be interpreted as an edge rewiring that leaves the cycle content of the graph invariant. Since the graph-theoretical content is contained in the projector matrix, networks that are orthogonally related have orthogonally similar projection operators. We can interrelate these projection operators by transformation via orthogonal matrices. A simple example is shown here for a cycle matrix:
(183) | |||||
(184) | |||||
(185) |
This transformed network has an orthogonally similar projection matrix, . We now investigate the memristive device properties of networks with orthogonally similar projection matrices, and . We rewrite the equations of motion of with this transformed projection operator for a voltage source, , in series,
(186) |
Obviously, the dynamics is invariant when the orthogonal transformation respects symmetries of the circuit such that the projector is unchanged; this is true if the transformation respects the orthonormal basis of the projector and acts as a permutation operator. We gain further insight by examining the electrical current in the network, using the Woodbury matrix identity, equation (42), discussed above,
(187) | |||||
Thus, the transformation is equivalent to a dual transformation:
(188) |
To examine the role of this dual transformation on the time evolution of , we rewrite equation (13),
(189) |
Thus, the transformation corresponds to a transformation of the orthogonal projectors with an orthogonally similar resistance matrix,
(190) |
therefore is invariant to this orthogonal transformation if .
Next, we examine the Lyapunov function of linear memristive devices under an orthogonal transformation. We rearrange equation (189) to rewrite the equations of motion,
(191) |
We find an appropriate Lyapunov function and examine the time derivative; we have
(192) | |||||
(193) | |||||
Therefore, under this transformation, the Lyapunov functions are equivalent to the original case (without orthogonal transformations) when , a similar restriction when ; this is true when or when respects the symmetries of the circuits such that a permutation leaves the circuit unchanged. Therefore, the Lyapunov functions of the orthogonally transformed system are identical only if . Here, the Lyapunov function has a negative derivative if .
We can move the orthogonal matrix from the matrix norm to better compare the dynamics to the original case. We substitute , and we can rewrite the time derivative of the Lyapunov function as
(194) |
Thus, networks related by an orthogonal transformation, , will similarly have a Lyapunov function with a negative derivative. As the inner product is invariant to orthogonal transformations, . Here, it is apparent the stability conditions are similar to the non-transformed case.
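The spectral invariance under orthogonal transformations can be confirmed directly; here $O$ is a random orthogonal matrix obtained from a QR factorization:

```python
import numpy as np

B = np.array([[1.0, 1.0, -1.0]])
Om = B.T @ np.linalg.solve(B @ B.T, B)

O, _ = np.linalg.qr(np.random.default_rng(3).normal(size=(3, 3)))
assert np.allclose(np.sort(np.linalg.eigvalsh(Om)),
                   np.sort(np.linalg.eigvalsh(O @ Om @ O.T)))
```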
VI Laplacian matrix and the projection operator
We note a deep connection between the graph Laplacian matrix and the projection operators studied above. First, we prove that the graph Laplacian can be constructed from the incidence matrix $A$, where $W$ is the diagonal matrix of conductivity values.
Theorem 1.
The graph Laplacian can be written as $L = A\, W\, A^T$.
Proof.
We examine the Laplacian matrix element-wise, noting that $W$ is a diagonal matrix:

$L_{ij} = \sum_{e} A_{ie}\, W_{ee}\, A_{je}$ (195)

Here $A_{ie}$ indicates the incidence of edge $e$ on node $i$. As each edge is incident upon only two nodes, if $i \ne j$ the product $A_{ie} A_{je}$ is nonzero only if $e$ connects nodes $i$ and $j$. If $i = j$, then the sum is over all edges connected to node $i$. Therefore, we can find the elements of $L$:

$L_{ij} = \begin{cases} \sum_{e \ni i} W_{ee} & i = j \\ -W_{ee},\ e = (i,j) & (i,j) \in E \\ 0 & \text{otherwise} \end{cases}$ (196)

this reproduces the graph Laplacian, completing the proof. ∎
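Theorem 1 can be checked numerically on the triangle graph; the conductance values below are arbitrary:

```python
import numpy as np

A = np.array([[ 1,  0,  1],
              [-1,  1,  0],
              [ 0, -1, -1]], dtype=float)
w = np.array([0.5, 2.0, 4.0])            # conductances of the 3 edges
L = A @ np.diag(w) @ A.T

L_manual = np.array([[ w[0] + w[2], -w[0],        -w[2]       ],
                     [-w[0],         w[0] + w[1], -w[1]       ],
                     [-w[2],        -w[1],         w[1] + w[2]]])
assert np.allclose(L, L_manual)
```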
We can immediately see that the pseudoinverse of the graph Laplacian, $L^{+}$, is related to the projector operator:

$\Omega_A = \sqrt{W}\, A^T\, L^{+}\, A\, \sqrt{W}$ (197)

This retains the projection operator properties,

$\Omega_A^2 = \sqrt{W}\, A^T L^{+} \left( A\, W\, A^T \right) L^{+} A\, \sqrt{W} = \sqrt{W}\, A^T L^{+} A\, \sqrt{W} = \Omega_A$ (198)
We can similarly relate to the Laplacian matrix,
(199) | |||||
and by multiplying on the left by and on the right by we can equate,
(200a) | ||||
(200b) |
where we have used equation (197) to arrive at equation (200b). Equating the right hand sides of equations (200a) and (200b), we see that is an involutory matrix within the non-orthogonal projector operator subspace. In the case of the unweighted Laplacian matrix, we can follow the derivation above using and substituting the identity matrix for .
Similarly, we can relate the Laplacian matrix and the projectors in the case of the circuit with current injected at the nodes. Using equation (53) we can equate the inner products,

$\vec{j}_{\text{ext}} \cdot \vec{p} = \vec{i} \cdot \vec{v}$ (201)

here $\vec{j}_{\text{ext}}$ is the current injected at the nodes, $\vec{i}$ is the current configuration, and $\vec{v}$ is the voltage drop across the circuit edges. By dimensional analysis, we can see that this equates the system’s power due to the injected current with that of the current and voltage configurations.
Finally, we can follow a similar procedure to generate a matrix based on the cycle matrix related to the cycle projection operators . Using an arbitrary diagonal matrix, , denoting the properties of the edges in the circuit, we define a matrix
(202) | |||||
(203) |
Examining the matrix element-wise, an off-diagonal entry is the sum of elements corresponding to all edges which are part of both cycles; the sign of each term in the sum indicates whether the orientations of the two cycles are aligned or not on that edge, positive when the cycles are aligned and negative when they are not. As we are able to separate our fundamental cycle matrix into tree and chord sub-matrices, wherein the chord block is an identity matrix, the matrix generated from the fundamental cycles will have no component of the chord edges in the off-diagonal elements, as the chord edges are never in more than one cycle. The diagonal elements will consist of a sum of all elements which correspond to the edges forming the given cycle. Therefore we can define
(204) |
which is related to a projection operator as
(205) |
In the supplementary material, we discuss how this matrix can be used to calculate the power in the cycles of a circuit.
VII Effective resistance and memristive device correlations
In experimental studies of memristor networks, it is often necessary to characterize the network’s resistance without accessing all of the individual memristors. Instead, one is often restricted to two-point or four-point probe measurements to characterize the network, in which case the effective resistance between contacts is being measured. Thus, it is important to understand how the current and resistance in distant but accessible edges are related. This understanding could enable new methods to control the resistance between input and output edges.
Here, we use the effective resistance to study correlations between memristive device elements. The effective resistance can be calculated between individual edges, e.g., memristive devices, in the network. In the supplementary material, we develop a correlation analysis in the case where charge moves through a circuit via a random walk mechanism. Here, we generalize the correlation analysis to circuits under applied bias. By finding effective resistances between nodes in a network, we can examine the correlation of electrical current in two distant, non-adjacent, edges by constructing an effective Wheatstone bridge circuit.
VII.0.1 Two-point effective resistance
We begin by finding the effective resistance between nodes. The effective resistance, $R_{uv}$, between nodes $u$ and $v$ can be found by examining the walks between $u$ and $v$ and removing the closed cycle walks which return to $u$, and similarly examining the reverse walk from $v$:

$R_{uv} = L^{+}_{uu} + L^{+}_{vv} - 2\, L^{+}_{uv}$ (206)
The effective resistance can be considered the resistance connecting two distant nodes via a hypothetical edge; it is the resistance such that, if we apply a single volt between $u$ and $v$, the measured current at our generator would be $1/R_{uv}$. This is the definition of the two-point effective resistance, which is motivated in the supplementary material by a Markov process for particle movement in a circuit. We rederive the two-point effective resistance below. We can similarly find the effective resistance by constructing an incidence matrix for the hypothetical edge we are interested in. We construct a vector, $\vec{e}_{uv}$, that represents sending a single particle from $u$ to $v$ and is charge balanced otherwise. We define
$\vec{e}_{uv} = \vec{e}_u - \vec{e}_v$ (207)

here $\vec{e}_{uv}$ is an $n$-vector that is $+1$ in $u$, $-1$ in $v$, and zero otherwise. We can write

$R_{uv} = \vec{e}_{uv}^{\,T}\, L^{+}\, \vec{e}_{uv}$ (208)
From which we can see a transfer current projector operator reproduces the node projection operator,
(209) | |||||
(210) |
Thus the effective resistance is related to the projection operators by the single-particle current vectors. As $L^{+}$ is symmetric, the two-point effective resistance is the positive semi-definite inner product of the single-particle current vectors.
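Equation (208) can be verified on the triangle graph, where the effective resistance between two nodes also follows from the series/parallel rules:

```python
import numpy as np

A = np.array([[ 1,  0,  1],
              [-1,  1,  0],
              [ 0, -1, -1]], dtype=float)
r = np.array([1.0, 2.0, 3.0])            # edge resistances, ohms
L = A @ np.diag(1.0 / r) @ A.T           # weighted Laplacian
Lp = np.linalg.pinv(L)

e = np.zeros(3); e[0], e[1] = 1.0, -1.0  # u = node 1, v = node 2
R_12 = e @ Lp @ e
# Edge (1,2) in parallel with the path (1,3)+(3,2): 1 || (3+2) = 5/6 ohm.
assert np.isclose(R_12, 5.0 / 6.0)
```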
VII.0.2 Effective resistive circuit
If $a$ and $b$ are edges in our network, with corresponding currents $i_a$ and $i_b$, then we can define a correlation between $i_a$ and $i_b$. We define an effective circuit with effective resistors between the four nodes that define the edges $a$ and $b$. This allows us to construct a simple graph consisting of the edges $a$ and $b$ with their effective resistances (distinct from the true resistance values $R_a$ and $R_b$) and the four effective edges joining the remaining node pairs with their calculated effective resistances. A schematic of this graph is shown in Figure 4.
To find the effective resistance of each edge in our effective circuit, we need to calculate the resistance between each pair of nodes that preserves the power dissipation in the network. This is a generalization of the Star-Delta transformation of resistors. We can find such a transformation to an effective circuit by considering a partition function defined by a Gaussian model of the total power in the circuit. Here $V$ is the potential on the nodes of our full network, and the power in the system is defined as $P = V^{T} L V$. We define a partition function for the power of our circuit and integrate out the electric potential from all but the four nodes that remain in our effective circuit.
We define,
Z = \int \prod_{n \in \text{bulk}} dV_n \; e^{-V^{T} L V},   (211)
where $V_b$ is the vector of electric potentials excluding the four nodes in the effective circuit, e.g., the bulk of the network, and $V_s$ contains the four retained nodes, e.g., the surface of the network. We have written the power in terms of the Laplacian matrix for the circuit, $L$. Here we have rearranged our Laplacian and $V$ such that the surface nodes are the last entries in the vector $V$. The matrices $L_{bb}$, $L_{bs}$, $L_{sb}$, and $L_{ss}$ are the block elements of the Laplacian matrix:
L = \begin{pmatrix} L_{bb} & L_{bs} \\ L_{sb} & L_{ss} \end{pmatrix}.   (212)
$L_{bb}$ is a symmetric $(N-4) \times (N-4)$ submatrix, $L_{bs} = L_{sb}^{T}$ is an $(N-4) \times 4$ submatrix, and $L_{ss}$ is a symmetric $4 \times 4$ submatrix. Performing the integral,
Z = \sqrt{\frac{\pi^{N-4}}{\det L_{bb}}}\; e^{-V_s^{T}\left(L_{ss} - L_{sb} L_{bb}^{-1} L_{bs}\right) V_s},   (213)
here $L_{bb}$ must be a real positive-definite matrix; although it is a submatrix of $L$, it is not necessarily a positive semi-definite matrix. We can rewrite our Gaussian integral in terms of the bottom block, e.g., the surface block, of the inverse Laplacian matrix; note that this result is the same as the bottom corner of a block-wise matrix inversion (Schur inverse),
\left(L^{-1}\right)_{ss} = \left(L_{ss} - L_{sb} L_{bb}^{-1} L_{bs}\right)^{-1}:   (214)
Z \propto e^{-V_s^{T} \tilde{L} V_s}.   (215)
Here $\tilde{L} = L_{ss} - L_{sb} L_{bb}^{-1} L_{bs}$ is the effective Laplacian for our effective circuit; it is symmetric, but in general the rows and columns do not sum to zero, and it is not necessarily a singular matrix. We find the exact effective resistance matrix,
(216)
(217)
(218)
where $\tilde{B}$ is the incidence matrix for our effective circuit, which we define
(219)
We can write the elements of the effective resistance matrix in terms of the elements of the $\tilde{L}$ matrix:
(220)
The power dissipation of this system is now written in terms of the differences of electric potentials across the effective edges. Importantly, the diagonal elements of the effective resistance matrix are the effective resistances between pairs of nodes, equation (206); the off-diagonal terms are cross talk, which defines a power loss between currents in distinct edges. Due to the presence of these off-diagonal terms, it is more convenient to determine the effective conductance, and thus resistance, from the off-diagonal terms of the effective Laplacian, e.g., $\tilde{R}_{nm} = -1/\tilde{L}_{nm}$. In the supplementary material, we discuss a mean-field approximation of the power dissipation in our effective circuit using the effective resistance.
The above methods can be extended to reproduce the effective resistance between two nodes. We can rewrite equation (213) as
(221)
the matrix in the exponent can be simplified. Using a two-point incidence matrix, the two-point effective resistance is written in terms of the effective Laplacian as:
(222)
reproducing the two-point effective resistance of equation (206).
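A minimal numerical sketch of this reduction, reusing the toy square of unit resistors: the bulk node (here node 3) is integrated out via the Schur complement $L_{ss} - L_{sb} L_{bb}^{-1} L_{bs}$, matching the block-wise inversion noted above. The helper name is ours:

```python
import numpy as np

def effective_laplacian(L, keep):
    """Reduce a circuit Laplacian onto the retained ('surface') nodes by
    integrating out the bulk: the Schur complement L_ss - L_sb L_bb^{-1} L_bs."""
    bulk = [n for n in range(L.shape[0]) if n not in keep]
    Lbb = L[np.ix_(bulk, bulk)]
    Lbs = L[np.ix_(bulk, keep)]
    Lss = L[np.ix_(keep, keep)]
    return Lss - Lbs.T @ np.linalg.inv(Lbb) @ Lbs

# Square of unit resistors; eliminating node 3 merges its two unit resistors
# into an effective 2-Ohm edge between nodes 0 and 2.
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
print(effective_laplacian(L, keep=[0, 1, 2]))
# [[ 1.5 -1.  -0.5]
#  [-1.   2.  -1. ]
#  [-0.5 -1.   1.5]]
```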
VII.0.3 Current correlations
From these exact effective resistance values, we can solve for the current correlations in our effective circuit. The relation between the currents in the edges can be determined via the standard circuit graph techniques described above, by examining the sum of spanning trees that link $a$ and $b$ (see for instance Biggs (1974)). Here, we provide a different method to find correlations between currents without knowing the full network connectivity. We note that our effective circuit, which contains all possible paths between edges $a$ and $b$, can be redrawn as an unbalanced Wheatstone bridge circuit, with either edge acting as the voltage source, as shown in Figure 4 (b) and (c).
We define a correlation for a unit of current in $a$,
(223)
Here the effective resistance values are obtained from the effective Laplacian and the effective resistance matrix, e.g., by averaging.
We arrive at this correlation by noting that we can construct multiple subgraphs using the six effective resistances. We construct two unbalanced Wheatstone bridge circuits, and from these circuits we find a mapping that relates the voltages of the circuit nodes: if we know the voltages across one edge, we can find the voltages across the other. We find a forward map from the voltages of edge $a$ to those of edge $b$, and the inverse map from $b$ to $a$. We do this by removing the edge outside the Wheatstone bridge and solving for the voltage configurations; this is detailed in the supplementary material, where we explicitly show the two linear transformations for each unbalanced Wheatstone bridge configuration. The two linear mappings are,
(224)
(225)
we provide exact expressions for the matrix elements in the supplementary material.
From these two linear mappings, we find a scaling relation between the currents $i_a$ and $i_b$. There exist linear mappings for the currents:
(226)
(227)
The determinants of these mappings give us a scaling relation for the current transformation; this is detailed in the supplementary material. We have
(228)
(229)
where $R_a$ and $R_b$ are the true resistances in edges $a$ and $b$, while the remaining resistances are effective values from the effective circuit. Having now derived equation (223), we have
(230)
The two-point effective resistance is superadditive, and when all the effective resistance values in our effective circuit are positive, the correlation is bounded. When the effective resistance of the bridge is much greater than the other effective resistance values, we can simplify equation (228),
(231)
We expect this effective resistance to be large when there are few parallel paths between the edges, as when one is an input or output edge of an otherwise densely connected network.
It is experimentally more realizable to work with the two-point effective resistance for each of the six edges in our effective circuit, i.e., the diagonal elements of the effective resistance matrix. It is possible to estimate the current correlations using the two-point effective resistances in equations (228) and (229). We can determine the error in approximating the conductivity by the two-point effective resistance. Given that the matrix is not singular, we have
(232)
where the effective resistance matrix has been split into its diagonal and off-diagonal elements. We arrive at the last line using the Woodbury matrix identity. We note that, in general, the matrix norm of the off-diagonal part is not restricted to values less than one, and thus the contribution from the off-diagonal terms can be significant. It remains to be determined under what network conditions this matrix norm is less than one. The resistance for an edge can be written
(233)
the first term on the right-hand side is the two-point effective resistance for the edge; thus, the second term on the right is the error in approximating the edge resistance with the two-point effective resistance. The percent error is proportional to the inverse of the two-point effective resistance; thus, we expect this approximation to be valid for networks with high-resistance edges.
We can write the equations of motion of the memory parameter under bias, and we can calculate the divergence of the resistance values from an initially homogeneous resistance state given the full effective resistance. In the direct parameterization, we have,
(234)
Similarly, we can estimate the trajectory of the difference in memory parameters as a function of time:
(235)
In general, the appropriate dynamics of the currents and resistances will depend on the memory parameters, and thus solving for the trajectory must be done in a self-consistent way. This approach is well suited for passive resistive or memristive circuits, wherein there are no sources between the edges of interest, as the contributions of voltage or current sources are not accounted for in this current correlation. This is the case within the bulk of most memristive device networks.
VIII Discussion
In conclusion, the work presented here addresses many fundamental questions about the performance of memristive networks. The dynamics of memristor networks are nontrivial, and relating these dynamics to network structure is even more challenging. The dynamics of arbitrary memristor circuits with other two-terminal circuit elements are still harder to understand, but they are important as they produce even richer dynamics. In general it is not known whether a network will have stable dynamics, or when different circuits will have similar dynamics. In the work presented above we demonstrated techniques for answering many of these questions for a rich class of memristor networks.
Our research presents a comprehensive framework for analyzing memristor circuits, offering valuable insights into their dynamic behavior. We introduced techniques such as the direct and flipped parameterizations, extending them to encompass resistance variations dependent on polynomial expansions of the internal memory parameters through Bernstein polynomials. Moreover, we elucidated diverse control schemes to deduce equations of motion, providing a thorough understanding of network dynamics. We generalized the non-orthogonal projection operators to identify the equations of motion of circuits with nonlinear two-terminal circuit elements. From these equations, the eigenvalues give insight into the network properties; in particular, the RLC networks display resonator behavior in the underdamped case, and memristors enhance the system damping.
From the equations of motion, we derived Lyapunov functions and demonstrated that for memristors linear in the memory parameter the dynamics are passive and the equilibrium states are stable. Importantly, this is not guaranteed for memristors nonlinear in the memory parameter. In the case where the nonlinearity acts as a suitable activation function, as in neural networks, we could guarantee that the memristive device network would have passive dynamics. The Lyapunov functions provide a powerful way to study the complicated and often chaotic dynamics of memristive device networks.
Additionally, our study delved into the role of invariances, shedding light on the extensive gauge freedom present in circuit components orthogonal to the projector matrices. We found that gauge transformations do not alter the short-time dynamics or the Lyapunov-function stability for linear memristors. Spectral invariances implemented through an orthogonal transformation had similarly passive Lyapunov functions; however, they significantly impact the time derivative of these functions, highlighting the complex interplay between network properties and orthogonal components.
Moreover, we showed it is possible to devise an effective circuit to study the correlations across a network. We established a direct correspondence between the graph Laplacian and the non-orthogonal node projection operators, facilitating a deeper understanding of network properties. Leveraging these techniques, we constructed effective circuits between distant edges, from which it is possible to estimate the divergence of the internal memory parameter in distant edges. This provides a method to understand how the resistance evolves across a network.
These techniques can have direct application in experiments wherein the full network conductivity cannot be accessed. Experiments are often restricted to four-point probe measurements that characterize the effective resistance between contacts. Under these conditions, effective circuits can serve as a powerful tool to relate the electrical properties of distant edges. The effective circuits we presented can be generalized to study the relation between input and output edges in a neuromorphic network.
These results demonstrate the rich dynamics within memristive device networks. The strength of network analysis is that it can obtain many useful governing equations and relations in a complicated circuit. Further work is needed to generalize these Lyapunov functions and correlation analysis to networks with capacitors, inductors, sources, and sinks. Such a generalization would provide a set of powerful techniques to describe memristive device networks in different complex network structures. This approach offers valuable insights into resistance evolution within networks, paving the way for advancements in circuit analysis and design.
IX Acknowledgements
Ongoing work by Samip Karki identified the Schottky barrier parameterization. The work of F. Caravelli and F. Barrows was carried out under the auspices of the NNSA of the U.S. DoE at LANL under Contract No. DE-AC52-06NA25396. F. Caravelli was financed via DOE LDRD grant 20240245ER, while F. Barrows was financed via a Director's Fellowship. F. Barrows gratefully acknowledges support from the Center for Nonlinear Studies at LANL.
References
- Pershin and Ventra (2011) Y. V. Pershin and M. D. Ventra, “Memory effects in complex materials and nanoscale systems,” Advances in Physics 60, 145–227 (2011).
- Traversa and et. al. (2014) F. L. Traversa and et. al., “Dynamic computing random access memory,” Nanotechnology 25, 285201 (2014).
- Carbajal et al. (2015) J. P. Carbajal, J. Dambre, M. Hermans, and B. Schrauwen, “Memristor models for machine learning,” Neural Computation 27 (2015), 10.1162/NECO_a_00694, arXiv:1406.2210 .
- Du and et. al. (2017) C. Du and et. al., “Reservoir computing using dynamic memristors for temporal information processing,” Nature Comm. 8 (2017), 10.1038/s41467-017-02337-y.
- Milano et al. (2022) G. Milano, G. Pedretti, K. Montano, et al., “In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire networks,” Nat. Mater., 195–202 (2022).
- Fu et al. (2020) K. Fu, R. Zhu, A. Loeffler, et al., “Reservoir computing with neuromemristive nanowire networks,” Proceedings of the International Joint Conference on Neural Networks (IJCNN) 20006228 (2020).
- Xu, Wang, and Yan (2021) W. Xu, J. Wang, and X. Yan, “Advances in memristor-based neural networks,” Frontiers in Nanotechnology 3 (2021), 10.3389/fnano.2021.645995.
- Pershin, Slipko, and Di Ventra (2013) Y. V. Pershin, V. A. Slipko, and M. Di Ventra, “Complex dynamics and scale invariance of one-dimensional memristive networks,” Physical Review E 87, 022116 (2013).
- Diaz-Alvarez et al. (2019) A. Diaz-Alvarez, R. Higuchi, P. Sanz-Leon, I. Marcus, Y. Shingaya, A. Z. Stieg, J. K. Gimzewski, Z. Kuncic, and T. Nakayama, “Emergent dynamics of neuromorphic nanowire networks,” Scientific Reports 9, 14920 (2019).
- Zhu et al. (2021) R. Zhu, J. Hochstetter, A. Loeffler, A. Diaz-Alvarez, T. Nakayama, J. T. Lizier, and Z. Kuncic, “Information dynamics in neuromorphic nanowire networks,” Scientific Reports 11, 13047 (2021).
- Caravelli, Traversa, and Di Ventra (2017) F. Caravelli, F. L. Traversa, and M. Di Ventra, “Complex dynamics of memristive circuits: Analytical results and universal slow relaxation,” Physical Review E 95, 022140 (2017).
- Riaza (2011) R. Riaza, “Dynamical properties of electrical circuits with fully nonlinear memristors,” Nonlinear Analysis: Real World Applications 12, 3674–3686 (2011).
- Matsumoto (1984) T. Matsumoto, “A chaotic attractor from chua’s circuit,” IEEE Transactions on Circuits and Systems 31, 1055–1058 (1984).
- Madan (1993) R. N. Madan, Chua’s Circuit: A Paradigm for Chaos (World Scientific, 1993), https://www.worldscientific.com/doi/pdf/10.1142/1997.
- Zegarac and Caravelli (2019) A. Zegarac and F. Caravelli, “Memristive networks: From graph theory to statistical physics,” EPL (Europhysics Letters) 125, 10001 (2019).
- Tellini et al. (2021) B. Tellini, M. Bologna, K. J. Chandía, and M. Macucci, “Revisiting the memristor concept within basic circuit theory,” International Journal of Circuit Theory and Applications 49, 3488–3506 (2021), https://onlinelibrary.wiley.com/doi/pdf/10.1002/cta.3111 .
- Caravelli and Sheldon (2020) F. Caravelli and F. C. Sheldon, “Phases of memristive circuits via an interacting disorder approach,” https://arxiv.org/abs/2009.00114 (2020).
- Caravelli and Barucca (2018) F. Caravelli and P. Barucca, “A mean-field model of memristive circuit interaction,” EPL (Europhysics Letters) 122, 40008 (2018).
- Stern, Liu, and Balasubramanian (2024) M. Stern, A. J. Liu, and V. Balasubramanian, “Physical effects of learning,” Physical Review E 109 (2024), 10.1103/physreve.109.024311.
- Xiang, Ren, and Tan (2022) J. Xiang, J. Ren, and M. Tan, “Stability analysis for memristor-based stochastic multi-layer neural networks with coupling disturbance,” Chaos, Solitons & Fractals 165, 112771 (2022).
- Dhivakaran, Gowrisankar, and Vinodkumar (2024) P. B. Dhivakaran, M. Gowrisankar, and A. Vinodkumar, “Bipartite synchronization of fractional order multiple memristor coupled delayed neural networks with event triggered pinning control,” Neural Processing Letters 56, 50 (2024).
- Caravelli (2019) F. Caravelli, “Asymptotic behavior of memristive circuits,” Entropy 21, 789 (2019).
- Sheldon, Kolchinsky, and Caravelli (2022) F. C. Sheldon, A. Kolchinsky, and F. Caravelli, “The computational capacity of lrc, memristive and hybrid reservoirs,” Phys. Rev. E 106 (2022).
- Baccetti et al. (2023) V. Baccetti, R. Zhu, Z. Kuncic, and F. Caravelli, “Ergodicity, lack thereof, and the performance of reservoir computing with memristive networks,” (2023), arXiv:2310.09530 [cond-mat.dis-nn] .
- White, Lee, and Sompolinsky (2004) O. L. White, D. D. Lee, and H. Sompolinsky, “Short-term memory in orthogonal neural networks,” Physical review letters 92, 148102 (2004).
- Caravelli, Sheldon, and Traversa (2021) F. Caravelli, F. Sheldon, and F. L. Traversa, “Global minimization via classical tunneling assisted by collective force field formation,” Science Advances 7, 1542 (2021), https://www.science.org/doi/pdf/10.1126/sciadv.abh1542 .
- Joglekar and Wolf (2009) Y. N. Joglekar and S. J. Wolf, “The elusive memristor: properties of basic electrical circuits,” Eur. J. of Phys. 30, 661–675 (2009).
- Ascoli et al. (2016) A. Ascoli, R. Tetzlaff, L. O. Chua, J. P. Strachan, and R. S. Williams, “History erase effect in a non-volatile memristor,” IEEE Transactions on Circuits and Systems I: Regular Papers 63, 389–400 (2016).
- Caravelli (2023) F. Caravelli, “Cycle equivalence classes, orthogonal weingarten calculus, and the mean field theory of memristive systems,” arXiv:2304.14890 (2023).
- Biggs (1974) N. Biggs, Algebraic Graph Theory, 2nd ed., Cambridge Mathematical Library (Cambridge University Press, 1974).
- Doyle and Snell (2000) P. G. Doyle and J. L. Snell, “Random walks and electric networks,” (2000), arXiv:math/0001057 [math.PR] .
XI Supplementary
XI.1 Space of Circuit Variables
Here, we prove that current and voltage configurations are dual and that there is a correspondence between the two representations of a circuit: the cycle space, corresponding to the loops of the cycle matrix $A$, and the vertex space, corresponding to the vertices of the incidence matrix $B$. We define four spaces:
• CURR-SP (current space): the set of all current configurations (i.e., the null-space of $B$).
• VOLT-SP (voltage space): the set of all voltage configurations (i.e., the null-space of $A$).
• CYCLE-SP (cycle space): the set of all linear combinations of cycles (i.e., the row-space of $A$).
• VERT-SP (vertex space): the set of all linear combinations of rows of $B$ (i.e., the row-space of $B$).
We note that every row of $A$ is a valid current configuration: for every vertex (row of $B$) and cycle (row of $A$), if the vertex is not in the cycle it does not contribute, and otherwise one edge of the cycle is directed towards the vertex while another is directed away, canceling. Thus $B A^{T} = 0$, and as a consequence every linear combination of cycles is a valid current configuration, thus:
\text{CYCLE-SP} \subseteq \text{CURR-SP}.   (237)
Taking the transpose of the above matrix equation, $A B^{T} = 0$, and as a consequence every linear combination of rows of $B$ is a valid voltage configuration, thus:
\text{VERT-SP} \subseteq \text{VOLT-SP}.   (238)
The next step will be to show that the opposite inclusions hold, and that any current (voltage) configuration may be written as a linear combination of rows of $A$ ($B$).
XI.1.1 Spanning Trees:
Assume the graph is connected with $N$ vertices. A tree is a connected graph with no cycles. A spanning tree of the graph contains all $N$ vertices; its edges are the tree branches, and the remaining edges are chords. Every spanning tree has $N-1$ branches.
Consider a spanning tree of a graph and choose one node of this tree as the root or ground node. There is a unique path from the root to any node $n$. Define the orientation along this path from the root to $n$. Now, for a given voltage configuration, define the potential $p_n$ as the sum of voltage elements along the path with the appropriate orientation (added if the orientations of the tree and the edge agree and subtracted if not). With this construction, for any edge $(m,n)$ the voltage is $v_{mn} = p_m - p_n$ and we can write,
v = B^{T} p.   (239)
We have thus shown that every voltage configuration lies in VERT-SP. As every voltage can be defined by the $N$ numbers in the potential, we have
\text{VOLT-SP} = \text{VERT-SP}.   (240)
The relationship $v = B^{T} p$ is also analogous to writing a voltage configuration as a sum of Green's functions given by the rows of $B$, as each row $n$ corresponds to the potential configuration on the nodes such that the potential is 1 for node $n$ and zero elsewhere.
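These inclusions can be checked numerically on a small example. The sketch below assumes a hypothetical triangle circuit and verifies that $B A^{T} = 0$ and that a potential-generated voltage $v = B^{T} p$ sums to zero around the cycle:

```python
import numpy as np

# Triangle circuit: nodes 0, 1, 2; directed edges e0=(0->1), e1=(1->2), e2=(2->0).
B = np.array([[ 1,  0, -1],   # node 0   (incidence matrix: rows = nodes,
              [-1,  1,  0],   # node 1    columns = edges)
              [ 0, -1,  1]])  # node 2
A = np.array([[1, 1, 1]])     # the single independent cycle e0 -> e1 -> e2

# Every cycle is a valid current configuration (KCL): B A^T = 0.
print(B @ A.T)                # all zeros
# Every potential generates a valid voltage configuration (KVL): A v = 0.
p = np.array([0.0, 2.0, 5.0])
v = B.T @ p
print(A @ v)                  # 0: the voltages sum to zero around the cycle
```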
XI.1.2 Algebraic Methods:
As a consequence of the above argument,
v^{T} j = p^{T} B j = 0,   (241)
which is known as Tellegen’s theorem. So, the current configurations live in an orthogonal space to the voltage configurations. We need to do a bit of work to make use of this.
First, pick a maximal set of linearly independent voltage configurations by using, for example, rows of $B$. Now pick a maximal set of linearly independent columns of $B$ and the corresponding unit vectors. We claim that, taken together, these are linearly independent. Suppose there are coefficients such that,
(242)
Acting with $B$, the coefficients of the unit vectors must vanish by the linear independence of the chosen columns, and then the remaining coefficients vanish by the linear independence of the voltage configurations. For an arbitrary vector, we can write
(243)
where we have used the linearly independent set of columns and unit vectors from above. We thus decompose the vector into a piece in the span of the voltage configurations and another piece written in terms of unit vectors, proving,
(244)
We can thus form a linearly independent set of the voltage vectors and linearly independent rows of A, with which we can decompose an arbitrary vector.
So, writing a current configuration in this decomposition, the first term must vanish by Tellegen's theorem, and thus the current may be written as a linear combination of the rows of $A$; we have shown the opposite inclusion, giving
\text{CURR-SP} = \text{CYCLE-SP}.   (245)
Additionally, we have
(246)
XI.2 Differential equation for the resistance
We note that it is possible to write a differential equation for the physically accessible resistive state without referring to the internal memory parameters. In the case in which the resistance is linear in the memory parameter, this is rather straightforward and we have
(247)
where the constants are fixed by the linear parameterization.
On the other hand, if the resistance is not linear in the memory parameter, the equation is slightly more involved. We use
(248)
and thus
(249)
To write the equation as a function of the resistance alone, we must invoke the convexity of the resistance in the memory parameter. If the inverse function exists, e.g., if the resistance is an invertible function of the memory parameter, then we can write the differential equation in terms of the resistance only. The function is invertible if, for instance, it is monotonic on its domain, as we expect in passive memristive devices. We now use the inverse function theorem: since we assume that the resistance is monotonic in the memory parameter, we have
(250)
We can thus write, also in the nonlinear case, a differential equation of the form
(251)
where the functions on the right-hand side are generic, subject to the invertibility and monotonicity conditions above.
XI.3 Nonlinear Lyapunov functions differential equations
The differential equations given by the quadratic and external field terms, equations (112) and (113), are linear nonhomogeneous differential equations of first order, of the type
\frac{dy}{dt} = a(t)\, y + b(t),   (252)
with the same $a(t)$ in both cases, while $b(t)$ differs between the quadratic term and the external field term. In this case, by writing $y = u\,v$ we solve separately for the two equations
\frac{du}{dt} = a(t)\, u,   (253)
\frac{dv}{dt} = b(t)/u.   (254)
The first equation is solved via
u(t) = e^{\int_0^t a(s)\, ds},   (255)
and since $y = u\, v$, choosing $u(0) = 1$, the remaining equation for $v$ reads
\frac{dv}{dt} = b(t)\, e^{-\int_0^t a(s)\, ds},   (256)
whose solution is
v(t) = v(0) + \int_0^t b(s)\, e^{-\int_0^s a(r)\, dr}\, ds.   (257)
These are used to arrive at the forms of and used in the main text.
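As a sanity check of the factorization $y = uv$, one can compare a direct numerical integration against equations (255)-(257); the coefficients $a(t) = -1$ and $b(t) = \sin t$ below are illustrative assumptions, not the circuit quantities of equations (112) and (113):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

a = lambda t: -1.0          # assumed coefficient, for illustration only
b = lambda t: np.sin(t)     # assumed forcing, for illustration only

# Direct integration of dy/dt = a(t) y + b(t) with y(0) = 0 ...
sol = solve_ivp(lambda t, y: a(t) * y + b(t), (0.0, 5.0), [0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

# ... versus the factorized solution y = u v of equations (253)-(257):
t = np.linspace(0.0, 5.0, 50)
u = np.exp(-t)              # u(t) = exp(int_0^t a(s) ds) for a = -1
v = np.array([quad(lambda s: b(s) * np.exp(s), 0.0, T)[0] for T in t])
print(np.max(np.abs(u * v - sol.sol(t)[0])))   # small: the solutions agree
```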
XI.4 Explicit expressions for non-orthogonal projectors
We would like to derive the properties of the non-orthogonal projection operator in terms of the orthogonal projection operator. Let us write
(258)
Introducing a shorthand for this expression, we have
(259)
where the second term drops out as it is a constant. Using the appropriate expansion formula, we obtain
(260)
and thus
(261)
Here the orthogonal projection operator defined earlier appears. Let us prove that the equation defines a projection operator. In order to prove that the equation above is an identity, it is sufficient to observe that
(262)
In fact, we have
(263)
which proves the equality. Then, we have
(264)
which shows that the formula we derived defines a projection operator. Note that in the orthogonal limit the formula reduces to an orthogonal projection operator. Let us now construct the orthogonal complement, which is defined as
(265)
We see that
(266)
In the case in which the projector is orthogonal, we recover the correct complementary projector. In the memristive RLC dynamics, we will need explicit expressions for both the projector and its complement.
XI.5 Power in circuit cycles
This matrix can be used to calculate the power within the cycles of a memristive circuit via a mesh analysis. In the mesh analysis, the memristive cycles form a planar sub-circuit that is partitioned into fundamental cycles, all with the same orientation. Cycles overlap in no more than one edge, and each of these fundamental cycles is assigned a unique mesh current. In this case, the current in a shared edge is the difference of the mesh currents of the two adjoining cycles, the off-diagonal elements of the overlap matrix will be negative, and we can write the power in the mesh analysis as,
(267)
Here, each diagonal term involves the current of a cycle in the mesh analysis and the sum of all resistive elements in that cycle, while resistive elements either exist exclusively in one cycle or are shared between two cycles; the off-diagonal elements are the shared resistors.
For example, for a simple mesh circuit consisting of two cycles with three resistors, shown in Figure 5, we have
(268)
(269)
and the power in the cycles is
(270)
which is the power we would expect from mesh currents.
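The same bookkeeping is easy to verify numerically. In the sketch below, the resistor values and mesh currents are arbitrary assumptions; the power computed from the cycle-overlap matrix matches the per-edge Joule heating:

```python
import numpy as np

# Two meshes, three resistors: R1 only in cycle 1, R3 only in cycle 2,
# R2 on the shared edge (traversed in opposite directions by the two cycles).
R1, R2, R3 = 1.0, 2.0, 3.0
i_mesh = np.array([0.5, -0.2])       # assumed mesh currents

A = np.array([[1,  1, 0],            # cycle 1 uses edges (R1, R2)
              [0, -1, 1]])           # cycle 2 traverses R2 oppositely
R = np.diag([R1, R2, R3])

P_mesh = i_mesh @ (A @ R @ A.T) @ i_mesh   # power from the cycle-overlap matrix
j = A.T @ i_mesh                           # edge currents from mesh currents
P_edge = j @ R @ j                         # per-edge Joule heating
print(P_mesh, P_edge)                      # 1.35 1.35: the two agree
```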
XI.6 Effective power loss
We can rewrite the power dissipation due to Joule heating in our effective circuit using the effective resistance matrix from the main text,
(271)
As the effective resistance matrix has off-diagonal terms, here we present a mean-field treatment to calculate the power dissipation when the current in each edge can be treated as nearly uniform with a local fluctuation, e.g., $i_a = \bar{i} + \delta i_a$, where $\bar{i}$ is the mean current and $\delta i_a$ is the small local fluctuation in current. We rewrite the power dissipation,
(272)
In the first line we have neglected second-order fluctuation terms, and in the second line we have used the definition of the mean current. In the third line we have completed the square, and in the fourth line we have again neglected second-order fluctuation terms. Thus we can see that the effective resistance of an edge is the sum of a row of the effective resistance matrix. This approximation is valid only when the current fluctuations are small.
XI.7 Correlation Analysis
XI.7.1 Two-point effective resistance by a Markov process
Before we find the effective resistance of a circuit, it is helpful to work with the nodes of the circuit. The movement of charge in a network can be studied via a Markov process, wherein current jumps between nodes with a probability depending on the conductivity of the edges. We build a Markov matrix $W$; the matrix elements encode the normalized probabilities that a charge moves between adjacent nodes, with $W_{ij}$ the probability of moving from node $i$ to node $j$.
W_{ij} = \frac{G_{ij}}{\sum_k G_{ik}}.   (273)
Here $G_{ij}$ is the conductivity of the edge $(i,j)$, and the sum over $k$ is the sum over all edges incident on $i$; $W_{ij}$ is thus the conductivity of $(i,j)$ normalized by the conductivity of all edges incident on $i$. The matrix has a zero diagonal.
We gain insight into the current in a memristive device network by studying the paths charged particles can take between nodes. The probability of walks in this network (paths that the current could take) can be calculated from $W$. The total probability of all $n$-step walks that start at $i$ and reach a distant node $j$ can be calculated,
P^{(n)}_{ij} = \left(W^{n}\right)_{ij}.   (274)
As the charge moves through the network on a fast time scale compared to the change in the memristive devices, we need to examine long walks. Given that the walk probabilities decay, we can write the sum of all the possible walks in the network as a geometric series:
\sum_{n=0}^{\infty} W^{n} = \left(\mathbb{1} - W\right)^{-1} = \mathcal{L}^{-1}.   (275)
Here, $\mathcal{L} = \mathbb{1} - W$ is the normalized Laplacian matrix. In the case of homogeneous resistors, the weights are equal to the inverse degree of each node, and in this case $\mathcal{L}$ reduces to the standard normalized Laplacian matrix. In the more general case, the probability is the normalized conductivity of all edges connected to a given node, and the diagonal elements are unity, as given in
\mathcal{L} = D^{-1} L = \mathbb{1} - W,   (276)
here $D$ is the diagonal of $L$, which normalizes the conductivity. We arrive at equation (276) using Theorem 1 and dividing each row by the corresponding diagonal element of the matrix, such that the off-diagonal elements of each row sum to $-1$ and each row sums to zero.
If we define $\mathcal{L}^{+}$ to be the pseudoinverse of $\mathcal{L}$, then we have
(277)
The effective resistance is a resistance such that if the entire network were removed, leaving only nodes $i$ and $j$ connected by an edge with resistance $R_{ij}$, the potential difference and corresponding electrical flows between these nodes would be invariant. As such, we can find an effective circuit in order to calculate the correlation between electrical currents in non-adjacent edges. The effective resistance, $R_{ij}$, between nodes $i$ and $j$ can be found by examining the walks from $i$ to $j$, removing the closed walks which return to $i$, and similarly examining the reverse walk from $j$:
(278)
R_{ij} = L^{+}_{ii} - L^{+}_{ij} - L^{+}_{ji} + L^{+}_{jj}.   (279)
Here, $L^{+}$ is the unnormalized inverse Laplacian; we have found the effective resistance by examining the walks in $W$ and dividing by the normalization factor to restore the resistance values.
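The construction above fits in a few lines of NumPy; the square of unit resistors is again a toy assumption:

```python
import numpy as np

# Conductance matrix of the unit-resistor square (nodes 0-1-2-3-0).
G = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
d = G.sum(axis=1)

W = G / d[:, None]               # Markov matrix, equation (273)
L_norm = np.eye(4) - W           # normalized Laplacian, equation (276)
L = np.diag(d) - G               # unnormalized Laplacian
print(np.allclose(L, np.diag(d) @ L_norm))   # True: L = D (1 - W)

# Effective resistance from the pseudoinverse, as in equation (279):
Lp = np.linalg.pinv(L)
print(Lp[0, 0] + Lp[1, 1] - 2 * Lp[0, 1])    # 0.75, matching equation (206)
```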
XI.7.2 Current Correlation in a Random Walk
Here we examine a toy model wherein current moves through the memristive device network via a Markov process Doyle and Snell (2000). As we are interested in studying the correlation of currents in edges, we build a Markov matrix wherein charge jumps between edges. In effect, we transform the network into the line graph, wherein edges in our original network are transformed into nodes. Edges that are adjacent in our network become adjacent nodes in our new graph; these new nodes are connected by asymmetric edges. This Markov matrix encodes the probability of a stochastic process, e.g., a particle flowing through the network randomly without external bias, diffusing in the line graph. The probability depends on the conductivity of the edges:
(280)
Here the numerator is the conductivity of an edge of our original network adjacent to the starting edge, and the sum in equation (280) runs over all edges adjacent to the starting edge; each matrix element is thus the conductivity of a neighboring edge normalized by the total conductivity of all edges adjacent to the starting edge. We can see that the matrix has a zero diagonal as above, but it is now asymmetric.
As our random walk occurs via all-or-nothing jumps, we set a bound on the correlation of current in our network by underestimating the probability that current will propagate through the network. In our model, a particle can move in either direction on a given edge without accounting for the conductivity of the edge, i.e., a particle is equally likely to hop from either end of a given edge. Thus we overestimate the probability the charge will, in effect, be reflected by an edge.
This is the probability that a particle starting at one edge travels to a distant edge along the shortest path; the distant edge is no longer necessarily adjacent to the starting edge. The random walk probability can be calculated from the Markov matrix,
(281)
In a network, charge will not flow in cycles but instead flows in a directed manner until reaching an equilibrium. Thus, we are only interested in finding the shortest path, along which the charge does not return to any edge it has traversed. For example, in a simple square circuit with homogeneous resistance:
(282)
This random walk probability is a minimum probability that a particle injected in one edge will travel through the other. This can be seen by noting that, for a unit charge injected in an edge, the matrix element gives the probability that the charge will flow through an adjacent edge, and iterating gives the probability of each subsequent hop. Note that in this treatment a charge is unbiased in the direction in which it flows; a charge injected into any edge will move to adjacent edges based solely on the conductivity of the adjacent edges.
In the systems we study, we treat the current as quasistatic, as it equilibrates between edges and traverses the network on a fast time scale compared to the change in the resistance of the memristive devices. Thus, the walk probability is a correlation of the currents in the two edges under random walk conditions, e.g., when the current flow is unbiased. Given a known current in one edge, we can put a bound on the current in the other. We have
(283)
For example, the bound is saturated if the current is equivalent in the two edges, and it vanishes if the currents in the two edges are independent. The upper bound is found from the reverse walk, as the Markov matrix is asymmetric.
These are inequalities, as the probability of moving between adjacent edges is smaller in the random walk case than in the directed walk case; in the random walk case, there is a probability of moving in either direction along an edge. In addition, here we are only examining the probability that current injected at a single edge transmits through the other. Under normal conditions, multiple edges could contribute to the current in any individual edge.
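For concreteness, here is a minimal sketch of the line-graph Markov matrix for the square circuit discussed above. The normalization follows equation (280) as we read it, so the printed two-hop probability is illustrative rather than a value quoted in the text:

```python
import numpy as np

# Square circuit: edges e0..e3 around the loop; in the line graph each edge
# becomes a node, adjacent to the edges it shares an endpoint with.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
G = np.ones(len(edges))                      # homogeneous edge conductances

n = len(edges)
W_edge = np.zeros((n, n))
for p, (u1, u2) in enumerate(edges):
    for q, (v1, v2) in enumerate(edges):
        if p != q and {u1, u2} & {v1, v2}:   # the two edges share a node
            W_edge[p, q] = G[q]              # weight by target-edge conductance
    W_edge[p] /= W_edge[p].sum()             # normalize over adjacent edges

# Probability of the two-hop shortest walk e0 -> e1 -> e2:
print(W_edge[0, 1] * W_edge[1, 2])           # 0.25 for this toy square
```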
This correlation analysis can offer insight into correlations of the resistance values in distant memristive devices. Given a network of memristive devices initialized with a homogeneous resistance, the rate at which the memory parameter diverges in distant edges depends on the walk probability. In the direct parameterization, we have,
(284)
We can thus put bounds on the divergence of the memory parameters. Similarly, we can put bounds on the trajectories of their differences, which relate the resistance in distant memristive devices:
(285)
In general, the appropriate dynamics will depend on the memory parameters, and thus solving for the trajectory must be done in a self-consistent way. Inherent in the analysis thus far is that the charged particles travel through the network nearly randomly; inhomogeneities in the current distributions are due solely to network connectivity. In this treatment, the voltage in the network does not strongly bias the propagation of electrons. This is only valid under low voltage bias, such that the potential in the network can be considered locally flat; in effect, particles are injected in an edge, but there is no bias in the network. In the main text, the case of a circuit under inhomogeneous applied bias in a network with spatially varying resistance is studied.
XI.7.3 Wheatstone bridges
We have two Wheatstone bridge circuits, which are equivalent representations of the circuit built from the six effective resistances, as described in the main text. In one circuit, we can treat the edge with a known current, $a$, as a current injector that connects the ends of the Wheatstone bridge, and the unknown edge, $b$, is the bridge. In the other circuit, the unknown edge, $b$, is the source that connects the ends of the Wheatstone bridge, where the bridge is now $a$.
In order to solve for valid current configurations, we remove one edge of our circuit; in both cases, we remove the edge linking the ends of our Wheatstone bridge, the source edge. In the first case, the nodes of edge $a$ sit at the top and bottom of our Wheatstone bridge, with the nodes of edge $b$ on the ends of the bridge, as shown in Figure 4 (b) and (c) in the main text. We use an incidence matrix to solve for the voltages of one edge in terms of the other. Similarly, in the second circuit the positions of $a$ and $b$ are reversed and we solve the inverse problem. The following linear mappings are found,
(286)
(287)
XI.7.4 Linear mapping between voltages
Here we detail how the linear mappings between voltages in our unbalanced Wheatstone bridge circuits are used to determine a correlation between the currents $i_a$ and $i_b$, in edges $a$ and $b$, with resistances $R_a$ and $R_b$, respectively. From the main text we have,
(288)
(289)
We can use these mappings to calculate the currents from the resistances $R_a$ and $R_b$,
(290)
(291)
We can rearrange these matrices to find a mapping between the electrical currents, which we use to find a ratio between $i_a$ and $i_b$,
(292)
(293)
The superscript $+$ denotes the pseudoinverse. Note that the denominator cancels the common prefactor. We can go through the same process to obtain the inverse mapping:
(294)
If we are interested in the correlation between the currents, we treat the known current as a unit current, and equations (293) and (294) reproduce the current transformation from the main text. We can write,
(295)
from which we define the dimensionless correlation in the main text for unit current in $a$.