Learning and organization of memory for evolving patterns
Abstract
Storing memory for molecular recognition is an efficient strategy for responding to external stimuli. Biological processes use different strategies to store memory. In the olfactory cortex, synaptic connections form when stimulated by an odor, and establish distributed memory that can be retrieved upon re-exposure. In contrast, the immune system encodes specialized memory by diverse receptors that recognize a multitude of evolving pathogens. Despite the mechanistic differences between olfactory and immune memory, these systems can still be viewed as implementing different information-encoding strategies. Here, we present a theoretical framework with artificial neural networks to characterize optimal memory strategies for both static and dynamic (evolving) patterns. Our approach is a generalization of the energy-based Hopfield model, in which memory is stored as a network's energy minima. We find that while classical Hopfield networks with distributed memory can efficiently encode a memory of static patterns, they are inadequate against evolving patterns. To follow an evolving pattern, we show that a distributed network should use a higher learning rate, which in turn can distort the energy landscape associated with the stored memory attractors. Specifically, narrow connecting paths emerge between memory attractors, leading to misclassification of evolving patterns. We demonstrate that compartmentalized networks with specialized subnetworks are the optimal solution to memory storage for evolving patterns. We postulate that evolution of pathogens may be the reason for the immune system to encode a focused memory, in contrast to the distributed memory used in the olfactory cortex, which interacts with mixtures of static odors.
I Introduction
Storing memory for molecular recognition is an efficient strategy for sensing and responding to external stimuli. Apart from the cortical memory in the nervous system, molecular memory is also an integral part of the immune response, present in a broad range of organisms, from the CRISPR-Cas system in bacteria [1, 2, 3] to adaptive immunity in vertebrates [4, 5, 6]. In all of these systems, a molecular encounter is encoded as a memory and is later retrieved and activated in response to a similar stimulus, be it a pathogenic reinfection or a re-exposure to a pheromone. Despite this high-level similarity, the immune system and the synaptic nervous system utilize vastly distinct molecular mechanisms for storage and retrieval of their memory.
Memory storage, and in particular associative memory in the hippocampus and olfactory cortex, has been a focus of theoretical and computational studies in neuroscience [7, 8, 9, 10, 11]. In the case of the olfactory cortex, the input is a combinatorial pattern produced by olfactory receptors, which recognize the constituent mono-molecules of a given odor. A given odor, composed of many mono-molecules at different concentrations [12, 13, 14] drawn from a vast space of distinct mono-molecules, reacts with a much smaller repertoire of olfactory receptors in mammals [15, 16, 17, 18, 19], resulting in a distributed signal and spatiotemporal pattern in the olfactory bulb [20]. This pattern is transmitted to the olfactory cortex, which serves as a pattern recognition device and enables an organism to distinguish orders of magnitude more odors than the number of olfactory receptors [21, 22, 23]. The synaptic connections in the cortex are formed as they are co-stimulated by a given odor pattern, thus forming an associative memory that can be retrieved in future exposures [7, 8, 9, 10, 11, 24].
Artificial neural networks that store auto-associative memory are used to model the olfactory cortex. In these networks, input patterns trigger interactions between encoding nodes. The ensemble of interacting nodes keeps a robust memory of the pattern, since the nodes can be simultaneously stimulated upon re-exposure, and thus a stored pattern can be recovered from only part of its content. This mode of memory storage resembles the co-activation of synaptic connections in a cortex. Energy-based models, such as Hopfield neural networks with Hebbian update rules [25], are among such systems, in which memory is stored as the network's energy minima [26]; see Fig. 1. The connection between the standard Hopfield network and real synaptic neural networks has been a subject of debate over the past decades. Still, the Hopfield network provides a simple and solvable coarse-grained model of the synaptic network, relevant for working memory in the olfactory system and the hippocampus [24].
Immune memory is encoded very differently from associative memory in the nervous system and the olfactory cortex. First, unlike olfactory receptors, immune receptors are extremely diverse and can specifically recognize pathogenic molecules without the need for a distributed and combinatorial encoding. In vertebrates, for example, the adaptive immune system consists of tens of billions of diverse B- and T-cells, whose unique surface receptors are generated through genomic rearrangement, mutation, and selection, and can recognize and mount specific responses against a multitude of pathogens [5]. Immune cells activated in response to a pathogen can differentiate into memory cells, which are long-lived and can respond more efficiently upon reinfection. As in most molecular interactions, immune-pathogen recognition is cross-reactive, which allows memory receptors to recognize slightly evolved forms of the pathogen [5, 27, 28, 29, 30, 31, 32]. Nonetheless, unlike the distributed memory in the olfactory cortex, the receptors encoding immune memory are focused and can only interact with pathogens of limited evolutionary divergence from the primary infection, in response to which the memory was originally generated [5].
There is no question that there are vast mechanistic and molecular differences between how memory is stored in the olfactory system compared to the immune system. However, we can still ask which key features of the two systems prompt such distinct information-encoding strategies for their respective distributed versus specialized memory. To probe the emergence of distinct memory strategies, we propose a generalized Hopfield model that can learn and store memory for both static and dynamic (evolving) patterns. We formulate this problem as an optimization task to find a strategy (i.e., a learning rate and a network structure) that confers the highest accuracy for memory retrieval in a network (Fig. 1).
In contrast to the static case, we show that a distributed memory in the style of a classical Hopfield model [26] fails to work efficiently for evolving patterns. We show that the optimal learning rate should increase with faster evolution of the patterns, so that a network can follow their dynamics. This heightened learning rate tends to carve narrow connecting paths (mountain passes) between the memory attractors of a network's energy landscape, through which patterns can equilibrate into, and be associated with, a wrong memory. To overcome this misclassification, we demonstrate that specialized memory compartments emerge in a neural network as an optimal solution to efficiently recognize and retrieve a memory of out-of-equilibrium evolving patterns. Our results suggest that evolution of pathogenic patterns may be one of the key reasons why the immune system encodes a focused memory, as opposed to the distributed memory used in the olfactory system, for which molecular mixtures largely present static patterns. Beyond this biological intuition, our model offers a principle-based analytical framework to study learning and memory generation in out-of-equilibrium dynamical systems.

II Results
II.1 Model of working memory for evolving patterns

To probe memory strategies against different types of stimuli, we propose a generalized energy-based model of associative memory, in which a Hopfield-like neural network is able to learn and subsequently recognize binary patterns. This neural network is characterized by an energy landscape, and memory is stored as the network's energy minima. We encode the target of recognition (stimulus) in a binary vector (pattern) of length $L$, $\vec\sigma = (\sigma_1, \dots, \sigma_L)$, with entries $\sigma_i \in \{-1, +1\}$ (Fig. 1A). To store associative memory, we define a fully connected network represented by an interaction matrix $J$ of size $L \times L$, and use a Hopfield-like energy function (Hamiltonian) $E(\vec\sigma\,|\,J) = -\hat\sigma^{T} J \hat\sigma$ to describe pattern recognition [26] (Fig. 1C). Here, we use the short-hand notation $\hat\sigma \equiv \vec\sigma / \sqrt{L}$ for the normalized pattern vector and $\hat\sigma^{T}$ for its transpose, resulting in the normalized scalar product $\hat\sigma^{T}\hat\sigma = 1$ and the matrix product $\hat\sigma^{T} J \hat\sigma = \frac{1}{L}\sum_{i,j} \sigma_i J_{ij} \sigma_j$.
The network undergoes a learning process, during which different patterns are presented sequentially and in random order (Fig. 1B). As a pattern $\vec\sigma$ is presented, the interaction matrix is updated according to the following Hebbian update rule [33]

$J(t+1) = (1-\lambda)\, J(t) + \lambda \left( \hat\sigma \hat\sigma^{T} - \tfrac{\mathbb{1}}{L} \right)$    (1)

Here $\lambda$ is the learning rate, and the subtraction of the scaled identity matrix $\mathbb{1}/L$ keeps the diagonal elements of $J$ at zero. In this model, the memorized patterns are represented by energy minima associated with the matrix $J$. We consider the case where the number of different pattern classes $m$ is well below the Hopfield capacity of the network (i.e., $m \ll 0.14\,L$; see refs. [26, 34, 35]).
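To make the learning dynamics concrete, the following minimal Python/NumPy sketch implements one Hebbian update of the interaction matrix and the recognition energy defined above; the function names and the identification of code variables with the symbols $J$, $\lambda$, and $L$ are our choices for illustration, not part of the original model specification.

```python
import numpy as np

def hebbian_update(J, sigma, lam):
    """One update of the interaction matrix J (L x L) with a presented
    binary pattern sigma (entries +/-1) at learning rate lam (eq. 1).
    Subtracting the scaled identity keeps the diagonal of J at zero."""
    L = sigma.size
    sigma_hat = sigma / np.sqrt(L)  # normalized pattern vector
    return (1.0 - lam) * J + lam * (np.outer(sigma_hat, sigma_hat) - np.eye(L) / L)

def energy(J, sigma):
    """Hopfield-like recognition energy E = -sigma_hat^T J sigma_hat."""
    sigma_hat = sigma / np.sqrt(sigma.size)
    return -sigma_hat @ J @ sigma_hat
```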
With the update rule in eq. 1, the network develops energy minima as associative memory close to each of the previously presented pattern classes $\vec\sigma^{\alpha}$ (with class index $\alpha = 1, \dots, m$) (Fig. 1C). Although the network also has minima close to the negated patterns, i.e., to $-\vec\sigma^{\alpha}$, they do not play any role in what follows. To find an associative memory we let a presented pattern equilibrate in the energy landscape, whereby we accept spin-flips with a probability $\min(1, e^{-\beta \Delta E})$, where $\beta$ is the inverse equilibration (Hopfield) temperature and $\Delta E$ is the change in energy due to the spin-flip (Appendix A). In the low-temperature regime (i.e., high $\beta$), equilibration in networks with working memory drives a presented pattern $\vec\sigma^{\alpha}$ towards a similar attractor $\vec\kappa^{\alpha}$, reflecting the memory associated with the corresponding energy minimum (Fig. 1C). This similarity is measured by the overlap $\chi^{\alpha} = \hat\kappa^{\alpha\,T}\hat\sigma^{\alpha}$ and determines the accuracy of the associative memory.
Unlike the classical cases of pattern recognition by Hopfield networks, we assume that patterns can evolve over time with a mutation rate $\mu$ that reflects the average number of spin-flips in a given pattern per network update event (Fig. 1A). Therefore, the expected number of spin-flips in a given pattern between two encounters is $\mu(m-1)$, as two successive encounters of the same pattern class are on average separated by encounters (and updates) of the network with the $m-1$ other pattern classes. We work in the regime where the mutation rate is small enough that the evolved patterns stemming from a common ancestor (i.e., the members of the class $\alpha$) remain more similar to each other than to members of the other classes.
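As an illustration of the pattern dynamics, the sketch below evolves a pattern by flipping a random set of spins, with an average of $\mu$ flips per network update event; the use of a Poisson draw for the number of flips is an assumption made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def mutate(sigma, mu):
    """Evolve a binary pattern by flipping a random set of spins, with an
    average of mu flips per network update event (Poisson draw assumed)."""
    n_flips = min(rng.poisson(mu), sigma.size)
    new_sigma = sigma.copy()
    if n_flips > 0:
        idx = rng.choice(sigma.size, size=n_flips, replace=False)
        new_sigma[idx] *= -1
    return new_sigma
```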
The special case of static patterns ($\mu = 0$) can reflect distinct odor molecules, for which associative memory is stored in the olfactory cortex. On the other hand, the distinct pattern classes in the dynamic case ($\mu > 0$) can be attributed to different types of evolving pathogens (e.g., influenza, HIV, etc.), and the patterns within a class to different variants of a given pathogen. In our model, we will use the mutation rate $\mu$ as an order parameter to characterize the different phases of memory strategies in biological systems.
II.2 Optimal learning rate for evolving patterns
In the classical Hopfield model ($\mu = 0$), the learning rate is set to very small values for the network to efficiently learn the patterns [33]. For evolving patterns, the learning rate should be tuned so that the network can efficiently update the memory retained from the prior learning steps. At each encounter, the overlap $\chi^{\alpha}_{\lambda} = \hat\kappa^{\alpha\,T}_{\lambda}\hat\sigma^{\alpha}$ between a pattern and the corresponding attractor of the associated energy minimum determines the accuracy of pattern recognition; the subscript $\lambda$ explicitly indicates the dependency of the network's energy landscape on the learning rate. We declare pattern recognition as successful if the accuracy of reconstruction (overlap) is larger than a set threshold, but our results are insensitive to the exact value of this threshold (Appendix A and Fig. S3). We define a network's performance as the asymptotic accuracy of its associative memory averaged over the ensemble of pattern classes (Fig. 2A),
$\mathcal{P}(\lambda) = \big\langle \chi^{\alpha}_{\lambda} \big\rangle$    (2)
The expectation $\langle\cdot\rangle$ is an empirical average over the ensemble of presented pattern classes over time, which in the stationary state approaches the asymptotic average of the argument. The optimal learning rate is determined by maximizing the network's performance, $\lambda^{*} = \operatorname{argmax}_{\lambda}\, \mathcal{P}(\lambda)$.
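A minimal sketch of how the retrieval accuracy and the performance average could be computed is given below. The numerical value of the recognition threshold is a placeholder (the specific value used in the figures is not reproduced here), and setting unrecognized patterns to zero overlap follows the convention described in Appendix A.

```python
import numpy as np

RECOGNITION_THRESHOLD = 0.8  # placeholder value, not taken from the paper

def overlap(attractor, pattern):
    """Normalized overlap chi = (1/L) sum_i kappa_i sigma_i between the
    retrieved attractor and the originally presented pattern."""
    return float(attractor @ pattern) / pattern.size

def performance(overlaps, threshold=RECOGNITION_THRESHOLD):
    """Empirical performance: average overlap over presented patterns,
    counting unrecognized patterns (overlap below threshold) as zero."""
    chi = np.asarray(overlaps, dtype=float)
    return np.where(chi < threshold, 0.0, chi).mean()
```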
The optimal learning rate increases with growing mutation rate, so that a network can follow the evolving patterns (Fig. 2B). Although it is difficult to analytically calculate the optimal learning rate, we can use an approximate approach and find the learning rate that minimizes the expected energy of the patterns $\langle E \rangle_{\lambda, q}$, assuming that patterns are shown to the network in a fixed order (Appendix B). In this case, the expected energy is given by
$\langle E \rangle_{\lambda, q} = -\dfrac{\lambda\,(1-\lambda)^{m-1}\, q^{2}}{1-(1-\lambda)^{m}\, q^{2}}$    (3)
where $q$ is the upper bound for the overlap between a pattern and its evolved form, when separated by the $m-1$ other patterns that are shown in between. The expected energy grows slowly with increasing mutation rate (i.e., with decreasing overlap $q$), and the approximation in eq. 3 agrees very well with the numerical estimates for the scenario where patterns are shown in a random order (Fig. 2C). In the regime where memory can still be associated with the evolved patterns, minimization of the expected energy (eq. 3) results in an optimal learning rate,
$\lambda^{*} \simeq \dfrac{\sqrt{1-q^{2}}}{m}$    (4)
that scales with the square root of the mutation rate. Notably, this theoretical approximation agrees well with the numerical estimates (Fig. 2B).
II.3 Reduced accuracy of distributed associative memory against evolving patterns
Despite using an optimized learning rate, a network's accuracy in pattern retrieval decays much faster than the naïve expectation based solely on the evolutionary divergence of patterns between two encounters with a given class (i.e., the overlap $q$); see Fig. 2A. There are two reasons for this reduced accuracy: (i) the lag of the network's memory behind the evolving patterns, and (ii) misclassification of presented patterns.
The memory attractors associated with a given pattern class can lag behind and only reflect the older patterns presented prior to the most recent encounter of the network with the specified class. We characterize this lag by identifying the previous version of the pattern that has the maximum overlap with the network's energy landscape at a given time (Appendix B.2); the lag $\tau$ measures time in units of generations, i.e., the effective separation time between encounters with patterns of the same class. An increase in the optimal learning rate reduces the time lag and enables the network to follow the evolving patterns more closely (Fig. S1). The accuracy of a memory subject to such a lag decays with the number of lagged generations as $\sim q^{\tau}$, which is faster than the naïve expectation based on a single generation of divergence; see Fig. 2A. This memory lag explains the loss of performance for patterns that are still reconstructed by the network's memory attractors (i.e., those with an overlap above the recognition threshold; Fig. S1A). However, the overall performance of the network remains lower than the expectation obtained by taking into account this time lag (Fig. 2A); this discrepancy leads us to the second cause of reduced accuracy, i.e., pattern misclassification.
As the learning rate increases, the structure of the network's energy landscape changes. In particular, we see that with large learning rates a few narrow paths emerge between the memory attractors of the network (Fig. 1C). As a result, the equilibration process for pattern retrieval can drive a presented pattern through these connecting paths towards a wrong memory attractor (i.e., one with a small overlap with the presented pattern), which leads to pattern misclassification (Fig. S2A and Fig. S3A,C). These paths are narrow, as only a few possible spin-flips (mutations) can drive a pattern from one valley to another during equilibration (Fig. S3B,D and Fig. S4A,C). In other words, a large learning rate carves narrow mountain passes into the network's energy landscape (Fig. 1C), causing a growing fraction of patterns to be misclassified. Interestingly, pattern misclassification occurs even in the absence of mutations for networks with an increased learning rate (Fig. S2A). This suggests that mutations only indirectly contribute to the misclassification of memory: they necessitate a larger learning rate for the networks to operate optimally, which in turn results in the emergence of mountain passes in the energy landscape.
To understand memory misclassification, particularly for patterns with moderately low (i.e., non-random) energy (Fig. 2C), we use spectral decomposition to characterize the relative positioning of patterns in the energy landscape (Appendix C). The vector representing each pattern can be expressed in terms of the network's eigenvectors $\vec v_k$, $\hat\sigma = \sum_k a_k \vec v_k$, where the overlap $a_k = \vec v_k^{\,T}\hat\sigma$ is the component of the pattern along the $k^{\text{th}}$ direction of the network's coordinate system. During equilibration, we flip individual spins in a pattern and accept the flips based on their contribution to the recognition energy. We can view these spin-flips as rotations of the pattern in the space spanned by the eigenvectors of the network. Stability of a pattern depends on whether these rotations could carry the pattern from its original subspace over to an alternative region associated with a different energy minimum.
There are two key factors that modulate the stability of a pattern in a network. The first is the dimensionality of the subspace in which a pattern resides, i.e., the support of the pattern by the network's eigenvectors. We quantify the support of a pattern using the participation ratio [36, 37], which counts the number of eigenvectors that substantially overlap with the pattern. A small support indicates that the pattern is spanned by only a few eigenvectors and is restricted to a small subspace, whereas a large support indicates that the pattern is orthogonal to only a few eigenvectors. As the learning rate increases, patterns lie in lower-dimensional subspaces supported by only a few eigenvectors (Fig. S4B,D). This effect is exacerbated by the fact that the energy gaps between the eigenstates of the network also broaden with increasing learning rate (Fig. S5). The combination of a smaller support for patterns and a larger energy gap in networks with increased learning rates destabilizes patterns by enabling spin-flips during equilibration to drive a pattern from one subspace to another, through the mountain passes carved within the landscape; see Appendix C and Fig. S6 for the exact analytical criteria for pattern stability.
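As a sketch of how the support of a pattern can be quantified, the snippet below projects a pattern onto the eigenvectors of the (symmetric) interaction matrix and computes the standard participation-ratio estimator; the normalization convention is one common choice and is used here only for illustration.

```python
import numpy as np

def participation_ratio(J, sigma):
    """Estimate how many eigenvectors of J substantially support a pattern:
    PR = (sum_k a_k^2)^2 / sum_k a_k^4, where a_k is the overlap of the
    normalized pattern with the k-th eigenvector of the symmetric matrix J."""
    sigma_hat = sigma / np.sqrt(sigma.size)
    _, eigvecs = np.linalg.eigh(J)  # J is symmetric by construction
    a2 = (eigvecs.T @ sigma_hat) ** 2
    return a2.sum() ** 2 / (a2 ** 2).sum()
```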
II.4 Compartmentalized learning and memory storage
Hopfield-like networks can store accurate associative memory for static patterns. However, these networks fail to store retrievable associative memory for evolving patterns (e.g., pathogens), even when learning is done at an optimal rate (Fig. 2). To overcome this difficulty, we propose to store memory in compartmentalized networks with $D$ sub-networks, each of size $L_c$ (i.e., the number of nodes in a sub-network). Each compartment (sub-network) can store a few of the total of $m$ pattern classes without interference from the other compartments (Fig. 1D).
Recognition of a pattern in a compartmentalized network involves a two-step process (Fig. 1D): First, we choose the sub-network associated with compartment $c$ with a probability $p_c \propto e^{-\beta_c E(\vec\sigma\,|\,J^{(c)})}$, where $\beta_c$ is the inverse temperature for this decision and the proportionality constant is fixed by normalization over the compartments (eq. S5). Once the compartment is chosen, we follow the recipe for pattern retrieval in the energy landscape of the associated sub-network, whereby the pattern equilibrates into a memory attractor.
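A minimal sketch of this first, compartment-selection step is shown below: the compartment is drawn from a Boltzmann (softmax) distribution over the sub-networks' recognition energies. The variable names (e.g., beta_c) and the numerically stabilized softmax are our choices, and the sketch assumes the presented pattern has the dimension of the sub-networks.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def recognition_energy(J_sub, sigma):
    """E = -sigma_hat^T J_sub sigma_hat for one sub-network."""
    sigma_hat = sigma / np.sqrt(sigma.size)
    return -sigma_hat @ J_sub @ sigma_hat

def choose_compartment(subnetworks, sigma, beta_c):
    """Pick compartment c with probability proportional to exp(-beta_c * E_c),
    where E_c is the recognition energy of the pattern in sub-network c."""
    E = np.array([recognition_energy(J_sub, sigma) for J_sub in subnetworks])
    w = np.exp(-beta_c * (E - E.min()))  # shift for numerical stability
    return rng.choice(len(subnetworks), p=w / w.sum())
```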
On average, each compartment stores a memory for $m/D$ pattern classes. To keep networks with different numbers of compartments comparable, we scale the size of each compartment such that $D\,L_c = L$ remains fixed, which keeps the (Hopfield) capacity of the network invariant under compartmentalization. Moreover, the mutation rate experienced by each sub-network between its successive updates scales with the number of compartments $D$, which keeps the effective mutation rate (i.e., the divergence of a pattern between two encounters with its class) invariant under compartmentalization. As a result, the optimal learning rate (eq. 4) scales with the number of compartments as $\lambda^{*}_{D} \sim \sqrt{D}\,\lambda^{*}_{1}$. However, since each update is restricted to a sub-network of size $L_c$ at a time, the expected amount of updates within a network remains invariant under compartmentalization. Lastly, since the change in energy by a single spin-flip scales inversely with the sub-network size $L_c$, we introduce a correspondingly scaled Hopfield temperature to make the equilibration process comparable across networks with different numbers of compartments. No such scaling is necessary for the compartment-choice inverse temperature $\beta_c$.
By restricting the networks to satisfy the aforementioned scaling relationships, we are left with two independent variables, i.e., (i) the number of compartments $D$, and (ii) the learning rate $\lambda$, which together define a memory strategy $(D, \lambda)$. A memory strategy can then be optimized to achieve maximum accuracy for retrieving an associative memory of evolving patterns with a given effective mutation rate.
II.5 Phases of learning and memory production
Pattern retrieval can be stochastic due to the noise in choosing the right compartment from the sub-networks (tuned by the inverse temperature $\beta_c$), or the noise in equilibrating into the right memory attractor in the energy landscape of the chosen sub-network (tuned by the Hopfield inverse temperature $\beta$). We use mutual information to quantify the accuracy of pattern-compartment association, where larger values indicate a more accurate association; see Appendix A and Fig. 4. The optimal performance determines the overall accuracy of memory retrieval, which depends on both finding the right compartment and equilibrating into the correct memory attractor. The amplitudes of intra- versus inter-compartment stochasticity determine the optimal strategy used for learning and retrieval of patterns with a specified mutation rate. Varying the corresponding inverse temperatures $(\beta, \beta_c)$ results in three distinct phases of optimal memory storage.





i. Small intra- and inter-compartment noise (large $\beta$ and large $\beta_c$).
In this regime, neither the compartment choice nor the pattern retrieval within a compartment is subject to strong noise. As a result, networks are functional with working memory and the optimal strategies can achieve the highest overall performance. For small mutation rates, we see that all networks perform equally well and can achieve almost perfect performance, irrespective of their number of compartments (Figs. 3A, 4A,B). As the mutation rate increases, networks with a larger number of compartments show a more favorable performance, and the 1-to-1 specialized network, in which each pattern class is stored in a separate compartment (i.e., $D = m$), reaches the optimal performance (Figs. 3A, 4C,D). As predicted by the theory, the optimal learning rate for compartmentalized networks scales with the mutation rate as $\lambda^{*} \sim \sqrt{\mu}$, except for the 1-to-1 network, in which the learning rate approaches its maximal value and sub-networks are steadily updated upon each encounter with a pattern (Fig. 3B). This rapid update is expected, since there is no interference between the stored memories in the 1-to-1 network, and a steady update can keep the stored memory in each sub-network close to its associated pattern class without disrupting the other energy minima.
ii. Small intra- and large inter-compartment noise (large $\beta$, small $\beta_c$).
In this regime there is low noise for equilibration within a compartment but a high level of noise in choosing the right compartment. The optimal strategy in this regime is to store patterns in a single network with a distributed memory, since identifying the correct compartment is difficult due to noise (Fig. 4B,D). For static patterns this strategy corresponds to the classical Hopfield model with a high accuracy (Figs. 2A, 4A,B). On the other hand, for evolving patterns this strategy results in a partial memory (Fig. 4C,D), due to the reduced accuracy of the distributed associative memory shown in Fig. 2A. Interestingly, the transition between the optimal strategy with highly specific (compartmentalized) memory for evolving patterns in the first regime and the generalized (distributed) memory in this regime is very sharp (Fig. 4D). This sharp transition suggests that, depending on the noise in choosing the compartments, an optimal strategy either stores memory in a 1-to-1 specialized fashion ($D = m$) or in a distributed, generalized fashion ($D = 1$), but no intermediate solution (i.e., quasi-specialized memory with $1 < D < m$) is desirable.
iii. Large intra-compartment noise (small $\beta$). In this regime there is a high level of noise in equilibration within a network and memory cannot be reliably retrieved (Fig. 4A,C), regardless of the compartment-choice temperature $\beta_c$. However, the impact of the equilibration noise on the accuracy of memory retrieval depends on the degree of compartmentalization. For the 1-to-1 specialized network ($D = m$), the transition between high and low accuracy is smooth and occurs at a characteristic value of $\beta$, below which no memory attractor can be found. As we increase the equilibration noise (decrease $\beta$), the networks with distributed memory ($D < m$) show two-step transitions, with a plateau at intermediate values of $\beta$. Similar to the 1-to-1 network, the first transition results in a reduced accuracy of the networks' memory retrieval. At this transition point, the networks' learning rate approaches its maximum value (Fig. S7), which implies that memory is stored (and steadily updated) for only $D$ patterns (i.e., one pattern per sub-network). Due to the invariance of the networks' mean energy under compartmentalization, the depth of the energy minima associated with the stored memory in each sub-network scales as $1/D$, resulting in deeper and more stable energy minima in networks with a smaller number of compartments. Therefore, as the noise increases (i.e., $\beta$ decreases), we observe a gradient in the transition from partial retrieval to a no-memory state, with the most compartmentalized network (largest $D$) transitioning the fastest, reflecting the shallowness of its energy minima.
Taken together, the optimal strategy leading to working memory depends on whether a network is trained to learn and retrieve dynamic (evolving) patterns or static patterns. Specifically, we see that the 1-to-1 specialized network is the unique optimal solution for storing working memory for evolving patterns, whereas the distributed generalized memory (i.e., the classical Hopfield network) performs equally well in learning and retrieval of memory for static patterns. The contrast between these memory strategies can shed light on the distinct molecular mechanisms utilized by different biological systems to store memory.
III Discussion
Storing and retrieving memory from prior molecular interactions is an efficient scheme to sense and respond to external stimuli. Here, we introduced a flexible energy-based network model that can adopt different memory strategies, including distributed memory, similar to the classical Hopfield network, or compartmentalized memory. The learning rate and the number of compartments in a network define a memory strategy, and we probed the efficacy of different strategies for static and dynamic patterns. We found that Hopfield-like networks with distributed memory are highly accurate in storing associative memory for static patterns. However, these networks fail to reliably store retrievable associative memory for evolving patterns, even when learning is done at an optimal rate.
To achieve a high accuracy, we showed that a retrievable memory for evolving patterns should be compartmentalized, where each pattern class is stored in a separate sub-network. In addition, we found a sharp transition between the different phases of working memory (i.e., compartmentalized and distributed memory), suggesting that intermediate solutions (i.e., quasi-specialized memory) are sub-optimal against evolving patterns.
The contrast between these memory strategies is reflective of the distinct molecular mechanisms used for memory storage in the adaptive immune system and in the olfactory cortex. In particular, the memory of odor complexes, which can be assumed as static, is stored in a distributed fashion in the olfactory cortex [7, 8, 9, 10, 11, 24]. On the other hand, the adaptive immune system, which encounters evolving pathogens, allocates distinct immune cells (i.e., compartments) to store a memory for different types of pathogens (e.g. different variants of influenza or HIV)—a strategy that resembles that of the 1-to-1 specialized networks [5, 27, 28, 29, 30, 31, 32]. Our results suggest that pathogenic evolution may be one of the reasons for the immune system to encode a specialized memory, as opposed to the distributed memory used in the olfactory system.
The increase in the optimal learning rate in anticipation of patterns’ evolution significantly changes the structure of the energy landscape for associative memory. In particular, we found the emergence of narrow connectors (mountain passes) between the memory attractors of a network, which destabilize the equilibration process and significantly reduce the accuracy of memory retrieval. Indeed, tuning the learning rate as a hyper-parameter is one of the challenges of current machine learning algorithms with deep neural networks (DNNs) [38, 39]. The goal is to navigate the tradeoff between the speed (i.e., rate of convergence) and accuracy without overshooting during optimization. It will be interesting to see how the insights developed in this work can inform rational approaches to choose an optimal learning rate in optimization tasks with DNNs.
Machine learning algorithms with DNNs [38] and modified Hopfield networks [40, 41, 42, 43] are able to accurately classify hierarchically correlated patterns, where different objects can be organized into an ultrametric tree based on some specified relations of similarity. For example, faces of cats and dogs have the oval shape in common but they branch out in the ultrametric tree according to the organism-specific features, e.g., whiskers in a cat, and the cat branch can then further diversify based on the breed-specific features. A classification algorithm can use these hierarchical relations to find features common among members of a given sub-type (cats) that can distinguish them from another sub-type (dogs). Although evolving patterns within each class are correlated, the random evolutionary dynamics of these patterns does not build a hierarchical structure where a pattern class branches into two sub-classes that share a common ancestral root. Therefore, the optimal memory strategies found here for evolving patterns are distinct from those of the hierarchically correlated patterns. It will be interesting to see how our approaches can be implemented in DNNs to classify dynamic and evolving patterns.
Acknowledgement
This work has been supported by the DFG grant (SFB1310) for Predictability in Evolution and the MPRG funding through the Max Planck Society. O.H.S. also acknowledges funding from the Georg-August University School of Science (GAUSS) and the Fulbright Foundation.
Appendix A Computational procedures
A1 Initialization of the network
A network $J$ (with elements $J_{ij}$) is presented with $m$ random (orthogonal) patterns $\vec\sigma^{\alpha}$ (with $\alpha = 1, \dots, m$), with entries $\sigma^{\alpha}_{i} \in \{-1, +1\}$, reflecting the pattern classes. For a network with $D$ compartments (with sub-networks $J^{(c)}$, $c = 1, \dots, D$), we initialize each sub-network at time $t = 0$ with a Hebbian superposition of the patterns assigned to it; here, $S_c$ is a set of patterns chosen randomly (without replacement) and initially assigned to the compartment (sub-network) $c$. We then let the network undergo an initial learning process. At each step an arbitrary pattern is presented to the network and a sub-network is chosen for an update with a probability
$p_{c} = \dfrac{e^{-\beta_{c}\, E(\vec\sigma\,|\,J^{(c)})}}{\sum_{c'=1}^{D} e^{-\beta_{c}\, E(\vec\sigma\,|\,J^{(c')})}}$    (S5)
where the energy is defined as,
$E(\vec\sigma\,|\,J^{(c)}) = -\hat\sigma^{T} J^{(c)} \hat\sigma$    (S6)
and $\beta_c$ is the inverse temperature associated with choosing the right compartment. We then update the selected sub-network $J^{(c)}$ using the Hebbian update rule,
$J^{(c)}(t+1) = (1-\lambda)\, J^{(c)}(t) + \lambda \left( \hat\sigma \hat\sigma^{T} - \tfrac{\mathbb{1}}{L_c} \right)$    (S7)
For dynamic patterns, the presented patterns undergo evolution with mutation rate $\mu$, which reflects the average number of spin-flips in a given pattern per network update event (Fig. 1).
Our goal is to study the memory retrieval problem in a network that has reached its steady state. The state of a network at time step $n$ can be traced back to the initial state as,

$J(n) = (1-\lambda)^{n} J(0) + \lambda \sum_{t=1}^{n} (1-\lambda)^{\,n-t} \left( \hat\sigma(t)\hat\sigma(t)^{T} - \tfrac{\mathbb{1}}{L} \right)$    (S8)

The contribution of the initial state to the state of the network at time $n$ decays as $(1-\lambda)^{n}$ (eq. S8). Therefore, we choose the number of steps of the initial learning process to be large compared to $1/\lambda$. This criterion ensures that the memory of the initial state is removed from the network. We then use this updated network to collect the statistics for memory retrieval. To report a specific quantity from the network (e.g., the energy), we pool the samples collected from each of the 50 realizations.
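Putting the above steps together, the sketch below runs the initial learning process for the simplest case of a single-compartment network ($D = 1$); it is meant only to illustrate the protocol of this appendix. The Poisson draw for mutations and the burn-in length expressed in units of $1/\lambda$ are our placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def train_network(L=100, m=10, lam=0.01, mu=0.1, burnin_factor=20):
    """Initial learning process for a single-compartment network: random
    +/-1 pattern classes evolve over time and drive Hebbian updates of J."""
    patterns = rng.choice([-1, 1], size=(m, L))  # pattern classes
    J = np.zeros((L, L))
    n_steps = int(burnin_factor / lam)           # burn-in length, placeholder choice
    for _ in range(n_steps):
        # every pattern class accumulates, on average, mu spin-flips per update event
        for a in range(m):
            n_flips = min(rng.poisson(mu), L)
            if n_flips:
                idx = rng.choice(L, size=n_flips, replace=False)
                patterns[a, idx] *= -1
        # present a randomly chosen class and apply the Hebbian update (eq. S7)
        alpha = rng.integers(m)
        s_hat = patterns[alpha] / np.sqrt(L)
        J = (1 - lam) * J + lam * (np.outer(s_hat, s_hat) - np.eye(L) / L)
    return J, patterns

J, patterns = train_network()
```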
A2 Pattern retrieval from associative memory
Once the trained network approaches a stationary state, we collect the statistics of the stored memory.
To find a memory attractor for a given pattern we use a Metropolis algorithm in the energy landscape (eq. S6). To do so, we make spin-flips in a presented pattern and accept a spin-flip with probability
$p_{\text{flip}} = \min\!\left(1,\ e^{-\beta\,\Delta E}\right)$    (S9)
where $\Delta E$ is the change in energy due to the spin-flip and $\beta$ is the inverse (Hopfield) temperature for pattern retrieval in the network (see Fig. 1). We repeat this step for a fixed number of Monte Carlo steps, which is sufficient to find a minimum of the landscape (see Fig. S3).
For systems with more than one compartment ($D > 1$), we first choose a compartment according to eq. S5, and then perform the Metropolis algorithm within the associated compartment.
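A minimal sketch of this retrieval step is given below: starting from the presented pattern, single spin-flips are proposed and accepted with the Metropolis probability of eq. S9; for $D > 1$, one would first select a sub-network via eq. S5 and run the same routine within that compartment. The number of Monte Carlo steps is a placeholder, and the closed-form expression for the energy change assumes a symmetric interaction matrix with zero diagonal, as in this model.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def retrieve(J, sigma, beta, n_mc_steps=5000):
    """Metropolis search for the memory attractor of a presented pattern:
    propose single spin-flips and accept with min(1, exp(-beta * dE)),
    where E = -sigma_hat^T J sigma_hat (eq. S6).  n_mc_steps is a
    placeholder for the number of equilibration steps."""
    L = sigma.size
    state = sigma.astype(float).copy()
    for _ in range(n_mc_steps):
        i = rng.integers(L)
        # energy change of flipping spin i (J symmetric, zero diagonal)
        dE = 4.0 / L * state[i] * (J[i] @ state)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            state[i] *= -1
    return state
```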
After finding the energy minima, we continue to update the systems for a further series of learning steps. At each step we present patterns as described above and collect the statistics of the recognition energy between a presented pattern and the memory compartment assigned according to eq. S5. These measurements are used to describe the energy statistics (Figs. 2, S2) of the patterns and the accuracy of pattern-compartment association (Fig. 4B,D). After these steps, we again use the Metropolis algorithm to find the memory attractors associated with the presented patterns. We repeat this analysis for independent realizations of the initializing pattern classes, for each set of parameters.
When calculating the mean performance of a strategy (see Figs. 2, 3, 4, S7), we set the overlap between attractor and pattern equal to zero when patterns are not recognized (i.e., when the overlap falls below the recognition threshold). As a result, systems can only achieve a non-zero performance if they recognize some of the patterns. This choice eliminates the finite-size effect of a random overlap between an attractor and a pattern (see Fig. S3). This correction is especially important when comparing systems with different sub-network sizes $L_c$ in the regime of strong equilibration noise (Figs. 4, S7), where random overlaps for small $L_c$ could otherwise result in a larger mean performance compared to larger systems that correctly reconstruct a fraction of the patterns.
A3 Accuracy of pattern-compartment association
We use the mutual information between the set of pattern classes and the set of all compartments to quantify the accuracy in associating a presented pattern with the correct compartment,
$\mathcal{I} = H(\text{compartments}) - H(\text{compartments}\,|\,\text{patterns})$    (S10)
Here $H(\text{compartments})$ and $H(\text{compartments}\,|\,\text{patterns})$ are the entropy of the compartments and the conditional entropy of the compartments given the presented patterns, respectively. If chosen randomly, the entropy associated with choosing a compartment is $\log D$. The mutual information (eq. S10) measures the reduction in the entropy of compartments due to the association between the patterns and the compartments, as measured by the conditional entropy. Figure 4B,D shows the mutual information scaled by its upper bound $\log D$, in order to compare networks with different numbers of compartments.
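For illustration, the scaled mutual information can be estimated from the empirical counts of pattern-compartment assignments as in the sketch below; the finite-count estimator and the use of $\log D$ as the normalizing upper bound are our assumptions (the scaling is only meaningful for $D > 1$).

```python
import numpy as np

def scaled_mutual_information(class_labels, compartment_labels, n_compartments):
    """I(class; compartment) = H(compartment) - H(compartment | class),
    estimated from empirical counts and scaled by log(n_compartments)."""
    classes = np.unique(class_labels)
    joint = np.zeros((classes.size, n_compartments))
    for a, c in zip(class_labels, compartment_labels):
        joint[np.searchsorted(classes, a), c] += 1
    p = joint / joint.sum()
    p_class = p.sum(axis=1, keepdims=True)   # marginal over classes
    p_comp = p.sum(axis=0, keepdims=True)    # marginal over compartments
    mask = p > 0
    mi = np.sum(p[mask] * np.log(p[mask] / (p_class @ p_comp)[mask]))
    return mi / np.log(n_compartments)
```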
Appendix B Estimating energy and optimal learning rate for working memory
B1 Approximate solution for optimal learning rate
The optimal learning rate is determined by maximizing the network’s performance (eq. 2) against evolving patterns with a specified mutation rate:
$\lambda^{*} = \underset{\lambda}{\operatorname{argmax}}\ \mathcal{P}(\lambda) = \underset{\lambda}{\operatorname{argmax}}\ \big\langle \chi^{\alpha}_{\lambda} \big\rangle$    (S11)
We can numerically estimate the optimal learning rate as defined in eq. S11; see Figs. 2, 3. However, the exact analytical evaluation of the optimal learning rate is difficult, and we instead use an approximate approach and find the learning rate that minimizes the expected energy of the patterns in the stationary state $\langle E \rangle_{\lambda, q}$, assuming that patterns are shown to the network in a fixed order. Here, the subscripts explicitly indicate the learning rate $\lambda$ of the network and the evolutionary overlap $q$ of the pattern. To evaluate an analytical approximation for the energy, we first evaluate the state of the network at time step $n$, given all the prior encounters of the network with patterns shown in a fixed order.
$J(n) + \tfrac{\mathbb{1}}{L} = \lambda \sum_{t=-\infty}^{n} (1-\lambda)^{\,n-t}\, \hat\sigma(t)\hat\sigma(t)^{T}$    (S12)
(S13)–(S14)
Here, $\hat\sigma(t)$ denotes the (normalized) pattern vector from the class presented to the network at time step $t$. Without loss of generality, we assumed that the last pattern presented to the network at time step $n$ is from the first pattern class, which enabled us to split the sum in eq. S12 into two separate summations over pattern classes and over time steps (generations) (eq. S13). Adding the identity matrix on the left-hand side of eq. S12 assures that the diagonal elements vanish, as defined in eq. S7.
The mean energy of the patterns, which in our setup is the energy of the pattern from the first class presented at time step $n$, follows
(S15)
Since the pattern families are orthogonal to each other, we can express the overlap between patterns of the same class at different times in terms of the per-generation overlap $q$, and simplify the energy function in eq. S15,
$\langle E \rangle_{\lambda, q} = -\dfrac{\lambda\,(1-\lambda)^{m-1}\, q^{2}}{1-(1-\lambda)^{m}\, q^{2}}$    (S16)
Since accurate pattern retrieval depends on the depth of the energy valley for the associative memory, we will use the expected energy of the patterns as a proxy for the performance of the network. We can find the approximate optimal learning rate that minimizes the expected energy by setting $\partial \langle E \rangle_{\lambda, q} / \partial \lambda = 0$, which results in
$\lambda^{*} \simeq \dfrac{\sqrt{1-q^{2}}}{m}$    (S17)
where we used the fact that both the mutation rate and the learning rate are small, and therefore expanded the stationarity condition up to the leading orders in these parameters.
B2 Lag of memory against evolving patterns
The memory attractors associated with a given pattern class can lag behind and only reflect the older patterns presented prior to the most recent encounter of the network with the specified class. As a result, the upper bound for the performance of a network is determined by both the evolutionary divergence of patterns between two encounters and the number of generations $\tau$ by which the stored memory lags behind; we measure the lag in units of generations, where one generation is defined as the average time between a network's encounters with the same pattern class, i.e., $m$ update steps. We characterize this lag by identifying the past pattern (at time $n - \tau m$) that has the maximum overlap with the network's energy landscape at a given time $n$:
$\tau = \underset{\tau'}{\operatorname{argmin}}\ \big\langle E_{\tau'} \big\rangle$    (S18)
where we introduced the expected lagged energy $\langle E_{\tau'} \rangle = -\big\langle \hat\sigma(n - \tau' m)^{T} J(n)\, \hat\sigma(n - \tau' m) \big\rangle$. Here, the vector $\hat\sigma(n)$ refers to the pattern presented to the network at time $n$, which can be from any of the pattern classes. Because of the translational symmetry in time in the stationary state, the lagged energy can also be expressed in terms of the overlap between a pattern at time $n$ and the network at a later time $n + \tau' m$. We evaluate the lagged energy by substituting the expression for the network's state from eq. S12, which entails,
(S19)–(S23)
Here, we used the expression of the network’s matrix in eq. S8 to arrive at eq. S20, and then followed the procedure introduced in eq. S13 to arrived at the double-summation in eq. S21. We then used the equation for pattern overlap to reduce the sum in eq. S22 and arrived at the result in eq. S23 by evaluating the geometric sum and substituting the empirical average for the energy from eq. LABEL:eq:energypattern_final.
We probe this lagged memory by looking at the performance for patterns that are correctly associated with their memory attractors (i.e., those with an overlap above the recognition threshold). As shown in Fig. S1, for a broad parameter regime, the mean performance for these correctly associated patterns agrees well with the theoretical expectation that accounts for the memory lag, which is lower than the naive expectation based on a single generation of divergence.
Appendix C Structure of the energy landscape for working memory
C1 Formation of mountain passes in the energy landscape of memory for evolving patterns
As shown in Fig. 1, large learning rates in networks with memory for evolving patterns result in the emergence of narrow connecting paths between the minima of the energy landscape. We refer to these narrow connecting paths as mountain passes. In pattern retrieval, the Monte Carlo search can drive a pattern out of one energy minimum into another minimum and potentially lead to pattern misclassification.
We use two features of the energy landscape to probe the emergence of the mountain passes.
First, we show that if a pattern is misclassified, it has fallen into a memory attractor associated with another pattern class and not into a spuriously formed energy minimum. To do so, we compare the overlap of the attractor with the original pattern (i.e., the reconstruction performance of the pattern) with the maximal overlap of the attractor with all other patterns. Indeed, as shown in Fig. S3A for evolving patterns, the memory attractors associated with nearly all of the originally stored patterns have a large overlap either with the correct pattern or with one of the other previously presented patterns: 71.3% of the patterns are correctly classified (stable patterns in sector I in Fig. S3A), whereas 28.1% of them are associated with a secondary energy minimum after equilibration (unstable patterns in sector II in Fig. S3A). A very small fraction of patterns (less than 1%) falls into local minima given by linear combinations of the presented patterns (sector IV in Fig. S3A). These minima are well known in the classical Hopfield model [44, 45]. Moreover, we see that equilibration of a random pattern (i.e., a pattern orthogonal to all the presented classes) in the energy landscape leads to memory attractors for one of the originally presented pattern classes. The majority of these random patterns lie in sector II of Fig. S3A, i.e., they have a small overlap with the network since they are orthogonal to the originally presented pattern classes, and they fall into one of the existing memory attractors after equilibration.
Second, we characterize the possible paths for a pattern to move from one valley to another during equilibration, using a Monte Carlo algorithm with the Metropolis acceptance probability,
$p_{\text{flip}} = \min\!\left(1,\ e^{-\beta\,\Delta E}\right)$    (S24)
We estimate the number of beneficial spin-flips (i.e., open paths) that decrease the energy of a pattern at the start of equilibration (Fig. S3B). The average number of open paths is smaller for stable patterns than for the unstable patterns that are misclassified during retrieval (Fig. S3B). However, the distributions of the number of open paths largely overlap for stable and unstable patterns. Therefore, the local energy landscapes of stable and unstable patterns are quite similar, and it is difficult to discriminate between them solely based on the local gradients in the landscape. Fig. S4A shows that the average number of beneficial spin-flips grows with the mutation rate of the patterns, but this number is comparable for stable and unstable patterns. Moreover, the unstable stored patterns (blue) have far fewer open paths available to them during equilibration compared to random patterns (red) that are presented to the network for the first time (Figs. S3B & S4A). Notably, on average half of the spin-flips reduce the energy for random patterns, irrespective of the mutation rate. This indicates that even though previously presented pattern classes are statistically distinct from random patterns, they can still become unstable, especially in networks that are presented with evolving patterns.
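The number of open paths at the start of equilibration can be counted directly from the local energy changes of all single spin-flips, as in the sketch below; the closed-form expression for the energy change assumes a symmetric interaction matrix with zero diagonal, as in this model.

```python
import numpy as np

def count_open_paths(J, sigma):
    """Number of beneficial single spin-flips (open paths) for a pattern:
    flips that lower the recognition energy E = -sigma_hat^T J sigma_hat.
    For symmetric J with zero diagonal, flipping spin i changes the energy
    by dE_i = (4/L) * sigma_i * (J @ sigma)_i."""
    L = sigma.size
    dE = 4.0 / L * sigma * (J @ sigma)
    return int(np.count_nonzero(dE < 0))
```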
It should be noted that the evolution of the patterns only indirectly contributes to the misclassification of memory, as it necessitates a larger learning rate for the networks to operate optimally, which in turn results in the emergence of mountain passes. To clearly demonstrate this effect, Figs. S3C,D and S4D show the misclassification behavior for a network trained to store memory for static patterns, while using a larger learning rate that is optimized for evolving patterns. Indeed, we see that pattern misclassification in this case is consistent with the existence of mountain passes in the network's energy landscape.
C2 Spectral decomposition of the energy landscape
We use spectral decomposition of the energy landscape to characterize the relative positioning and the stability of patterns in the landscape. As shown in Figs. S3, S4, destabilization of patterns due to equilibration over mountain passes occurs in networks with high learning rates, even for static patterns. Therefore, we focus on how the learning rate impacts the spectral decomposition of the energy landscape in networks presented with static patterns. This simplification will enable us to analytically probe the structure of the energy landscape, which we will compare with numerical results for evolving patterns.
We can represent the network (of size $L \times L$) that stores a memory of $m$ static patterns in terms of $m$ non-trivial eigenvectors $\vec v_k$ with corresponding eigenvalues $\Lambda_k$, and $L - m$ degenerate eigenvectors with the corresponding trivial eigenvalue $\Lambda_0$:
$J = \sum_{k=1}^{m} \Lambda_{k}\, \vec v_{k}\vec v_{k}^{\,T} + \Lambda_{0} \sum_{k=m+1}^{L} \vec v_{k}\vec v_{k}^{\,T}$    (S25)
The non-trivial eigenvectors span the space of the presented patterns, for which the recognition energy can be expressed by
$E(\vec\sigma^{\alpha}\,|\,J) = -\sum_{k=1}^{m} \Lambda_{k} \left( \vec v_{k}^{\,T} \hat\sigma^{\alpha} \right)^{2}$    (S26)
An arbitrary configuration $\vec\sigma$ in general can have components orthogonal to the non-trivial eigenvectors, as it points to a vertex of the hypercube $\{-1, +1\}^{L}$, and should be expressed in terms of all the eigenvectors $\vec v_k$:
$\hat\sigma = \sum_{k=1}^{L} a_{k}\, \vec v_{k}, \qquad a_{k} = \vec v_{k}^{\,T} \hat\sigma$    (S27)
Any spin-flip in a pattern (e.g., during equilibration) can be understood as a rotation in the eigenspace of the network (eq. S27). As a first step in characterizing these rotations we remind ourselves of the identity
$E(\vec\sigma\,|\,J) = -\hat\sigma^{T} J \hat\sigma = -\sum_{k=1}^{L} \Lambda_{k}\, a_{k}^{2}$    (S28)
with the normalization condition
$\sum_{k=1}^{L} a_{k}^{2} = \hat\sigma^{T}\hat\sigma = 1$    (S29)
In addition, since the diagonal elements of the network are set to zero (eq. S7), the eigenvalues should sum to zero, or alternatively,
$\sum_{k=1}^{m} \Lambda_{k} + (L - m)\, \Lambda_{0} = \operatorname{Tr} J = 0$    (S30)
To assess the stability of a pattern $\vec\sigma$, we compare its recognition energy with the energy of the rotated pattern after a spin-flip. To do so, we first consider a simple scenario, where we assume that the pattern has a large overlap $a_1$ with one dominant non-trivial eigenvector $\vec v_1$ (i.e., $a_1^2 \approx 1$). The other components of the pattern can be expressed in terms of the remaining non-trivial eigenvectors, with a small mean squared overlap. The expansion of the recognition energy for the presented pattern is restricted to the non-trivial directions (eq. S27), resulting in
$E(\vec\sigma\,|\,J) \simeq -\Lambda_{1}\, a_{1}^{2} - \overline{\Lambda}\, \big(1 - a_{1}^{2}\big)$    (S31)
where $\overline{\Lambda}$ is the mean eigenvalue for the non-dominant (non-trivial) directions.
A spin-flip ($\sigma_i \to -\sigma_i$) can rotate the pattern out of the dominant direction and reduce the squared overlap $a_1^2$ by an amount $\epsilon^2$. The rotated pattern in general lies in the full $L$-dimensional space and is not restricted to the $m$-dimensional (non-trivial) subspace. We first take a mean-field approach in describing the rotation of the pattern after a spin-flip. Because of the normalization condition (eq. S29), the loss in the squared overlap with the dominant direction should result in an average increase in the squared overlap with each of the other eigenvectors by $\epsilon^2/(L-1)$. The energy of the rotated pattern after a spin-flip can be expressed in terms of all the eigenvectors (eq. S27),
$E'(\vec\sigma\,|\,J) \simeq -\Lambda_{1}\big(a_{1}^{2} - \epsilon^{2}\big) - \sum_{k \neq 1} \Lambda_{k}\Big(a_{k}^{2} + \tfrac{\epsilon^{2}}{L-1}\Big)$    (S32)
$\;\;\; = E(\vec\sigma\,|\,J) + \tfrac{L}{L-1}\, \Lambda_{1}\, \epsilon^{2}$    (S33)
where in eq. S33 we used the fact that the eigenvalues sum up to zero. On average, a spin-flip thus increases the recognition energy by approximately $\Lambda_{1}\,\epsilon^{2}$. This is consistent with the results shown in Figs. S3B,D and Figs. S4A,D, which indicate that the majority of the spin-flips keep a pattern in the original energy minimum and only a few of the spin-flips drive a pattern out of the original attractor.
In the analysis above, we assumed that the reduction in overlap with the dominant eigenvector is absorbed equally by all the other eigenvectors (i.e., the mean-field approach). In this case, the change in energy is equally distributed across the positive and the negative eigenvalues (the non-trivial and trivial directions in eq. S32), resulting in an overall increase in the energy due to the reduced overlap with the dominant direction $\vec v_1$. The destabilizing spin-flips are associated with atypical changes that rotate a pattern onto a secondary non-trivial direction $\vec v_p$ (with positive eigenvalue $\Lambda_p$), as a result of which the total energy could be reduced. To better characterize the conditions under which patterns become unstable, we will introduce a perturbation to the mean-field approach used in eq. S33. We will assume that a spin-flip results in a rotation with a dominant component along a secondary non-trivial direction $\vec v_p$. Specifically, we will assume that the reduced squared overlap $\epsilon^2$ between the original pattern and the dominant direction is distributed in an imbalanced fashion between the other eigenvectors, with a fraction $\epsilon_p^2$ projected onto a new (non-trivial) direction $\vec v_p$, while all the other directions span the remaining $\epsilon^2 - \epsilon_p^2$. In this case, the energy of the rotated pattern is given by
$E'(\vec\sigma\,|\,J) \simeq E(\vec\sigma\,|\,J) + \Lambda_{1}\,\epsilon^{2} - \Lambda_{p}\,\epsilon_{p}^{2}$    (S34)
Therefore, a spin-flip is beneficial if $\Lambda_{p}\,\epsilon_{p}^{2} > \Lambda_{1}\,\epsilon^{2}$. To further concretize this condition, we estimate the typical loss and gain in the squared overlap between the pattern and its dominating directions due to a rotation by a single spin-flip.
Let us consider a rotation by a flip in the spin $\sigma_i$ of the original pattern. This spin-flip reduces the original overlap of the pattern with the dominant direction by the amount $2 v_{1,i}\sigma_i/\sqrt{L}$, where $v_{1,i}$ is the $i$-th entry of the eigenvector $\vec v_1$. Since the original overlap is large (i.e., $a_1 \approx 1$), all entries of the dominant eigenvector are approximately $v_{1,i} \approx \sigma_i/\sqrt{L}$, resulting in a reduced overlap of the rotated pattern $a_1' \approx a_1 - 2/L$. Therefore, the loss in the squared overlap by a spin-flip is given by
$\epsilon^{2} = a_{1}^{2} - a_{1}'^{2} \simeq \dfrac{4\, a_{1}^{2}}{L}$    (S35)
Similarly, we can derive the gain in the squared overlap between the pattern and the new dominant direction $\vec v_p$ after a spin-flip. Except for the direction $\vec v_1$, the expected squared overlap between the original pattern (prior to the spin-flip) and any of the non-trivial eigenvectors, including $\vec v_p$, is small. The flip in the $i$-th spin of the original pattern increases the overlap of the rotated pattern with the new dominant direction by $2 v_{p,i}\sigma_i/\sqrt{L}$, where $v_{p,i}$ should be of the order of $1/\sqrt{L}$. Therefore, the largest gain in overlap due to a spin-flip is given by
(S36)
By using the results from eqs. S35 and S36, we can express the condition for beneficial spin-flips to drive a pattern over the carved mountain passes during equilibration (eq. C2),
(S37)
This result suggests that the stability of a pattern depends on how the ratio of the eigenvalues associated with the dominant projections of the pattern before and after the spin-flip compares with the overlap of the original pattern with the dominant eigenvector and the change in this overlap due to the spin-flip.
So far, we have constrained our analysis to patterns that have a dominant contribution from only one eigenvector. To extend our analysis to patterns that are instead constrained to a small sub-space spanned by several non-trivial eigenvectors, we define the squared pattern overlap with the subspace and a weighted average eigenvalue in the subspace. As a result, the difference in the energy of a pattern before and after a spin-flip (eq. C2) and the stability condition in eq. S37 can be extended by replacing the dominant eigenvalue with this weighted average. Patterns that are constrained to larger subspaces are more stable, since the weighted average eigenvalue of their containing subspace is closer to the mean of all eigenvalues (law of large numbers). Therefore, in these cases a much larger eigenvalue gap (or a broader eigenvalue spectrum) is necessary to satisfy the condition for pattern instability.
Fig. S6 compares the loss in energy along the original dominant direction to the maximal gain in any of the other directions to test the pattern stability criterion presented in eq. S37. To do so, we identify the spin-flip along a secondary direction that confers the maximal energy gain. In Fig. S6A,C we specifically focus on the subset of patterns that show a large (squared) overlap with one dominant direction. Given that evolving patterns are not constrained to the (non-trivial) sub-space, we find a smaller fraction of these patterns to fulfill the condition for such a large overlap (Fig. S6A), compared to the static patterns (Fig. S6C). Nonetheless, we see that the criterion in eq. S37 can be used to predict the stability of patterns in a network for both static and evolving patterns; note that here we use the same learning rate for both the static and the evolving patterns.
We then relax the overlap condition by including all patterns that have a large overlap with a subspace spanned by up to 10 eigenvectors. For these larger subspaces the transition between stable and unstable patterns is no longer exactly given by eq. S37. However, the two contributions (the loss along the original dominant directions and the maximal gain along a new direction) still clearly separate the patterns into stable and unstable classes for both static and evolving patterns (Figs. S6B,D). The softening of this condition is expected, as in this regime we can no longer assume that a single spin-flip reduces the overlap with all the eigenvectors in the original subspace. As a result, the effective loss in overlap becomes smaller than the single-direction estimate, and patterns become unstable below the dotted line in Fig. S6B,D.
As the learning rate increases, the gaps between the eigenvalues become larger (Fig. S5). At the same time, patterns become more constrained to smaller subspaces (Fig. S4C,D). As a result of these two effects, more patterns satisfy the instability criterion in eq. S37. These patterns are misclassified, as they fall into a wrong energy minimum by equilibrating through the mountain passes carved in the energy landscape of networks with large learning rates.
References
- [1] Labrie SJ, Samson JE, Moineau S (2010) Bacteriophage resistance mechanisms. Nature Rev Microbiol 8: 317–327.
- [2] Barrangou R, Marraffini LA (2014) CRISPR-Cas systems: Prokaryotes upgrade to adaptive immunity. Molecular cell 54: 234–244.
- [3] Bradde S, Nourmohammad A, Goyal S, Balasubramanian V (2020) The size of the immune repertoire of bacteria. Proc Natl Acad Sci USA 117: 5144–5151.
- [4] Perelson AS, Weisbuch G (1997) Immunology for physicists. Rev Mod Phys 69: 1219–1267.
- [5] Janeway C, Travers P, Walport M, Schlomchik M (2001) Immunobiology. The Immune System in Health and Disease. New York: Garland Science, 5 edition.
- [6] Altan-Bonnet G, Mora T, Walczak AM (2020) Quantitative immunology for physicists. Physics Reports 849: 1–83.
- [7] Haberly LB, Bower JM (1989) Olfactory cortex: Model circuit for study of associative memory? Trends Neurosci 12: 258–264.
- [8] Brennan P, Kaba H, Keverne EB (1990) Olfactory recognition: A simple memory system. Science 250: 1223–1226.
- [9] Granger R, Lynch G (1991) Higher olfactory processes: Perceptual learning and memory. Curr Opin Neurobiol 1: 209–214.
- [10] Haberly LB (2001) Parallel-distributed processing in olfactory cortex: New insights from morphological and physiological analysis of neuronal circuitry. Chemical senses 26: 551–576.
- [11] Wilson DA, Best AR, Sullivan RM (2004) Plasticity in the olfactory system: Lessons for the neurobiology of memory. Neuroscientist 10: 513–524.
- [12] Raguso RA (2008) Wake Up and Smell the Roses: The Ecology and Evolution of Floral Scent. Annu Rev Ecol Evol Syst 39: 549–569.
- [13] Dunkel A, et al. (2014) Nature’s chemical signatures in human olfaction: A foodborne perspective for future biotechnology. Angew Chem Int Ed 53: 7124–7143.
- [14] Beyaert I, Hilker M (2014) Plant odour plumes as mediators of plant-insect interactions: Plant odour plumes. Biol Rev 89: 68–81.
- [15] Glusman G, Yanai I, Rubin I, Lancet D (2001) The Complete Human Olfactory Subgenome. Genome Research 11: 685–702.
- [16] Bargmann CI (2006) Comparative chemosensation from receptors to ecology. Nature 444: 295–301.
- [17] Touhara K, Vosshall LB (2009) Sensing Odorants and Pheromones with Chemosensory Receptors. Annu Rev Physiol 71: 307–332.
- [18] Su CY, Menuz K, Carlson JR (2009) Olfactory Perception: Receptors, Cells, and Circuits. Cell 139: 45–59.
- [19] Verbeurgt C, et al. (2014) Profiling of Olfactory Receptor Gene Expression in Whole Human Olfactory Mucosa. PLoS ONE 9: e96333.
- [20] Shepherd GM, Greer CA (1998) Olfactory bulb. In: The Synaptic Organization of the Brain, 4th ed., New York, NY, US: Oxford University Press. pp. 159–203.
- [21] Bushdid C, Magnasco MO, Vosshall LB, Keller A (2014) Humans Can Discriminate More than 1 Trillion Olfactory Stimuli. Science 343: 1370–1372.
- [22] Gerkin RC, Castro JB (2015) The number of olfactory stimuli that humans can discriminate is still unknown. eLife 4: e08127.
- [23] Mayhew EJ, et al. (2020) Drawing the Borders of Olfactory Space. preprint, Neuroscience. doi:10.1101/2020.12.04.412254. URL http://biorxiv.org/lookup/doi/10.1101/2020.12.04.412254.
- [24] Lansner A (2009) Associative memory models: From the cell-assembly theory to biophysically detailed cortex simulations. Trends Neurosci 32: 178–186.
- [25] Hebb DO (1949) The Organization of Behavior: A Neuropsychological Theory. New York: Wiley.
- [26] Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79: 2554–2558.
- [27] Mayer A, Balasubramanian V, Mora T, Walczak AM (2015) How a well-adapted immune system is organized. Proc Natl Acad Sci USA 112: 5950–5955.
- [28] Shinnakasu R, et al. (2016) Regulated selection of germinal-center cells into the memory B cell compartment. Nat Immunol 17: 861–869.
- [29] Shinnakasu R, Kurosaki T (2017) Regulation of memory B and plasma cell differentiation. Curr Opin Immunol 45: 126–131.
- [30] Mayer A, Balasubramanian V, Walczak AM, Mora T (2019) How a well-adapting immune system remembers. Proc Natl Acad Sci USA 116: 8815–8823.
- [31] Schnaack OH, Nourmohammad A (2021) Optimal evolutionary decision-making to store immune memory. eLife 10: e61346.
- [32] Viant C, et al. (2020) Antibody affinity shapes the choice between memory and germinal center B cell fates. Cell 183: 1298–1311.e11.
- [33] Mezard M, Nadal JP, Toulouse G (1986) Solvable models of working memories. J Physique 47: 1457-1462.
- [34] Amit DJ, Gutfreund H, Sompolinsky H (1985) Storing infinite numbers of patterns in a spin-glass model of neural networks. Phys Rev Lett 55: 1530–1533.
- [35] McEliece R, Posner E, Rodemich E, Venkatesh S (1987) The capacity of the Hopfield associative memory. IEEE Transactions on Information Theory 33: 461–482.
- [36] Bouchaud JP, Mezard M (1997) Universality classes for extreme-value statistics. J Phys A Math Gen 30: 7997–8016.
- [37] Derrida B (1997) From random walks to spin glasses. Physica D: Nonlinear Phenomena 107: 186–198.
- [38] Goodfellow I, Bengio Y, Courville A (2016) Deep Learning. MIT Press. http://www.deeplearningbook.org.
- [39] Mehta P, et al. (2019) A high-bias, low-variance introduction to Machine Learning for physicists. Physics Reports 810: 1–124.
- [40] Parga N, Virasoro M (1986) The ultrametric organization of memories in a neural network. J Phys France 47: 1857–1864.
- [41] Virasoro MA (1986) Ultrametricity, Hopfield Model and all that. In: Disordered Systems and Biological Organization, Springer, Berlin, Heidelberg. pp. 197–204.
- [42] Gutfreund H (1988) Neural networks with hierarchically correlated patterns. Phys Rev A 37: 570–577.
- [43] Tsodyks MV (1990) Hierarchical associative memory in neural networks with low activity level. Mod Phys Lett B 04: 259–265.
- [44] Amit DJ, Gutfreund H, Sompolinsky H (1985) Spin-glass models of neural networks. Physical Review A 32: 1007–1018.
- [45] Fontanari JF (1990) Generalization in a Hopfield network. J Phys France 51: 2421–2430.
Supplementary Figures














