On the Expressive Power of Modern Hopfield Networks
Modern Hopfield networks (MHNs) have emerged as powerful tools in deep learning, capable of replacing components such as pooling layers, LSTMs, and attention mechanisms. Recent advancements have enhanced their storage capacity, retrieval speed, and error rates. However, the fundamental limits of their computational expressiveness remain unexplored. Understanding the expressive power of MHNs is crucial for optimizing their integration into deep learning architectures. In this work, we establish rigorous theoretical bounds on the computational capabilities of MHNs using circuit complexity theory. Our key contribution is showing that MHNs can be simulated by $\mathsf{DLOGTIME}$-uniform $\mathsf{TC}^0$ circuits. Hence, unless $\mathsf{TC}^0 = \mathsf{NC}^1$, a $\mathrm{poly}(n)$-precision modern Hopfield network with a constant number of layers and hidden dimension $d \leq O(n)$ cannot solve $\mathsf{NC}^1$-hard problems such as the undirected graph connectivity problem and the tree isomorphism problem. We also extend our results to kernelized Hopfield networks. These results demonstrate the limits of the expressive power of modern Hopfield networks. Moreover, our theoretical analysis provides insights to guide the development of new Hopfield-based architectures.
1 Introduction
Hopfield networks [25], initially introduced as associative memories capable of storing and retrieving patterns, have undergone significant advancements in recent years. These developments led to the emergence of modern Hopfield networks [59]. Modern Hopfield networks (MHNs) address the limitations of their predecessors, particularly in terms of storage capacity, retrieval speed, and error rates. An important feature of MHNs is their ability to function as specialized components within deep networks, integrating memory capabilities and supporting diverse functionalities. For instance, MHNs can replace conventional layers such as pooling layers, permutation-equivariant layers [19, 60], GRU [13], LSTM [24, 27], and attention mechanisms [69, 5].
Understanding modern Hopfield networks’ computational capabilities and limitations is critical for their effective application. Specifically, it is crucial to explore the computational operations these networks can implement and the complexity of the problems they can solve collectively. These investigations are vital not only for theoretical insights but also for guiding the development of more efficient and powerful models.
While prior research has primarily focused on the dynamics and capacity of modern Hopfield networks [23, 70], their expressiveness from a circuit complexity perspective remains underexplored. This gap raises an important question:
What are the fundamental limits of modern Hopfield networks in terms of circuit complexity?
To address this question, we turn to the framework of circuit complexity theory, which provides a robust method for analyzing the computational resources required to perform specific tasks. By mapping modern Hopfield networks to computational circuits, we can rigorously assess their capabilities and establish upper and lower bounds on the classes of problems they can solve.
In this work, we present a comprehensive theoretical investigation into the circuit complexity bounds of MHNs. Our approach involves analyzing the architecture of MHNs and the computational complexity of their components, such as the Hopfield layer. We then show that $\mathsf{DLOGTIME}$-uniform $\mathsf{TC}^0$ circuits can efficiently simulate these models.
Our main contributions can be outlined as follows:
- • We show that $\mathrm{poly}(n)$-precision modern Hopfield networks with a constant number of layers and hidden dimension $d \leq O(n)$ can be simulated by a $\mathsf{DLOGTIME}$-uniform $\mathsf{TC}^0$ circuit family (Theorem 4.6), and we extend this circuit complexity bound to kernelized Hopfield networks (Theorem 5.4).
- • Unless $\mathsf{TC}^0 = \mathsf{NC}^1$, such networks cannot solve $\mathsf{NC}^1$-hard problems, including the undirected graph connectivity problem and the tree isomorphism problem (Theorems 6.1-6.4).
Roadmap.
In Section 2, we provide an overview of the related works. In Section 3, we introduce the notations and definitions needed for modern Hopfield networks. In Section 4, we analyze the circuit complexity of modern Hopfield networks. In Section 5, we focus on the circuit complexity results for kernelized Hopfield networks. In Section 6, we discuss the hardness results for modern Hopfield networks. Finally, in Section 7, we summarize our theoretical results in the conclusion. We defer additional preliminaries and technical details to the Appendix.
2 Related Works
In this section, we review related work on Hopfield models and the complexity of Transformers. In Appendix A, we discuss additional related work.
2.1 Theory of Modern Hopfield Networks
Modern Hopfield models have demonstrated both empirical success and theoretical value, providing a framework for understanding transformer attention mechanisms and Transformer architectures. [31, 70] propose a unified framework for analyzing and deriving modern Hopfield models using entropic regularizers. This framework includes a range of sparse variants, such as sparse and generalized sparse models, and also integrates the standard modern Hopfield model [59] as a special case. Despite these advancements, the modern Hopfield paradigm remains incomplete, lacking efficient implementations and variants, as highlighted by [31]. To address these limitations, [21] propose a principled nonparametric approach for developing efficient variants of modern Hopfield models, including linear, top-K, and random feature-based versions. This line of work aims to push the boundaries of research in developing more efficient models, emphasizing their potential to shape the future of Hopfield-driven designs.
2.2 Complexity and Neural Network
Complexity theory offers a formal framework to analyze the computational power and limitations of models. Circuit complexity, a key domain within this field, characterizes the power of Boolean circuits and has recently been applied to investigate the expressive capacity of deep neural networks and Transformers [57, 40, 55, 56, 48, 7]. Several important circuit complexity classes play a central role in machine learning. For example, $\mathsf{AC}^0$ includes problems solvable by constant-depth circuits of unbounded fan-in logic gates, $\mathsf{TC}^0$ extends this class by incorporating threshold gates, and $\mathsf{NC}^1$ describes languages recognized by circuits with $O(\log n)$ depth and bounded fan-in [57]. These classes form the hierarchy $\mathsf{AC}^0 \subsetneq \mathsf{TC}^0 \subseteq \mathsf{NC}^1$, although whether $\mathsf{TC}^0 = \mathsf{NC}^1$ remains an open question.
Transformers’ depth requirements have been analyzed through the lens of these complexity classes. For instance, [40] demonstrates that the depth of a Transformer must scale with input length to simulate certain non-solvable semi-automata. In a similar vein, [48] investigates the connection between constant-depth Transformers, chain-of-thought (CoT) reasoning, and circuit complexity, relating constant-depth Transformers, characterized by their precision, embedding size, and number of exponent bits, with and without a bounded number of CoT steps, to standard circuit classes. These findings highlight that intermediate reasoning steps in CoT models can improve computational expressivity.
The Strong Exponential Time Hypothesis ($\mathsf{SETH}$), which posits that for every $\epsilon > 0$ there exists $k \geq 3$ such that $k$-$\mathsf{SAT}$ cannot be solved in $O(2^{(1-\epsilon)n})$ time, serves as a cornerstone in fine-grained complexity analyses [33]. Results derived from $\mathsf{SETH}$ have shaped Transformer-related studies, particularly for tensor attention mechanisms [1, 4, 53, 50, 2]. For example, [1] shows that, assuming $\mathsf{SETH}$, it is impossible to perform the forward pass computation of attention-based networks in truly subquadratic time, while [4] extends this limitation to backward computations. These results underscore the intrinsic computational challenges of Transformer architectures.
3 Preliminaries
In Section 3.1, we define useful notations for defining modern Hopfield networks. In Section 3.2, we introduce some components of modern Hopfield networks. In Section 3.3, we state the basic definitions and architectures of kernelized Hopfield networks. The background knowledge about circuit complexity and floating-point numbers is introduced in Appendix B.
3.1 Notations
We define $\mathbb{N}$ as the set of natural numbers and $\mathbb{R}$ as the set of real numbers. For any positive integer $n$, $[n]$ represents the set $\{1, 2, \ldots, n\}$. The vector $\mathbf{1}_n$ denotes an $n$-dimensional vector where all entries are ones. Given a matrix $A$, $A_{i,j}$ refers to the element in the $i$-th row and $j$-th column, while $A^\top$ represents the transpose of $A$. The set $\{0,1\}^*$ represents all binary strings of finite length. Specifically, for $n \in \mathbb{N}$, $x \in \{0,1\}^n$ represents a finite binary string of length $n$, with each bit being either $0$ or $1$. A language $L$ is characterized as a subset of $\{0,1\}^*$.
3.2 Modern Hopfield Network
With the above notations established, we proceed to introduce the foundational concepts of Modern Hopfield Networks.
The input query pattern is denoted as $x \in \mathbb{R}^d$, while memory patterns are represented by the matrix $\Xi = [\xi_1, \ldots, \xi_M] \in \mathbb{R}^{d \times M}$. Hopfield models are associative memory models based on energy functions, where the stored memory patterns correspond to the local minima of these energy landscapes. For a given input query $x$, the model retrieves the most similar memory pattern by employing energy minimization algorithms, referred to as retrieval dynamics $\mathcal{T}$, initialized at $x$.
In [59], the Modern Hopfield Model is introduced with a specific energy function $E$ and retrieval dynamics $\mathcal{T}$, which are further incorporated into deep learning frameworks due to their relation to transformer attention mechanisms [69]. This approach enhances performance and provides a theoretical guarantee of exponential memory capacity. The energy function is defined as
$$E(x) = -\mathrm{lse}(\beta, \Xi^\top x) + \frac{1}{2} x^\top x,$$
where the retrieval dynamics $\mathcal{T}$ is given by
$$x^{\mathrm{new}} = \mathcal{T}(x) = \Xi \cdot \mathrm{softmax}(\beta\, \Xi^\top x), \qquad (1)$$
and the function $\mathrm{lse}$ is the log-sum-exponential, defined as $\mathrm{lse}(\beta, z) := \beta^{-1} \log \sum_{\mu=1}^{M} \exp(\beta z_\mu)$ for any $z \in \mathbb{R}^M$ and $\beta > 0$. When applied to a sequence of input queries $X = [x_1, \ldots, x_n] \in \mathbb{R}^{d \times n}$, Eq. (1) becomes $X^{\mathrm{new}} = \mathcal{T}(X)$, and hence
$$X^{\mathrm{new}} = \Xi \cdot \mathrm{softmax}(\beta\, \Xi^\top X),$$
where $\mathrm{softmax}$ performs column-wise normalization. We assume $d \leq O(n)$, i.e., the hidden dimension grows at most linearly in the number of query patterns $n$.
Modern Hopfield Networks with continuous states are compatible with deep learning models due to their differentiable nature and the ability to retrieve patterns in a single update step. This aligns with the behavior of deep learning layers, which typically activate only once. These properties make Modern Hopfield Networks suitable as specialized layers to add memory functionalities to deep networks.
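To make the retrieval dynamics of Eq. (1) concrete, here is a minimal NumPy sketch of a single update step against a small set of stored patterns; the variable names (`Xi`, `x`) and the choice of `beta` are illustrative assumptions rather than notation fixed by the paper.

```python
import numpy as np

def softmax(z, axis=0):
    # Numerically stable softmax, normalizing along `axis`.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def hopfield_retrieve(Xi, x, beta=1.0):
    """One step of the modern Hopfield retrieval dynamics:
    x_new = Xi @ softmax(beta * Xi^T x)."""
    return Xi @ softmax(beta * (Xi.T @ x))

rng = np.random.default_rng(0)
d, M = 16, 5                              # pattern dimension, number of stored patterns
Xi = rng.normal(size=(d, M))              # stored patterns as columns
x = Xi[:, 2] + 0.3 * rng.normal(size=d)   # noisy query near stored pattern 2
x_new = hopfield_retrieve(Xi, x, beta=4.0)
print(np.argmax(Xi.T @ x_new))            # typically recovers index 2
```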
Definition 3.1 (Hopfield attention matrix, [59]).
The Hopfield attention matrix $A \in \mathbb{R}^{n \times n}$ is defined using model weights $W_Q, W_K \in \mathbb{R}^{d \times d}$, query patterns $R \in \mathbb{R}^{n \times d}$, and key patterns $Y \in \mathbb{R}^{n \times d}$. For $i, j \in [n]$, the elements of $A$ are given by:
$$A_{i,j} := \exp\big(\beta \cdot (R W_Q)_{i,*} \, (Y W_K)_{j,*}^\top\big).$$
Floating-point numbers and their operations in our computational framework are formally defined in Appendix B.2. The Hopfield attention matrix is used to compute a single Hopfield layer.
Definition 3.2 (Hopfield layer, page 6 in [59]).
A single Hopfield layer propagates patterns using query patterns $R$ and key patterns $Y$. In its most general form, the result patterns $Z$ are a function of raw stored patterns $Y$, raw state patterns $R$, and projection matrices $W_Q, W_K, W_V$:
$$Z = \mathrm{softmax}\big(\beta\, R W_Q W_K^\top Y^\top\big)\, Y W_K W_V. \qquad (2)$$
We set $D := \mathrm{diag}(A \mathbf{1}_n)$, where $A$ is the Hopfield attention matrix of Definition 3.1. Then, based on Definition 3.1, we define the $i$-th Hopfield layer as
$$\mathsf{Hopfield}_i(R, Y) := D^{-1} A\, Y W,$$
where we denote
$$W := W_K W_V. \qquad (3)$$
Here, the rank of $W$ is limited by dimension constraints of the matrix product $W_K W_V$. To provide the Hopfield layer with more flexibility, the matrix product $W_K W_V$ can be replaced by one parameter matrix. In this case, $W$ is not the product from Eq. (3) but a stand-alone parameter matrix, as in the original transformer setting.
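As a concrete illustration of Definition 3.2, the following sketch evaluates the layer of Eq. (2) with a row-wise softmax; the shapes and parameter names follow the reconstruction above and should be read as an assumed, simplified instantiation rather than the authors' implementation.

```python
import numpy as np

def row_softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def hopfield_layer(R, Y, W_Q, W_K, W_V, beta):
    # Z = softmax(beta * R W_Q (Y W_K)^T) Y W_K W_V, cf. Eq. (2).
    Q = R @ W_Q
    K = Y @ W_K
    A = row_softmax(beta * Q @ K.T)   # normalized Hopfield attention
    return A @ (K @ W_V)              # result patterns Z

rng = np.random.default_rng(1)
n, d = 8, 16
R, Y = rng.normal(size=(n, d)), rng.normal(size=(n, d))
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))
Z = hopfield_layer(R, Y, W_Q, W_K, W_V, beta=1.0 / np.sqrt(d))
print(Z.shape)   # (8, 16)
```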
Multiple Hopfield layers can be integrated with other elements to build a functional Hopfield architecture.
Definition 3.3 (Multi-layer Modern Hopfield Networks).
Consider a model consisting of $m$ Hopfield layers. For each $i \in [m]$, let $g_i$ denote the components of the network other than the $i$-th Hopfield layer, where each $g_i$ is composed of standard modules such as layer normalization and feed-forward blocks. Denote the $i$-th Hopfield layer as $\mathsf{Hopfield}_i$. Let the input matrix or query patterns be denoted by $X$, and the stored patterns (keys) associated with the $i$-th Hopfield layer by $Y_i$. The multi-layer Modern Hopfield Network, denoted as $\mathsf{HN}$, is defined as:
$$\mathsf{HN}(X) := g_m \circ \mathsf{Hopfield}_m \circ \cdots \circ g_1 \circ \mathsf{Hopfield}_1(X),$$
where each $\mathsf{Hopfield}_i$ uses the stored patterns $Y_i$ and $\circ$ denotes the composition of functions.
We now present one type of component function $g_i$, specifically a two-layer ReLU feed-forward neural network.
Definition 3.4 (Two-layer ReLU Feed-forward Neural Networks).
We use $X \in \mathbb{R}^{n \times d}$ to denote the input matrix. For each $i \in [n]$, the two-layer ReLU feed-forward neural network is defined as follows:
$$\mathsf{FFN}(X)_{i,*} := \mathrm{ReLU}\big(X_{i,*} W_1 + b_1\big)\, W_2 + b_2,$$
where $W_1, W_2$ are weight matrices and $b_1, b_2$ are bias vectors.
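For illustration, here is a minimal sketch of the two-layer ReLU block applied row-wise, assuming the parameterization $\mathrm{ReLU}(X W_1 + b_1) W_2 + b_2$ with an arbitrary hidden width; the names and sizes are only examples.

```python
import numpy as np

def ffn(X, W1, b1, W2, b2):
    # Applied independently to each row of X: ReLU(X W1 + b1) W2 + b2.
    H = np.maximum(X @ W1 + b1, 0.0)   # first layer + ReLU
    return H @ W2 + b2                 # second (linear) layer

rng = np.random.default_rng(2)
n, d, h = 4, 8, 32                     # tokens, model width, hidden width
X = rng.normal(size=(n, d))
W1, b1 = rng.normal(size=(d, h)), np.zeros(h)
W2, b2 = rng.normal(size=(h, d)), np.zeros(d)
print(ffn(X, W1, b1, W2, b2).shape)    # (4, 8)
```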
3.3 Kernelized Hopfield Networks
The capacity of modern Hopfield networks is suboptimal. To improve the capacity, [70] introduces a kernel as a learnable similarity measure, using stored memory patterns as training data to enhance memory capacity. Specifically, they propose the kernelized Hopfield model (KHM), defined by the following update rule and energy function:
where the kernel $\kappa$ is associated with a learnable feature map $\Phi$. Here, $\Phi$ acts column-wise on matrices. Accordingly, we define the kernelized Hopfield layer.
Definition 3.5 (Kernelized attention matrix).
Let $R \in \mathbb{R}^{n \times d}$ be the set of query (state) patterns, and let $Y \in \mathbb{R}^{n \times d}$ be the set of key (stored) patterns. Let $W_Q$ and $W_K \in \mathbb{R}^{d \times d}$ be learnable projection matrices. Consider a feature map $\Phi$ associated with a kernel function $\kappa$ defined by $\kappa(u, v) := \langle \Phi(u), \Phi(v) \rangle$. Let $\beta > 0$ be a scaling parameter. According to Definition 3.1, the kernelized Hopfield attention matrix $A^\kappa$ is defined by, for every $i, j \in [n]$,
$$A^\kappa_{i,j} := \exp\big(\beta \cdot \kappa\big((R W_Q)_{i,*},\, (Y W_K)_{j,*}\big)\big).$$
The kernelized Hopfield attention matrix is the basis for computing a single kernelized Hopfield layer.
Definition 3.6 (Single kernelized Hopfield layer).
The result patterns $Z$ are a function of raw stored patterns $Y$, raw state patterns $R$, and projection matrices $W_Q, W_K, W_V$ (for simplicity, we denote the projection matrices in the kernelized Hopfield layer as in the Hopfield layer):
Note that the softmax is applied row-wise. Then, based on Definition 3.5, we define the $i$-th kernelized Hopfield layer as in Definition 3.2, with the Hopfield attention matrix replaced by the kernelized attention matrix $A^\kappa$.
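To illustrate Definitions 3.5 and 3.6, the sketch below computes a kernelized attention matrix with a linear affine feature map $\Phi(x) = W x + b$ (the setting analyzed in Lemma 5.1) and normalizes it row-wise; all names and shapes are assumptions made for this example.

```python
import numpy as np

def kernelized_attention(R, Y, W_Q, W_K, W_phi, b_phi, beta):
    # Feature map Phi(x) = W_phi x + b_phi applied row-wise; kernel is <Phi(u), Phi(v)>.
    Q = (R @ W_Q) @ W_phi.T + b_phi
    K = (Y @ W_K) @ W_phi.T + b_phi
    S = np.exp(beta * Q @ K.T)               # unnormalized kernelized attention matrix
    return S / S.sum(axis=1, keepdims=True)  # row-wise normalization (softmax)

rng = np.random.default_rng(3)
n, d = 6, 8
R, Y = rng.normal(size=(n, d)), rng.normal(size=(n, d))
W_Q, W_K, W_phi = (rng.normal(size=(d, d)) for _ in range(3))
b_phi = rng.normal(size=d)
A = kernelized_attention(R, Y, W_Q, W_K, W_phi, b_phi, beta=1.0 / np.sqrt(d))
print(A.shape, np.allclose(A.sum(axis=1), 1.0))   # (6, 6) True
```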
Multiple kernelized Hopfield layers can be integrated with additional components to construct the kernelized Hopfield network.
Definition 3.7 (Kernelized Hopfield network).
We use $m$ to denote the number of kernelized Hopfield layers in the network. For $i \in [m]$, we use $g_i$ to denote the other components associated with the $i$-th kernelized Hopfield layer. Let $\mathsf{KHopfield}_i$ denote the $i$-th kernelized Hopfield layer. Define $X$ as the state (query) patterns or input matrix, and $Y_i$ as the stored (key) patterns in the $i$-th kernelized Hopfield layer. The $m$-layer kernelized Hopfield network $\mathsf{KHN}$ is defined as:
$$\mathsf{KHN}(X) := g_m \circ \mathsf{KHopfield}_m \circ \cdots \circ g_1 \circ \mathsf{KHopfield}_1(X),$$
where each $\mathsf{KHopfield}_i$ uses the stored patterns $Y_i$ and $\circ$ denotes the composition of functions.
4 Complexity of Modern Hopfield Networks
This section explores key results regarding the circuit complexity of the computations involved in modern Hopfield networks. We begin with an analysis of the Hopfield attention matrix in Section 4.1. In Section 4.2, we delve into the computation of a single Hopfield layer. Section 4.3 extends the discussion to other components beyond the Hopfield layer. In Section 4.4, we examine the modern Hopfield network. Finally, Section 4.5 presents our main result: the circuit complexity bounds for modern Hopfield networks. These findings form the basis for our main theorem on the expressive power of Hopfield networks. In Appendix B.1, we provide fundamental definitions from circuit complexity theory.
4.1 Computing Hopfield Attention Matrix
We first recall that the multiplication of two matrices is in $\mathsf{TC}^0$.
Lemma 4.1 (Matrix multiplication in $\mathsf{TC}^0$, Lemma 4.2 in [8]).
Let $A \in \mathbb{F}_p^{n_1 \times n_2}$ and $B \in \mathbb{F}_p^{n_2 \times n_3}$. Let $p \leq \mathrm{poly}(n)$ and $n_1, n_2, n_3 \leq \mathrm{poly}(n)$. Then the matrix product $AB$ can be implemented using a $\mathsf{DLOGTIME}$-uniform threshold circuit which has constant depth and size bounded by $\mathrm{poly}(n)$.
Here, we extend the matrix operations to compute the Hopfield attention matrix.
Lemma 4.2 (Computation of the Hopfield attention matrix in $\mathsf{TC}^0$).
Let precision $p \leq \mathrm{poly}(n)$. One can simulate the Hopfield attention matrix $A$ using a $\mathsf{DLOGTIME}$-uniform threshold circuit with constant depth and size bounded by $\mathrm{poly}(n)$.
Proof.
To compute each entry $A_{i,j}$, we use the following steps: first, compute the matrix products $R W_Q$ and $Y W_K$ using Lemma 4.1; next, compute the inner product and the scalar multiplication by $\beta$ using Part 1 and Part 3 of Lemma B.5; finally, compute the exponential using Lemma B.6.
Summing the circuit depths for all steps, the total depth required for $A_{i,j}$ is constant.
Since all entries of $A$ can be simulated in parallel, the total depth is constant and the size is bounded by $\mathrm{poly}(n)$, completing the proof. ∎
4.2 Computing Single Hopfield Layer
This section analyzes the computation of a single Hopfield layer, including the necessary depth and size requirements.
Lemma 4.3 (Computation of the Hopfield layer in $\mathsf{TC}^0$).
Let precision $p \leq \mathrm{poly}(n)$. We can simulate the Hopfield layer using a $\mathsf{DLOGTIME}$-uniform threshold circuit with constant depth and size bounded by $\mathrm{poly}(n)$.
Proof.
The computation involves multiplying the matrices appearing in Definition 3.2. First, we compute the attention matrix $A$ and its normalization $D$:
1. $D$ is computed with constant depth, as shown in Part 1 and Part 3 of Lemma B.5.
2. From Lemma 4.2, $A$ is computed with constant depth.
3. From Lemma 4.2 and Lemma 4.1, the remaining matrix products are computed with constant depth.
4. Finally, the division by $D$ is computed in parallel with constant depth.
Combining these depths, the total circuit depth is constant.
The total size of the circuit remains $\mathrm{poly}(n)$, completing the proof. ∎
4.3 Computing Common Component Layers
Definition 3.3 introduces modern Hopfield networks, which incorporate the Hopfield layer along with other modules such as layer normalization and two-layer ReLU feed-forward networks. In this section, we present the computational complexity of these additional components.
We begin by analyzing the circuit complexity of the two-layer ReLU feed-forward networks.
Lemma 4.4 (Computation of Two-layer ReLU Feed-forward Networks in $\mathsf{TC}^0$).
Let precision $p \leq \mathrm{poly}(n)$. One can simulate the two-layer ReLU feed-forward neural network using a $\mathsf{DLOGTIME}$-uniform threshold circuit which has constant depth and size bounded by $\mathrm{poly}(n)$.
Proof.
For each $i \in [n]$, Lemma 4.1 guarantees that the computation of $X_{i,*} W_1$ requires a $\mathsf{DLOGTIME}$-uniform circuit which has constant depth and $\mathrm{poly}(n)$ size. Applying Part 1 of Lemma B.5, we can add the bias term $b_1$ with an additional $\mathsf{DLOGTIME}$-uniform circuit which has constant depth and $\mathrm{poly}(n)$ size. The ReLU activation also requires constant depth and $\mathrm{poly}(n)$ size as per Part 1 of Lemma B.5.
For the second layer, the circuit depth needed is likewise constant, and the size remains $\mathrm{poly}(n)$. Thus, the total circuit depth across both layers is constant, with the size still bounded by $\mathrm{poly}(n)$, because these computations can be executed in parallel for each $i \in [n]$. ∎
4.4 Computing Modern Hopfield Networks
We demonstrate how to compute modern Hopfield networks within the complexity class $\mathsf{TC}^0$.
Lemma 4.5 (Computation of Modern Hopfield Networks in $\mathsf{TC}^0$).
Suppose that for every $i \in [m]$, we can simulate the component function $g_i$ in $\mathsf{HN}$ by a $\mathsf{DLOGTIME}$-uniform threshold circuit which has $\mathrm{poly}(n)$ size and constant depth. When the precision $p \leq \mathrm{poly}(n)$, the overall computation of the network $\mathsf{HN}$ can be implemented using a $\mathsf{DLOGTIME}$-uniform threshold circuit with depth $O(m)$ and size bounded by $\mathrm{poly}(n)$.
Proof.
By assumption, we can simulate each $g_i$ using a $\mathsf{DLOGTIME}$-uniform threshold circuit which has size bounded by $\mathrm{poly}(n)$ and constant depth.
Next, by Lemma 4.3, we know that the computation of a single Hopfield layer can be performed using a $\mathsf{DLOGTIME}$-uniform threshold circuit with constant depth, while its size remains polynomial in $n$.
The complete computation of $\mathsf{HN}$ involves the $m$ Hopfield layers and the $m$ component functions $g_i$. The total circuit depth is, therefore, the sum of the depths of all these computations. Explicitly, the depth is $O(m)$. ∎
4.5 Main Result: Circuit Complexity Bound of Modern Hopfield Networks
Our main result establishes the circuit complexity bounds for modern Hopfield networks.
Theorem 4.6 (Circuit complexity of modern Hopfield networks).
Consider a modern Hopfield network $\mathsf{HN}$ where we can simulate each component function $g_i$, for $i \in [m]$, by a $\mathsf{DLOGTIME}$-uniform threshold circuit which has constant depth and $\mathrm{poly}(n)$ size. If precision $p \leq \mathrm{poly}(n)$, hidden dimension $d \leq O(n)$, and number of layers $m \leq O(1)$, then we can simulate $\mathsf{HN}$ by a $\mathsf{DLOGTIME}$-uniform circuit family in $\mathsf{TC}^0$.
Proof.
Given $p \leq \mathrm{poly}(n)$, Lemma 4.3 and Lemma 4.5 establish that the circuit depth required for $\mathsf{HN}$ is $O(m)$.
Since $m$ is a constant, the total circuit depth simplifies to $O(1)$. Furthermore, the size of the circuit remains within $\mathrm{poly}(n)$.
Hence, the modern Hopfield network $\mathsf{HN}$ can be realized using a $\mathsf{DLOGTIME}$-uniform circuit family belonging to $\mathsf{TC}^0$, completing the argument. ∎
5 Complexity of Kernelized Hopfield Networks
In this section, we extend the complexity results of modern Hopfield networks to kernelized Hopfield networks. The formal definition of kernelized Hopfield networks is provided in Section 3.3.
Section 5.1 analyzes the computation of the kernelized Hopfield attention matrix. In Section 5.2, we examine the computation of a single kernelized Hopfield layer. Section 5.3 details the computation of the entire kernelized Hopfield network. Finally, in Section 5.4, we present the main results on the circuit complexity bounds for kernelized Hopfield networks. In Appendix E, we provide the complete proofs for this section.
5.1 Computing Kernelized Hopfield Attention Matrix
In this section, we generalize the computation of the Hopfield attention matrix to the kernelized Hopfield attention matrix.
Lemma 5.1 (Computation of the kernelized Hopfield attention matrix in $\mathsf{TC}^0$).
Assume that $p \leq \mathrm{poly}(n)$ and that the linear affine feature map $\Phi$ is defined as $\Phi(x) := W x + b$ for $x \in \mathbb{R}^d$, where $W \in \mathbb{R}^{d \times d}$ and $b \in \mathbb{R}^d$, and the linear kernel is $\kappa(u, v) := \langle \Phi(u), \Phi(v) \rangle$. Then the kernelized Hopfield attention matrix $A^\kappa$ can be implemented using a $\mathsf{DLOGTIME}$-uniform threshold circuit of constant depth and size bounded by $\mathrm{poly}(n)$.
Proof.
See Appendix E.1 for the complete proof. ∎
5.2 Computing Single Kernelized Hopfield Layer
This section provides an analysis of the computation of a single kernelized Hopfield layer, including its circuit depth.
Lemma 5.2 (Computation of a single kernelized Hopfield layer in $\mathsf{TC}^0$).
Assume that $p \leq \mathrm{poly}(n)$. Then the kernelized Hopfield layer can be implemented using a $\mathsf{DLOGTIME}$-uniform threshold circuit of constant depth and size bounded by $\mathrm{poly}(n)$.
Proof.
See Appendix E.2 for the complete proof. ∎
5.3 Computing Kernelized Hopfield Networks
Here, we analyze the computation of the entire kernelized Hopfield network.
Lemma 5.3 (Kernelized Hopfield networks computation in $\mathsf{TC}^0$).
Assume that for each $i \in [m]$, we can simulate the function $g_i$ in $\mathsf{KHN}$ by a $\mathsf{DLOGTIME}$-uniform threshold circuit which has constant depth and size $\mathrm{poly}(n)$. If the precision $p \leq \mathrm{poly}(n)$, then we can simulate the kernelized Hopfield networks by a $\mathsf{DLOGTIME}$-uniform threshold circuit of depth $O(m)$ and size bounded by $\mathrm{poly}(n)$.
Proof.
See Appendix E.3 for the complete proof. ∎
5.4 Main Result: Circuit Complexity Bound of Kernelized Hopfield Networks
We now present the main result for kernelized Hopfield networks, establishing their circuit complexity bounds.
Theorem 5.4 (Main result, Circuit complexity bound of kernelized Hopfield networks).
Assume that for each $i \in [m]$, the function $g_i$ in $\mathsf{KHN}$ is computable by a $\mathsf{DLOGTIME}$-uniform threshold circuit with constant depth and size $\mathrm{poly}(n)$. If precision $p \leq \mathrm{poly}(n)$, hidden dimension $d \leq O(n)$, and number of layers $m \leq O(1)$, then we can simulate the kernelized Hopfield networks by a $\mathsf{DLOGTIME}$-uniform circuit family within $\mathsf{TC}^0$.
Proof.
See Appendix E.4 for the complete proof. ∎
Theorem 5.4 demonstrates that kernelized Hopfield networks with polynomial precision, constant depth, and polynomial size can be implemented by a $\mathsf{DLOGTIME}$-uniform circuit family within $\mathsf{TC}^0$. This implies that, unless $\mathsf{TC}^0 = \mathsf{NC}^1$, kernelized Hopfield networks are subject to inherent expressivity limitations under circuit complexity constraints, despite their empirical success.
6 Hardness
In this section, we present two computational problems along with their corresponding hardness results. In Section 6.1, we outline the corresponding hardness results. In Appendix D, we introduce the undirected graph connectivity problem and the tree isomorphism problem.
6.1 Hardness Results
In this section, we present our hardness results for modern and kernelized Hopfield networks.
Theorem 6.1.
Assume that $\mathsf{TC}^0 \neq \mathsf{NC}^1$. There is no $\mathrm{poly}(n)$-precision modern Hopfield network with a constant number of layers and hidden dimension $d \leq O(n)$ that can solve the undirected graph connectivity problem.
Proof. By Theorem 4.6, any such network can be simulated by a $\mathsf{DLOGTIME}$-uniform $\mathsf{TC}^0$ circuit family. Since the undirected graph connectivity problem is $\mathsf{NC}^1$-hard (see Appendix D), a network solving it would imply $\mathsf{TC}^0 = \mathsf{NC}^1$, contradicting the assumption. ∎
Theorem 6.2.
Assume that $\mathsf{TC}^0 \neq \mathsf{NC}^1$. There is no $\mathrm{poly}(n)$-precision modern Hopfield network with a constant number of layers and hidden dimension $d \leq O(n)$ that can solve the tree isomorphism problem.
Proof. By Theorem 4.6, any such network can be simulated by a $\mathsf{DLOGTIME}$-uniform $\mathsf{TC}^0$ circuit family. Since the tree isomorphism problem is $\mathsf{NC}^1$-hard (Lemma D.8), a network solving it would imply $\mathsf{TC}^0 = \mathsf{NC}^1$, contradicting the assumption. ∎
Theorem 6.3.
Assume that $\mathsf{TC}^0 \neq \mathsf{NC}^1$. There is no $\mathrm{poly}(n)$-precision kernelized Hopfield network with a constant number of layers and hidden dimension $d \leq O(n)$ that can solve the undirected graph connectivity problem.
Theorem 6.4.
Assume that $\mathsf{TC}^0 \neq \mathsf{NC}^1$. There is no $\mathrm{poly}(n)$-precision kernelized Hopfield network with a constant number of layers and hidden dimension $d \leq O(n)$ that can solve the tree isomorphism problem.
7 Conclusion
In this study, we conduct a comprehensive theoretical investigation of modern Hopfield networks and kernelized Hopfield networks, establishing key limitations on their computational power. Our analysis focuses on the circuit complexity of individual architectural components, showing that these networks can be simulated by uniform $\mathsf{TC}^0$ circuits. Crucially, we prove that unless $\mathsf{TC}^0 = \mathsf{NC}^1$, both modern and kernelized Hopfield networks, when implemented with polynomial precision, a constant number of layers, and hidden dimension $d \leq O(n)$, are unable to solve problems such as undirected graph connectivity and tree isomorphism. These findings highlight the inherent expressivity constraints of Hopfield models, even in light of their demonstrated empirical success across various tasks.
However, our analysis has limitations. It primarily addresses the forward computation of these networks, assuming constant-depth nonlinear activation functions, and does not consider the training process or the implications of more complex activation mechanisms. Expanding this framework to encompass other Hopfield network variants and exploring whether similar computational bounds apply to advanced architectures would be a valuable direction for future research. Additionally, our results point to an intriguing gap between the theoretical constraints and the practical effectiveness of these models. Investigating how Hopfield networks achieve strong empirical performance despite these limitations could provide deeper insights into model scaling and architecture design, paving the way for more theoretically robust approaches.
References
- [1] Josh Alman and Zhao Song. Fast attention requires bounded entries. In NeurIPS, 2023.
- [2] Josh Alman and Zhao Song. How to capture higher-order correlations? generalizing matrix softmax attention to kronecker computation. arXiv preprint arXiv:2310.04064, 2023.
- [3] Josh Alman and Zhao Song. Fast rope attention: Combining the polynomial method and fast fourier transform. manuscript, 2024.
- [4] Josh Alman and Zhao Song. The fine-grained complexity of gradient computation for training large language models. In NeurIPS, 2024.
- [5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate, 2016.
- [6] François Charton. What is my math transformer doing? – three results on interpretability and generalization. arXiv preprint arXiv:2211.00170, 2022.
- [7] David Chiang. Transformers in uniform TC0. arXiv preprint arXiv:2409.13629, 2024.
- [8] Bo Chen, Xiaoyu Li, Yingyu Liang, Jiangxuan Long, Zhenmei Shi, and Zhao Song. Circuit complexity bounds for rope-based transformer architecture. arXiv preprint arXiv:2411.07602, 2024.
- [9] Bo Chen, Xiaoyu Li, Yingyu Liang, Zhenmei Shi, and Zhao Song. Bypassing the exponential dependency: Looped transformers efficiently learn in-context by multi-step gradient descent. arXiv preprint arXiv:2410.11268, 2024.
- [10] Bo Chen, Yingyu Liang, Zhizhou Sha, Zhenmei Shi, and Zhao Song. Hsr-enhanced sparse attention acceleration. arXiv preprint arXiv:2410.10165, 2024.
- [11] Stephen A Cook and Pierre McKenzie. Problems complete for deterministic logarithmic space. Journal of Algorithms, 8(3):385–394, 1987.
- [12] Stephen A Cook. A taxonomy of problems with fast parallel algorithms. Information and Control, 64(1-3):2–22, 1985.
- [13] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation, 2014.
- [14] Mete Demircigil, Judith Heusel, Matthias Löwe, Sven Upgang, and Franck Vermet. On a model of associative memory with huge storage capacity. Journal of Statistical Physics, 168(2):288–299, May 2017.
- [15] Yichuan Deng, Zhao Song, Yitan Wang, and Yuanyuan Yang. A nearly optimal size coreset algorithm with nearly linear time. arXiv preprint arXiv:2210.08361, 2022.
- [16] Andreas Fürst, Elisabeth Rumetshofer, Johannes Lehner, Viet Tran, Fei Tang, Hubert Ramsauer, David Kreil, Michael Kopp, Günter Klambauer, Angela Bitto-Nemling, and Sepp Hochreiter. Cloob: Modern hopfield networks with infoloob outperform clip, 2022.
- [17] Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, and Liwei Wang. Towards revealing the mystery behind chain of thought: a theoretical perspective. Advances in Neural Information Processing Systems, 36, 2023.
- [18] Yeqi Gao, Sridhar Mahadevan, and Zhao Song. An over-parameterized exponential regression. arXiv preprint arXiv:2303.16504, 2023.
- [19] Nicholas Guttenberg, Nathaniel Virgo, Olaf Witkowski, Hidetoshi Aoki, and Ryota Kanai. Permutation-equivariant neural networks applied to dynamics prediction. arXiv preprint arXiv:1612.04530, 2016.
- [20] Jerry Yao-Chieh Hu, Pei-Hsuan Chang, Haozheng Luo, Hong-Yu Chen, Weijian Li, Wei-Po Wang, and Han Liu. Outlier-efficient hopfield layers for large transformer-based models. In Forty-first International Conference on Machine Learning (ICML), 2024.
- [21] Jerry Yao-Chieh Hu, Bo-Yu Chen, Dennis Wu, Feng Ruan, and Han Liu. Nonparametric modern hopfield models. arXiv preprint arXiv:2404.03900, 2024.
- [22] Benjamin Hoover, Yuchen Liang, Bao Pham, Rameswar Panda, Hendrik Strobelt, Duen Horng Chau, Mohammed J. Zaki, and Dmitry Krotov. Energy transformer, 2023.
- [23] Jerry Yao-Chieh Hu, Thomas Lin, Zhao Song, and Han Liu. On computational limits of modern hopfield models: A fine-grained complexity analysis. In Forty-first International Conference on Machine Learning, 2024.
- [24] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen netzen. Diploma, Technische Universität München, 91(1):31, 1991.
- [25] John J Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982.
- [26] John J Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, 81(10):3088–3092, 1984.
- [27] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
- [28] Jerry Yao-Chieh Hu, Maojiang Su, En-Jui Kuo, Zhao Song, and Han Liu. Computational limits of low-rank adaptation (lora) for transformer-based models. arXiv preprint arXiv:2406.03136, 2024.
- [29] Jerry Yao-Chieh Hu, Dennis Wu, and Han Liu. Provably optimal memory capacity for modern hopfield models: Transformer-compatible dense associative memories as spherical codes. In Thirty-eighth Conference on Neural Information Processing Systems (NeurIPS), 2024.
- [30] Jerry Yao-Chieh Hu, Weimin Wu, Zhuoru Li, Sophia Pi, Zhao Song, and Han Liu. On statistical rates and provably efficient criteria of latent diffusion transformers (dits). In Thirty-eighth Conference on Neural Information Processing Systems (NeurIPS), 2024.
- [31] Jerry Yao-Chieh Hu, Donglin Yang, Dennis Wu, Chenwei Xu, Bo-Yu Chen, and Han Liu. On sparse modern hopfield model. In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), 2023.
- [32] Neil Immerman. Descriptive complexity. Springer Science & Business Media, 1998.
- [33] Russell Impagliazzo and Ramamohan Paturi. On the complexity of k-sat. Journal of Computer and System Sciences, 62(2):367–375, 2001.
- [34] Birgit Jenner, Pierre McKenzie, and Jacobo Torán. A note on the hardness of tree isomorphism. In Proceedings. Thirteenth Annual IEEE Conference on Computational Complexity, pages 101–105. IEEE, 1998.
- [35] Dmitry Krotov and John J Hopfield. Dense associative memory for pattern recognition. Advances in Neural Information Processing Systems, 29, 2016.
- [36] Dmitry Krotov and John Hopfield. Large associative memory problem in neurobiology and machine learning. arXiv preprint arXiv:2008.06996, 2020.
- [37] Leo Kozachkov, Ksenia V Kastanenka, and Dmitry Krotov. Building transformers from neurons and astrocytes. Proceedings of the National Academy of Sciences, 120(34):e2219150120, 2023.
- [38] Yekun Ke, Xiaoyu Li, Yingyu Liang, Zhenmei Shi, and Zhao Song. Advancing the understanding of fixed point iterations in deep neural networks: A detailed analytical study. arXiv preprint arXiv:2410.11279, 2024.
- [39] Yekun Ke, Xiaoyu Li, Zhao Song, and Tianyi Zhou. Faster sampling algorithms for polytopes with small treewidth. In 2024 IEEE International Conference on Big Data (BigData). IEEE, 2024.
- [40] Bingbin Liu, Jordan T Ash, Surbhi Goel, Akshay Krishnamurthy, and Cyril Zhang. Transformers learn shortcuts to automata. arXiv preprint arXiv:2210.10749, 2022.
- [41] Chenyang Li, Yingyu Liang, Zhenmei Shi, Zhao Song, and Tianyi Zhou. Fourier circuits in neural networks and transformers: A case study of modular arithmetic with multiple inputs. arXiv preprint arXiv:2402.09469, 2024.
- [42] Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song, and Junwei Yu. Fast john ellipsoid computation with differential privacy optimization. arXiv preprint arXiv:2408.06395, 2024.
- [43] Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song, and Yufa Zhou. Fine-grained attention i/o complexity: Comprehensive analysis for backward passes. arXiv preprint arXiv:2410.09397, 2024.
- [44] Yingyu Liang, Heshan Liu, Zhenmei Shi, Zhao Song, Zhuoyan Xu, and Junze Yin. Conv-basis: A new paradigm for efficient attention inference and gradient computation in transformers. arXiv preprint arXiv:2405.05219, 2024.
- [45] Yingyu Liang, Jiangxuan Long, Zhenmei Shi, Zhao Song, and Yufa Zhou. Beyond linear approximations: A novel pruning approach for attention matrix. arXiv preprint arXiv:2410.11261, 2024.
- [46] Xiaoyu Li, Yingyu Liang, Zhenmei Shi, and Zhao Song. A tighter complexity analysis of sparsegpt. arXiv preprint arXiv:2408.12151, 2024.
- [47] Xiaoyu Li, Jiangxuan Long, Zhao Song, and Tianyi Zhou. Fast second-order method for neural network under small treewidth setting. In 2024 IEEE International Conference on Big Data (BigData). IEEE, 2024.
- [48] Zhiyuan Li, Hong Liu, Denny Zhou, and Tengyu Ma. Chain of thought empowers transformers to solve inherently serial problems. In The Twelfth International Conference on Learning Representations, 2024.
- [49] Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song, and Yufa Zhou. Looped relu mlps may be all you need as practical programmable computers. arXiv preprint arXiv:2410.09375, 2024.
- [50] Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song, and Yufa Zhou. Multi-layer transformers gradient can be approximated in almost linear time. arXiv preprint arXiv:2408.13233, 2024.
- [51] Yingyu Liang, Zhenmei Shi, Zhao Song, and Chiwun Yang. Toward infinite-long prefix in transformer. arXiv preprint arXiv:2406.14036, 2024.
- [52] Yingyu Liang, Zhenmei Shi, Zhao Song, and Yufa Zhou. Differential privacy of cross-attention with provable guarantee. arXiv preprint arXiv:2407.14717, 2024.
- [53] Yingyu Liang, Zhenmei Shi, Zhao Song, and Yufa Zhou. Tensor attention training: Provably efficient learning of higher-order transformers. arXiv preprint arXiv:2405.16411, 2024.
- [54] Xiaoyu Li, Zhao Song, and Junwei Yu. Quantum speedups for approximating the john ellipsoid. arXiv preprint arXiv:2408.14018, 2024.
- [55] William Merrill and Ashish Sabharwal. A logic for expressing log-precision transformers. Advances in Neural Information Processing Systems, 36, 2023.
- [56] William Merrill and Ashish Sabharwal. The parallelism tradeoff: Limitations of log-precision transformers. Transactions of the Association for Computational Linguistics, 11:531–545, 2023.
- [57] William Merrill, Ashish Sabharwal, and Noah A Smith. Saturated transformers are constant-depth threshold circuits. Transactions of the Association for Computational Linguistics, 10:843–856, 2022.
- [58] Fabian Paischer, Thomas Adler, Vihang Patil, Angela Bitto-Nemling, Markus Holzleitner, Sebastian Lehner, Hamid Eghbal-zadeh, and Sepp Hochreiter. History compression via language models in reinforcement learning, 2023.
- [59] Hubert Ramsauer, Bernhard Schafl, Johannes Lehner, Philipp Seidl, Michael Widrich, Lukas Gruber, Markus Holzleitner, Thomas Adler, David Kreil, Michael K Kopp, et al. Hopfield networks is all you need. In International Conference on Learning Representations, 2021.
- [60] Siamak Ravanbakhsh, Jeff Schneider, and Barnabas Poczos. Deep learning with sets and point clouds. arXiv preprint arXiv:1611.04500, 2016.
- [61] Michael Sipser. Introduction to the theory of computation. ACM Sigact News, 27(1):27–29, 1996.
- [62] Zhenmei Shi, Yifei Ming, Xuan-Phi Nguyen, Yingyu Liang, and Shafiq Joty. Discovering the gems in early layers: Accelerating long-context llms with 1000x input token reduction. arXiv preprint arXiv:2409.17422, 2024.
- [63] Philipp Seidl, Philipp Renz, Natalia Dyubankova, Paulo Neves, Jonas Verhoeven, Jorg K Wegner, Marwin Segler, Sepp Hochreiter, and Gunter Klambauer. Improving few- and zero-shot reaction template prediction using modern hopfield networks. Journal of Chemical Information and Modeling, 62(9):2111–2120, 2022.
- [64] Johannes Schimunek, Philipp Seidl, Lukas Friedrich, Daniel Kuhn, Friedrich Rippmann, Sepp Hochreiter, and Günter Klambauer. Context-enriched molecule representations improve few-shot drug discovery. arXiv preprint arXiv:2305.09481, 2023.
- [65] Anshumali Shrivastava, Zhao Song, and Zhaozhuo Xu. A theoretical analysis of nearest neighbor search on approximate near neighbor graph. arXiv preprint arXiv:2303.06210, 2023.
- [66] Zhao Song and Chiwun Yang. An automatic learning rate schedule algorithm for achieving faster convergence and steeper descent. arXiv preprint arXiv:2310.11291, 2023.
- [67] Zhao Song, Lichen Zhang, and Ruizhe Zhang. Training multi-layer over-parametrized neural network in subquadratic time. In Innovations in Theoretical Computer Science (ITCS), pages 93:1–93:15, 2024.
- [68] Heribert Vollmer. Introduction to circuit complexity: a uniform approach. Springer Science & Business Media, 1999.
- [69] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2023.
- [70] Dennis Wu, Jerry Yao-Chieh Hu, Teng-Yun Hsiao, and Han Liu. Uniform memory retrieval with larger capacity for modern hopfield models. arXiv preprint arXiv:2404.03827, 2024.
- [71] Dennis Wu, Jerry Yao-Chieh Hu, Weijian Li, Bo-Yu Chen, and Han Liu. STanhop: Sparse tandem hopfield model for memory-enhanced time series prediction. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
- [72] Michael Widrich, Bernhard Schäfl, Hubert Ramsauer, Milena Pavlović, Lukas Gruber, Markus Holzleitner, Johannes Brandstetter, Geir Kjetil Sandve, Victor Greiff, Sepp Hochreiter, and Günter Klambauer. Modern hopfield networks and attention for immune repertoire classification, 2020.
- [73] Chenwei Xu, Yu-Chao Huang, Jerry Yao-Chieh Hu, Weijian Li, Ammar Gilani, Hsi-Sheng Goan, and Han Liu. Bishop: Bi-directional cellular learning for tabular data with generalized sparse modern hopfield model. In Forty-first International Conference on Machine Learning (ICML), 2024.
- [74] Zhuoyan Xu, Zhenmei Shi, and Yingyu Liang. Do large language models have compositional ability? an investigation into limitations and scalability. In First Conference on Language Modeling, 2024.
Appendix
Roadmap.
In Section A, we give an extended discussion of related works. In Section B, we introduce essential definitions for circuit complexity and floating-point numbers used in this paper. In Section C, we present additional theoretical results for the modern Hopfield model with chain of thought (CoT). In Section D, we present the formulation of the graph connectivity and tree isomorphism problems. In Section E, we present the formal proofs for kernelized Hopfield networks.
Appendix A More Related Works
Modern Hopfield Networks and Applications in Deep Learning.
Classical Hopfield networks [25, 26, 35] have long been recognized for their ability to emulate human brain associative memory, primarily focusing on the storage and retrieval of memory patterns. Originally limited by linear memory capacity [35], these networks now achieve exponential storage capacity [14] and kernelized extensions [70], allowing for efficient handling of vast numbers of memory patterns. Architectural innovations [22, 63, 16] have integrated Hopfield networks into models like Transformers, serving as advanced attention mechanisms [69]. Their biological plausibility [37, 36] has been a key focus, aligning computational models more closely with human associative memory. These developments have expanded applications across various fields, including tabular data learning [73], drug discovery [64], immunology [72], time series forecasting [70], reinforcement learning [58], and large foundation models [31, 16, 20], demonstrating their versatility in addressing complex problems.
Limitations of Transformer Architectures.
While Transformers excel in natural language processing, their capabilities in mathematical computation remain constrained [6]. To understand these limitations, researchers have examined two types of Transformers: (1) average-head attention, which assigns 1 to the largest probability entry and 0 to others, and (2) softmax-attention, which computes probabilities proportionally to exponentiated scores. Average-head Transformers can recognize languages beyond $\mathsf{AC}^0$, but they are confined to $\mathsf{TC}^0$, as shown by [57]. Similarly, softmax-attention Transformers are also within $\mathsf{TC}^0$ [40]. Extending this, [56] formalize a generalized similarity function and demonstrate that softmax-attention Transformers fall under $\mathsf{L}$-uniform $\mathsf{TC}^0$. Through a first-order logic framework with majority quantifiers [32], [55] show that these Transformers can be simulated in $\mathsf{DLOGTIME}$-uniform $\mathsf{TC}^0$. [7] further refines the precision analysis, proving that Transformers with minimal approximation error also belong to this class. For practical problems, [17] reveal that unless $\mathsf{TC}^0 = \mathsf{NC}^1$, log-precision Transformers cannot solve arithmetic, equation-solving, or Context-Free Grammar (CFG) membership tasks [61]. These findings explain why Transformers often struggle with mathematical reasoning. Recently, [8] studied the circuit complexity bounds of RoPE-based transformer architectures. Relevant works related to this topic include looped transformers [3, 49, 9, 38, 50, 10], accelerating attention computation [8, 31, 45, 51, 73, 21, 67, 70, 29, 23, 47, 39, 71, 54, 43, 44, 62, 46, 42, 30], and more related works [74, 66, 15, 65, 18, 52, 41, 28].
Appendix B Background
B.1 Background on Circuit Complexity
Definition B.1 (Boolean circuit).
We define a Boolean circuit that has $n$ inputs and $m$ outputs as a function computed on a directed acyclic graph. Each node in the graph represents a logic gate from $\{\mathsf{AND}, \mathsf{OR}, \mathsf{NOT}\}$. The graph contains input nodes with in-degree 0 and output nodes with out-degree 0. The circuit evaluates the value at each non-input gate based on the inputs it receives from other gates. The size of the circuit is the total number of gates, and its depth is the maximum length of a directed path.
To address varying input sizes, the concept of circuit families is introduced.
Definition B.2 (Circuit family and language recognition, Definition 1.10 on page 10 in [68]).
A circuit family is a set $\mathcal{C} = \{C_n\}_{n \in \mathbb{N}}$, where for each $n \in \mathbb{N}$, $C_n$ is a circuit with $n$ inputs. The family computes a function $f: \{0,1\}^* \to \{0,1\}$ if for every $x \in \{0,1\}^*$, $C_{|x|}(x) = f(x)$. Similarly, $\mathcal{C}$ is said to compute a language $L \subseteq \{0,1\}^*$ if $\mathcal{C}$ computes the indicator function of $L$.
A language $L$ is a set of binary strings that represent the "yes" instances of a given decision problem, where $\{0,1\}^*$ denotes the set of all finite binary strings of any length.
The complexity classes $\mathsf{NC}^i$, $\mathsf{AC}^i$, and $\mathsf{TC}^i$ characterize languages recognizable by Boolean circuits under different constraints. $\mathsf{NC}^i$ consists of languages that can be computed using circuits with polynomial size, depth $O((\log n)^i)$, and bounded fan-in $\mathsf{AND}$, $\mathsf{OR}$, and $\mathsf{NOT}$ gates. Extending this, $\mathsf{AC}^i$ allows unbounded fan-in for $\mathsf{AND}$ and $\mathsf{OR}$ gates while maintaining the same size and depth constraints, enabling the recognition of a broader range of languages. Further generalizing, $\mathsf{TC}^i$ incorporates $\mathsf{MAJORITY}$ (threshold) gates. These circuits also support unbounded fan-in $\mathsf{AND}$, $\mathsf{OR}$, and $\mathsf{NOT}$ gates while adhering to the same size and depth limits, making them more expressive than both $\mathsf{NC}^i$ and $\mathsf{AC}^i$.
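As a toy illustration of the gate primitives that separate these classes, the snippet below evaluates unbounded fan-in AND/OR gates and a MAJORITY (threshold) gate on Boolean inputs; it is only meant to make the gate types concrete, not to construct actual constant-depth circuits.

```python
def and_gate(bits):      # unbounded fan-in AND (available in AC^0 and TC^0)
    return int(all(bits))

def or_gate(bits):       # unbounded fan-in OR (available in AC^0 and TC^0)
    return int(any(bits))

def majority_gate(bits): # threshold gate: 1 iff at least half the inputs are 1 (TC^0)
    return int(sum(bits) >= (len(bits) + 1) // 2)

x = [1, 0, 1, 1, 0]
print(and_gate(x), or_gate(x), majority_gate(x))   # 0 1 1
```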
The class $\mathsf{P}$ encompasses languages decidable by deterministic Turing machines in polynomial time. Circuit hierarchies relate $\mathsf{NC}^i$, $\mathsf{AC}^i$, and $\mathsf{TC}^i$, showing that for any fixed $i$,
$$\mathsf{NC}^i \subseteq \mathsf{AC}^i \subseteq \mathsf{TC}^i \subseteq \mathsf{NC}^{i+1}.$$
However, whether $\mathsf{TC}^0$ is strictly contained in $\mathsf{NC}^1$ remains unresolved.
Uniform circuit families are more practical and relevant in computational theory compared to non-uniform families. For $\mathsf{L}$-uniformity, a Turing machine with logarithmic space must be able to output the circuit for any given input size. Alternatively, $\mathsf{DLOGTIME}$-uniformity requires a random access Turing machine to construct circuits in logarithmic time. These definitions ensure feasibility and consistency in circuit design. While $\mathsf{DLOGTIME}$-uniformity is generally equivalent to $\mathsf{L}$-uniformity, differences arise in smaller circuit classes that lack the ability to simulate their own generation process.
B.2 Floating-Point Numbers
This section presents key definitions necessary for our computational framework. We introduce basic concepts related to floating-point numbers and the definitions of their operations, which are fundamental to implementing efficient Hopfield network computations.
Definition B.3 (Floating-point number, Definition 9 in [7]).
A $p$-bit floating-point number is represented as a pair $\langle m, e \rangle$, where $m$ is the significand and $e$ is the exponent. Specifically, $m$ and $e$ are integers representable with $O(p)$ bits. The floating-point value of $\langle m, e \rangle$ is the real number $m \cdot 2^e$. The set of all $p$-bit floating-point numbers is denoted as $\mathbb{F}_p$.
To work with floating-point numbers effectively, we define specific rules for rounding and performing arithmetic operations:
Definition B.4 (Rounding, Definition 9 in [7]).
For any real number or floating-point value $x$, we define $\mathrm{round}_p(x)$ as the $p$-bit floating-point number with an even significand that is closest to $x$.
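The toy sketch below illustrates the spirit of Definitions B.3 and B.4: a real value is rounded to a nearest representable number of the form $m \cdot 2^e$ with a $p$-bit significand, breaking ties toward an even significand. It is a simplified illustration under assumed conventions, not the exact scheme of [7].

```python
def round_p(x, p):
    """Round x to a value m * 2**e with a p-bit significand (2**(p-1) <= |m| < 2**p),
    breaking ties toward an even significand (a simplified round-to-nearest-even)."""
    if x == 0.0:
        return 0, 0
    sign = -1 if x < 0 else 1
    y, e = abs(x), 0
    while y >= 2 ** p:            # scale down until the significand fits in p bits
        y, e = y / 2, e + 1
    while y < 2 ** (p - 1):       # scale up until the significand is normalized
        y, e = y * 2, e - 1
    m = int(y)
    frac = y - m
    if frac > 0.5 or (frac == 0.5 and m % 2 == 1):
        m += 1                    # round up; exact ties go to the even significand
    if m == 2 ** p:               # rounding overflowed the significand: renormalize
        m, e = m // 2, e + 1
    return sign * m, e

m, e = round_p(3.14159, p=8)
print(m, e, m * 2.0 ** e)         # 201 -6 3.140625
```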
Based on these definitions, we outline the main arithmetic operations for floating-point numbers required for Hopfield networks:
These operations are not merely theoretical; they can be practically implemented in hardware, as demonstrated by the following lemma:
Lemma B.5 (Basic floating-point operations in $\mathsf{TC}^0$, Lemmas 10, 11 in [7]).
If the precision $p \leq \mathrm{poly}(n)$, then the following results hold:
- • Part 1: Let $x, y$ be two $p$-bit floating-point numbers. The arithmetic operations of addition, multiplication, division, and comparison of $x$ and $y$, as described in [7], can be executed using a $\mathsf{DLOGTIME}$-uniform threshold circuit characterized by constant depth and a size bounded by $\mathrm{poly}(n)$. The maximum circuit depth required for these operations is a fixed constant.
- • Part 2: Let $x_1, \ldots, x_n$ be $p$-bit floating-point numbers. Iterative multiplication of $x_1, x_2, \ldots, x_n$ can be executed with a $\mathsf{DLOGTIME}$-uniform threshold circuit characterized by constant depth and a size bounded by $\mathrm{poly}(n)$. The circuit depth for this operation is a fixed constant.
- • Part 3: Let $x_1, \ldots, x_n$ be $p$-bit floating-point numbers. Iterative addition of $x_1, x_2, \ldots, x_n$, where rounding is applied after the summation is completed, can be achieved using a $\mathsf{DLOGTIME}$-uniform threshold circuit characterized by constant depth and a size bounded by $\mathrm{poly}(n)$. The depth required for this operation is a fixed constant.
Lemma B.6 (Computing $\exp$ in $\mathsf{TC}^0$, Lemma 12 in [7]).
Assume that the precision $p \leq \mathrm{poly}(n)$. For any $p$-bit floating-point number $x$, there exists a threshold circuit characterized by $\mathsf{DLOGTIME}$ uniformity, constant depth, and a size bounded by $\mathrm{poly}(n)$ that approximates $\exp(x)$ with relative error at most $2^{-p}$. The depth of the circuit required for this computation is a fixed constant.
Lemma B.7 (Approximating the square root in $\mathsf{TC}^0$, Lemma 12 in [7]).
Assume that the precision $p \leq \mathrm{poly}(n)$. There exists a threshold circuit characterized by $\mathsf{DLOGTIME}$ uniformity, constant depth, and a size bounded by $\mathrm{poly}(n)$ that computes $\sqrt{x}$ for any $p$-bit floating-point number $x$, with relative error at most $2^{-p}$. The circuit depth required for this computation is a fixed constant.
Appendix C Additional Theoretical Results for Modern Hopfield Model with Chain of Thought
We provide theoretical results for modern Hopfield networks that incorporate the chain of thought (CoT) mechanism. We first describe the architecture of the modern Hopfield model.
C.1 Modern Hopfield Model Architecture with CoT
Let $V$ be the vocabulary, and consider a Hopfield model with parameters $\theta$ and a maximum input length $n_{\max}$. This model maps an input token sequence $(x_1, \ldots, x_n) \in V^n$ (for $n \leq n_{\max}$) to a probability distribution over $V$, and we denote it by $p_\theta$. We write $\pi_\theta$ for the function that selects the token in $V$ which maximizes $p_\theta$, defined as:
$$\pi_\theta(x_1, \ldots, x_n) := \arg\max_{y \in V}\; p_\theta(y \mid x_1, \ldots, x_n).$$
The next-token generator with parameters $\theta$ and maximum input length $n_{\max}$ maps sequences over $V$ to longer sequences over $V$. It is recursively defined by appending, at each step, the token selected by $\pi_\theta$ to the current sequence; for every input sequence of length at most $n_{\max}$, the base case is the sequence itself.
Architecture Overview: The architecture is similar to GPT-like models, comprising four main components: 1. a token embedding layer, 2. a position encoding layer, 3. an output linear layer, and 4. a stack of identical decoder layers. Each decoder layer consists of a single Hopfield layer (see Definition 3.2) and a two-layer ReLU feedforward network (see Definition 3.4). Together, these decoder layers form a multi-layer modern Hopfield network (see Definition 3.3). The model parameters $\theta$ collect the trainable parameters of all of these components (see Algorithm 1).
The token embedding layer maps tokens in $V$ to $d$-dimensional embedding vectors; the position encoding layer maps positions in $[n_{\max}]$ to $d$-dimensional vectors; and the output layer linearly maps the final hidden representation to a probability distribution over $V$.
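The next-token generator described above is the usual greedy autoregressive loop, sketched below; `model_logits` is a hypothetical stand-in for the Hopfield-based decoder stack, and all names are illustrative.

```python
import numpy as np

def greedy_generate(model_logits, prompt, num_steps, n_max):
    """Repeatedly append argmax tokens: x_{n+1} = argmax_y p(y | x_1..x_n).
    `model_logits(tokens) -> np.ndarray over the vocabulary` is an assumed interface."""
    tokens = list(prompt)
    for _ in range(num_steps):
        if len(tokens) >= n_max:
            break
        logits = model_logits(tokens)          # scores over the vocabulary
        tokens.append(int(np.argmax(logits)))  # greedy next-token choice
    return tokens

# Toy stand-in "model": always prefers the token (last_token + 1) mod 10.
toy = lambda toks: np.eye(10)[(toks[-1] + 1) % 10]
print(greedy_generate(toy, prompt=[3], num_steps=4, n_max=16))  # [3, 4, 5, 6, 7]
```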
C.2 Results of Hopfield with CoT
In this subsection, we present the theoretical results for the modern Hopfield model. First, we show that the Hopfield update rule can achieve an attention mechanism.
Lemma C.1 (The Hopfield update rule can be interpreted as a form of the transformer attention mechanism, page 5 in [59]).
Let $N$ be the number of stored (key) patterns, represented as row vectors of $Y$, and let $S$ be the number of state (query) patterns, represented as row vectors of $R$. Define $Q := R W_Q$, $K := Y W_K$, and $V := Y W_K W_V$. The Hopfield update rule can be expressed as:
$$Z = \mathrm{softmax}\big(\beta\, Q K^\top\big)\, V, \qquad (4)$$
where $\beta$ is the inverse temperature. Here, $d_k$ is the dimension of the Hopfield space, $d_y$ and $d_r$ are the dimensions of the stored and state patterns, and $\mathrm{softmax}(\beta\, Q K^\top)$ represents the attention scores.
The middle part of Eq. (4) corresponds to the attention mechanism of the transformer. Specifically, when $R$ and $Y$ are both substituted with the input $X$, the Hopfield update rule reduces to transformer self-attention.
Definition C.2 (Word problem for a group $G$, [48]).
For a group $G$ and elements $g_1, g_2, \ldots, g_n$ from $G$, the word problem $\mathsf{WP}(G)$ is defined as the problem of determining whether the product $g_1 g_2 \cdots g_n$ equals the identity element of $G$.
Lemma C.3 (Theorem 3.5 in [48]).
Assuming $\mathsf{TC}^0 \neq \mathsf{NC}^1$, a Transformer with $T(n) = O(n)$ steps of CoT, $O(\log n)$ embedding size, and constant precision can solve the word problem $\mathsf{WP}(S_5)$, where $S_5$ is the permutation group over 5 elements. However, a Transformer with constant depth, polynomial embedding size, and $O(\log n)$ precision, but without CoT, cannot solve it.
Theorem C.4.
Assuming $\mathsf{TC}^0 \neq \mathsf{NC}^1$, the modern Hopfield model with $T(n) = O(n)$ steps of CoT, $O(\log n)$ embedding size, and constant precision can solve the word problem $\mathsf{WP}(S_5)$. However, the modern Hopfield model with constant depth, polynomial embedding size, and $O(\log n)$ precision, but without CoT, cannot solve it.
Appendix D Graph Connectivity and Tree Isomorphism Problems
D.1 Undirected Graph Connectivity Problem
Definition D.1 (Undirected graph connectivity problem).
The undirected graph connectivity problem determines whether a path connects two specified vertices $s$ and $t$ in an undirected graph $G = (V, E)$.
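For concreteness, the undirected graph connectivity problem is exactly what the following breadth-first search decides; this standard sequential algorithm is included only to make the problem precise, whereas Theorem 6.1 concerns whether constant-depth Hopfield networks can decide it.

```python
from collections import deque

def connected(n, edges, s, t):
    """Return True iff vertices s and t are connected in the undirected graph
    on vertices 0..n-1 with the given edge list."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

print(connected(5, [(0, 1), (1, 2), (3, 4)], 0, 2))  # True
print(connected(5, [(0, 1), (1, 2), (3, 4)], 0, 4))  # False
```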
Definition D.2 ($\mathsf{L}$, page 8 in [12]).
The class $\mathsf{L}$ is the set of languages recognized by a deterministic Turing machine in $O(\log n)$ space.
Following the definitions, we have some results.
Fact D.3 (Proposition 4.1 in [12]).
D.2 Tree Isomorphism Problem
Definition D.5 (Colored trees, page 2 on [34]).
A tree with n nodes is said to be colored if each node in the tree is labeled with a positive integer no greater than n.
Definition D.6 (Isomorphism, page 2 on [34]).
An isomorphism between two colored trees $T_1$ and $T_2$ is a bijection between their node sets that satisfies the following conditions:
-
•
The root of $T_1$ maps to the root of $T_2$.
-
•
The colors of the nodes are preserved.
-
•
The structure of the edges is maintained.
We write $T_1 \cong T_2$ if an isomorphism exists between them. These notions are similarly applicable to non-colored trees.
Following the definition, we explore its computational implications.
Definition D.7 (Tree isomorphism problem, page 2 on [34]).
Given two trees $T_1$ and $T_2$, the tree isomorphism problem is to determine whether $T_1 \cong T_2$. If the trees are colored, the problem is referred to as the colored tree isomorphism problem.
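As a reference point, the classical AHU encoding below decides rooted (uncolored) tree isomorphism by assigning each subtree a canonical string; it is a standard sequential algorithm included only to make the problem concrete.

```python
def canonical(tree, root):
    """AHU canonical form of a rooted tree given as an adjacency dict {node: [children]}."""
    labels = sorted(canonical(tree, c) for c in tree.get(root, []))
    return "(" + "".join(labels) + ")"

def isomorphic(t1, r1, t2, r2):
    return canonical(t1, r1) == canonical(t2, r2)

# Two rooted trees on 4 nodes with the same shape but different node names.
t1 = {0: [1, 2], 1: [3]}
t2 = {"a": ["b", "c"], "c": ["d"]}
print(isomorphic(t1, 0, t2, "a"))   # True
```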
Lemma D.8 (Theorem 3.3 in [34]).
In the string representation, the colored tree isomorphism problem and the tree isomorphism problem are $\mathsf{NC}^1$-complete under $\mathsf{AC}^0$ reductions.
Appendix E Proofs of Section 5
E.1 Proofs of Lemma 5.1
Lemma E.1 (Lemma 5.1 Restated).
Assume that $p \leq \mathrm{poly}(n)$ and that the linear affine feature map $\Phi$ is defined as $\Phi(x) := W x + b$ for $x \in \mathbb{R}^d$, where $W \in \mathbb{R}^{d \times d}$ and $b \in \mathbb{R}^d$, and the linear kernel is $\kappa(u, v) := \langle \Phi(u), \Phi(v) \rangle$. Then the kernelized Hopfield attention matrix $A^\kappa$ can be simulated using a $\mathsf{DLOGTIME}$-uniform threshold circuit with constant depth and size bounded by $\mathrm{poly}(n)$.
Proof.
We compute each entry of the kernelized attention matrix $A^\kappa$ as follows.
Using Lemma 4.1, the scalar product $\langle \Phi((R W_Q)_{i,*}),\, \Phi((Y W_K)_{j,*}) \rangle$ can be simulated by a $\mathsf{DLOGTIME}$-uniform threshold circuit which has $\mathrm{poly}(n)$ size and constant depth.
Next, by Part 1 of Lemma B.5, the multiplication by the scaling factor $\beta$ requires constant depth.
Finally, by Lemma B.6, the exponential function can be computed with constant depth and size $\mathrm{poly}(n)$.
Summing the depths, the overall depth for one entry is constant.
As all entries can be computed in parallel, the attention matrix requires a $\mathsf{DLOGTIME}$-uniform threshold circuit which has constant depth and size $\mathrm{poly}(n)$.
∎
E.2 Proofs of Lemma 5.2
Lemma E.2 (Lemma 5.2 Restated).
Assume that the precision $p \leq \mathrm{poly}(n)$. Then we can simulate the kernelized Hopfield layer using a $\mathsf{DLOGTIME}$-uniform threshold circuit with constant depth and size bounded by $\mathrm{poly}(n)$.
Proof.
To compute a single layer, we must multiply the matrices appearing in Definition 3.6. First, we compute the kernelized attention matrix and its normalization term. Using Part 1 and Part 3 of Lemma B.5, the normalization term can be simulated with constant depth and $\mathrm{poly}(n)$ size.
From Lemma 5.1, computing the kernelized attention matrix $A^\kappa$ requires constant depth.
Next, the remaining matrix products can be simulated with constant depth and $\mathrm{poly}(n)$ size, as shown in Lemma 4.1. Finally, the normalization can be performed using parallel division, requiring constant depth and size $\mathrm{poly}(n)$, which follows from Part 1 of Lemma B.5.
Combining these results, the total depth is constant.
Since the operations can run in parallel with size $\mathrm{poly}(n)$, the kernelized Hopfield layer is simulated by a $\mathsf{DLOGTIME}$-uniform threshold circuit with constant depth. ∎
E.3 Proofs of Lemma 5.3
Lemma E.3 (Lemma 5.3 Restated).
Assume that, for each $i \in [m]$, we can simulate the function $g_i$ in $\mathsf{KHN}$ by a $\mathsf{DLOGTIME}$-uniform threshold circuit with size $\mathrm{poly}(n)$ and constant depth. Let the precision $p \leq \mathrm{poly}(n)$. Then we can simulate kernelized Hopfield networks using a $\mathsf{DLOGTIME}$-uniform threshold circuit with depth $O(m)$ and size bounded by $\mathrm{poly}(n)$.
Proof.
By assumption, for each $i \in [m]$, we are able to simulate $g_i$ by a $\mathsf{DLOGTIME}$-uniform circuit of constant depth and $\mathrm{poly}(n)$ size. From Lemma 5.2, each kernelized Hopfield layer has constant depth.
To compute $\mathsf{KHN}$, we need circuits for all $m$ kernelized Hopfield layers and all $m$ component functions $g_i$. The total circuit depth therefore becomes $O(m)$.
The circuit size remains $\mathrm{poly}(n)$, proving that the kernelized Hopfield network is computable under these conditions. ∎
E.4 Proofs of Theorem 5.4
Theorem E.4 (Theorem 5.4 Restated).
Assume that, for each $i \in [m]$, we can simulate the function $g_i$ in $\mathsf{KHN}$ by a $\mathsf{DLOGTIME}$-uniform threshold circuit with constant depth and size $\mathrm{poly}(n)$. If precision $p \leq \mathrm{poly}(n)$, hidden dimension $d \leq O(n)$, and number of layers $m \leq O(1)$, then we can simulate kernelized Hopfield networks by a $\mathsf{DLOGTIME}$-uniform circuit family within $\mathsf{TC}^0$.
Proof.
With $p \leq \mathrm{poly}(n)$, Lemma 5.3 ensures that the circuit depth for $\mathsf{KHN}$ is $O(m) = O(1)$,
with size $\mathrm{poly}(n)$. Thus, the kernelized Hopfield network can be implemented using a $\mathsf{DLOGTIME}$-uniform circuit family within $\mathsf{TC}^0$. ∎