
On the Expressive Power of Modern Hopfield Networks

Xiaoyu Li (Stevens Institute of Technology)    Yuanpeng Li (Beijing Normal University)    Yingyu Liang (The University of Hong Kong; University of Wisconsin-Madison)    Zhenmei Shi (University of Wisconsin-Madison)    Zhao Song (The Simons Institute for the Theory of Computing at UC Berkeley)

Modern Hopfield networks (MHNs) have emerged as powerful tools in deep learning, capable of replacing components such as pooling layers, LSTMs, and attention mechanisms. Recent advancements have enhanced their storage capacity, retrieval speed, and error rates. However, the fundamental limits of their computational expressiveness remain unexplored. Understanding the expressive power of MHNs is crucial for optimizing their integration into deep learning architectures. In this work, we establish rigorous theoretical bounds on the computational capabilities of MHNs using circuit complexity theory. Our key contribution is showing that MHNs are in $\mathsf{DLOGTIME}$-uniform $\mathsf{TC}^0$. Hence, unless $\mathsf{TC}^0=\mathsf{NC}^1$, a $\mathrm{poly}(n)$-precision modern Hopfield network with a constant number of layers and $O(n)$ hidden dimension cannot solve $\mathsf{NC}^1$-hard problems such as the undirected graph connectivity problem and the tree isomorphism problem. We also extend our results to kernelized Hopfield networks. These results demonstrate the limits of the expressive power of modern Hopfield networks. Moreover, our theoretical analysis provides insights to guide the development of new Hopfield-based architectures.

1 Introduction

Hopfield networks [25], initially introduced as associative memories capable of storing and retrieving patterns, have undergone significant advancements in recent years. These developments led to the emergence of modern Hopfield networks [59]. Modern Hopfield networks (MHNs) address the limitations of their predecessors, particularly in terms of storage capacity, retrieval speed, and error rates. An important feature of MHNs is their ability to function as specialized components within deep networks, integrating memory capabilities and supporting diverse functionalities. For instance, MHNs can replace conventional layers such as pooling layers, permutation-equivariant layers [19, 60], GRU [13], LSTM [24, 27], and attention mechanisms [69, 5].

Understanding modern Hopfield networks’ computational capabilities and limitations is critical for their effective application. Specifically, it is crucial to explore the computational operations these networks can implement and the complexity of the problems they can solve collectively. These investigations are vital not only for theoretical insights but also for guiding the development of more efficient and powerful models.

While prior research has primarily focused on the dynamics and capacity of modern Hopfield networks [23, 70], their expressiveness from a circuit complexity perspective remains underexplored. This gap raises an important question:

What are the fundamental limits of modern Hopfield networks in terms of circuit complexity?

To address these questions, we turn to the framework of circuit complexity theory, which provides a robust method for analyzing the computational resources required to perform specific tasks. By mapping modern Hopfield networks to computational circuits, we can rigorously assess their capabilities and establish upper and lower bounds on the classes of problems they can solve.

In this work, we present a comprehensive theoretical investigation into the circuit complexity bounds of MHNs. Our approach involves analyzing the architecture of MHNs and the computational complexity of their components, such as the Hopfield layer. We show that uniform $\mathsf{TC}^0$ circuits can efficiently simulate these models.

Our main contributions can be outlined as follows:

  • We prove that any $\mathrm{poly}(n)$-precision modern Hopfield network or kernelized Hopfield network with constant depth and $O(n)$ hidden dimension is in the $\mathsf{DLOGTIME}$-uniform $\mathsf{TC}^0$ circuit family (Theorems 4.6 and 5.4).

  • We prove that unless $\mathsf{TC}^0=\mathsf{NC}^1$, $\mathrm{poly}(n)$-precision modern Hopfield networks or kernelized Hopfield networks with constant layers and $O(n)$ hidden dimension cannot solve $\mathsf{NC}^1$-hard problems such as the undirected graph connectivity problem and the tree isomorphism problem (Theorems 6.1, 6.2, 6.3 and 6.4).

Roadmap.

In Section 2, we provide an overview of related works. In Section 3, we introduce the notations and definitions needed for modern Hopfield networks. In Section 4, we analyze the circuit complexity of modern Hopfield networks. In Section 5, we focus on the circuit complexity results of kernelized Hopfield networks. In Section 6, we discuss the hardness results for modern Hopfield networks. Finally, in Section 7, we conclude with a summary of our theoretical results. We defer additional preliminaries and technical details to the Appendix.

2 Related Works

In this section, we review related work on Hopfield models and the complexity of Transformers. In Appendix A, we discuss additional related works.

2.1 Theory of Modern Hopfield Networks

Modern Hopfield models have demonstrated both empirical success and theoretical value, providing a framework for understanding transformer attention mechanisms and Transformer architectures. [31, 70] propose a unified framework for analyzing and deriving modern Hopfield models using entropic regularizers. This framework includes a range of sparse variants, such as sparse and generalized sparse models, and also integrates the standard modern Hopfield model [59] as a special case. Despite these advancements, the modern Hopfield paradigm remains incomplete, lacking efficient implementations and variants, as highlighted by [31]. To address these limitations, [21] propose a principled nonparametric approach for developing efficient variants of modern Hopfield models, including linear, top-K, and random feature-based versions. This line of work aims to push the boundaries of research in developing more efficient models, emphasizing their potential to shape the future of Hopfield-driven designs.

2.2 Complexity and Neural Network

Complexity theory offers a formal framework to analyze the computational power and limitations of models. Circuit complexity, a key domain within this field, characterizes the power of Boolean circuits and has recently been applied to investigate the expressive capacity of deep neural networks and Transformers [57, 40, 55, 56, 48, 7]. Several important circuit complexity classes play a central role in machine learning. For example, $\mathsf{AC}^0$ includes problems solvable with highly parallelizable logic gates, $\mathsf{TC}^0$ extends this class by incorporating threshold gates, and $\mathsf{NC}^1$ describes languages recognized by circuits with $O(\log n)$ depth and bounded fan-in [57]. These classes exhibit the hierarchy $\mathsf{AC}^0\subset\mathsf{TC}^0\subseteq\mathsf{NC}^1$, although whether $\mathsf{TC}^0\neq\mathsf{NC}^1$ remains an open question.

Transformers' depth requirements have been analyzed through the lens of these complexity classes. For instance, [40] demonstrates that the depth of a Transformer must scale with input length to simulate certain non-solvable semi-automata. In a similar vein, [48] investigates the connection between constant-depth Transformers, chain-of-thought (CoT) reasoning, and circuit complexity. They show that $\mathsf{T}[\mathrm{poly}(n),1,1]\subseteq\mathsf{CoT}[\log n,\mathrm{poly}(n),1,1]\subseteq\mathsf{AC}^0$ and $\mathsf{T}[\mathrm{poly}(n),\log n,0]\subseteq\mathsf{CoT}[\log n,\mathrm{poly}(n),\log n,0]\subseteq\mathsf{TC}^0$. Here, $\mathsf{T}[d(n),s(n),e(n)]$ denotes constant-depth Transformers with embedding size $d(n)$, precision $s(n)$, and $e(n)$ exponent bits, while $\mathsf{CoT}[T(n),d(n),s(n),e(n)]$ refers to a chain-of-thought Transformer operating for $T(n)$ steps. These findings highlight that intermediate reasoning steps in CoT models can improve computational expressivity.

The Strong Exponential Time Hypothesis ($\mathsf{SETH}$), which posits that for every $\epsilon>0$ there exists $k\geq 3$ such that $k$-$\mathsf{SAT}$ cannot be solved in $O(2^{(1-\epsilon)n})$ time, serves as a cornerstone of fine-grained complexity analyses [33]. Results derived from $\mathsf{SETH}$ have shaped Transformer-related studies, particularly for tensor attention mechanisms [1, 4, 53, 50, 2]. For example, [1] shows that, assuming $\mathsf{SETH}$, it is impossible to perform the forward pass of attention-based networks in truly subquadratic time, while [4] extends this limitation to backward computations. These results underscore the intrinsic computational challenges of Transformer architectures.

3 Preliminaries

In Section 3.1, we define useful notations for modern Hopfield networks. In Section 3.2, we introduce some components of modern Hopfield networks. In Section 3.3, we state the basic definitions and architectures of kernelized Hopfield networks. The background knowledge on circuit complexity and floating-point numbers is introduced in Appendix B.

3.1 Notations

We define $\mathbb{N}:=\{0,1,2,\ldots\}$ as the set of natural numbers and $\mathbb{R}$ as the set of real numbers. For any positive integer $n$, $[n]$ represents the set $\{1,2,\ldots,n\}$. The vector $\mathbf{1}_n$ denotes an $n$-dimensional vector whose entries are all ones. Given a matrix $A\in\mathbb{R}^{m\times n}$, $A_{i,j}$ refers to the element in the $i$-th row and $j$-th column, while $A^{\top}$ represents the transpose of $A$. The set $\{0,1\}^{*}$ represents all binary strings of finite length. Specifically, for $x_i\in\{0,1\}^{*}$, $x_i$ represents a finite binary string, with each bit being either $0$ or $1$. A language $L$ is a subset of $\{0,1\}^{*}$.

3.2 Modern Hopfield Network

With the above notations established, we proceed to introduce the foundational concepts of Modern Hopfield Networks.

The input query pattern is denoted as $x\in\mathbb{R}^d$, while memory patterns are represented by the matrix $\Xi=[\xi_1,\ldots,\xi_M]\in\mathbb{R}^{d\times M}$. Hopfield models are associative memory models based on energy functions, where the stored memory patterns $\Xi$ correspond to the local minima of these energy landscapes. For a given input query $x$, the model retrieves the most similar memory pattern by employing energy minimization algorithms, referred to as retrieval dynamics $\mathcal{T}$, initialized at $x$.

In [59], the Modern Hopfield Model is introduced with a specific energy function EE and retrieval dynamics 𝒯\mathcal{T}, which are further incorporated into deep learning frameworks due to their relation to transformer attention mechanisms [69]. This approach enhances performance and provides a theoretical guarantee of exponential memory capacity. The energy function is defined as:

E(x)=-\mathrm{lse}(\beta,\Xi^{\top}x)+\frac{1}{2}\langle x,x\rangle,

where the retrieval dynamics is given by

x^{\mathrm{new}}=\mathcal{T}_{\mathrm{Dense}}(x)=\Xi\cdot\mathrm{Softmax}(\beta\Xi^{\top}x), \quad (1)

and the function $\mathrm{lse}(\beta,z)$ is the log-sum-exponential, defined as $\mathrm{lse}(\beta,z):=\log(\sum_{\mu=1}^{M}\exp\{\beta z_{\mu}\})/\beta$ for any $z\in\mathbb{R}^{M}$ and $\beta>0$. When applied to a sequence of input queries $X=[x_1,\ldots,x_L]\in\mathbb{R}^{d\times L}$, Eq. (1) becomes $Z:=[x_1^{\mathrm{new}},\ldots,x_L^{\mathrm{new}}]=\mathcal{T}_{\mathrm{Dense}}(X)$, and hence

\mathcal{T}_{\mathrm{Dense}}(X)=\overbrace{\Xi}^{d\times M}\cdot\mathrm{Softmax}(\beta\overbrace{\underbrace{\Xi^{\top}}_{M\times d}\underbrace{X}_{d\times L}}^{M\times L})\in\mathbb{R}^{d\times L},

where $\mathrm{Softmax}(\cdot)$ performs column-wise normalization. We assume that $d=L^{o(1)}$, meaning the growth rate of $d$ is sub-polynomial relative to $L$.
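
To make the retrieval dynamics concrete, the following is a minimal NumPy sketch (ours, not part of the original formulation) of one update step of Eq. (1) under the shape conventions above, with $\Xi\in\mathbb{R}^{d\times M}$ and $X\in\mathbb{R}^{d\times L}$; all function and variable names are illustrative.

import numpy as np

def softmax_cols(S):
    # Column-wise softmax with max-subtraction for numerical stability.
    S = S - S.max(axis=0, keepdims=True)
    E = np.exp(S)
    return E / E.sum(axis=0, keepdims=True)

def t_dense(Xi, X, beta=1.0):
    # One step of the dense retrieval dynamics of Eq. (1):
    # T_Dense(X) = Xi @ Softmax(beta * Xi^T X), with Xi in R^{d x M}, X in R^{d x L}.
    return Xi @ softmax_cols(beta * Xi.T @ X)

# Toy usage: retrieve from M = 4 stored patterns of dimension d = 3.
rng = np.random.default_rng(0)
Xi = rng.standard_normal((3, 4))                     # memory patterns as columns
x = Xi[:, [1]] + 0.1 * rng.standard_normal((3, 1))   # noisy query near the second stored pattern
x_new = t_dense(Xi, x, beta=8.0)                     # one retrieval update; large beta approaches the nearest pattern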

Modern Hopfield Networks with continuous states are compatible with deep learning models due to their differentiable nature and the ability to retrieve patterns in a single update step. This aligns with the behavior of deep learning layers, which typically activate only once. These properties make Modern Hopfield Networks suitable as specialized layers to add memory functionalities to deep networks.

Definition 3.1 (Hopfield attention matrix, [59]).

The Hopfield attention matrix $A\in\mathbb{F}_p^{n\times n}$ is defined using model weights $W_Q,W_K\in\mathbb{F}_p^{d\times d}$, query patterns $R\in\mathbb{F}_p^{n\times d}$, and key patterns $Y\in\mathbb{F}_p^{n\times d}$. For $i,j\in[n]$, the elements of $A$ are given by:

A_{i,j}:=\exp(\beta\cdot R_{i,*}W_Q W_K^{\top}Y_{j,*}^{\top}).

Floating-point numbers $\mathbb{F}_p$ and their operations in our computational framework are formally defined in Appendix B.2. The Hopfield attention matrix is used to compute a single Hopfield layer.

Definition 3.2 (Hopfield layer, page 6 in [59]).

A single Hopfield layer propagates patterns using query patterns $R$ and key patterns $Y$. In its most general form, the result patterns $Z\in\mathbb{F}_p^{n\times d}$ are a function of raw stored patterns $Y\in\mathbb{F}_p^{n\times d}$, raw state patterns $R\in\mathbb{F}_p^{n\times d}$, and projection matrices $W_Q,W_K,W_V\in\mathbb{F}_p^{d\times d}$:

Z=\mathrm{softmax}(\beta\cdot RW_Q W_K^{\top}Y^{\top})\,YW_KW_V. \quad (2)

We set $D:=\operatorname{diag}(\beta\cdot A\mathbf{1}_n)\in\mathbb{F}_p^{n\times n}$. Then, based on Definition 3.1, we define the $i$-th Hopfield layer as

\mathsf{Hop}_i(R,Y_i):=D^{-1}AY_i\widetilde{W}_V,

where we denote

\widetilde{W}_V=W_KW_V. \quad (3)

Here, the rank of $\widetilde{W}_V$ is limited by the dimension constraints of the matrix product $W_KW_V$. To give the Hopfield layer more flexibility, the matrix product $W_KW_V$ can be replaced by a single parameter matrix. In this case, $\widetilde{W}_V$ is not the product from Eq. (3) but a stand-alone parameter matrix, as in the original transformer setting.
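
As a concrete illustration, here is a minimal NumPy sketch (ours) of Definitions 3.1 and 3.2: it forms the attention matrix $A$, the diagonal matrix $D=\operatorname{diag}(\beta\cdot A\mathbf{1}_n)$, and the layer output $D^{-1}AY\widetilde{W}_V$, following the definitions literally and without any numerical stabilization.

import numpy as np

def hopfield_attention(R, Y, WQ, WK, beta=1.0):
    # Definition 3.1: A[i, j] = exp(beta * R_i WQ WK^T Y_j^T).
    S = R @ WQ @ WK.T @ Y.T                 # (n x n) matrix of scores
    return np.exp(beta * S)

def hopfield_layer(R, Y, WQ, WK, WV_tilde, beta=1.0):
    # Definition 3.2: Hop(R, Y) = D^{-1} A Y W~_V with D = diag(beta * A 1_n).
    A = hopfield_attention(R, Y, WQ, WK, beta)
    D_inv = 1.0 / (beta * A.sum(axis=1, keepdims=True))   # diagonal of D^{-1}, one entry per row
    return (D_inv * A) @ Y @ WV_tilde

# Toy usage with n = 5 patterns of dimension d = 4.
rng = np.random.default_rng(1)
n, d = 5, 4
R, Y = rng.standard_normal((n, d)), rng.standard_normal((n, d))
WQ, WK, WV_tilde = (rng.standard_normal((d, d)) for _ in range(3))
Z = hopfield_layer(R, Y, WQ, WK, WV_tilde, beta=0.5)       # result patterns, shape (n, d)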

Multiple Hopfield layers can be integrated with other elements to build a functional Hopfield architecture.

Definition 3.3 (Multi-layer Modern Hopfield Networks).

Consider a model consisting of $m$ Hopfield layers. For each $i\in[m]$, let $f_i$ denote the components of the network excluding the $i$-th Hopfield layer, where $f_i:\mathbb{F}_p^{n\times d}\rightarrow\mathbb{F}_p^{n\times d}$. Denote the $i$-th Hopfield layer as $\mathsf{Hop}_i$. Let the input matrix or query patterns be denoted by $R\in\mathbb{F}_p^{n\times d}$, and the stored patterns (keys) associated with the $i$-th Hopfield layer by $Y_i\in\mathbb{F}_p^{n\times d}$. The multi-layer modern Hopfield network, denoted as $\mathsf{MHN}:\mathbb{F}_p^{n\times d}\rightarrow\mathbb{F}_p^{n\times d}$, is defined as:

\mathsf{MHN}(R)=f_m\circ\mathsf{Hop}_m(f_{m-1}\circ\mathsf{Hop}_{m-1}(\cdots f_1\circ\mathsf{Hop}_1(f_0(R),Y_1)\cdots,Y_{m-1}),Y_m)\in\mathbb{F}_p^{n\times d},

where $\circ$ denotes the composition of functions.
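
The composition in Definition 3.3 can be sketched as a simple loop (ours); here hop(X, Y) stands for any single-layer map such as the hopfield_layer sketch above, and the component functions $f_0,\ldots,f_m$ are passed in as callables.

def mhn(R, Ys, fs, hop):
    # Definition 3.3: MHN(R) = f_m(Hop_m(... f_1(Hop_1(f_0(R), Y_1)) ..., Y_m)),
    # with Ys = [Y_1, ..., Y_m] and fs = [f_0, f_1, ..., f_m].
    X = fs[0](R)
    for i, Yi in enumerate(Ys, start=1):
        X = fs[i](hop(X, Yi))
    return X

# Example with m = 2 layers whose component functions are the identity:
# Z = mhn(R, [Y1, Y2], [lambda x: x] * 3,
#         lambda X, Y: hopfield_layer(X, Y, WQ, WK, WV_tilde))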

We now present one choice of the $f_i$ functions, specifically a two-layer ReLU feed-forward neural network.

Definition 3.4 (Two-layer ReLU Feed-forward Neural Networks).

We use $X\in\mathbb{F}_p^{n\times d}$ to denote the input matrix of size $n\times d$. For each $i\in[n]$, the two-layer ReLU feed-forward neural network is defined as follows:

g^{\mathrm{FNN}}(X)_{i,*}:=\underbrace{W_2}_{d\times d}\cdot\mathsf{ReLU}(\underbrace{W_1}_{d\times d}\cdot\underbrace{X_{i,*}}_{d\times 1}+\underbrace{b_1}_{d\times 1})+\underbrace{b_2}_{d\times 1}.
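
A minimal NumPy sketch (ours) of Definition 3.4; it applies the same two-layer map to every row of $X$ by working with transposed weight matrices.

import numpy as np

def ffn(X, W1, b1, W2, b2):
    # Definition 3.4, applied row-wise: g(X)_i = W2 ReLU(W1 X_i + b1) + b2.
    H = np.maximum(X @ W1.T + b1, 0.0)   # ReLU(W1 X_i + b1) for every row i
    return H @ W2.T + b2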

3.3 Kernelized Hopfield Networks

The capacity of modern Hopfield networks is suboptimal. To improve the capacity, [70] introduces a kernel as a learnable similarity measure, using stored memory patterns as training data to enhance memory capacity. Specifically, they propose the kernelized Hopfield model (KHM), defined by the following update rule and energy function:

\mathcal{T}_{\Phi}(x):=\Xi\cdot\mathrm{Softmax}(\beta\mathcal{K}(\Xi,x)),\quad E_{\mathcal{K}}(x)=\frac{1}{2}\mathcal{K}(x,x)+\mathrm{lse}(\beta,\mathcal{K}(\Xi,x)),

where the kernel $\mathcal{K}(\cdot,\cdot):=\langle\Phi(\cdot),\Phi(\cdot)\rangle:\mathbb{R}^d\times\mathbb{R}^d\rightarrow\mathbb{R}$ is associated with a learnable feature map $\Phi:\mathbb{R}^d\rightarrow\mathbb{R}^{D_\Phi}$. Here, $\mathcal{K}(\cdot,\cdot)$ acts column-wise on matrices: $\mathcal{K}(\Xi,x)=[\{\mathcal{K}(\xi_\mu,x)\}_{\mu=1}^{M}]=[\{\langle\Phi(\xi_\mu),\Phi(x)\rangle\}_{\mu=1}^{M}]\in\mathbb{R}^{M}$. Accordingly, we define the kernelized Hopfield layer.

Definition 3.5 (Kernelized attention matrix).

Let $R\in\mathbb{F}_p^{n\times d}$ be the set of query (state) patterns, and $Y\in\mathbb{F}_p^{n\times d}$ be the set of key (stored) patterns. Let $W_Q\in\mathbb{F}_p^{d\times D_\Phi}$, $W_K\in\mathbb{F}_p^{d\times D_\Phi}$, and $W_V\in\mathbb{F}_p^{d\times d}$ be learnable projection matrices. Consider a feature map $\Phi:\mathbb{F}_p^{D_\Phi}\rightarrow\mathbb{F}_p^{D_\Phi}$ associated with a kernel function $\mathcal{K}:\mathbb{F}_p^{D_\Phi}\times\mathbb{F}_p^{D_\Phi}\rightarrow\mathbb{F}_p$ defined by $\mathcal{K}(\cdot,\cdot):=\langle\Phi(\cdot),\Phi(\cdot)\rangle$. Let $\beta>0$ be a scaling parameter. According to Definition 3.1, the kernelized Hopfield attention matrix $A\in\mathbb{F}_p^{n\times n}$ is defined by, for every $i,j\in[n]$,

A_{i,j}:=\exp(\beta\cdot\langle\Phi(R_iW_Q),\Phi(Y_jW_K)\rangle).

The kernelized Hopfield attention matrix is the basis for computing a single kernelized Hopfield layer.

Definition 3.6 (Single kernelized Hopfield layer).

The result patterns $Z\in\mathbb{F}_p^{n\times d}$ are a function of raw stored patterns $Y\in\mathbb{F}_p^{n\times d}$, raw state patterns $R\in\mathbb{F}_p^{n\times d}$, and projection matrices $W_Q,W_K,W_V\in\mathbb{F}_p^{d\times d}$ (for simplicity, we write the matrix $\widetilde{W}_V$ of the Hopfield layer as $W_V$):

Z=\mathsf{softmax}(\beta\cdot\langle\Phi(RW_Q),\Phi(YW_K)\rangle)\,YW_V.

Note that the softmax is applied row-wise and $\langle\Phi(RW_Q),\Phi(YW_K)\rangle_{i,j}=\langle\Phi(R_iW_Q),\Phi(Y_jW_K)\rangle$. We set $D:=\operatorname{diag}(\beta\cdot A\mathbf{1}_n)\in\mathbb{F}_p^{n\times n}$. Then, based on Definition 3.5, we define the $i$-th kernelized Hopfield layer as

\mathsf{KHop}_i(R,Y_i):=D^{-1}AY_iW_V.
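
The following NumPy sketch (ours) instantiates Definitions 3.5 and 3.6 with a linear affine feature map $\Phi(u)=Wu$, which is also the setting used later in Lemma 5.1; phi acts row-wise, and all names are illustrative.

import numpy as np

def kernelized_hopfield_layer(R, Y, WQ, WK, WV, phi, beta=1.0):
    # Definition 3.5: A[i, j] = exp(beta * <phi(R_i WQ), phi(Y_j WK)>).
    A = np.exp(beta * (phi(R @ WQ) @ phi(Y @ WK).T))
    # Definition 3.6: KHop(R, Y) = D^{-1} A Y WV with D = diag(beta * A 1_n).
    D_inv = 1.0 / (beta * A.sum(axis=1, keepdims=True))
    return (D_inv * A) @ Y @ WV

# Toy usage with a linear feature map Phi(u) = W u, W in R^{D_Phi x d}.
rng = np.random.default_rng(2)
n, d, D_Phi = 5, 4, 6
W = rng.standard_normal((D_Phi, d))
phi = lambda U: U @ W.T                  # applies Phi to every row of U
R, Y = rng.standard_normal((n, d)), rng.standard_normal((n, d))
WQ, WK, WV = (rng.standard_normal((d, d)) for _ in range(3))
Z = kernelized_hopfield_layer(R, Y, WQ, WK, WV, phi, beta=0.5)   # shape (n, d)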

Multiple kernelized Hopfield layers can be integrated with additional components to construct the kernelized Hopfield network.

Definition 3.7 (Kernelized Hopfield network).

We use $m$ to denote the number of kernelized Hopfield layers in the network. For $i\in\{0,1,2,\ldots,m\}$, let $f_i$ denote the components of the network other than the kernelized Hopfield layers, where $f_i:\mathbb{F}_p^{n\times d}\rightarrow\mathbb{F}_p^{n\times d}$. Let $\mathsf{KHop}_i$ denote the $i$-th kernelized Hopfield layer. Define $R\in\mathbb{F}_p^{n\times d}$ as the state (query) patterns or input matrix, and $Y_i\in\mathbb{F}_p^{n\times d}$ as the stored (key) patterns in the $i$-th kernelized Hopfield layer. The $m$-layer kernelized Hopfield network $\mathsf{KHN}:\mathbb{F}_p^{n\times d}\rightarrow\mathbb{F}_p^{n\times d}$ is defined as:

\mathsf{KHN}(R)=f_m\circ\mathsf{KHop}_m(f_{m-1}\circ\mathsf{KHop}_{m-1}(\cdots f_1\circ\mathsf{KHop}_1(f_0(R),Y_1)\cdots,Y_{m-1}),Y_m)\in\mathbb{F}_p^{n\times d},

where $\circ$ denotes the composition of functions.

4 Complexity of Modern Hopfield Networks

This section explores key results regarding the circuit complexity of the computations involved in modern Hopfield networks. We begin with an analysis of the Hopfield attention matrix in Section 4.1. In Section 4.2, we delve into the computation of a single Hopfield layer. Section 4.3 extends the discussion to other components beyond the Hopfield layer. In Section 4.4, we examine the modern Hopfield network. Finally, Section 4.5 presents our main result: the circuit complexity bounds for modern Hopfield networks. These findings form the basis for our main theorem on the expressive power of Hopfield networks. In Appendix B.1, we provide fundamental definitions from circuit complexity theory.

4.1 Computing Hopfield Attention Matrix

We first recall that matrix multiplication of two matrices is in 𝖳𝖢0\mathsf{TC}^{0}.

Lemma 4.1 (Matrix multiplication in 𝖳𝖢0\mathsf{TC}^{0}, Lemma 4.2 in [8]).

Let $p\leq\operatorname{poly}(n)$, $n_1,n_2\leq\operatorname{poly}(n)$, and $d\leq n$. Let $A\in\mathbb{F}_p^{n_1\times d}$ and $B\in\mathbb{F}_p^{d\times n_2}$. Then the matrix product $AB$ can be implemented using a $\mathsf{DLOGTIME}$-uniform threshold circuit of depth $d_{\mathrm{std}}+d_{\oplus}$ and size bounded by $\operatorname{poly}(n)$.

Here, we extend the matrix operations to compute the Hopfield attention matrix.

Lemma 4.2 (Computation of Hopfield attention matrix in 𝖳𝖢0\mathsf{TC}^{0}).

Let the precision $p\leq\operatorname{poly}(n)$. One can simulate the Hopfield attention matrix $A$ using a $\mathsf{DLOGTIME}$-uniform threshold circuit with depth $4d_{\mathrm{std}}+3d_{\oplus}+d_{\exp}$ and size bounded by $\operatorname{poly}(n)$.

Proof.

To compute $A_{i,j}$, we use the following steps:

  1. By Lemma 4.1, we can compute the matrix product $W_QW_K^{\top}$ by a $\mathsf{DLOGTIME}$-uniform circuit of size $\operatorname{poly}(n)$ and depth $d_{\mathrm{std}}+d_{\oplus}$.

  2. By Lemma 4.1, we can compute the scalar $s_{i,j}=R_{i,*}(W_QW_K^{\top})Y_{j,*}^{\top}$ by a $\mathsf{DLOGTIME}$-uniform circuit of size $\operatorname{poly}(n)$ and depth $2(d_{\mathrm{std}}+d_{\oplus})$.

  3. The term $\beta\cdot s_{i,j}$ is computed with depth $d_{\mathrm{std}}$ using Part 1 of Lemma B.5.

  4. Using Lemma B.6, the exponential function $\exp(\beta\cdot s_{i,j})$ is computable with depth $d_{\exp}$.

Summing the circuit depths of all steps, the total depth required for $A_{i,j}$ is:

d_{\mathrm{total}}=(d_{\mathrm{std}}+d_{\oplus})+2(d_{\mathrm{std}}+d_{\oplus})+d_{\mathrm{std}}+d_{\exp}=4d_{\mathrm{std}}+3d_{\oplus}+d_{\exp}.

Since all entries of $A$ can be simulated in parallel, the whole matrix $A$ is computable with total depth $4d_{\mathrm{std}}+3d_{\oplus}+d_{\exp}$ and size $\operatorname{poly}(n)$, completing the proof. ∎

4.2 Computing Single Hopfield Layer

This section analyzes the computation of a single Hopfield layer, including the necessary depth and size requirements.

Lemma 4.3 (Computation of Hopfield layer in 𝖳𝖢0\mathsf{TC}^{0}).

Let the precision $p\leq\operatorname{poly}(n)$. We can simulate the Hopfield layer using a $\mathsf{DLOGTIME}$-uniform threshold circuit with depth $8d_{\mathrm{std}}+6d_{\oplus}+d_{\exp}$ and size bounded by $\operatorname{poly}(n)$.

Proof.

The computation involves multiplying the matrices $D^{-1}$, $A$, $Y$, and $\widetilde{W}_V$. We proceed as follows:

  1. $D=\operatorname{diag}(\beta\cdot A\mathbf{1}_n)$ is computed with depth $d_{\oplus}+d_{\mathrm{std}}$, as shown in Parts 1 and 3 of Lemma B.5.

  2. From Lemma 4.2, $A$ is computed with depth $4d_{\mathrm{std}}+3d_{\oplus}+d_{\exp}$.

  3. By Lemma 4.1, the matrix product $AY\widetilde{W}_V$ is computed with depth $2(d_{\mathrm{std}}+d_{\oplus})$.

  4. Finally, the division $D^{-1}\cdot(AY\widetilde{W}_V)$ is computed in parallel with depth $d_{\mathrm{std}}$.

Combining these depths:

d_{\mathrm{total}}=(d_{\oplus}+d_{\mathrm{std}})+(4d_{\mathrm{std}}+3d_{\oplus}+d_{\exp})+2(d_{\mathrm{std}}+d_{\oplus})+d_{\mathrm{std}}=8d_{\mathrm{std}}+6d_{\oplus}+d_{\exp}.

The total size of the circuit remains $\operatorname{poly}(n)$, completing the proof. ∎

4.3 Computing Common Component Layers

Definition 3.3 introduces modern Hopfield networks, which incorporate the Hopfield layer along with other modules such as layer normalization and two-layer ReLU feed-forward networks. In this section, we present the computational complexity of these additional components.

We begin by analyzing the circuit complexity of the two-layer ReLU feed-forward networks.

Lemma 4.4 (Computation of Two-layer ReLU Feed-forward Networks in 𝖳𝖢0\mathsf{TC}^{0}).

Let $p\leq\operatorname{poly}(n)$. One can simulate the two-layer ReLU feed-forward neural network using a $\mathsf{DLOGTIME}$-uniform threshold circuit of depth $4d_{\mathrm{std}}+3d_{\oplus}$ and size bounded by $\operatorname{poly}(n)$.

Proof.

For each $i\in[n]$, Lemma 4.1 guarantees that the computation of $W_1\cdot X_{i,*}$ requires a $\mathsf{DLOGTIME}$-uniform circuit of depth $d_{\mathrm{std}}+d_{\oplus}$ and size $\operatorname{poly}(n)$. Applying Part 1 of Lemma B.5, we can compute $W_1\cdot X_{i,*}+b_1$ with an additional $\mathsf{DLOGTIME}$-uniform circuit of depth $d_{\mathrm{std}}$ and size $\operatorname{poly}(n)$. The ReLU activation, $\mathsf{ReLU}(W_1\cdot X_{i,*}+b_1)$, also requires depth $d_{\mathrm{std}}$ and size $\operatorname{poly}(n)$, as per Part 1 of Lemma B.5.

For the second layer, the circuit depth needed is $2d_{\mathrm{std}}+d_{\oplus}$, and the size remains $\operatorname{poly}(n)$. Thus, the total circuit depth across both layers is $4d_{\mathrm{std}}+3d_{\oplus}$, with the size still bounded by $\operatorname{poly}(n)$ because these computations can be executed in parallel for each $i\in[n]$. ∎

4.4 Computing Modern Hopfield Networks

We demonstrate how to compute modern Hopfield networks within the 𝖳𝖢0\mathsf{TC}^{0} complexity class.

Lemma 4.5 (Computation of Modern Hopfield Networks in 𝖳𝖢0\mathsf{TC}^{0}).

Suppose that, for every $i\in[m]$, the function $f_i$ in $\mathsf{MHN}$ can be simulated by a $\mathsf{DLOGTIME}$-uniform threshold circuit of size $\operatorname{poly}(n)$ and constant depth $d_f$. If $p\leq\operatorname{poly}(n)$, then the overall computation of the network can be implemented using a $\mathsf{DLOGTIME}$-uniform threshold circuit with depth $(m+1)d_f+8md_{\mathrm{std}}+6md_{\oplus}+md_{\exp}$ and size bounded by $\operatorname{poly}(n)$.

Proof.

By assumption, we can simulate each $f_i$ using a $\mathsf{DLOGTIME}$-uniform threshold circuit of size bounded by $\operatorname{poly}(n)$ and depth $d_f$.

Next, by Lemma 4.3, the computation of a single Hopfield layer $\mathsf{Hop}_i$ can be performed using a $\mathsf{DLOGTIME}$-uniform threshold circuit with depth $8d_{\mathrm{std}}+6d_{\oplus}+d_{\exp}$, while its size remains polynomial in $n$.

The complete computation of $\mathsf{MHN}(R)$ involves $f_0,f_1,\ldots,f_m$ and $\mathsf{Hop}_1,\mathsf{Hop}_2,\ldots,\mathsf{Hop}_m$. The total circuit depth is therefore the sum of the depths of all these computations. Explicitly, the depth is $(m+1)d_f+m(8d_{\mathrm{std}}+6d_{\oplus}+d_{\exp})=(m+1)d_f+8md_{\mathrm{std}}+6md_{\oplus}+md_{\exp}$. ∎
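
As a sanity check on this bookkeeping, the short sketch below (ours) tallies the depth contributions of Lemma 4.3 and of the component functions $f_0,\ldots,f_m$, reproducing the coefficients in the formula above for any fixed $m$.

from collections import Counter

def hopfield_layer_depth():
    # Depth of one Hopfield layer (Lemma 4.3), as coefficients of d_std, d_oplus, d_exp.
    return Counter({"d_std": 8, "d_oplus": 6, "d_exp": 1})

def mhn_depth(m):
    # Depth of an m-layer MHN (Lemma 4.5): m Hopfield layers plus m + 1 components f_i.
    total = Counter({"d_f": m + 1})
    for _ in range(m):
        total += hopfield_layer_depth()
    return total

print(mhn_depth(3))   # Counter({'d_std': 24, 'd_oplus': 18, 'd_f': 4, 'd_exp': 3})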

4.5 Main Result: Circuit Complexity Bound of Modern Hopfield Networks

Our main result establishes the circuit complexity bounds for modern Hopfield networks.

Theorem 4.6 (Circuit complexity of modern Hopfield networks).

Consider a modern Hopfield network $\mathsf{MHN}$ in which each component function $f_i$, for $i\in[m]$, can be simulated by a $\mathsf{DLOGTIME}$-uniform threshold circuit of depth $d_f$ and size $\operatorname{poly}(n)$. If the precision $p\leq\operatorname{poly}(n)$, the hidden dimension $d\leq O(n)$, and the number of layers $m=O(1)$, then we can simulate $\mathsf{MHN}$ by a $\mathsf{DLOGTIME}$-uniform circuit family in $\mathsf{TC}^0$.

Proof.

Given $m=O(1)$, Lemma 4.5 establishes that the circuit depth required for $\mathsf{MHN}(R)$ is given by:

(m+1)d_f+8md_{\mathrm{std}}+6md_{\oplus}+md_{\exp}.

Since $m$ is a constant, the total circuit depth simplifies to $O(1)$. Furthermore, the size of the circuit remains within $\operatorname{poly}(n)$.

Hence, the modern Hopfield network can be realized using a $\mathsf{DLOGTIME}$-uniform circuit family belonging to $\mathsf{TC}^0$, completing the argument. ∎

5 Complexity of Kernelized Hopfield Networks

In this section, we extend the complexity results for modern Hopfield networks to kernelized Hopfield networks. The formal definition of kernelized Hopfield networks is provided in Section 3.3.

Section 5.1 analyzes the computation of the kernelized Hopfield attention matrix. In Section 5.2, we examine the computation of a single kernelized Hopfield layer. Section 5.3 details the computation of the entire kernelized Hopfield network. Finally, in Section 5.4, we present the main results on the circuit complexity bounds for kernelized Hopfield networks. In Appendix E, we provide the complete proofs for this section.

5.1 Computing Kernelized Hopfield Attention Matrix

In this section, we generalize the computation of the Hopfield attention matrix to the kernelized Hopfield attention matrix.

Lemma 5.1 (Computation of kernelized Hopfield attention matrix in 𝖳𝖢0\mathsf{TC}^{0}).

Assume that $p\leq\operatorname{poly}(n)$, that the linear affine feature map $\Phi$ is defined as $\Phi(u):=Wu$ for $u,v\in\mathbb{F}_p^d$, where $W\in\mathbb{F}_p^{D_\Phi\times d}$, and that the associated linear kernel is $\mathcal{K}(u,v)=u^{\top}W^{\top}Wv$. Then the kernelized Hopfield attention matrix $A$ can be implemented using a $\mathsf{DLOGTIME}$-uniform threshold circuit of depth $3d_{\mathrm{std}}+2d_{\oplus}+d_{\exp}$ and size bounded by $\operatorname{poly}(n)$.

Proof.

See Appendix E.1 for the complete proof. ∎

5.2 Computing Single Kernelized Hopfield Layer

This section provides an analysis of the computation of a single kernelized Hopfield layer, including its circuit depth.

Lemma 5.2 (Computation of Single kernelized Hopfield layer in 𝖳𝖢0\mathsf{TC}^{0}).

Assume that $p\leq\operatorname{poly}(n)$. Then the kernelized Hopfield layer can be implemented using a $\mathsf{DLOGTIME}$-uniform threshold circuit of depth $10d_{\mathrm{std}}+8d_{\oplus}+d_{\exp}$ and size bounded by $\operatorname{poly}(n)$.

Proof.

See Appendix E.2 for the complete proof. ∎

5.3 Computing Kernelized Hopfield Networks

Here, we analyze the computation of the entire kernelized Hopfield network.

Lemma 5.3 (Kernelized Hopfield networks computation in 𝖳𝖢0\mathsf{TC}^{0}).

Assume that, for each $i\in[m]$, we can simulate the function $f_i$ in $\mathsf{KHN}$ by a $\mathsf{DLOGTIME}$-uniform threshold circuit of constant depth $d_f$ and size $\operatorname{poly}(n)$. If the precision $p\leq\operatorname{poly}(n)$, then we can simulate the kernelized Hopfield network by a $\mathsf{DLOGTIME}$-uniform threshold circuit of depth $(m+1)d_f+10md_{\mathrm{std}}+8md_{\oplus}+md_{\exp}$ and size bounded by $\operatorname{poly}(n)$.

Proof.

See Appendix E.3 for the complete proof. ∎

5.4 Main Result: Circuit Complexity Bound of Kernelized Hopfield Networks

We now present the main result for kernelized Hopfield networks, establishing their circuit complexity bounds.

Theorem 5.4 (Main result, Circuit complexity bound of kernelized Hopfield networks).

Assume that, for each $i\in[m]$, the function $f_i$ in $\mathsf{KHN}$ is computable by a $\mathsf{DLOGTIME}$-uniform threshold circuit with constant depth $d_f$ and size $\operatorname{poly}(n)$. If the precision $p\leq\operatorname{poly}(n)$, the hidden dimension $d\leq O(n)$, and the number of layers $m=O(1)$, then we can simulate the kernelized Hopfield network $\mathsf{KHN}$ by a $\mathsf{DLOGTIME}$-uniform circuit family within $\mathsf{TC}^0$.

Proof.

See Appendix E.4 for the complete proof. ∎

Theorem 5.4 demonstrates that kernelized Hopfield networks with polynomial precision, constant depth, and polynomial size can be simulated by a $\mathsf{DLOGTIME}$-uniform circuit family within $\mathsf{TC}^0$. This implies that, unless $\mathsf{TC}^0=\mathsf{NC}^1$, kernelized Hopfield networks are subject to inherent expressivity limitations under circuit complexity constraints despite their empirical success.

6 Hardness

In this section, we present two computational problems along with their corresponding hardness results. In Appendix D, we introduce the undirected graph connectivity problem and the tree isomorphism problem. In Section 6.1, we present the corresponding hardness results.

6.1 Hardness Results

In this section, we present our hardness results for modern and kernelized Hopfield networks.

Theorem 6.1.

Assume that $\mathsf{TC}^0\neq\mathsf{NC}^1$. Then no $\operatorname{poly}(n)$-precision modern Hopfield network with constant layers and $O(n)$ hidden dimension can solve the undirected graph connectivity problem.

Proof.

This result is obtained by integrating Theorem 4.6 (the circuit complexity bound of modern Hopfield networks), Lemma D.4 (showing that undirected graph connectivity is $\mathsf{NC}^1$-complete), and Fact D.3. ∎

Theorem 6.2.

Assume that $\mathsf{TC}^0\neq\mathsf{NC}^1$. Then no $\operatorname{poly}(n)$-precision modern Hopfield network with $O(1)$ layers and $O(n)$ hidden dimension can solve the tree isomorphism problem.

Proof.

This result derives from Theorem 4.6 and Lemma D.8 (showing that tree isomorphism is $\mathsf{NC}^1$-complete). ∎

Theorem 6.3.

Assume that $\mathsf{TC}^0\neq\mathsf{NC}^1$. Then no $\operatorname{poly}(n)$-precision kernelized Hopfield network with constant layers and $O(n)$ hidden dimension can solve the undirected graph connectivity problem.

Proof.

This result is a direct consequence of Theorem 5.4 and Lemma D.4. ∎

Theorem 6.4.

Assume that $\mathsf{TC}^0\neq\mathsf{NC}^1$. Then no $\operatorname{poly}(n)$-precision kernelized Hopfield network with constant layers and $O(n)$ hidden dimension can solve the tree isomorphism problem.

Proof.

This result is a direct consequence of Theorem 5.4 and Lemma D.8. ∎

7 Conclusion

In this study, we conduct a comprehensive theoretical investigation of modern Hopfield networks and kernelized Hopfield networks, establishing key limitations on their computational power. Our analysis focuses on the circuit complexity of individual architectural components, showing that these networks can be simulated using uniform $\mathsf{TC}^0$ circuits. Crucially, we prove that unless $\mathsf{TC}^0=\mathsf{NC}^1$, both modern and kernelized Hopfield networks, when implemented with polynomial precision, a constant number of layers, and hidden dimensions satisfying $d\leq O(n)$, are unable to solve problems such as undirected graph connectivity and tree isomorphism. These findings highlight the inherent expressivity constraints of Hopfield models, even in light of their demonstrated empirical success across various tasks.

However, our analysis has limitations. It primarily addresses the forward computation of these networks, assuming constant-depth nonlinear activation functions, and does not consider the training process or the implications of more complex activation mechanisms. Expanding this framework to encompass other Hopfield network variants and exploring whether similar computational bounds apply to advanced architectures would be a valuable direction for future research. Additionally, our results point to an intriguing gap between the theoretical constraints and the practical effectiveness of these models. Investigating how Hopfield networks achieve strong empirical performance despite these limitations could provide deeper insights into model scaling and architecture design, paving the way for more theoretically robust approaches.

References

  • [1] Josh Alman and Zhao Song. Fast attention requires bounded entries. In NeurIPS, 2023.
  • [2] Josh Alman and Zhao Song. How to capture higher-order correlations? generalizing matrix softmax attention to kronecker computation. arXiv preprint arXiv:2310.04064, 2023.
  • [3] Josh Alman and Zhao Song. Fast rope attention: Combining the polynomial method and fast fourier transform. manuscript, 2024.
  • [4] Josh Alman and Zhao Song. The fine-grained complexity of gradient computation for training large language models. In NeurIPS, 2024.
  • BCB [16] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate, 2016.
  • Cha [22] François Charton. What is my math transformer doing?–three results on interpretability and generalization. arXiv preprint arXiv:2211.00170, 2022.
  • Chi [24] David Chiang. Transformers in uniform tc0. arXiv preprint arXiv:2409.13629, 2024.
  • [8] Bo Chen, Xiaoyu Li, Yingyu Liang, Jiangxuan Long, Zhenmei Shi, and Zhao Song. Circuit complexity bounds for rope-based transformer architecture. arXiv preprint arXiv:2411.07602, 2024.
  • [9] Bo Chen, Xiaoyu Li, Yingyu Liang, Zhenmei Shi, and Zhao Song. Bypassing the exponential dependency: Looped transformers efficiently learn in-context by multi-step gradient descent. arXiv preprint arXiv:2410.11268, 2024.
  • CLS+ [24] Bo Chen, Yingyu Liang, Zhizhou Sha, Zhenmei Shi, and Zhao Song. Hsr-enhanced sparse attention acceleration. arXiv preprint arXiv:2410.10165, 2024.
  • CM [87] Stephen A Cook and Pierre McKenzie. Problems complete for deterministic logarithmic space. Journal of Algorithms, 8(3):385–394, 1987.
  • Coo [85] Stephen A Cook. A taxonomy of problems with fast parallel algorithms. Information and control, 64(1-3):2–22, 1985.
  • CvMG+ [14] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation, 2014.
  • DHL+ [17] Mete Demircigil, Judith Heusel, Matthias Löwe, Sven Upgang, and Franck Vermet. On a model of associative memory with huge storage capacity. Journal of Statistical Physics, 168(2):288–299, May 2017.
  • DSWY [22] Yichuan Deng, Zhao Song, Yitan Wang, and Yuanyuan Yang. A nearly optimal size coreset algorithm with nearly linear time. arXiv preprint arXiv:2210.08361, 2022.
  • FRL+ [22] Andreas Fürst, Elisabeth Rumetshofer, Johannes Lehner, Viet Tran, Fei Tang, Hubert Ramsauer, David Kreil, Michael Kopp, Günter Klambauer, Angela Bitto-Nemling, and Sepp Hochreiter. Cloob: Modern hopfield networks with infoloob outperform clip, 2022.
  • FZG+ [23] Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, and Liwei Wang. Towards revealing the mystery behind chain of thought: a theoretical perspective. Advances in Neural Information Processing Systems, 36, 2023.
  • GMS [23] Yeqi Gao, Sridhar Mahadevan, and Zhao Song. An over-parameterized exponential regression. arXiv preprint arXiv:2303.16504, 2023.
  • GVW+ [16] Nicholas Guttenberg, Nathaniel Virgo, Olaf Witkowski, Hidetoshi Aoki, and Ryota Kanai. Permutation-equivariant neural networks applied to dynamics prediction. arXiv preprint arXiv:1612.04530, 2016.
  • HCL+ [24] Jerry Yao-Chieh Hu, Pei-Hsuan Chang, Haozheng Luo, Hong-Yu Chen, Weijian Li, Wei-Po Wang, and Han Liu. Outlier-efficient hopfield layers for large transformer-based models. In Forty-first International Conference on Machine Learning (ICML), 2024.
  • HCW+ [24] Jerry Yao-Chieh Hu, Bo-Yu Chen, Dennis Wu, Feng Ruan, and Han Liu. Nonparametric modern hopfield models. arXiv preprint arXiv:2404.03900, 2024.
  • HLP+ [23] Benjamin Hoover, Yuchen Liang, Bao Pham, Rameswar Panda, Hendrik Strobelt, Duen Horng Chau, Mohammed J. Zaki, and Dmitry Krotov. Energy transformer, 2023.
  • HLSL [24] Jerry Yao-Chieh Hu, Thomas Lin, Zhao Song, and Han Liu. On computational limits of modern hopfield models: A fine-grained complexity analysis. In Forty-first International Conference on Machine Learning, 2024.
  • Hoc [91] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen netzen. Diploma, Technische Universität München, 91(1):31, 1991.
  • Hop [82] John J Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences, 79(8):2554–2558, 1982.
  • Hop [84] John J Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the national academy of sciences, 81(10):3088–3092, 1984.
  • HS [97] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
  • HSK+ [24] Jerry Yao-Chieh Hu, Maojiang Su, En-Jui Kuo, Zhao Song, and Han Liu. Computational limits of low-rank adaptation (lora) for transformer-based models. arXiv preprint arXiv:2406.03136, 2024.
  • [29] Jerry Yao-Chieh Hu, Dennis Wu, and Han Liu. Provably optimal memory capacity for modern hopfield models: Transformer-compatible dense associative memories as spherical codes. In Thirty-eighth Conference on Neural Information Processing Systems (NeurIPS), 2024.
  • [30] Jerry Yao-Chieh Hu, Weimin Wu, Zhuoru Li, Sophia Pi, , Zhao Song, and Han Liu. On statistical rates and provably efficient criteria of latent diffusion transformers (dits). In Thirty-eighth Conference on Neural Information Processing Systems (NeurIPS), 2024.
  • HYW+ [23] Jerry Yao-Chieh Hu, Donglin Yang, Dennis Wu, Chenwei Xu, Bo-Yu Chen, and Han Liu. On sparse modern hopfield model. In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), 2023.
  • Imm [98] Neil Immerman. Descriptive complexity. Springer Science & Business Media, 1998.
  • IP [01] Russell Impagliazzo and Ramamohan Paturi. On the complexity of k-sat. Journal of Computer and System Sciences, 62(2):367–375, 2001.
  • JMT [98] Birgit Jenner, Pierre McKenzie, and Jacobo Torán. A note on the hardness of tree isomorphism. In Proceedings. Thirteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference)(Cat. No. 98CB36247), pages 101–105. IEEE, 1998.
  • KH [16] Dmitry Krotov and John J Hopfield. Dense associative memory for pattern recognition. Advances in neural information processing systems, 29, 2016.
  • KH [20] Dmitry Krotov and John Hopfield. Large associative memory problem in neurobiology and machine learning. arXiv preprint arXiv:2008.06996, 2020.
  • KKK [23] Leo Kozachkov, Ksenia V Kastanenka, and Dmitry Krotov. Building transformers from neurons and astrocytes. Proceedings of the National Academy of Sciences, 120(34):e2219150120, 2023.
  • KLL+ [24] Yekun Ke, Xiaoyu Li, Yingyu Liang, Zhenmei Shi, and Zhao Song. Advancing the understanding of fixed point iterations in deep neural networks: A detailed analytical study. arXiv preprint arXiv:2410.11279, 2024.
  • KLSZ [24] Yekun Ke, Xiaoyu Li, Zhao Song, and Tianyi Zhou. Faster sampling algorithms for polytopes with small treewidth. In 2024 IEEE International Conference on Big Data (BigData). IEEE, 2024.
  • LAG+ [22] Bingbin Liu, Jordan T Ash, Surbhi Goel, Akshay Krishnamurthy, and Cyril Zhang. Transformers learn shortcuts to automata. arXiv preprint arXiv:2210.10749, 2022.
  • [41] Chenyang Li, Yingyu Liang, Zhenmei Shi, Zhao Song, and Tianyi Zhou. Fourier circuits in neural networks and transformers: A case study of modular arithmetic with multiple inputs. arXiv preprint arXiv:2402.09469, 2024.
  • [42] Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song, and Junwei Yu. Fast john ellipsoid computation with differential privacy optimization. arXiv preprint arXiv:2408.06395, 2024.
  • [43] Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song, and Yufa Zhou. Fine-grained attention i/o complexity: Comprehensive analysis for backward passes. arXiv preprint arXiv:2410.09397, 2024.
  • [44] Yingyu Liang, Heshan Liu, Zhenmei Shi, Zhao Song, Zhuoyan Xu, and Junze Yin. Conv-basis: A new paradigm for efficient attention inference and gradient computation in transformers. arXiv preprint arXiv:2405.05219, 2024.
  • [45] Yingyu Liang, Jiangxuan Long, Zhenmei Shi, Zhao Song, and Yufa Zhou. Beyond linear approximations: A novel pruning approach for attention matrix. arXiv preprint arXiv:2410.11261, 2024.
  • LLSS [24] Xiaoyu Li, Yingyu Liang, Zhenmei Shi, and Zhao Song. A tighter complexity analysis of sparsegpt. arXiv preprint arXiv:2408.12151, 2024.
  • LLSZ [24] Xiaoyu Li, Jiangxuan Long, Zhao Song, and Tianyi Zhou. Fast second-order method for neural network under small treewidth setting. In 2024 IEEE International Conference on Big Data (BigData). IEEE, 2024.
  • LLZM [24] Zhiyuan Li, Hong Liu, Denny Zhou, and Tengyu Ma. Chain of thought empowers transformers to solve inherently serial problems. In The Twelfth International Conference on Learning Representations, 2024.
  • [49] Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song, and Yufa Zhou. Looped relu mlps may be all you need as practical programmable computers. arXiv preprint arXiv:2410.09375, 2024.
  • [50] Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song, and Yufa Zhou. Multi-layer transformers gradient can be approximated in almost linear time. arXiv preprint arXiv:2408.13233, 2024.
  • LSSY [24] Yingyu Liang, Zhenmei Shi, Zhao Song, and Chiwun Yang. Toward infinite-long prefix in transformer. arXiv preprint arXiv:2406.14036, 2024.
  • [52] Yingyu Liang, Zhenmei Shi, Zhao Song, and Yufa Zhou. Differential privacy of cross-attention with provable guarantee. arXiv preprint arXiv:2407.14717, 2024.
  • [53] Yingyu Liang, Zhenmei Shi, Zhao Song, and Yufa Zhou. Tensor attention training: Provably efficient learning of higher-order transformers. arXiv preprint arXiv:2405.16411, 2024.
  • LSY [24] Xiaoyu Li, Zhao Song, and Junwei Yu. Quantum speedups for approximating the john ellipsoid. arXiv preprint arXiv:2408.14018, 2024.
  • [55] William Merrill and Ashish Sabharwal. A logic for expressing log-precision transformers. Advances in Neural Information Processing Systems, 36, 2023.
  • [56] William Merrill and Ashish Sabharwal. The parallelism tradeoff: Limitations of log-precision transformers. Transactions of the Association for Computational Linguistics, 11:531–545, 2023.
  • MSS [22] William Merrill, Ashish Sabharwal, and Noah A Smith. Saturated transformers are constant-depth threshold circuits. Transactions of the Association for Computational Linguistics, 10:843–856, 2022.
  • PAP+ [23] Fabian Paischer, Thomas Adler, Vihang Patil, Angela Bitto-Nemling, Markus Holzleitner, Sebastian Lehner, Hamid Eghbal-zadeh, and Sepp Hochreiter. History compression via language models in reinforcement learning, 2023.
  • RSL+ [21] Hubert Ramsauer, Bernhard Schafl, Johannes Lehner, Philipp Seidl, Michael Widrich, Lukas Gruber, Markus Holzleitner, Thomas Adler, David Kreil, Michael K Kopp, et al. Hopfield networks is all you need. In International Conference on Learning Representations, 2021.
  • RSP [16] Siamak Ravanbakhsh, Jeff Schneider, and Barnabas Poczos. Deep learning with sets and point clouds. arXiv preprint arXiv:1611.04500, 2016.
  • Sip [96] Michael Sipser. Introduction to the theory of computation. ACM Sigact News, 27(1):27–29, 1996.
  • SMN+ [24] Zhenmei Shi, Yifei Ming, Xuan-Phi Nguyen, Yingyu Liang, and Shafiq Joty. Discovering the gems in early layers: Accelerating long-context llms with 1000x input token reduction. arXiv preprint arXiv:2409.17422, 2024.
  • SRD+ [22] Philipp Seidl, Philipp Renz, Natalia Dyubankova, Paulo Neves, Jonas Verhoeven, Jorg K Wegner, Marwin Segler, Sepp Hochreiter, and Gunter Klambauer. Improving few-and zero-shot reaction template prediction using modern hopfield networks. Journal of chemical information and modeling, 62(9):2111–2120, 2022.
  • SSF+ [23] Johannes Schimunek, Philipp Seidl, Lukas Friedrich, Daniel Kuhn, Friedrich Rippmann, Sepp Hochreiter, and Günter Klambauer. Context-enriched molecule representations improve few-shot drug discovery. arXiv preprint arXiv:2305.09481, 2023.
  • SSX [23] Anshumali Shrivastava, Zhao Song, and Zhaozhuo Xu. A theoretical analysis of nearest neighbor search on approximate near neighbor graph. arXiv preprint arXiv:2303.06210, 2023.
  • SY [23] Zhao Song and Chiwun Yang. An automatic learning rate schedule algorithm for achieving faster convergence and steeper descent. arXiv preprint arXiv:2310.11291, 2023.
  • SZZ [24] Zhao Song, Lichen Zhang, and Ruizhe Zhang. Training multi-layer over-parametrized neural network in subquadratic time. In Innovations in Theoretical Computer Science (ITCS), pages 93:1–93:15, 2024.
  • Vol [99] Heribert Vollmer. Introduction to circuit complexity: a uniform approach. Springer Science & Business Media, 1999.
  • VSP+ [23] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2023.
  • WHHL [24] Dennis Wu, Jerry Yao-Chieh Hu, Teng-Yun Hsiao, and Han Liu. Uniform memory retrieval with larger capacity for modern hopfield models. arXiv preprint arXiv:2404.03827, 2024.
  • WHL+ [24] Dennis Wu, Jerry Yao-Chieh Hu, Weijian Li, Bo-Yu Chen, and Han Liu. STanhop: Sparse tandem hopfield model for memory-enhanced time series prediction. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
  • WSR+ [20] Michael Widrich, Bernhard Schäfl, Hubert Ramsauer, Milena Pavlović, Lukas Gruber, Markus Holzleitner, Johannes Brandstetter, Geir Kjetil Sandve, Victor Greiff, Sepp Hochreiter, and Günter Klambauer. Modern hopfield networks and attention for immune repertoire classification, 2020.
  • XHH+ [24] Chenwei Xu, Yu-Chao Huang, Jerry Yao-Chieh Hu, Weijian Li, Ammar Gilani, Hsi-Sheng Goan, and Han Liu. Bishop: Bi-directional cellular learning for tabular data with generalized sparse modern hopfield model. In Forty-first International Conference on Machine Learning (ICML), 2024.
  • XSL [24] Zhuoyan Xu, Zhenmei Shi, and Yingyu Liang. Do large language models have compositional ability? an investigation into limitations and scalability. In First Conference on Language Modeling, 2024.

Appendix

Roadmap.

In Appendix A, we give an extended discussion of related works. In Appendix B, we introduce essential definitions for circuit complexity and floating-point numbers used in this paper. In Appendix C, we present additional theoretical results on the modern Hopfield model with chain of thought (CoT). In Appendix D, we present the formulation of the graph connectivity and tree isomorphism problems. In Appendix E, we present the formal proofs for kernelized Hopfield networks.

Appendix A More Related Works

Modern Hopfield Networks and Applications in Deep Learning.

Classical Hopfield networks [25, 26, 35] have long been recognized for their ability to emulate human brain associative memory, primarily focusing on the storage and retrieval of memory patterns. Originally limited by linear memory capacity [35], these networks now achieve exponential storage capacity [14] and admit kernelized extensions [70], allowing for efficient handling of vast numbers of memory patterns. Architectural innovations [22, 63, 16] have integrated Hopfield networks into models like Transformers, serving as advanced attention mechanisms [69]. Their biological plausibility [37, 36] has been a key focus, aligning computational models more closely with human associative memory. These developments have expanded applications across various fields, including tabular data learning [73], drug discovery [64], immunology [72], time series forecasting [70], reinforcement learning [58], and large foundation models [31, 16, 20], demonstrating their versatility in addressing complex problems.

Limitations of Transformer Architectures.

While Transformers excel in natural language processing, their capabilities in mathematical computation remain constrained [6]. To understand these limitations, researchers have examined two types of Transformers: (1) average-head attention, which assigns 1 to the largest probability entry and 0 to the others, and (2) softmax attention, which computes probabilities as $\mathsf{Softmax}(X)=\operatorname{diag}(\exp(X)\cdot\mathbf{1}_n)^{-1}\cdot\exp(X)$. Average-head Transformers can recognize languages beyond $\mathsf{AC}^0$, but they are confined to $\mathsf{TC}^0$, as shown by [57]. Similarly, softmax-attention Transformers are also within $\mathsf{TC}^0$ [40]. Extending this, [56] formalize a generalized similarity function $s$ and demonstrate that softmax-attention Transformers fall under $\mathsf{L}$-uniform $\mathsf{TC}^0$. Through a first-order logic framework with $\mathsf{MAJORITY}$ quantifiers ($\mathsf{FOM}$) [32], [55] show that these Transformers can be simulated in $\mathsf{DLOGTIME}$-uniform $\mathsf{TC}^0$. [7] further enhances the precision analysis, proving that Transformers with minimal error ($2^{-O(\mathrm{poly}(n))}$) also belong to this class. For practical problems, [17] reveal that unless $\mathsf{TC}^0=\mathsf{NC}^1$, log-precision Transformers cannot solve arithmetic, equation-solving, or Context-Free Grammar (CFG) membership tasks [61]. These findings explain why Transformers often struggle with mathematical reasoning. Recently, [8] studied the circuit complexity bounds of RoPE-based transformer architectures. Relevant works on this topic include looped transformers [3, 49, 9, 38, 50, 10], accelerating attention computation [8, 31, 45, 51, 73, 21, 67, 70, 29, 23, 47, 39, 71, 54, 43, 44, 62, 46, 42, 30], and more related works [74, 66, 15, 65, 18, 52, 41, 28].

Appendix B Background

B.1 Background on Circuit Complexity

Definition B.1 (Boolean circuit).

We define a Boolean circuit with $n$ inputs and $m$ outputs as a function $C_n:\{0,1\}^n\rightarrow\{0,1\}^m$ computed on a directed acyclic graph. Each node in the graph represents a logic gate in $\{\mathsf{AND},\mathsf{OR},\mathsf{NOT}\}$. The graph contains $n$ input nodes with in-degree $0$ and $m$ output nodes with out-degree $0$. The circuit evaluates the value at each non-input gate based on the inputs it receives from other gates. The size of the circuit is the total number of gates, and its depth is the maximum length of a directed path.

To address varying input sizes, the concept of circuit families is introduced.

Definition B.2 (Circuit family and language recognition, Definition 1.10 on page 10 in [68]).

A circuit family is a set $\mathcal{C}=\{C_n:n\in\mathbb{N}\}$, where for each $n\in\mathbb{N}$, $C_n$ is a circuit with $n$ inputs. The family $\mathcal{C}$ computes a function $f:\{0,1\}^*\rightarrow\{0,1\}^*$ if for every $x\in\{0,1\}^*$, $C_{|x|}(x)=f(x)$. Similarly, $\mathcal{C}$ is said to compute a language $L\subseteq\{0,1\}^*$ if $\mathcal{C}$ computes the indicator function $\mathbbm{1}_L$ of $L$.

A language L{0,1}L\subseteq\{0,1\}^{*} is a set of binary strings that represent the "yes" instances of a given decision problem, where {0,1}\{0,1\}^{*} denotes the set of all finite binary strings of any length.

The complexity classes 𝖭𝖢i\mathsf{NC}^{i}, 𝖠𝖢i\mathsf{AC}^{i}, and 𝖳𝖢i\mathsf{TC}^{i} characterize languages recognizable by Boolean circuits under different constraints. 𝖭𝖢i\mathsf{NC}^{i} consists of languages that can be computed using circuits with polynomial size, depth O((logn)i)O((\log n)^{i}), and bounded fan-in 𝖠𝖭𝖣\mathsf{AND}, 𝖮𝖱\mathsf{OR}, and 𝖭𝖮𝖳\mathsf{NOT} gates. Extending this, 𝖠𝖢i\mathsf{AC}^{i} allows unbounded fan-in for 𝖠𝖭𝖣\mathsf{AND} and 𝖮𝖱\mathsf{OR} gates while maintaining the same size and depth constraints, enabling the recognition of a broader range of languages. Further generalizing, 𝖳𝖢i\mathsf{TC}^{i} incorporates 𝖬𝖠𝖩𝖮𝖱𝖨𝖳𝖸\mathsf{MAJORITY} gates. These 𝖳𝖢i\mathsf{TC}^{i} circuits also support unbounded fan-in 𝖠𝖭𝖣\mathsf{AND}, 𝖮𝖱\mathsf{OR}, and 𝖭𝖮𝖳\mathsf{NOT} gates while adhering to the same size and depth limits, making them more expressive than both 𝖭𝖢i\mathsf{NC}^{i} and 𝖠𝖢i\mathsf{AC}^{i}.
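As a small illustration of the only gate type that 𝖳𝖢i\mathsf{TC}^{i} adds on top of 𝖠𝖢i\mathsf{AC}^{i}, the sketch below implements an unbounded fan-in 𝖬𝖠𝖩𝖮𝖱𝖨𝖳𝖸\mathsf{MAJORITY} gate under one common convention (output 1 iff strictly more than half of the inputs are 1); conventions differ on ties, so this choice is an assumption.

# MAJORITY gate with unbounded fan-in: outputs 1 iff more than half of the inputs are 1.
def majority(bits):
    return int(sum(bits) > len(bits) // 2)

print(majority([1, 0, 1]))     # 1
print(majority([1, 0, 0, 1]))  # 0  (exactly half is not a strict majority here)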

The class 𝖯\mathsf{P} encompasses languages decidable by deterministic Turing machines in polynomial time. The circuit classes 𝖭𝖢i\mathsf{NC}^{i}, 𝖠𝖢i\mathsf{AC}^{i}, and 𝖳𝖢i\mathsf{TC}^{i} form a hierarchy: for every fixed i0i\geq 0,

𝖭𝖢i𝖠𝖢i𝖳𝖢i𝖭𝖢i+1𝖯.\displaystyle\mathsf{NC}^{i}\subseteq\mathsf{AC}^{i}\subseteq\mathsf{TC}^{i}\subseteq\mathsf{NC}^{i+1}\subseteq\mathsf{P}.

However, whether 𝖳𝖢0\mathsf{TC}^{0} is strictly contained in 𝖭𝖢1\mathsf{NC}^{1} remains unresolved.

Uniform circuit families are more practical and relevant in computational theory compared to non-uniform families. For 𝖫\mathsf{L}-uniformity, a Turing machine with logarithmic space must output a circuit for any given input size. Alternatively, 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniformity requires a random access Turing machine to construct circuits in logarithmic time. These definitions ensure feasibility and consistency in circuit design. While 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniformity is generally equivalent to 𝖫\mathsf{L}-uniformity, differences arise in smaller circuit classes that lack the ability to simulate their own generation process.

B.2 Floating-Point Numbers

This section presents key definitions necessary for our computational framework. We introduce basic concepts related to floating-point numbers and the definitions of their arithmetic operations, which are fundamental to implementing efficient Hopfield network computations.

Definition B.3 (Floating-point number, Definition 9 in [7]).

A p-bit floating-point number is represented as a pair m,e\langle m,e\rangle, where mm is the significand and ee is the exponent. Specifically, m(2p,2p1]{0}[2p1,2p)m\in(-2^{p},-2^{p-1}]\cup\{0\}\cup[2^{p-1},2^{p}) and e[2p,2p)e\in[-2^{p},2^{p}). The floating-point value of m,e\langle m,e\rangle is the real number m2em\cdot 2^{e}. The set of all p-bit floating-point numbers is denoted as 𝔽p\mathbb{F}_{p}.

To work with floating-point numbers effectively, we define specific rules for rounding and performing arithmetic operations:

Definition B.4 (Rounding, Definition 9 in [7]).

For any real number or floating-point value xx, we define roundp(x)\mathrm{round}_{p}(x) as the p-bit floating-point number with an even significand closest to xx.
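To illustrate Definitions B.3 and B.4, the following Python sketch (our own toy illustration; it ignores the exponent range restriction e[2p,2p)e\in[-2^{p},2^{p}) and other corner cases) represents a pp-bit floating-point number as a pair m,e\langle m,e\rangle and rounds a real number to the nearest representable value, breaking ties toward an even significand.

import math

def fp_value(m, e):
    # Value of the p-bit floating-point number <m, e>.
    return m * (2 ** e)

def round_p(x, p):
    # Round x to the nearest p-bit floating-point number, ties to an even significand.
    if x == 0:
        return (0, 0)
    sign = 1 if x > 0 else -1
    a = abs(x)
    e = math.floor(math.log2(a)) - (p - 1)   # so that a / 2^e lies in [2^(p-1), 2^p)
    m = round(a / (2 ** e))                  # Python's round() is round-half-to-even
    if m >= 2 ** p:                          # rounding may push m into the next binade
        m //= 2
        e += 1
    return (sign * m, e)

m, e = round_p(math.pi, p=4)
print(m, e, fp_value(m, e))  # 13 -2 3.25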

Based on these definitions, the arithmetic operations on floating-point numbers required for Hopfield networks (addition, multiplication, division, and comparison) follow the convention of [7]: each operation is computed exactly and the result is then rounded to pp bits using roundp\mathrm{round}_{p}.

These operations are not merely theoretical; they can be implemented by constant-depth threshold circuits, as demonstrated by the following lemma:

Lemma B.5 (Basic floating-point operations in 𝖳𝖢0\mathsf{TC}^{0}, Lemmas 10, 11 in [7]).

If the precision ppoly(n)p\leq\operatorname{poly}(n), then the following results are valid:

  • Part 1: Let x1,x2x_{1},x_{2} be two pp-bit floating-point numbers. The arithmetic operations of addition, multiplication, division, and comparison of x1x_{1} and x2x_{2}, as described in [7], can be executed by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit of constant depth and size bounded by poly(n)\operatorname{poly}(n). The maximum depth required for these operations is denoted by dstdd_{\mathrm{std}}.

  • Part 2: Let x1,,xnx_{1},\ldots,x_{n} be nn pp-bit floating-point numbers. The iterated multiplication of x1,,xnx_{1},\ldots,x_{n} can be executed by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit of constant depth and size bounded by poly(n)\operatorname{poly}(n). The depth of this circuit is denoted by dd_{\otimes}.

  • Part 3: Let x1,,xnx_{1},\ldots,x_{n} be nn pp-bit floating-point numbers. The iterated addition of x1,,xnx_{1},\ldots,x_{n}, where rounding is applied only after the exact summation is completed, can be executed by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit of constant depth and size bounded by poly(n)\operatorname{poly}(n). The depth of this circuit is denoted by dd_{\oplus}.
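Continuing the sketch above (and reusing its round_p and fp_value helpers, so this snippet is illustrative rather than standalone), the basic operations of Lemma B.5 can be modeled by computing each result exactly and then rounding to pp bits; Part 3's iterated addition rounds only once, after the exact sum. This mirrors the convention of [7] as we read it.

# Builds on round_p / fp_value from the previous sketch (illustration only).
def fp_add(x, y, p):
    # Addition of two p-bit floats: exact sum, then one rounding step.
    return round_p(fp_value(*x) + fp_value(*y), p)

def fp_mul(x, y, p):
    # Multiplication of two p-bit floats: exact product, then one rounding step.
    return round_p(fp_value(*x) * fp_value(*y), p)

def fp_sum(xs, p):
    # Iterated addition (Part 3 of Lemma B.5): round only after the full exact sum.
    return round_p(sum(fp_value(m, e) for m, e in xs), p)

p = 4
a, b, c = round_p(1.3, p), round_p(2.6, p), round_p(0.1, p)
print(fp_add(a, b, p), fp_mul(a, b, p), fp_sum([a, b, c], p))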

Lemma B.6 (Computing exp\exp in 𝖳𝖢0\mathsf{TC}^{0}, Lemma 12 in [7]).

Assume that the precision satisfies ppoly(n)p\leq\operatorname{poly}(n). For any pp-bit floating-point number xx, there exists a uniform threshold circuit of constant depth and size bounded by poly(n)\operatorname{poly}(n) that approximates exp(x)\exp(x) with relative error at most 2p2^{-p}. The depth of this circuit is denoted by dexpd_{\mathrm{exp}}.

Lemma B.7 (Approximating square root in 𝖳𝖢0\mathsf{TC}^{0}, Lemma 12 in [7]).

Assume that the precision satisfies ppoly(n)p\leq\operatorname{poly}(n). There exists a uniform threshold circuit of constant depth and size bounded by poly(n)\operatorname{poly}(n) that computes x\sqrt{x} for any pp-bit floating-point number xx with relative error at most 2p2^{-p}. The depth of this circuit is denoted by dsqrtd_{\mathrm{sqrt}}.

Appendix C Additional Theoretical Results for Modern Hopfield Model with Chain of Thought

We provide theoretical results for modern Hopfield networks that incorporate the chain of thought (CoT) mechanism. We first describe the architecture of the modern Hopfield model.

C.1 Modern Hopfield Model Architecture with CoT

Let 𝒱\mathcal{V} be the vocabulary, and consider a Hopfield model with parameters θ\theta and a maximum input length nmaxn_{\mathrm{max}}. This model maps an input token sequence (x1,,xn)𝒱n(x_{1},\ldots,x_{n})\in\mathcal{V}^{n} (for nnmaxn\leq n_{\mathrm{max}}) to a probability distribution over 𝒱\mathcal{V}, which we denote by pθ(|x1,,xn)p_{\theta}(\cdot|x_{1},\ldots,x_{n}). We write 𝖬𝖧𝖬θ(x)\mathsf{MHM}_{\theta}(x) for the function that selects the token in 𝒱\mathcal{V} maximizing pθ(|x1,,xn)p_{\theta}(\cdot|x_{1},\ldots,x_{n}), defined as:

𝖬𝖧𝖬θ(x1,,xn)=argmaxy𝒱pθ(y|x1,,xn).\displaystyle\mathsf{MHM}_{\theta}(x_{1},\ldots,x_{n})=\arg\max_{y\in\mathcal{V}}p_{\theta}(y|x_{1},\ldots,x_{n}).

We define the next-token generator with parameters θ\theta and maximum input length nmaxn_{\mathrm{max}} as the map from n=1nmax𝒱n\cup_{n=1}^{n_{\mathrm{max}}}\mathcal{V}^{n} to 𝒱\mathcal{V} given recursively by:

𝖬𝖧𝖬θi(x1,x2,,xn)=𝖬𝖧𝖬θi1(x1,x2,,xn,𝖬𝖧𝖬θ(x1,x2,,xn)),\displaystyle\mathsf{MHM}_{\theta}^{i}(x_{1},x_{2},\ldots,x_{n})=\mathsf{MHM}_{\theta}^{i-1}(x_{1},x_{2},\ldots,x_{n},\mathsf{MHM}_{\theta}(x_{1},x_{2},\ldots,x_{n})),

for every i2i\geq 2 with i+nnmax1i+n\leq n_{\mathrm{max}}-1, together with the base case:

𝖬𝖧𝖬θ1(x1,x2,,xn)=𝖬𝖧𝖬θ(x1,x2,,xn).\displaystyle\mathsf{MHM}_{\theta}^{1}(x_{1},x_{2},\ldots,x_{n})=\mathsf{MHM}_{\theta}(x_{1},x_{2},\ldots,x_{n}).
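Unrolling this recursion, 𝖬𝖧𝖬θi\mathsf{MHM}_{\theta}^{i} appends one generated token at a time and returns the ii-th generated token. The following Python sketch (our own illustration; mhm stands in for the single-step generator, and toy_mhm is a made-up stand-in rather than the paper's model) makes this explicit.

# i-step next-token generation with CoT, given a black-box single-step generator mhm.
def mhm_cot(mhm, tokens, steps, n_max):
    tokens = list(tokens)
    assert len(tokens) + steps <= n_max
    for _ in range(steps - 1):       # append the intermediate chain-of-thought tokens
        tokens.append(mhm(tokens))
    return mhm(tokens)               # the i-th generated token is returned as the answer

# Toy single-step generator over an integer vocabulary (for illustration only).
toy_mhm = lambda toks: sum(toks) % 5
print(mhm_cot(toy_mhm, [1, 2, 3], steps=3, n_max=16))  # 4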

Architecture Overview: The architecture is similar to that of GPT-like models and comprises four main components: (1) a token embedding layer, (2) a position encoding layer, (3) an output linear layer, and (4) LL identical decoder layers. Each decoder layer consists of a single Hopfield layer (see Definition 3.2) and a two-layer ReLU feedforward network (see Definition 3.4). Together, these decoder layers form an LL-layer modern Hopfield network (see Definition 3.3). The model parameters θ\theta are organized as θ=(θ𝖯𝖤,θ𝖳𝖤,θ𝖮𝖴𝖳𝖯𝖴𝖳,θ𝖬𝖧𝖭),\theta=(\theta_{\mathsf{PE}},\theta_{\mathsf{TE}},\theta_{\mathsf{OUTPUT}},\theta_{\mathsf{MHN}}), where θ𝖬𝖧𝖭={θ𝖧𝗈𝗉(l),θ𝖥𝖭𝖭(l)}l=0L1\theta_{\mathsf{MHN}}=\{\theta_{\mathsf{Hop}}^{(l)},\theta_{\mathsf{FNN}}^{(l)}\}_{l=0}^{L-1} are the trainable parameters (see Algorithm 1).

The token embedding layer, parameterized by θ𝖳𝖤d×|𝒱|\theta_{\mathsf{TE}}\in\mathbb{R}^{d\times|\mathcal{V}|}, maps tokens in 𝒱\mathcal{V} to d\mathbb{R}^{d}: θ𝖳𝖤(x),x𝒱.\theta_{\mathsf{TE}}(x),\quad\forall x\in\mathcal{V}. The position encoding layer, with parameters θ𝖯𝖤d×nmax\theta_{\mathsf{PE}}\in\mathbb{R}^{d\times n_{\mathrm{max}}}, maps positions in [nmax][n_{\mathrm{max}}] to d\mathbb{R}^{d}: θ𝖯𝖤(n),n[nmax].\theta_{\mathsf{PE}}(n),\quad\forall n\in[n_{\mathrm{max}}]. The output layer, parameterized by θ𝖮𝖴𝖳𝖯𝖴𝖳|𝒱|×d\theta_{\mathsf{OUTPUT}}\in\mathbb{R}^{|\mathcal{V}|\times d}, is defined by:

𝖮𝖴𝖳𝖯𝖴𝖳θ𝖮𝖴𝖳𝖯𝖴𝖳(h)=softmax(θ𝖮𝖴𝖳𝖯𝖴𝖳h),hd.\displaystyle\mathsf{OUTPUT}_{\theta_{\mathsf{OUTPUT}}}(h)=\mathrm{softmax}(\theta_{\mathsf{OUTPUT}}h),\quad\forall h\in\mathbb{R}^{d}.
Algorithm 1 Modern Hopfield Model, 𝖬𝖧𝖬θ\mathsf{MHM}_{\theta} and pθp_{\theta}
1:Input: Parameters θ𝖯𝖤,θ𝖳𝖤,θ𝖮𝖴𝖳𝖯𝖴𝖳,θ𝖬𝖧𝖭\theta_{\mathsf{PE}},\theta_{\mathsf{TE}},\theta_{\mathsf{OUTPUT}},\theta_{\mathsf{MHN}} and input tokens (x1,,xn)𝒱n(x_{1},\ldots,x_{n})\in\mathcal{V}^{n}
2:Output: An output token 𝖬𝖧𝖬θ(x)\mathsf{MHM}_{\theta}(x), an output distribution pθ(|x1,,xi)p_{\theta}(\cdot|x_{1},\ldots,x_{i}) for every i[n]i\in[n].
3:hi(0)θ𝖳𝖤(xi)+θ𝖯𝖤(i),i[n]h_{i}^{(0)}\leftarrow\theta_{\mathsf{TE}}(x_{i})+\theta_{\mathsf{PE}}(i),\quad\forall i\in[n]
4:(h1(1),,hn(1))𝖬𝖧𝖭θ𝖬𝖧𝖭(h1(0),,hn(0))(h_{1}^{(1)},\ldots,h_{n}^{(1)})\leftarrow\mathsf{MHN}_{\theta_{\mathsf{MHN}}}(h_{1}^{(0)},\ldots,h_{n}^{(0)})
5:pθ(|x1,,xi)𝖮𝖴𝖳𝖯𝖴𝖳θ𝖮𝖴𝖳𝖯𝖴𝖳(hi(1))p_{\theta}(\cdot|x_{1},\ldots,x_{i})\leftarrow\mathsf{OUTPUT}_{\theta_{\mathsf{OUTPUT}}}(h_{i}^{(1)}) for i[n]i\in[n]
6:𝖬𝖧𝖬θ(x)argmaxypθ(y|x1,,xn).\mathsf{MHM}_{\theta}(x)\leftarrow\arg\max_{y}p_{\theta}(y|x_{1},\ldots,x_{n}).
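The following NumPy sketch traces Algorithm 1 with toy dimensions (our own illustration; the LL-layer network mhn is treated as a black box standing in for 𝖬𝖧𝖭θ𝖬𝖧𝖭\mathsf{MHN}_{\theta_{\mathsf{MHN}}}, and all parameter shapes are assumptions for the demo).

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mhm_forward(tokens, theta_TE, theta_PE, theta_OUT, mhn):
    # Line 3: token embedding plus position encoding.
    H0 = np.stack([theta_TE[:, x] + theta_PE[:, i] for i, x in enumerate(tokens)])
    # Line 4: L-layer modern Hopfield network (black box here).
    H1 = mhn(H0)
    # Lines 5-6: per-position output distributions and the argmax next token.
    probs = [softmax(theta_OUT @ h) for h in H1]
    return probs, int(np.argmax(probs[-1]))

d, vocab, n_max = 8, 10, 16
rng = np.random.default_rng(0)
theta_TE = rng.standard_normal((d, vocab))
theta_PE = rng.standard_normal((d, n_max))
theta_OUT = rng.standard_normal((vocab, d))
probs, next_token = mhm_forward([3, 1, 4], theta_TE, theta_PE, theta_OUT, mhn=lambda H: H)
print(next_token, probs[-1].shape)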

C.2 Results of Hopfield with CoT

In this subsection, we present the theoretical results for the modern Hopfield model. First, we show that the Hopfield update rule can realize the transformer attention mechanism.

Lemma C.1 (The Hopfield update rule can be interpreted as a form of the transformer attention mechanism, page 5 in [59]).

Let NN be the number of stored (key) patterns, represented as row vectors yiy_{i}, and SS be the number of state (query) patterns, represented as row vectors rir_{i}. Define Y=(y1,,yN),R=(r1,,rS).Y=(y_{1},\ldots,y_{N})^{\top},R=(r_{1},\ldots,r_{S})^{\top}. The Hopfield update rule can be expressed as:

Z=softmax(1dkQK)V=softmax(βRWQWKY)YWKWV,\displaystyle Z=\mathrm{softmax}(\frac{1}{\sqrt{d_{k}}}QK^{\top})V=\mathrm{softmax}(\beta\,RW_{Q}W_{K}^{\top}Y^{\top})YW_{K}W_{V}, (4)

where the following hold: X=K=YWK,Ξ=Q=RWQ,V=YWKWV=XWV,X^{\top}=K=YW_{K},\Xi^{\top}=Q=RW_{Q},V=YW_{K}W_{V}=X^{\top}W_{V}, and WKdy×dk,WQdr×dk,WVdk×dv,W_{K}\in\mathbb{R}^{d_{y}\times d_{k}},W_{Q}\in\mathbb{R}^{d_{r}\times d_{k}},W_{V}\in\mathbb{R}^{d_{k}\times d_{v}}, and β=1dk.\beta=\frac{1}{\sqrt{d_{k}}}. Here, dkd_{k} is the dimension of the Hopfield space, dyd_{y} and drd_{r} are the dimensions of the stored and state patterns, respectively, and ZZ is the resulting attention output, i.e., the updated state patterns.

The middle expression in Eq. 4 is exactly the attention mechanism of the transformer. Specifically, when R=YR=Y and the product WKWVW_{K}W_{V} is replaced by a single matrix WVW_{V}, the Hopfield update rule reduces to transformer self-attention.
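The equivalence in Eq. (4) can be checked numerically. The sketch below (random toy data and dimensions of our own choosing) evaluates the middle and right-hand expressions of Eq. (4) and confirms that they coincide.

import numpy as np

def row_softmax(M):
    E = np.exp(M - M.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
N, S, d_y, d_r, d_k, d_v = 6, 4, 5, 5, 3, 2
Y = rng.standard_normal((N, d_y))   # stored (key) patterns as rows
R = rng.standard_normal((S, d_r))   # state (query) patterns as rows
W_K = rng.standard_normal((d_y, d_k))
W_Q = rng.standard_normal((d_r, d_k))
W_V = rng.standard_normal((d_k, d_v))
beta = 1.0 / np.sqrt(d_k)

Q, K, V = R @ W_Q, Y @ W_K, Y @ W_K @ W_V
Z_attention = row_softmax(beta * Q @ K.T) @ V                            # middle of Eq. (4)
Z_hopfield = row_softmax(beta * R @ W_Q @ W_K.T @ Y.T) @ Y @ W_K @ W_V   # right of Eq. (4)
print(np.allclose(Z_attention, Z_hopfield))  # True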

Definition C.2 (Word Problem for Group GG, [48]).

For a group GG and nn elements (f1,,fn)(f_{1},\ldots,f_{n}) from GG, the word problem G\mathcal{L}_{G} is defined as the problem of determining whether the product f1f2fnf_{1}\circ f_{2}\circ\cdots\circ f_{n} equals the identity element of GG.
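As a concrete instance of Definition C.2, the following Python sketch (our own illustration) decides the word problem for S5S_{5}, the group of permutations of {0,,4}\{0,\ldots,4\}, by composing the given permutations and comparing the result with the identity.

from itertools import permutations
import random

def compose(f, g):
    # (f o g)(x) = f(g(x)); permutations of {0,...,4} are stored as tuples.
    return tuple(f[g[x]] for x in range(5))

def inverse(f):
    inv = [0] * 5
    for x, fx in enumerate(f):
        inv[fx] = x
    return tuple(inv)

def word_problem(word):
    # Decide whether f_1 o f_2 o ... o f_n equals the identity of S_5.
    prod = tuple(range(5))
    for f in word:
        prod = compose(prod, f)
    return prod == tuple(range(5))

random.seed(0)
s5 = list(permutations(range(5)))
w = [random.choice(s5) for _ in range(8)]
print(word_problem(w))                                      # typically False
print(word_problem(w + [inverse(f) for f in reversed(w)]))  # True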

Lemma C.3 (Theorem 3.5 in [48]).

Assuming 𝖳𝖢0𝖭𝖢1\mathsf{TC}^{0}\subsetneq\mathsf{NC}^{1}, a Transformer with nn-step CoT, embedding size logn\log n, and constant precision can solve the word problem S5\mathcal{L}_{S_{5}}, where S5S_{5} is the permutation group over 5 elements. However, a Transformer with constant depth, polynomial embedding size, and logn\log n precision, but without CoT, cannot solve it.

The proof of Theorem C.4 is a direct consequence of Lemma C.3.

Theorem C.4.

Assuming 𝖳𝖢0𝖭𝖢1\mathsf{TC}^{0}\subsetneq\mathsf{NC}^{1}, the modern Hopfield model with nn-step CoT, embedding size logn\log n, and constant precision can solve the word problem S5\mathcal{L}_{S_{5}}. However, the modern Hopfield model with constant depth, polynomial embedding size poly(n)\mathrm{poly}(n), and logn\log n precision, but without CoT, cannot solve it.

Appendix D Graph Connectivity and Tree Isomorphism Problems

D.1 Undirected Graph Connectivity Problem

Definition D.1 (Undirected graph connectivity problem).

The undirected graph connectivity problem determines whether a path connects two specific vertices uu and vv in an undirected graph GG.
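As a point of reference for Definition D.1, a straightforward sequential decision procedure is breadth-first search (our own sketch below); the complexity statements in Lemma D.4 concern parallel circuit depth rather than this sequential algorithm.

from collections import deque

def connected(n, edges, u, v):
    # Decide whether u and v lie in the same connected component of an undirected graph.
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return True
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return False

edges = [(0, 1), (1, 2), (3, 4)]
print(connected(5, edges, 0, 2), connected(5, edges, 0, 4))  # True False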

Definition D.2 (𝖥𝖫\mathsf{FL}, page 8 on [12]).

The class 𝖥𝖫\mathsf{FL} is the set of languages recognized by a deterministic Turing Machine in space O(logn)O(\log n).

With these definitions in place, we recall the following results.

Fact D.3 (Proposition 4.1 in [12]).

𝖭𝖢1𝖥𝖫\mathsf{NC}^{1}\subseteq\mathsf{FL}

Lemma D.4 (Theorem 3 in [11]).

Undirected graph connectivity is 𝖭𝖢1\mathsf{NC}^{1}-hard for 𝖥𝖫\mathsf{FL} (Definition D.2). When the given graph is known to be a disjoint union of cycles, the connectivity problem is 𝖭𝖢1\mathsf{NC}^{1}-complete for 𝖥𝖫\mathsf{FL}.

D.2 Tree Isomorphism Problem

Definition D.5 (Colored trees, page 2 on [34]).

A tree with n nodes is said to be colored if each node in the tree is labeled with a positive integer no greater than n.

Definition D.6 (Isomorphism, page 2 on [34]).

An isomorphism between two colored trees T1T_{1} and T2T_{2} is a bijection between their node sets that satisfies the following conditions:

  • The root of T1T_{1} maps to the root of T2T_{2}.

  • The colors of the nodes are preserved.

  • The structure of the edges is maintained.

We denote T1T2T_{1}\simeq T_{2} if an isomorphism exists between them. These notions are similarly applicable to non-colored trees.

Following the definition, we explore its computational implications.

Definition D.7 (Tree isomorphism problem, page 2 on [34]).

Given two trees T1T_{1} and T2T_{2}, the tree isomorphism problem is to determine whether T1T2T_{1}\simeq T_{2}. If the trees are colored, the problem is referred to as the colored tree isomorphism problem.

Lemma D.8 (Theorem 3.3 in [34]).

In the string representation, the colored tree isomorphism problem and the tree isomorphism problem are 𝖭𝖢1\mathsf{NC}^{1}-complete under DLT\leq^{\mathrm{DLT}}, where DLT\leq^{\mathrm{DLT}} denotes 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME} reducibility.
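For intuition about the problem in Definition D.7, rooted (colored) tree isomorphism can be decided by comparing canonical encodings, in the style of the classical AHU algorithm; the sketch below is our own illustration and is not the string-representation construction analyzed in [34].

def canonical(children, root, colors=None):
    # Canonical string encoding of the subtree rooted at `root`;
    # sorting the child encodings makes the encoding order-independent.
    label = "" if colors is None else str(colors[root])
    encs = sorted(canonical(children, c, colors) for c in children[root])
    return "(" + label + "".join(encs) + ")"

def isomorphic(ch1, r1, ch2, r2, c1=None, c2=None):
    return canonical(ch1, r1, c1) == canonical(ch2, r2, c2)

# Two isomorphic rooted trees whose children are listed in different orders.
t1 = [[1, 2], [3], [], []]       # node 0 -> {1, 2}, node 1 -> {3}
t2 = [[2, 1], [], [3], []]       # node 0 -> {2, 1}, node 2 -> {3}
print(isomorphic(t1, 0, t2, 0))  # True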

Appendix E Proofs of Section 5

E.1 Proofs of Lemma 5.1

Lemma E.1 (Lemma 5.1 Restated).

Assume that ppoly(n)p\leq\operatorname{poly}(n), that the linear affine feature map Φ\Phi is defined as Φ(u):=Wu\Phi(u):=Wu for u,v𝔽pdu,v\in\mathbb{F}_{p}^{d}, where W𝔽pDΦ×dW\in\mathbb{F}_{p}^{D_{\Phi}\times d}, and that the linear kernel is 𝒦(u,v)=uWWv\mathcal{K}(u,v)=u^{\top}W^{\top}Wv. Then the kernelized Hopfield attention matrix AA can be simulated by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit with depth 6dstd+5d+dexp6d_{\mathrm{std}}+5d_{\oplus}+d_{\mathrm{exp}} and size bounded by poly(n)\operatorname{poly}(n).

Proof.

We compute the entry of the kernelized attention matrix AA:

Ai,j:=exp(β(WRiWQ)(WYjWK)).\displaystyle A_{i,j}:=\exp(\beta\cdot(WR_{i}W_{Q})^{\top}(WY_{j}W_{K})).

Using Lemma 4.1, the scalar product

si,j=(WRiWQ)(WYjWK)\displaystyle s_{i,j}=(WR_{i}W_{Q})^{\top}(WY_{j}W_{K})

can be simulated by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit which has poly(n)\operatorname{poly}(n) size and 5(dstd+d)5(d_{\mathrm{std}}+d_{\oplus}) depth.

Next, by Part 1 of Lemma B.5, the product βsi,j\beta\cdot s_{i,j} requires depth dstdd_{\mathrm{std}}.

Finally, by Lemma B.6, the exponential function exp(βsi,j)\exp(\beta\cdot s_{i,j}) can be computed with depth dexpd_{\mathrm{exp}} and size poly(n)\operatorname{poly}(n).

Summing the depths, the overall depth for Ai,jA_{i,j} is:

dtotal=6dstd+5d+dexp.\displaystyle d_{\mathrm{total}}=6d_{\mathrm{std}}+5d_{\oplus}+d_{\mathrm{exp}}.

Since all entries Ai,jA_{i,j} can be computed in parallel, the attention matrix AA can be simulated by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit of depth 6dstd+5d+dexp6d_{\mathrm{std}}+5d_{\oplus}+d_{\mathrm{exp}} and size poly(n)\operatorname{poly}(n). ∎

E.2 Proofs of Lemma 5.2

Lemma E.2 (Lemma 5.2 Restated).

Assuming that ppoly(n)p\leq\operatorname{poly}(n). Then we can simulate kernelized Hopfield layer using a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit with depth 10dstd+8d+dexp10d_{\mathrm{std}}+8d_{\oplus}+d_{\mathrm{exp}} and size bounded by poly(n)\operatorname{poly}(n).

Proof.

To compute a single layer, we must multiply the matrices D1D^{-1}, AA, YY, and WVW_{V}. First, we compute DD and AA. Using D:=diag(βA1n)D:=\operatorname{diag}(\beta\cdot A1_{n}), DD can be simulated with depth d+dstdd_{\oplus}+d_{\mathrm{std}} and size poly(n)\operatorname{poly}(n) by Part 1 and Part 3 of Lemma B.5.

From Lemma 5.1, computing AA requires depth 6dstd+5d+dexp6d_{\mathrm{std}}+5d_{\oplus}+d_{\mathrm{exp}}.

Next, multiplying AA, YY, and WVW_{V} requires two matrix products, which can be simulated with depth 2(dstd+d)2(d_{\mathrm{std}}+d_{\oplus}) and size poly(n)\operatorname{poly}(n), as shown in Lemma 4.1. Finally, the computation of D1(AYWV)D^{-1}\cdot(AYW_{V}) can be performed using parallel division, requiring depth dstdd_{\mathrm{std}} and size poly(n)\operatorname{poly}(n), which follows from Part 1 of Lemma B.5.

Combining these results, the total depth is:

dtotal=10dstd+8d+dexp.\displaystyle d_{\mathrm{total}}=10d_{\mathrm{std}}+8d_{\oplus}+d_{\mathrm{exp}}.

Since each stage uses a circuit of size poly(n)\operatorname{poly}(n), the kernelized Hopfield layer can be simulated by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit with depth 10dstd+8d+dexp10d_{\mathrm{std}}+8d_{\oplus}+d_{\mathrm{exp}} and size poly(n)\operatorname{poly}(n). ∎
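The computation traced in the proofs of Lemmas 5.1 and 5.2 can be written out numerically. The sketch below uses toy shapes and random data of our own choosing, applies the linear affine feature map Φ(u)=Wu\Phi(u)=Wu to the query and key projections (our reading of the formula for Ai,jA_{i,j}), and then forms D=diag(βA𝟏n)D=\operatorname{diag}(\beta\cdot A{\bf 1}_{n}) and the layer output D1AYWVD^{-1}AYW_{V}.

import numpy as np

rng = np.random.default_rng(0)
S, N, d, d_k, D_phi, d_v = 4, 6, 5, 3, 7, 2
R = rng.standard_normal((S, d))        # state (query) patterns
Y = rng.standard_normal((N, d))        # stored (key) patterns
W_Q = rng.standard_normal((d, d_k))
W_K = rng.standard_normal((d, d_k))
W = rng.standard_normal((D_phi, d_k))  # linear affine feature map Phi(u) = W u
W_V = rng.standard_normal((d, d_v))
beta = 1.0 / np.sqrt(d_k)

Phi_Q = (R @ W_Q) @ W.T                # rows are Phi(R_i W_Q)
Phi_K = (Y @ W_K) @ W.T                # rows are Phi(Y_j W_K)
A = np.exp(beta * Phi_Q @ Phi_K.T)     # kernelized attention matrix (Lemma 5.1)
D = np.diag(beta * A @ np.ones(N))     # normalizer D = diag(beta * A 1_n)
out = np.linalg.inv(D) @ A @ Y @ W_V   # kernelized Hopfield layer output (Lemma 5.2)
print(A.shape, out.shape)              # (4, 6) (4, 2)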

E.3 Proofs of Lemma 5.3

Lemma E.3 (Lemma 5.3 Restated).

Assume that, for each i[m]i\in[m], we can simulate the function fif_{i} in 𝖪𝖧𝖭\mathsf{KHN} by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit with poly(n)\operatorname{poly}(n) size and constant depth dfd_{f}. Let the precision ppoly(n)p\leq\operatorname{poly}(n). Then we can simulate kernelized Hopfield networks using a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit with depth (m+1)df+10mdstd+8md+mdexp(m+1)d_{f}+10md_{\mathrm{std}}+8md_{\oplus}+md_{\mathrm{exp}} and size bounded by poly(n)\operatorname{poly}(n).

Proof.

By assumption, for each i[m]i\in[m], we can simulate fif_{i} by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform circuit of depth dfd_{f} and size poly(n)\operatorname{poly}(n). From Lemma 5.2, each 𝖪𝖧𝗈𝗉i\mathsf{KHop}_{i} has depth 10dstd+8d+dexp10d_{\mathrm{std}}+8d_{\oplus}+d_{\mathrm{exp}}.

To compute 𝖪𝖧𝖭(R)\mathsf{KHN}(R), we need circuits for f0,f1,,fmf_{0},f_{1},\ldots,f_{m} and 𝖪𝖧𝗈𝗉1,,𝖪𝖧𝗈𝗉m\mathsf{KHop}_{1},\ldots,\mathsf{KHop}_{m}. The total circuit depth becomes:

(m+1)df+10mdstd+8md+mdexp.\displaystyle(m+1)d_{f}+10md_{\mathrm{std}}+8md_{\oplus}+md_{\mathrm{exp}}.

The circuit size remains poly(n)\operatorname{poly}(n), so the kernelized Hopfield network can be simulated by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit with the stated depth. ∎

E.4 Proofs of Theorem 5.4

Theorem E.4 (Theorem 5.4 Restated).

Assume that, for each i[m]i\in[m], we can simulate the function fif_{i} in 𝖪𝖧𝖭\mathsf{KHN} by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform threshold circuit with constant depth dfd_{f} and size poly(n)\operatorname{poly}(n). If precision ppoly(n)p\leq\operatorname{poly}(n), hidden dimension dO(n)d\leq O(n), and number of layers mO(1)m\leq O(1), then we can simulate kernelized Hopfield networks 𝖪𝖧𝖬\mathsf{KHM} by a 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤\mathsf{DLOGTIME}-uniform circuit family within 𝖳𝖢0\mathsf{TC}^{0}.

Proof.

With m=O(1)m=O(1), Lemma 5.3 ensures that the circuit depth for 𝖪𝖧𝖬(R)\mathsf{KHM}(R) is:

(m+1)df+10mdstd+8md+mdexp=O(1),\displaystyle(m+1)d_{f}+10md_{\mathrm{std}}+8md_{\oplus}+md_{\mathrm{exp}}=O(1),

with size poly(n)\operatorname{poly}(n). Thus, the kernelized Hopfield network can be implemented using a uniform circuit family within 𝖳𝖢0\mathsf{TC}^{0}. ∎