The Parallelism Tradeoff: Limitations of Log-Precision Transformers
Abstract
Despite their omnipresence in modern NLP, characterizing the computational power of transformer neural nets remains an interesting open question. We prove that transformers whose arithmetic precision is logarithmic in the number of input tokens (and whose feedforward nets are computable using space linear in their input) can be simulated by constant-depth logspace-uniform threshold circuits. This provides insight on the power of transformers using known results in complexity theory. For example, if L ≠ P (i.e., not all poly-time problems can be solved using logarithmic space), then transformers cannot even accurately solve linear equalities or check membership in an arbitrary context-free grammar with empty productions. Our result intuitively emerges from the transformer architecture’s high parallelizability. We thus speculatively introduce the idea of a fundamental parallelism tradeoff: any model architecture as parallelizable as the transformer will obey limitations similar to it. Since parallelism is key to training models at massive scale, this suggests a potential inherent weakness of the scaling paradigm.
1 Introduction
This work aims to characterize the computational model implicit in transformer neural networks (Vaswani et al., 2017), which form the basis of recent breakthroughs in large language models such as BERT (Devlin et al., 2019), T5 (Raffel et al., 2020), and GPT-3 (Brown et al., 2020). What computational primitives can the transformer’s components implement, and what problems can the full system solve in aggregate? These questions are important for interpreting transformers in a principled way, understanding potential limitations of their reasoning capabilities, and building trust in deployed transformer-based systems.
Early theoretical work on transformers established their Turing completeness, albeit with assumptions like infinite precision and arbitrarily powerful feedforward subnets (Pérez et al., 2019; Dehghani et al., 2019). On the other hand, a strand of more recent work uses techniques from circuit complexity theory to derive strong limitations on the types of problems transformers can solve given restrictions on the form of attention allowed in the transformer. Specifically, Hahn (2020) and Hao et al. (2022) showed transformers restricted to hard attention are very limited: they can only solve problems in a weak complexity class (non-uniform AC⁰) that does not even contain basic problems like computing the majority of the input bits. Merrill et al. (2022) extended this to a more general class of “saturated attention” transformers with a floating point datatype, and showed a larger class of problems (non-uniform TC⁰) as an upper bound. This motivates analyzing a setting that strikes a middle ground: Can we characterize transformers whose precision and feedforward nets’ computational power are realistically bounded, but where attention is also realistically expressive?
An important practical limitation of these prior results is the “non-uniform” nature of the considered circuit classes, which makes these classes non-realizable and the findings difficult to interpret. This is because non-uniform AC⁰ and TC⁰, while highly limited in computation, also contain some problems that are not even decidable, i.e., for which there doesn’t exist any exact algorithm. Thus, non-uniform classes cannot be directly compared with standard algorithmic complexity classes such as L, P, and NP. This motivates our second key question: Can we derive uniform upper bounds on transformers?
We show that one can achieve both of these goals by making the modest assumption that all values in the transformer have $O(\log n)$ precision (where $n$ is the number of input tokens), and, similarly, that the transformer’s subnetworks are computable in $O(\log n)$ space. Log precision is enough to represent the positional encodings at the input layer of the transformer, and to encode pointers to all other positions in the sequence at later transformer layers. Assuming log precision across all layers captures the idea that the hidden representations contain a constant number of values whose precision (16 or 32 bits) is small relative to the length of the input (2048 in GPT-3). On long sequences, the precision will not be enough to losslessly encode the full input sequence into a single vector. Instead, the processing of the sequence must somehow be distributed in each layer and performed in parallel.
Upper Bound on Transformers.
Our main contribution is proving that log-precision transformers can be simulated by uniform constant-depth threshold circuits. Thus, such transformers can only solve problems in uniform TC⁰. This characterization is strikingly weak compared to the Turing-completeness of infinite-precision transformers. Since we believe log precision is more realistic for practical transformers than infinite precision, these results point to the conclusion that transformers are not Turing-complete in practice.
In contrast to past results, our upper bound on transformers is a uniform circuit class, enabling direct comparison of log-precision transformers to many natural complexity classes. These connections reveal specific problems that define the upper limits of log-precision transformers’ capabilities, as discussed further in § 2.
Intuitively, our upper bound says that log-precision transformers are computationally shallow, and that this shallowness can be understood to emerge from their parallelizability. Transformers’ inherent parallelism is useful for training them efficiently at massive scale, but may limit the complexity of the computations they can express. We introduce the term parallelism tradeoff to capture this idea, which represents a potential fundamental weakness of the current paradigm of scaling language models. Formally characterizing reasoning capabilities relevant to language models and understanding whether they likely fall outside upper bounds implied by the tradeoff would clarify the practical implications of this limitation of scaling.
It could also be that the limitations of parallelism are not a curse but a blessing, if they constrain the hypothesis space in a way useful for learning. We have no evidence that this is true, but mention it as an alternate interpretation of the results that could be clarified in future work.
Instruction Following and Advice Transformers.
We also consider an instruction following setting (Brown et al., 2020) where the transformer is provided the description of a task along with an input on which to execute the instruction. We construct a practically parameterizable transformer that can execute instructions perfectly if they are provided in the form of TC⁰ circuits. This complements recent work that studies transformers’ ability to follow other forms of instructions such as regular expressions (Finlayson et al., 2022).
Based on the fundamental property that transformers can correctly evaluate any given circuit on a given input, we introduce the notion of advice transformers akin to advice-taking Turing machines. We show that transformers can recognize any (non-uniform) TC⁰ language if provided appropriate poly-size advice.
In summary, our findings provide new insights on both the abilities and the limitations of transformers, and bring out bounded precision, threshold computations, and parallelism as key notions for understanding the implicit computational model of transformers in practice.
Roadmap.
Before diving into technical details, we discuss in § 2 the implications of our results on both fundamental as well as practical abilities of transformers. § 3 provides a brief primer on circuits as a model of computation. It then discusses a way of serializing a circuit into a string; we later show how to generate such serializations using a resource-bounded algorithm, which is the key to proving containment of transformers in uniform circuit classes. § 4 defines our formal model of bounded-precision transformers. § 5 derives our first formal bound on log-precision transformers. This bound involves non-uniform circuit families, similar in spirit to prior results in this area. § 6 proves our more technical main result: the first uniform circuit complexity upper bound for transformers (specifically, uniform TC⁰). Finally, § 7 provides a lower bound on transformers, introduces the notion of an Advice Transformer, and connects these to the machine learning problems of Instruction Learning and Following.
2 Implications of Our Findings
Before diving into technical details, we discuss the general implications of our findings on the abilities and limitations of transformers. We will focus here on our main result (Thm. 2), which shows that log-precision transformers are in the complexity class logspace-uniform TC⁰.
The Parallelism Tradeoff.
One interpretation of complexity classes such as NC⁰, AC⁰, and TC⁰ is sets of poly-time solvable problems that are parallelizable to a very high degree—they can be solved in parallel in constant time with enough parallel processors. This gives some intuitive explanation of our result: log-precision transformers end up in TC⁰ because they were designed to be highly parallelizable. Since parallelism is an important property of today’s dominant paradigm of training models at massive scale, this points to the conclusion that any massively scaled up model—transformer or otherwise—will likely obey restrictions similar to the ones derived here for log-precision transformers. There is thus an important tradeoff between the massive parallelizability of today’s networks and their representation power.
What Transformers Can/Cannot Compute.
Our result places log-precision transformers in the complexity class logspace-uniform TC⁰. This has immediate implications on the kinds of problems such transformers can and cannot accurately solve.
Consider any problem $X$ that is complete for a complexity class $\mathsf{C}$ that contains logspace-uniform TC⁰. By definition of completeness, every problem log-precision transformers can solve perfectly is efficiently reducible to $X$ and is thus no harder than $X$. This implies that—despite their massive size—the computation performed by such transformers is, for instance, no harder than solving basic L-complete problems like graph connectivity: the problem of checking whether there is a path between two nodes in an undirected graph (Lewis and Papadimitriou, 1982; Reingold, 2008).
By the same token, if $\mathsf{C}$ is strictly larger than logspace-uniform TC⁰, then such transformers cannot perfectly solve $X$. Thus, log-precision transformers cannot perfectly solve the following reasoning problems:
•
Linear equalities: find $x$ s.t. $Ax = b$ (assuming logspace-uniform TC⁰ ≠ P; follows because these problems are P-complete; Greenlaw et al., 1991)
•
Universal context-free recognition (takes both a grammar and a string as input and returns whether the grammar generates the string; Jones and Laaser (1976) demonstrate P-completeness)
•
Propositional satisfiability (SAT) (assuming logspace-uniform TC⁰ ≠ NP; follows because SAT is NP-complete; cf. Biere et al., 2009)
•
Horn-clause satisfiability (HORN-SAT)
•
AI planning (Bylander, 1991)
•
Permanent computation (Valiant, 1979)
This highlights the limits of practical transformers with limited-precision arithmetic, indicating that they are far from being universal or all-powerful as suggested by some prior studies.
One important caveat about these negative results is that they are asymptotic in nature—they apply for “large enough” input size . It’s possible for log-precision transformers to solve such problems easily when is small. Further, these negative results are about exact solutions, but they often also extend beyond this when formal hardness-of-approximation results are known.
Limitations of Our Formal Model.
Prior formal characterizations of transformers either make unrealistically strong assumptions (Pérez et al., 2019; Dehghani et al., 2019) or place unrealistic restrictions (Hahn, 2020; Hao et al., 2022; Merrill et al., 2022). In contrast, we make only one assumption—namely, all intermediate values in the transformer are limited to $O(\log n)$ bits, where $n$ is the number of input tokens. We next discuss some implications of this assumption and what our findings mean for practical transformers.
As mentioned above, our bounds are asymptotic in nature and thus apply when $n$ is sufficiently large. In practice, transformers use fixed precision at each computation node, which is more restrictive than precision growing with the input sequence length $n$, as $O(\log n)$ bits. However, this constant could be large and thus, for relatively small $n$, our results do not rule out practical transformers solving difficult problems. Our results, however, do show that as $n$ grows sufficiently large, log-precision transformers are fundamentally limited to problems within TC⁰ and cannot accurately solve various commonly studied problems mentioned earlier under “What Transformers Can/Cannot Compute”. Extending our analysis to small $n$ will help close the gap to practice.
Our formal model is based on a binary classification view of transformers. However, our results apply directly to multi-class classification as well and can be extended to generation problems by viewing, for instance, next word prediction in NLP as a multi-class classification problem. However, if the transformer decoder is allowed to condition on its previous output in a generation problem, then this would violate our formal setup.
2.1 Potential Applications
Extracting Circuits from Transformers.
Elhage et al. (2021) propose extracting circuits (their sense of “circuit” is not exactly the formal sense we use in this paper, though the goal of capturing transformers’ implicit computational mechanism is the same) that capture the computational structure of transformers. Our results suggest threshold circuit families are a good formalism for expressing mechanisms extracted from transformers. Constructively converting transformers to threshold circuits is beyond the scope of the current paper, although we hope to explore this in more detail in future work.
Testing Separation Candidates in Complexity Theory.
Thm. 2 also motivates a paradigm for quickly testing complexity theory conjectures. If a problem is believed to separate TC⁰ and NC¹, a transformer can be trained on problem instances. If the transformer generalizes perfectly to harder instances than it was trained on, this gives an empirical hint that the problem is in TC⁰, providing evidence against the conjecture.
3 Circuit Computation
Let $\{0,1\}^*$ be the set of finite binary strings. For $x \in \{0,1\}^*$, let $|x|$ be its length. We refer to a function from $\{0,1\}^*$ to $\{0,1\}$ as a boolean function. Boolean functions can implement arithmetic operations if we define a semantics for binary strings as numbers. We will treat the intermediate values in a transformer as binary strings, and the internal operations as boolean functions.
Circuits are a model of computation for computing boolean functions of fixed-length binary strings (for a mini-tutorial on circuit complexity theory and its relevance to transformers, see Merrill et al., 2022). Formally, a circuit is a directed acyclic computation graph. The leaf nodes represent binary variables and their negations. The internal nodes represent functions in some set $\mathcal{F}$, and the directed edges represent the flow of function outputs into inputs of other functions. One or more nodes in the circuit are marked such that their value is the output of the circuit.
Definition 1.
For a set of functions $\mathcal{F}$, an $\mathcal{F}$-circuit is a directed acyclic computation graph where the internal nodes have labels from $\mathcal{F}$.
Complexity Measures.
The size of a circuit is the total number of gates in it, including negation. The depth of a circuit is the length of the longest path from any input node to any output node.
Circuit Families.
A circuit family generalizes a circuit to take variable-length binary strings as input. Formally, a circuit family is a sequence of circuits $C_n : \{0,1\}^n \to \{0,1\}$ for $n \in \mathbb{N}$. A circuit family implicitly recognizes a formal language defined as follows:
Definition 2.
A circuit family $\{C_n\}_{n \in \mathbb{N}}$ recognizes $L \subseteq \{0,1\}^*$ if, for all $x \in \{0,1\}^*$, $C_{|x|}(x) = 1$ if and only if $x \in L$.
We now define classes of languages by constraining the complexity of the circuit families needed to recognize them:
Definition 3.
Let non-uniform AC⁰ be the set of $L$ such that $L$ is recognizable by a poly-size, constant-depth $\{\neg, \wedge, \vee\}$-circuit family.
For $k \in \mathbb{Z}$, a threshold gate $\theta_{\leq k}$ takes $m$ input bits $x_1, \dots, x_m$ and returns whether $\sum_{i=1}^m x_i \leq k$. We define $\theta_{\geq k}$ analogously. For example, $\theta_{\geq 2}(110) = 1$ and $\theta_{\leq 1}(110) = 0$.
Definition 4.
Let TC⁰ be the set of $L$ such that $L$ is recognizable by a poly-size, constant-depth $\{\theta_{\leq k}, \theta_{\geq k}\}_{k \in \mathbb{Z}}$-circuit family.
The gates $\neg$, $\wedge$, and $\vee$ are all just special cases of thresholds, so we can imagine TC⁰ circuits to have access to these as well. Thus, TC⁰ circuits can implement AC⁰ circuits.
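To make these definitions concrete, here is a minimal Python sketch (ours, not from the paper) of threshold gates and of the reduction of AND, OR, and NOT to thresholds described above.

```python
# Threshold gates (Def. 4) and Boolean gates as their special cases.

def theta_ge(k, bits):
    """1 iff at least k of the input bits are 1."""
    return int(sum(bits) >= k)

def theta_le(k, bits):
    """1 iff at most k of the input bits are 1."""
    return int(sum(bits) <= k)

def AND(bits):  # AND over m bits is theta_{>= m}
    return theta_ge(len(bits), bits)

def OR(bits):   # OR over m bits is theta_{>= 1}
    return theta_ge(1, bits)

def NOT(bit):   # NOT is the unary gate theta_{<= 0}
    return theta_le(0, [bit])

assert AND([1, 1, 1]) == 1 and AND([1, 0, 1]) == 0
assert OR([0, 0, 1]) == 1 and OR([0, 0, 0]) == 0
assert NOT(1) == 0 and theta_ge(2, [1, 0, 1]) == 1
```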
Circuit Serialization.
We identify a circuit with its serialization in a formal language that identifies each node’s label and adjacency list. We will adopt a specific grammar for concreteness, but our construction can be adapted to other string representations of circuits.
We define a circuit serialization as a traversal of a circuit ordered by some topological sort. In this serialization, leaf nodes (variables) are represented by the string X. An internal node (gate) is represented in Polish notation by the function it computes (AND, OR, or NOT) followed by a list of pointers to its arguments. Each argument &1ʲ of gate $i$ encodes (in unary) a zero-indexed pointer to the $j$-th node in the circuit, where $j < i$. The final node is interpreted as the circuit output.
To serialize $\{\neg, \wedge, \vee\}$-circuits, we use the following grammar, where the parameter $i$ is passed through nonterminals to track the index of the gate in left-to-right order:

  Circuit → Gate₀ Gate₁ ⋯ Gate_M
  Gateᵢ  → X | NOT Argᵢ | AND Argᵢ* | OR Argᵢ*
  Argᵢ   → &1ʲ   such that j < i

In the Argᵢ rule, we enforce that $j < i$ so that arguments must be pointers to already defined gates. As an example of this serialization language, the circuit for $x_0 \vee \neg x_1 \vee x_2$ is represented as (spaces here, and in the grammar, are added for readability; we will ignore these spaces when passing circuit serializations as inputs to a transformer in § 7):
X X X NOT &1 OR & &111 &11
By convention (cf. § 3), negations in circuits are usually taken to occur at the beginning of the circuit, rather than after $\wedge$ or $\vee$ nodes (we can apply De Morgan’s laws to force any circuit to have this property). Our serialization grammar does not enforce this property, but of course any circuit with this property can be serialized by our grammar.
It is a bit more complicated to serialize threshold circuits. Formally, a threshold circuit serialization is generated by the following grammar:

  Circuit → Gate₀ Gate₁ ⋯ Gate_M
  Gateᵢ  → X | Dir 1ᵏ0* Argᵢᵐ
  Argᵢ   → &1ʲ   such that j < i
  Dir    → <= | >=

In the rewrite rule for Gateᵢ, $m$ is the arity of the gate, and $k \leq m$ is its threshold. The span 1ᵏ after Dir can be interpreted semantically as a unary encoding of the parameter $k$ for a threshold gate, padded by 0’s to the number of total arguments of gate $i$. For simplicity, we imagine $\neg$ gates are represented as unary $\theta_{\leq 0}$ gates. Thus, the circuit $\theta_{\geq 1}(x_0, \neg x_1)$ would be represented as
X X <= 00 &1 >= 10 & &11
We say a threshold circuit serialization is in prefix form if all inputs (X) come before all threshold gates (<= or >=), as is the case in this example.
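Below is a small reference evaluator (our own sketch; the function name evaluate and the exact parsing conventions for argument lists are our assumptions, not the paper’s) for circuits serialized in the format above. It reads the token of 1s and 0s after <= or >= as the unary threshold $k$ and treats every following &-token as an argument pointer; running it on the two examples above confirms what they compute.

```python
def evaluate(serialization, inputs):
    """Evaluate a serialized circuit (last node = output) on a list of bits."""
    tokens = serialization.split()
    values, x_seen, i = [], 0, 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == "X":                      # input node: next unused input bit
            values.append(inputs[x_seen]); x_seen += 1; i += 1
            continue
        if tok in ("<=", ">="):             # threshold gate with unary parameter
            k = tokens[i + 1].count("1"); i += 2
        else:                               # NOT / AND / OR
            k = None; i += 1
        args = []
        while i < len(tokens) and tokens[i].startswith("&"):
            args.append(values[len(tokens[i]) - 1])   # &1^j points to node j
            i += 1
        if tok == "NOT":
            values.append(1 - args[0])
        elif tok == "AND":
            values.append(int(all(args)))
        elif tok == "OR":
            values.append(int(any(args)))
        elif tok == "<=":
            values.append(int(sum(args) <= k))
        else:  # ">="
            values.append(int(sum(args) >= k))
    return values[-1]

# The AND/OR example computes x0 OR (NOT x1) OR x2:
assert evaluate("X X X NOT &1 OR & &111 &11", [0, 0, 0]) == 1
assert evaluate("X X X NOT &1 OR & &111 &11", [0, 1, 0]) == 0
# The threshold example computes x0 OR (NOT x1):
assert evaluate("X X <= 00 &1 >= 10 & &11", [0, 1]) == 0
assert evaluate("X X <= 00 &1 >= 10 & &11", [1, 1]) == 1
```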
Uniformity.
The circuit families we have defined above are non-uniform, meaning that we do not enforce that the circuits processing different input sizes must be related in any way. In degenerate cases, non-uniform circuit families can solve undecidable problems because they have infinite description length (for example, consider the unary language of strings $1^n$ such that Turing machine $n$, under some arbitrary enumeration, halts; this problem is in non-uniform AC⁰ since we can hard-code the right answer for each $n$ into $C_n$), making them a physically unrealizable model of computation. Complexity theorists have thus introduced uniform circuit families. Uniform circuit families are a realizable model of computation with relations to classes in computational complexity and formal language theory.
Intuitively, in a uniform circuit family, the circuits for different input sizes must be “somewhat similar” to each other. We formalize this (cf. Arora and Barak, 2009) by saying that there exists a resource-constrained Turing machine that maps the input $1^n$ to a serialization of circuit $C_n$.
Definition 5.
A language $L$ is $(S(n), I(n))$-space uniformly computable by a circuit model $M$ iff there exists a Turing machine that, for all $n \geq 0$, uses $S(n)$ space to map $1^n$ to an $M$-circuit recognizing $L$ on inputs of size $I(n)$.
This notion of uniformity is more general than the standard notion in that the input size $I(n)$ is a function of the problem complexity $n$. The reason for this is that we will apply uniformity to subcomputations with different input sizes $I(n)$ within a larger computation of input size $n$. The standard notion of uniformity corresponds to $I(n) = n$.
Furthermore, we will refer to a circuit family as uniform if it is uniformly computable with $S(n) = O(\log n)$ (cf. Arora and Barak, 2009). We can define uniform versions of AC⁰ and TC⁰ by adopting the previous definitions exactly, but also enforcing uniformity. For the rest of the paper we will clarify whether we mean the uniform or non-uniform variant of TC⁰ when unclear from context, since both classes will come up.
4 Bounded-Precision Transformers
A transformer (Vaswani et al., 2017) is a neural network architecture made up of a constant number of transformer layers. A transformer layer is a module that computes self-attention over a sequence followed by an elementwise transformation of the output vectors.
4.1 Precision and Space
We will assume that each transformer is resource bounded in terms of the precision of each value it computes and, for some of our results, the space it uses for the computation of key operations such as embedding, attention, and activation. Specifically, we will assume precision $p$, i.e., the values at all layers, as well as the outputs of all key intermediate operations in it (attention, activation, arithmetic operators, etc.), are represented using $p$ bits. This is a realistic assumption as, in practice, today’s transformers are typically limited to the 64-bit precision of the underlying hardware. Formally, we define $p$-precision as follows:
Definition 6.
A $k$-ary function $f : x_1, \dots, x_k \mapsto y$ is $p$-precision if $x_1, \dots, x_k, y$ have size at most $p$ bits, and $f$ can be computed by a $p$-space-bounded Turing machine.
This says the size of the function input and output are bounded below $p$. Similarly, the intermediate space used by the computation must also be bounded below $p$. Thus, higher-precision computations cannot somehow be hidden inside $f$.
Def. 6 naturally applies to functions with bounded arity $k$. We will also need to define precision for the summation operator in the transformer, which adds $n$ different floats of size $p$ (our proof also goes through if the transformer weights are integers, as is sometimes done; Dettmers et al., 2022). Adding $n$ floats can blow up the precision needed to represent their sum. For example, imagine adding the floating points $1 \cdot 2^0$ and $1 \cdot 2^c$ for a large exponent $c$. We obtain the float $2^c + 1$, whose mantissa takes $c + 1$ bits to represent. In practice, computers do not preserve full precision in such situations: instead, small terms like $1 \cdot 2^0$ are discarded. Thus, we define the transformer’s addition operation $\oplus$ to be similarly approximate (and thus preserve precision); see § A.
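The following tiny snippet (ours) illustrates the same phenomenon with an off-the-shelf low-precision float type: the small term is simply discarded because the exact sum’s mantissa would not fit.

```python
import numpy as np

big, small = np.float16(2048.0), np.float16(1.0)
print(big + small == big)   # True: 2049 needs 12 significand bits, float16 has 11
```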
4.2 Transformer Definition
4.3 Attention Heads
The core building block of a transformer is an attention head. We define this at a high level of abstraction as follows:
Definition 7.
A $p$-precision attention head is specified by a binary $p$-precision similarity function $s : \{0,1\}^p \times \{0,1\}^p \to \{0,1\}^p$.
Let $h^\ell_1, \dots, h^\ell_n \in \{0,1\}^p$ be the input sequence to a $p$-precision attention head, and let $\oplus$ be approximate floating-point addition (§ A).
Definition 8.
For all $\ell \geq 0$, a $p$-precision attention head $H_h$ computes a vector $a^{\ell+1}_{i,h} \in \{0,1\}^p$ via
$$a^{\ell+1}_{i,h} = \bigoplus_{j=1}^n \frac{s(h^\ell_i, h^\ell_j)}{Z_i} \cdot h^\ell_j,$$
where $Z_i = \bigoplus_{j=1}^n s(h^\ell_i, h^\ell_j)$.
Standard transformer attention heads (Vaswani et al., 2017) are a special case of this definition where $s$ is scaled dot-product similarity between keys and queries. Standard transformers also have a linear or affine value function applied to each $h^\ell_j$ in the sum over $j$. By its affineness, the value function can, without loss of generality, be removed from the attention head and considered to be part of the transformer layer (i.e., applied to the output of the attention head).
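A minimal sketch (ours) of the abstract attention head of Def. 8, with exact Python floats standing in for $p$-precision values and ordinary summation standing in for the approximate operator $\oplus$:

```python
def attention_head(h, s):
    """h: list of per-position values; s: binary similarity function."""
    outputs = []
    for i in range(len(h)):
        scores = [s(h[i], h[j]) for j in range(len(h))]
        Z = sum(scores)                                   # normalizing constant Z_i
        outputs.append(sum(sc / Z * h[j] for j, sc in enumerate(scores)))
    return outputs

# With uniform similarity (s = 1 everywhere), every position gets the mean:
print(attention_head([1.0, 2.0, 3.0, 6.0], lambda q, k: 1.0))  # [3.0, 3.0, 3.0, 3.0]
```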
4.4 Transformer Layers
A $p$-precision transformer layer is then a tuple of heads and a function $f$ used to combine them.
Definition 9 ($p$-precision transformer layer).
A $p$-precision transformer layer is a tuple $L^{\ell+1} = \langle H_1, \dots, H_k, f \rangle$, where each $H_h$ is an attention head and $f : (\{0,1\}^p)^{k+1} \to \{0,1\}^p$ is a $p$-precision activation function.
A $p$-precision transformer layer can be understood to define a sequence of vectors $h^{\ell+1}_1, \dots, h^{\ell+1}_n$ in terms of an input sequence of vectors $h^\ell_1, \dots, h^\ell_n$ (coming from the previous layer in the transformer) by first computing $k$ attention heads in parallel and then combining their output using $f$. The first $k$ inputs to $f$ will correspond to the attention head outputs, and the additional input is the original input from the previous layer. Recall that $a^{\ell+1}_{i,h}$ is the output of head $H_h$ on input $h^\ell$ at position $i$. The function computed by a transformer layer can be described formally as follows.
Definition 10 (Transformer layer computation).
For $\ell \geq 0$, a $p$-precision transformer layer $L^{\ell+1}$ recurrently computes the output sequence $h^{\ell+1}_1, \dots, h^{\ell+1}_n$ as a function of the inputs $h^\ell_1, \dots, h^\ell_n$, where, for $1 \leq i \leq n$, the $i$-th component is computed according to
$$h^{\ell+1}_i = f(a^{\ell+1}_{i,1}, \dots, a^{\ell+1}_{i,k}, h^\ell_i).$$
4.5 Transformer Encoder
Finally, we define a transformer of depth $d$ as a cascade of $d$ transformer layers:
Definition 11 ($p$-precision transformer).
A $p$-precision transformer over alphabet $\Sigma$ is a pair consisting of a $p$-precision position embedding function $\phi : \Sigma \times \mathbb{N} \to \{0,1\}^p$ (to apply the normal notion of $p$-precision to inputs outside $\{0,1\}^*$, we imagine elements of $\Sigma$ are encoded as integers in binary, and positions $i \leq n$ are represented in binary as well) and a $d$-tuple of $p$-precision transformer layers $\langle L^1, \dots, L^d \rangle$.
For a position embedding function $\phi$ and $w \in \Sigma^n$, let $\phi(w)$ be the position-wise broadcasted embedding of $w$: for $1 \leq i \leq n$, $\phi_i(w) := \phi(w_i, i)$.
Definition 12 (Transformer computation).
A transformer $\langle \phi, \langle L^1, \dots, L^d \rangle \rangle$ computes the following function of a string $w \in \Sigma^*$:
$$T(w) = (L^d \circ L^{d-1} \circ \cdots \circ L^1)(\phi(w)).$$
We will use $n$ to denote the length of $w$, and take the transformer’s depth $d$ to be fixed w.r.t. $n$.
The input to the transformer can thus be represented with $N = n \lceil \log|\Sigma| \rceil$ bits using a binary encoding for the vocabulary. The circuits we construct in subsequent sections to simulate transformers will also have input size $N$. We will assume transformers have log-precision relative to the size of the input, specifically $O(\log N)$-precision. Since $|\Sigma|$ is fixed (typically 30000 in practice), we will think in terms of $O(\log n)$-precision. Thus, by Def. 6, all of the intermediate functions of such transformers are computable in $O(\log n)$ space and output (at most) these many bits. Note that this is enough precision to represent positional encodings and for each position to point to a constant number of other values, but not enough precision for non-lossy pooling of the entire input into a single value.
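Putting Defs. 9–12 together, here is a compact, self-contained sketch (ours, again ignoring bounded precision) of how a layer combines its heads and how the encoder cascades layers:

```python
def transformer_layer(h, heads, f):
    """heads: list of similarity functions; f(head_outputs, h_i) -> new value."""
    n, outs = len(h), []
    for i in range(n):
        a = []
        for s in heads:                       # one attention head per s (Def. 8)
            scores = [s(h[i], h[j]) for j in range(n)]
            Z = sum(scores)
            a.append(sum(sc / Z * h[j] for j, sc in enumerate(scores)))
        outs.append(f(a, h[i]))               # combine heads + residual (Def. 10)
    return outs

def transformer(w, phi, layers):
    """phi(token, position) -> embedded value; layers: list of (heads, f) pairs."""
    h = [phi(tok, i) for i, tok in enumerate(w)]
    for heads, f in layers:                   # cascade of d layers (Def. 12)
        h = transformer_layer(h, heads, f)
    return h

# Example: one layer, a single uniform head, f averaging head output and residual.
out = transformer("abca", phi=lambda tok, i: float(ord(tok) - ord("a")),
                  layers=[([lambda q, k: 1.0],
                           lambda heads, hi: (heads[0] + hi) / 2)])
print(out)
```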
Relationship to Practical Transformers.
Our log-precision transformers do not enforce that $s$ (Def. 7) and $f$ (Def. 9) follow the transformer structure. However, a feedforward net whose primitive operations (e.g., scalar multiplication) are defined over $O(\log n)$-size numbers can be computed in $O(\log n)$ space. Thus, bounded-precision practical transformers are a special case of our log-precision transformers. This makes our setup appropriate for proving upper bounds on transformers, which is our main contribution.
5 Log-Precision Transformers as Non-Uniform Threshold Circuits
We first show that log-precision transformers can be simulated by non-uniform threshold circuits, before presenting the more technical uniform version of the results in § 6. The initial non-uniform result extends the findings of Merrill et al. (2022), who showed that saturated attention transformers (saturated attention is uniform attention over a subset of the prior layer nodes) can be simulated in non-uniform TC⁰. Here, we remove the simplifying saturated attention assumption and other restrictions on the underlying datatype. Instead, we show that our log-precision assumption is enough to prove that a transformer can be simulated in TC⁰ with any attention function.
Hao et al. (2022) observed that any boolean function of $O(\log n)$ bits can be computed by a poly($n$) size circuit. We extend this to functions with $O(\log n)$-bit outputs, which is both more convenient and more efficient than constructing a separate boolean circuit for each output bit:
Lemma 1 (Extended from Hao et al., 2022).
Let $f : \{0,1\}^* \to \{0,1\}^m$ be a function. For all $c \in \mathbb{R}^+$ and $n \in \mathbb{N}$, there exists an AND/OR circuit of size at most $n^c + c\log n + m$ and depth $3$ that computes $f$ on inputs of size at most $c\log n$.
Proof.
Like Hao et al. (2022), we construct a circuit using a DNF representation of $f$ on inputs of size $c\log n$, except we use a combined DNF representation for all output bits of $f$. The DNF formula has at most $2^{c\log n} = n^c$ terms. The circuit has a NOT gate for each input bit, an AND gate for each DNF term, and, for each of the $m$ output bits, an OR gate combining the outputs of those AND gates (i.e., DNF terms) for which that bit is 1. ∎
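A small Python sketch (ours; the helper names dnf_circuit and eval_dnf are hypothetical) of this construction: one AND term per possible input (at most $2^{c\log n} = n^c$ terms) and one OR gate per output bit, evaluated as a depth-3 circuit.

```python
from itertools import product

def dnf_circuit(f, n_in, n_out):
    """terms: all possible inputs; outputs[k]: indices of terms whose k-th
    output bit is 1 (the arguments of OR gate k)."""
    terms = list(product([0, 1], repeat=n_in))
    outputs = [[t for t, x in enumerate(terms) if f(x)[k] == 1]
               for k in range(n_out)]
    return terms, outputs

def eval_dnf(terms, outputs, x):
    # Layer 1: input literals; layer 2: AND per term; layer 3: OR per output bit.
    fired = [all(xi == ti for xi, ti in zip(x, term)) for term in terms]
    return [int(any(fired[t] for t in outputs[k])) for k in range(len(outputs))]

# Example: f adds two 1-bit numbers and outputs (carry, sum).
f = lambda x: (x[0] & x[1], x[0] ^ x[1])
terms, outputs = dnf_circuit(f, n_in=2, n_out=2)
assert eval_dnf(terms, outputs, (1, 1)) == [1, 0]
assert eval_dnf(terms, outputs, (0, 1)) == [0, 1]
```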
We now use Lem. 1 to prove the following non-uniform result. We note that the proof goes through even if the notion of $p$-precision (Def. 6) is relaxed to not require computability in space $p$. This requirement will, however, become important for our subsequent result in § 6.
Theorem 1 (Non-uniform).
Any $c\log n$-precision depth-$d$ transformer operating on inputs in $\Sigma^n$ can be simulated by a threshold circuit family of depth $3 + (9 + 2d_\oplus)d$, where $d_\oplus$ is the constant depth of a threshold circuit computing the approximate sum $\oplus$ of $n$ floats (§ A).
Proof.
Let $w \in \Sigma^n$ be the input of a $c\log n$-precision transformer. We show by induction that we can construct a composition of constant-depth, poly-size threshold circuits to compute each layer of this transformer. Thus, any constant-depth transformer will be computable by a constant-depth threshold circuit.
In the base case of layer 0 and token $w_i$, we construct gates representing the constant $i$ encoded in binary. We can then compute $h^0_i = \phi(w_i, i)$ using Lem. 1, yielding a poly-size depth-3 circuit.
In the inductive case of computing layer $\ell + 1$ for $0 \leq \ell < d$, we note that each vector output of layer $\ell$ has size (at most) $c\log n$ bits because of the log-precision assumption.
We first fix a head $a^{\ell+1}_{i,h}$ (Def. 8) to simulate. Applying Lem. 1, we can compute $s(h^\ell_i, h^\ell_j)$ with a poly-size depth-3 circuit, in parallel for all $j$. Since $n$ floats with $c\log n$ precision can be approximately added in TC⁰ (§ A), we can construct a TC⁰ circuit of depth $d_\oplus$ to compute $Z_i = \bigoplus_{j=1}^n s(h^\ell_i, h^\ell_j)$. Since $s(h^\ell_i, h^\ell_j)$, $Z_i$, and $h^\ell_j$ all have $c\log n$ bits, we can compute $\frac{s(h^\ell_i, h^\ell_j)}{Z_i} \cdot h^\ell_j$ with a poly-size depth-3 circuit (this may seem counterintuitive since multiplication of two $n$-precision numbers is outside AC⁰; however, here we leverage the fact that the precision is $c\log n$); we do this in parallel for all $j$. Next, we again use the fact that approximate addition of $n$ floats is in TC⁰ to compute $a^{\ell+1}_{i,h}$ as the approximate sum over $j$ with a depth-$d_\oplus$ circuit.
We now simulate a layer (Def. 10) in terms of its constituent heads. Since all $k+1$ arguments of $f$ have size $c\log n$, we apply Lem. 1 to compute $f$ with a poly-size depth-3 circuit, yielding $h^{\ell+1}_i$. We repeat this in parallel for all $i$. This completes the inductive step: we compute all values $h^{\ell+1}_i$ of the $(\ell+1)$-st layer with a circuit depth of $9 + 2d_\oplus$.
Aggregating the circuit over all $d$ layers, the overall circuit depth is $3 + (9 + 2d_\oplus)d$. ∎
Corollary 1.1 (Non-uniform).
Any log-precision transformer can be simulated by a non-uniform TC⁰ circuit family. (Here, a TC⁰ circuit family is a constant-depth, poly-size circuit family computing some function $\{0,1\}^* \to \{0,1\}^*$. While we define TC⁰ for decision problems in Def. 4, it is standard and well-defined to extend the same term to refer to circuit families computing functions as well; Hesse, 2001.)
6 Log-Precision Transformers as Uniform Threshold Circuits
We will now extend the argument from the last section to show that $O(\log n)$-precision transformers can be simulated by uniform constant-depth threshold circuits by capitalizing on the assumption that $\phi$, $s$, and $f$ are log-precision, and thus can be computed in $O(\log n)$ space. The overall proof idea is similar, but due to the uniformity condition, the proof becomes substantially more technical. We must not just show the existence of a threshold circuit family computing a transformer, but also show that this circuit family can be generated by a log-space Turing machine.
We first extend Lem. 1 to respect uniformity:
Lemma 2.
Let $f : \{0,1\}^* \to \{0,1\}^m$ be a linear-space computable function. There exists a Turing machine that, for all $n \in \mathbb{N}$ and $c \in \mathbb{R}^+$, uses at most $c\log n + \log m$ space to map input $1^n$ to a circuit of size at most $n^c + c\log n + m$ and depth 3 that computes $f$ on inputs of size at most $c\log n$.
Proof.
We give the proof in the form of an algorithm to construct a circuit as a function of $n$ and then justify its correctness and space complexity.
Algorithm. We first print $2 \cdot c\log n$ nodes representing unnegated and negated input nodes. (We ignore the initial unnegated input nodes when considering the size of the circuit.)
Now, we need to show how to construct the nodes corresponding to DNF terms. To this end, we loop over all possible inputs $x \in \{0,1\}^{c\log n}$ by maintaining the $c\log n$-bit binary representation of $x$ (initialized with $0^{c\log n}$) and incrementing it by 1 at each step of the loop. We create a new $\wedge$ node with $c\log n$ arguments, defined as follows. For each input position $j$, we create an argument pointer to the (unnegated) node for input $j$ if $x_j = 1$ and to the (negated) node for input $j$ otherwise.
Now, we construct nodes computing each of the $m$ output bits. We loop over $k \in [m]$, constructing a single $\vee$ node for each $k$. We loop over all $x \in \{0,1\}^{c\log n}$ analogously to above to construct a list of arguments. By our linear-space computability assumption and because $x$ has $c\log n$ bits, we can compute $f(x)$ as a subroutine in $O(\log n)$ space to obtain $f_k(x)$. If $f_k(x) = 1$, we print the $\wedge$ node corresponding to the term for $x$ as an argument of the $\vee$ node for $k$.
Correctness. We show that this Turing machine maps input $1^n$ to a serialized circuit computing $f$ on inputs of size $c\log n$. The first layer simply produces the unnegated and negated input values. The second layer then produces all possible DNF terms. Finally, node $k$ of the third layer computes the disjunction over all terms $x$ such that $f_k(x) = 1$. Thus, node $k$ of the third layer computes $f_k$.
Log Space. To complete the proof, we justify that the algorithm uses $O(c\log n + \log m)$ space. Looping over $x$ is accomplished by treating $x$ as a binary number initialized to $0^{c\log n}$ and incrementing it at each step. Thus, the loop pointer for building the DNF terms takes $c\log n$ space to store. For building the $m$ output nodes, we maintain a similar loop pointer as well as an index $k \leq m$, taking $c\log n + \log m$ space. Thus, the overall algorithm uses $c\log n + \log m$ space.
Thus, the Turing machine uses $c\log n + \log m$ space to map $1^n$ to a circuit of size at most $n^c + c\log n + m$ and depth 3 that computes $f$ on size-$c\log n$ inputs. ∎
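A sketch (ours) of this algorithm as a Python generator that streams a circuit description while keeping only the loop counter and output index in working memory; note that $f$ is called as an ordinary Python function here, whereas the proof assumes a linear-space subroutine, and a fully faithful log-space version would also emit each argument list one pointer at a time rather than building it as a list.

```python
def serialize_dnf_circuit(f, n_in, n_out):
    for j in range(n_in):                       # layer 1a: unnegated inputs
        yield f"node {j}: X"
    for j in range(n_in):                       # layer 1b: negated inputs
        yield f"node {n_in + j}: NOT {j}"
    for x in range(2 ** n_in):                  # layer 2: one AND node per input x
        args = [j if (x >> j) & 1 else n_in + j for j in range(n_in)]
        yield f"node {2 * n_in + x}: AND {args}"
    for k in range(n_out):                      # layer 3: one OR node per output bit
        args = [2 * n_in + x for x in range(2 ** n_in)
                if f([(x >> j) & 1 for j in range(n_in)])[k] == 1]
        yield f"node {2 * n_in + 2 ** n_in + k}: OR {args}"

for line in serialize_dnf_circuit(lambda b: [b[0] & b[1]], n_in=2, n_out=1):
    print(line)
```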
We can leverage this lemma to derive the uniform analog of Thm. 1, as follows.
Theorem 2 (Uniform, main result).
Any $c\log n$-precision depth-$d$ transformer operating on inputs in $\Sigma^n$ can be simulated by a logspace-uniform threshold circuit family of depth $3 + (9 + 2d_\oplus)d$.
Proof.
We will provide a proof by induction over transformer layers that there is a Turing machine operating in $O(\log n)$ space that, on input $1^n$, outputs a circuit that simulates the transformer’s computation on inputs of size $n$. This circuit is identical to the one in the proof of Thm. 1, and thus has the same circuit depth.
In the base case, we use log space to track a counter maintaining the current token index $i$ (between 1 and $n$) throughout the circuit construction. We construct gates encoding the constant $i$ in binary. We can then apply Lem. 2 to construct a Turing machine that maps $1^n$ to a constant-depth threshold circuit computing $h^0_i = \phi(w_i, i)$.
In the inductive case, we assume we can output in $O(\log n)$ space a circuit computing every value $h^\ell_i$ in the previous layer $\ell$. We will show that we can, in $O(\log n)$ space, now output a circuit computing every value in layer $\ell + 1$.
As in Thm. 1, we first fix a head $a^{\ell+1}_{i,h}$ to simulate. Recall (Def. 8) that
$$a^{\ell+1}_{i,h} = \bigoplus_{j=1}^n \frac{s(h^\ell_i, h^\ell_j)}{Z_i} \cdot h^\ell_j, \quad \text{where } Z_i = \bigoplus_{j=1}^n s(h^\ell_i, h^\ell_j).$$
By Lem. 2, we can generate a depth-3 circuit of size at most $z = n^{2c} + 2c\log n + c\log n$ (since the input to $s$ is of size $2c\log n$) that computes $s(h^\ell_i, h^\ell_j)$ for specific $i, j$. We do this sequentially for $1 \leq j \leq n$, padding each circuit with unused nodes so that each one has size exactly $z$, with the $z$-th node corresponding to its output. Thus, the indices of the output nodes for each of the $n$ columns will be $w + jz$ for $1 \leq j \leq n$, where $w$ is the index of the last output node of the previous layer.
At this point, we use the fact that, for $p = c\log n$, the $p$-precision approximate sum of $n$ $p$-precision numbers can be computed by a logspace-uniform threshold circuit (§ A). We can thus use a Turing machine as a sub-routine to generate, on input $1^n$, a threshold circuit of some fixed size $z'$ that computes a $\oplus$ gate over $n$ items of precision $p$ each. We set the inputs of this circuit to be the nodes at indices $w + jz$ for $1 \leq j \leq n$. By construction, this yields the normalizing constant $Z_i$, whose value is located at the output node of this circuit, at an index computable from $w$, $z$, and $z'$.
Using $p$-precision arithmetic operator circuits, we can now also generate a circuit to compute $\frac{s(h^\ell_i, h^\ell_j)}{Z_i} \cdot h^\ell_j$ for each $j$, by using the index $w + jz$ as before for the value of $s(h^\ell_i, h^\ell_j)$ and the index just computed for the normalizing constant $Z_i$ of head $h$. Here too we use circuits of identical size, making the indices of their output nodes computable. Next, we again employ a circuit of size $z'$, similar to the computation of $Z_i$, to compute the approximate sum of these $n$ values, which yields $a^{\ell+1}_{i,h}$. Finally, we compute $h^{\ell+1}_i$ by applying $f$ via Lem. 2.
Note that this requires keeping only a constant number of indices, such as $i$, $j$, and $w$, in memory, each of which takes $O(\log n)$ bits.
We repeat this process for all $1 \leq i \leq n$ and all heads to compute the entire layer, which finishes the inductive step: if we can output a circuit computing layer $\ell$ in $O(\log n)$ space, then we can do the same for layer $\ell + 1$. ∎
Because the depth derived in Thm. 2 is constant with respect to $n$, it follows that:
Corollary 2.1 (Uniform, main result).
Any log-precision transformer can be simulated by a uniform TC⁰ circuit family.
7 Lower Bounds for Instruction Following and Advice Transformers
So far, we have shown that uniform TC⁰ is an upper bound for log-precision transformers. Is this upper bound tight, i.e., also a lower bound? While we do not answer this question here, we address a related question as a first step: we construct a transformer that can evaluate TC⁰ circuits on binary inputs, showing that transformers can compute any TC⁰ function when their input is augmented with the right “instructions”.
More formally, we consider the Circuit Value Problem (CVP) (Ladner, 1975), also referred to as the Circuit Evaluation Problem, where the input is a boolean circuit $\mathcal{C}$ and a string $x \in \{0,1\}^n$, and the task is to return the value of $\mathcal{C}(x) \in \{0,1\}$. This problem is known to be complete for the class P under logspace reductions (Ladner, 1975). We will assume $\mathcal{C}$ is serialized as described in § 3 and prove that log-precision transformers can evaluate any circuit. Note that this is an extension of the typical CVP since the circuit can contain threshold gates, not just standard AND/OR gates.
It is known that LSTMs cannot evaluate boolean formulae (Merrill, 2020), a special case of the CVP. In contrast, we show that transformers can.
To demonstrate the practicality of our lower bound construction, we will not just prove the existence of transformers that can evaluate circuits but also specify concrete choices for the positional embedding scheme and the class of attention functions that are sufficient to do so.
Fractional Positional Embeddings.
For a vector $x$ and scalar $y$, let $\langle x, y \rangle$ be the vector obtained by appending $y$ onto $x$ (i.e., $\langle x, y \rangle_i = x_i$ for $1 \leq i \leq |x|$, and $\langle x, y \rangle_i = y$ for $i = |x| + 1$). For $\sigma \in \Sigma$, let $v(\sigma)$ be the one-hot embedding of $\sigma$ into $\mathbb{R}^{|\Sigma|}$. For $w \in \Sigma^*$ and $1 \leq i \leq n$, the fractional positional embedding at token $i$ is $\phi(w_i, i) = \langle v(w_i), i/n \rangle$.
Saturated Attention.
We imagine $s$ is computed via saturated attention (cf. Merrill et al., 2022), which provides a simple model of the types of attention we can expect to be learned in transformers (Merrill et al., 2021). First, queries are computed as $q_i = Q h_i$, and then keys as $k_i = K h_i$. Define the dot-product attention score $\sigma_{ij} = q_i^\top k_j$. We can then define saturated attention as
$$s(h_i, h_j) = \begin{cases} 1 & \text{if } \sigma_{ij} = \max_{j'} \sigma_{ij'}, \\ 0 & \text{otherwise.} \end{cases}$$
After normalization, saturated attention creates a distribution that is uniform over a subset of positions. Thus, it is capable of parameterizing hard attention, uniform attention over the full sequence, and various attention patterns in between.
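A sketch (ours, with hypothetical helper names) of these two ingredients: fractional positional embeddings that append $i/n$ to a one-hot token embedding, and saturated attention that spreads weight uniformly over the score-maximizing positions.

```python
import numpy as np

def fractional_embedding(w, vocab):
    """phi(w_i, i): one-hot embedding of the token with i/n appended."""
    n = len(w)
    emb = np.zeros((n, len(vocab) + 1))
    for i, tok in enumerate(w):
        emb[i, vocab.index(tok)] = 1.0
        emb[i, -1] = (i + 1) / n              # fractional position i/n (1-indexed)
    return emb

def saturated_attention(scores):
    """Rows are queries, columns keys: uniform weight over each row's argmaxes."""
    hard = (scores == scores.max(axis=1, keepdims=True)).astype(float)
    return hard / hard.sum(axis=1, keepdims=True)

print(fractional_embedding(["X", "X", "&"], vocab=["X", "&", "1"]))
print(saturated_attention(np.array([[0.0, 2.0, 2.0],
                                    [1.0, 0.0, 0.0]])))
# Row 0 splits weight evenly over its two maximal keys; row 1 attends only to key 0.
```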
Simple Pooling Functions.
For simplicity, we assume pooling functions are thresholded linear functions of their inputs. Thus, they could be implemented by a feedforward neural net. Without loss of generality, we let attention heads have a value function, which can be folded into the pooling function from the last layer (see § 4).
Terminology.
We use input node to mean a token of type X and gate node to mean a token of type <= or >=. We call a token of type & an argument.
We are now ready to present the main result. Our construction below is specific to circuits serialized in prefix form (see §3), but it can be extended to other serializations as well.
Lemma 3.
For all $d$, there exists a transformer with fractional positional embeddings, saturated attention, thresholded linear pooling functions, and depth $2d$ that, for any threshold circuit $\mathcal{C}$ of depth $d$ serialized in prefix form, maps the serialization of $\mathcal{C}$ together with an input $x$ to the value $\mathcal{C}(x)$.
Proof.
We will construct a pair of transformer layers that evaluates all the nodes at depth $\ell$ in the threshold circuit, for any $\ell$. It follows that a transformer of depth $2d$ can compute the value $\mathcal{C}(x)$.
Base Case: Input Nodes. We use an attention layer to attend uniformly over all positions, with a value that returns 1 if the token is X and 0 otherwise. This head computes $\#(\mathtt{X})/n$, where $\#(\mathtt{X})$ is the number of occurrences of X in the input. A second layer, then, at the $i$-th input node, computes the fractional position of the token representing the input value $x_i$. We attend to this position to retrieve $x_i$. After these layers, each input node stores its value $x_i$.
We also use the base-case layers to construct an attention head that, at the $i$-th node, counts the fraction of tokens (out of $n$) that are nodes lying to the left of the current node. Thus, the column corresponding to node $i$ stores the value $i/n$.
At each gate node $i$, we use two more attention heads to find the index of the next & to the right and then count the fraction of 1 tokens before it. This head thus computes $k_i/m_i$, where $k_i$ is the threshold value of gate $i$ and $m_i$ is its arity.
Finally, using the first attention layer, we have each 1 token attend to the first argument symbol & to its left and retrieve its fractional position $a/n$. Then, in the second attention layer, each argument & at position $a$ attends uniformly over all positions, with value 1 at the 1 tokens that stored $a/n$ and 0 elsewhere. The net effect is for each argument to store $j/n$, i.e., the pointer it is encoding in unary as &1ʲ.
Inductive Case: Gate Nodes. By our inductive assumption over prior layers, all tokens corresponding to circuit nodes at depth $\ell$ contain their appropriate value. We now construct two transformer layers to evaluate gate nodes at depth $\ell + 1$.
In the first attention layer, each argument token attends to the closest gate node to its left, which is the gate it belongs to. Recall from the base case that argument token & already stores $j/n$, where $j$ is the pointer value it encodes. Each argument token now attends with query $j/n$ to retrieve from node $j$ its already computed value.
The second attention layer applies at gate nodes, not arguments. At gate node $i$ of arity $m$, we set the attention to indicate whether an argument belongs to gate node $i$, which holds for exactly $m$ arguments. We set the attention value at each such argument to be the binary value of the node it points to, which was retrieved in the previous paragraph. Thus, the attention head computes $c/m$, where $c$ is the number of arguments of node $i$ that are 1. We repeat this for all gate nodes.
At this point, we have both the count of true inputs to gate node $i$ (as $c/m$) and, from the base case, the threshold parameter of gate $i$ (as $k/m$). Thresholding $c/m$ against $k/m$ allows us to decide, based on whether the gate is <= or >=, whether the current gate node should output a 0 or a 1. Repeating this for all gates at depth $\ell + 1$ completes the inductive step: we can evaluate all gate nodes in this layer. ∎
Theorem 3.
Depth-$2d$ transformers can solve CVP for depth-$d$ circuits.
7.1 Instruction Following
CVP is closely related to instruction learning (Brown et al., 2020) and instruction following tasks (Finlayson et al., 2022). The latter task setup provides a transformer two inputs: a regular expression $r$ as an “instruction”, and a string $z$. The goal of the task is to return whether $z$ belongs to the regular language represented by $r$. Viewed from this lens, the circuit evaluation setup asks: Can transformers follow instructions provided in the form of a circuit? As discussed below, our result says the answer is yes for all constant-depth threshold circuits. This, to the best of our knowledge, provides the first non-trivial lower bound for transformers in the instruction learning setting.
Formally, an instruction $I$ is any description (a function description is a fixed-size program to compute that function under some model of computation) of a function $f_I$ of $\{0,1\}^*$. We say a transformer correctly follows an instruction $I$ if, for all $x \in \{0,1\}^*$, it correctly computes $f_I(x)$ on input $\langle I, x \rangle$. A non-uniform instruction description is a family of length-specific descriptions $\{I_n\}_{n \in \mathbb{N}}$. We say a transformer correctly follows a non-uniform instruction family $\{I_n\}$ if, for all $n$ and all $x \in \{0,1\}^n$, it correctly computes $f_{I_n}(x)$ on input $\langle I_n, x \rangle$. The non-uniform description $\{I_n\}$ may take any form. When it forms a TC⁰ circuit family, we refer to it as a TC⁰ instruction description. Since Thm. 3 constructs a transformer that can evaluate any circuit, it follows that:
Corollary 3.1.
There exists a depth-$2d$ transformer that can correctly follow any depth-$d$ TC⁰ instruction description.
Thus, transformers with simple position embeddings, attention, and pooling functions can simulate any instruction provided in the form of a TC⁰ circuit. We note that while it is unknown whether the class of regular languages, considered by Finlayson et al. (2022), is contained in TC⁰, the other direction is known: there are problems computable by TC⁰ circuits that are not computable by any regular language. These include problems involving counting and arithmetic, which are beyond regular languages. Our results thus expand the known kinds of instructions transformers are able to follow, at least with hand-constructed weights.
7.2 Advice Transformers
We can also view circuit evaluation abilities of transformers (Lem. 3) from the lens of advice-taking Turing machines, which, in addition to their usual input, are also provided an input-length-dependent (but input-independent) advice string. For instance, P/poly is the class of problems decidable in polynomial time when the Turing machine is given an advice string of size polynomial in the input length (cf. Arora and Barak, 2009).
In the same vein, let T/poly be the class of languages recognized by log-precision, constant-depth transformers with polynomial advice strings. In other words, on an input of size $n$, we allow the transformer to receive an additional $\mathrm{poly}(n)$ bits of input that cannot depend on the standard input. Now let $\{C_n\}_{n \in \mathbb{N}}$ be a circuit family demonstrating that a problem is in non-uniform TC⁰. Then, by passing the description of $C_n$ as advice for input length $n$, it immediately follows from Lem. 3 that advice transformers can simulate non-uniform TC⁰:
Corollary 3.2.
Non-uniform TC⁰ ⊆ T/poly.
Since non-uniform TC⁰ even contains some undecidable languages (Arora and Barak, 2009, Claim 6.8), T/poly is clearly a very powerful class and a strict superset of T, the class of decision problems recognized by transformers (which are all decidable). Thus, a problem in T/poly cannot always be solved by a transformer on its own. However, if given a description of how to do so (“advice”) in the form of a TC⁰ circuit, our result shows that a transformer could solve that problem.
8 Conclusion
Answering two open questions from Merrill et al. (2022), we prove log-precision transformers with any (including soft) attention can be simulated by uniform constant-depth threshold circuits. This establishes thresholded addition as a fundamental operation for understanding the computational model of transformers: any log-precision transformer can be re-expressed as a polynomial number of threshold gates stacked to a constant depth. This result also establishes potential limits on the computational power of log-precision transformers; e.g., if L ≠ P, transformers cannot compute all poly-time functions. They are certainly very far from being universal. The intuition at the heart of this result is that forcing a model to be highly parallelizable likely sacrifices its expressiveness. Since parallelism seems essential to pretraining any massive model at scale, any large language model—transformer or otherwise—may suffer from a similar tradeoff.
Acknowledgments
The authors are grateful for the valuable feedback from the anonymous reviewers and the TACL action editor Dan Gildea. They also thank Paul Beame and colleagues at AI2 including Kyle Richardson, Michal Guerquin, Peter Clark, Tushar Khot, and especially Matthew Finlayson, whose empirical findings about instruction learning inspired § 7. Feedback from Sam Bowman, Arya McCarthy, Roma Patel, and Lena Strobl, and discussions with the FLaNN, ML for Code (MILA), and Foundations of Language Processing (Umeå) research groups helped improve earlier drafts. The authors also appreciate Rahul Santhanam’s feedback. This work was funded in part by NSF award 1922658. William Merrill was supported by an NSF graduate research fellowship and by AI2.
References
- Allender (1999) Eric Allender. 1999. The permanent requires large uniform threshold circuits. Chicago Journal of Theoretical Computer Science.
- Arora and Barak (2009) Sanjeev Arora and Boaz Barak. 2009. Computational Complexity: A Modern Approach. Cambridge University Press.
- Biere et al. (2009) Armin Biere, Marijn Heule, Hans van Maaren, and Toby Walsh. 2009. Handbook of Satisfiability: Volume 185 Frontiers in Artificial Intelligence and Applications. IOS Press.
- Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.
- Bylander (1991) Tom Bylander. 1991. Complexity results for planning. In Proceedings of the International Joint Conference on Artificial Intelligence.
- Chiu et al. (2001) Andrew Chiu, George I. Davida, and Bruce E. Litow. 2001. Division in logspace-uniform NC¹. RAIRO Theor. Informatics Appl., 35:259–275.
- Dehghani et al. (2019) Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. 2019. Universal transformers. In International Conference on Learning Representations.
- Dettmers et al. (2022) Tim Dettmers, Mike Lewis, and Luke Zettlemoyer. 2022. GPT3.int8(): 8-bit matrix multiplication for transformers at scale. In Advances in Neural Information Processing Systems.
- Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
- Elhage et al. (2021) Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. 2021. A mathematical framework for transformer circuits. Transformer Circuits Thread.
- Finlayson et al. (2022) Matthew Finlayson, Kyle Richardson, Ashish Sabharwal, and Peter Clark. 2022. What makes instruction learning hard? An investigation and a new challenge in a synthetic environment. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
- Greenlaw et al. (1991) Raymond Greenlaw, James M. Hoover, and Walter L. Ruzzo. 1991. A compendium of problems complete for P. Technical Report TR91-11, University of Alberta.
- Hahn (2020) Michael Hahn. 2020. Theoretical limitations of self-attention in neural sequence models. Transactions of the Association for Computational Linguistics, 8:156–171.
- Hao et al. (2022) Yiding Hao, Dana Angluin, and Robert Frank. 2022. Formal language recognition by hard attention transformers: Perspectives from circuit complexity. Transactions of the Association for Computational Linguistics, 10:800–810.
- Hesse (2001) William Hesse. 2001. Division is in uniform TC⁰. In International Colloquium on Automata, Languages, and Programming, pages 104–114.
- Immerman (2012) Neil Immerman. 2012. Descriptive complexity. Springer Science & Business Media.
- Jones and Laaser (1976) Neil D. Jones and William T. Laaser. 1976. Complete problems for deterministic polynomial time. Theoretical Computer Science, 3(1):105–117.
- Ladner (1975) Richard E Ladner. 1975. The circuit value problem is log space complete for P. ACM SIGACT News, 7(1):18–20.
- Lewis and Papadimitriou (1982) Harry R. Lewis and Christos H. Papadimitriou. 1982. Symmetric space-bounded computation. Theoretical Computer Science, 19:161–187.
- Merrill et al. (2021) William Merrill, Vivek Ramanujan, Yoav Goldberg, Roy Schwartz, and Noah A. Smith. 2021. Effects of parameter norm growth during transformer training: Inductive bias from gradient descent. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
- Merrill (2020) William Cooper Merrill. 2020. On the linguistic capacity of real-time counter automata. ArXiv, abs/2004.06866.
- Merrill et al. (2022) William Cooper Merrill, Ashish Sabharwal, and Noah A. Smith. 2022. Saturated transformers are constant-depth threshold circuits. Transactions of the Association for Computational Linguistics, 10.
- Pérez et al. (2019) Jorge Pérez, Javier Marinković, and Pablo Barceló. 2019. On the Turing completeness of modern neural network architectures. In International Conference on Learning Representations.
- Raffel et al. (2020) Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140).
- Reingold (2008) Omer Reingold. 2008. Undirected connectivity in log-space. Journal of the ACM, 55:17:1–17:24.
- Valiant (1979) Leslie G. Valiant. 1979. The complexity of computing the permanent. Theoretical Computer Science, 8:189–201.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Appendix A Iterated $p$-Precision Float Addition
We interpret a $p$-bit string $x$ as a $p$-precision float by taking the first $p/2$ bits of $x$ (we assume w.l.o.g. that $p$ is even) as a signed integer encoding the mantissa and the remaining $p/2$ bits of $x$ as another signed integer encoding the exponent. A float with mantissa $m$ and exponent $e$, denoted $\langle m, e \rangle$, encodes $m \cdot 2^e$.
Computing the sum of $n$ $n$-bit integers (known as iterated addition or simply summation) is well-known to be in uniform TC⁰ (Hesse, 2001; Chiu et al., 2001). We leverage this fact to show that the same holds for the sum of $n$ $p$-precision floats. A subtlety of adding $p$-precision floats is that their sum can require more than $p$ bits to represent precisely as a float. For instance, while each of $2^e$ (for a large exponent $e$) and $1$ is representable with a (signed) mantissa of only 2 bits, their exact sum, $2^e + 1$, requires a mantissa of $e + 1$ bits. Hence, $p$-precision transformers must sacrifice some precision when performing summation.
We define float addition by mapping the floats to integers, adding the integers exactly, and then mapping the sum back to a float (with possible loss of precision). Let $M_p$ be the greatest $p/2$-bit signed integer, i.e., $M_p = 2^{p/2 - 1} - 1$. Let $F_p := M_p \cdot 2^{M_p}$ be the greatest value representable by a $p$-precision float. Since the exponent of a float can be negative and thus represent a fraction, we rescale by $2^{M_p}$ when mapping a float to an integer $g(\cdot)$:
Definition 13.
The integer mapping of a $p$-bit float $x = \langle m, e \rangle$ is defined as $g(x) := m \cdot 2^{e + M_p}$.
Definition 14.
The $p$-truncated float mapping of an integer $z$ is defined as $I_p(z) := \langle m, e \rangle$, where $m = \mathrm{rshift}(z, \max\{0, \mathrm{sizeof}(z) - p/2\})$ and $e = \max\{0, \mathrm{sizeof}(z) - p/2\} - M_p$ (for $z \neq 0$, $\mathrm{sizeof}(z)$ is the number of bits needed to represent $z$ as a signed integer; for $b \geq 0$, $\mathrm{rshift}(z, b)$ right-shifts $z$ by $b$ bits)
when $z \leq g(F_p)$; otherwise (i.e., when $z > g(F_p)$), we set $I_p(z) := \langle M_p, M_p \rangle$ to properly handle overflow.
Definition 15 (Iterated $p$-precision float addition).
We define the sum of $k$ $p$-precision floats $x_1, \dots, x_k$ as
$$\bigoplus_{i=1}^k x_i := I_p\Big(\sum_{i=1}^k g(x_i)\Big).$$
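A Python sketch of Defs. 13–15 as reconstructed above (so the scaling constant $M_p$, the sizeof/rshift conventions, and the overflow rule are our assumptions rather than the authors’ exact choices):

```python
def float_value(m, e):
    return m * 2 ** e                 # the float <m, e> encodes m * 2^e

def g(m, e, p):                       # Def. 13: integer mapping (needs e >= -M_p)
    M = 2 ** (p // 2 - 1) - 1
    return m * 2 ** (e + M)

def I_p(z, p):                        # Def. 14: p-truncated float mapping
    M = 2 ** (p // 2 - 1) - 1
    if z > g(M, M, p):                # overflow: clamp to the largest float
        return (M, M)
    size = z.bit_length() + 1         # bits to hold z as a signed integer
    shift = max(0, size - p // 2)     # number of low-order bits truncated
    return (z >> shift, shift - M)

def approx_sum(floats, p):            # Def. 15: iterated p-precision addition
    return I_p(sum(g(m, e, p) for m, e in floats), p)

p = 16                                # 8-bit mantissa, 8-bit exponent
x = approx_sum([(1, 10), (1, 0)], p)  # 1024 + 1: the exact sum 1025 needs 11 bits
print(x, float_value(*x))             # (64, 4) 1024 -- the small term is dropped
err = abs(float_value(*x) / 1025 - 1)
print(err < 2 ** (2 - p // 2))        # True: within the Lem. 4-style relative error
```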
We first verify that Def. 14 closely approximates exact addition.
Lemma 4.
Let $x$ be a float such that $g(x)$ is a well-defined integer (i.e., $e \geq -M_p$) and $0 \leq x \leq F_p$. Then $x$ and $I_p(g(x))$ differ by a factor of at most $1 + 2^{2 - p/2}$.
Proof.
Let $z = g(x)$, which is well-defined because of the precondition of the lemma. Let $\hat{x} = I_p(z)$.
First consider the easy case where $\mathrm{sizeof}(z) \leq p/2$. Then $m = z$ and $e = -M_p$ from Def. 14. Since $g(\hat{x}) = m \cdot 2^{e + M_p} = z$ by Def. 13, it follows that $x$ and $\hat{x}$ have exactly the same value.
Now assume $\mathrm{sizeof}(z) > p/2$. It follows from the precondition of the lemma that there is no overflow when applying Def. 14 to compute $\hat{x}$. Thus $m$ consists of the $p/2$ highest-order bits (including the sign bit) of $z$, and $e = \Delta - M_p$, where $\Delta$ is the number of bits truncated from $z$ to obtain $m$. Let $r$ denote the (non-negative) integer formed by the $\Delta$ lowest-order bits of $z$ that are truncated. Then $z = m \cdot 2^{\Delta} + r$ with $0 \leq r < 2^{\Delta}$.
Recall that the value of $\hat{x}$ is $m \cdot 2^{e} = m \cdot 2^{\Delta - M_p}$, i.e., $g(\hat{x}) = m \cdot 2^{\Delta}$. By the above argument, we also have that $z$ is within $[m \cdot 2^{\Delta}, m \cdot 2^{\Delta} + 2^{\Delta})$, which is within a factor of $1 + 2^{2 - p/2}$ of $m \cdot 2^{\Delta}$ since $m \geq 2^{p/2 - 2}$. Thus, $x$ and $\hat{x}$ are within a factor of $1 + 2^{2 - p/2}$ of each other. ∎
Finally, we show that, with log precision, computing $\oplus$ (Defs. 14 and 15) is in uniform TC⁰.
Lemma 5.
Let $p \leq c\log n$ and let $x_1, \dots, x_k$ be floats, where $k \leq n$ and each $x_i$ is $p$-precision. Then $\bigoplus_{i=1}^k x_i$ is computable by a constant-depth uniform threshold circuit of size $\mathrm{poly}(n)$.
Proof.
Let $N$ be the maximum number of bits needed to represent the integer $g(x_i)$ for any $p$-precision float $x_i$; since $p \leq c\log n$, we have $N \leq p/2 + 2^{p/2} \leq \mathrm{poly}(n)$. We first use Lem. 1 to map each $x_i$ to the integer $g(x_i)$, which has size at most $N$. For $i \leq k$, we pad $g(x_i)$ to $N$ bits, and for $k < i \leq n$, we create an $N$-bit integer with value $0$. We can then compute $z = \sum_{i=1}^n g(x_i)$ with a constant-depth uniform threshold circuit of size $\mathrm{poly}(N)$ using the classical construction to sum $n$ $N$-bit integers (cf. Immerman, 2012, exercise 5.29). The size of this circuit is also polynomial in $n$ by the definition of $N$. Finally, we compute $I_p(z)$ using a constant-depth AND/OR circuit. ∎