Learning Linear Attention in Polynomial Time
Abstract
Previous research has explored the computational expressivity of Transformer models in simulating Boolean circuits or Turing machines. However, the learnability of these simulators from observational data has remained an open question. Our study addresses this gap by providing the first polynomial-time learnability results (specifically strong, agnostic PAC learning) for single-layer Transformers with linear attention. We show that linear attention may be viewed as a linear predictor in a suitably defined RKHS. As a consequence, the problem of learning any linear transformer may be converted into the problem of learning an ordinary linear predictor in an expanded feature space, and any such predictor may be converted back into a multiheaded linear transformer. Moving to generalization, we show how to efficiently identify training datasets for which every empirical risk minimizer is equivalent (up to trivial symmetries) to the linear Transformer that generated the data, thereby guaranteeing the learned model will correctly generalize across all inputs. Finally, we provide examples of computations expressible via linear attention and therefore polynomial-time learnable, including associative memories, finite automata, and a class of universal Turing machines (UTMs) with polynomially bounded computation histories. We empirically validate our theoretical findings on three tasks: learning random linear attention networks, key–value associations, and learning to execute finite automata. Our findings bridge a critical gap between theoretical expressivity and learnability of Transformers, and show that flexible and general models of computation are efficiently learnable.
1 Introduction
Transformers have become a ubiquitous tool in a range of learning problems, due to their versatility in generating structured sequences (including text) conditioned on complex inputs (including natural language instructions). A large body of work has attempted to explain the behavior of trained Transformers and characterize their expressive power. Here one central problem—particularly useful for understanding Transformers’ ability to follow instructions in language—is understanding how reliably they can simulate execution of automata like universal Turing machines, which in turn can execute arbitrary programs (i.e., the input “instructions”). While several recent papers have shown that Transformers are expressive enough to implement important models of computation, it remains an open question whether these machines may be effectively learned. Even verifying that a trained model has successfully learned a generalizable computation procedure has remained challenging.
Concretely, existing work has shown positive results on how Transformer-like architectures can realize many algorithmic computations, including simulating universal Turing machines (Li et al., 2024), evaluating sentences of first-order logic (Barceló et al., 2020), and recognizing various formal languages (Strobl et al., 2024). However, these proofs rely on manual constructions of network weights. For questions about learnability, the most comprehensive result to date is due to Luo et al. (2023), who proved learnability under very strong assumptions about the data distribution (i.e., under complete presentation by enumeration of all Turing machines up to a constant size). While the result is positive in the sense that training a Transformer model on a finite dataset enables generalization to input strings of arbitrary lengths, the bound on the amount of data required for training grows exponentially in the size of the model to be trained.
In this paper, we establish the strong, agnostic PAC-learnability of a family of simplified Transformer models, namely linear attention modules (without a softmax function; Ahn et al., 2024, Katharopoulos et al., 2020). We focus our analysis on single-layer linear Transformers (i.e., multi-head linear attention networks, or MHLAs) for regression tasks. An MHLA is parameterized by two matrices $(V_h, Q_h)$ for each of its $H$ heads, $\theta = \{(V_h, Q_h)\}_{h=1}^{H}$. A one-layer MHLA computes $\mathrm{MHLA}_\theta(Z) = \sum_{h=1}^{H} V_h Z\,(Z^\top Q_h Z)$ on an input matrix $Z$.
Despite its simplicity, this model class retains significant expressive power: we show that it can realize a restricted but expressive class of universal Turing machines with bounded computation history size. Our results imply that Transformer models can efficiently (in polynomial time) learn these machines with polynomial sample complexity. Moreover, we show that under checkable conditions, minimizing the empirical risk is guaranteed to recover a model that correctly simulates arbitrary new input machines.
We first show that the computation performed by MHLAs can be reformulated in terms of a product between two larger matrices: the prediction depends on the parameters only through $W_\theta = \sum_{h=1}^{H}\mathrm{vec}(V_h)\,\mathrm{vec}(Q_h)^\top \in \mathbb{R}^{d^2\times d^2}$, applied linearly to a fixed cubic polynomial feature map $\phi(Z)$ of the input $Z$. Consequently, optimizing over the class of $H$-head MHLA models is equivalent to optimizing over the class of rank-$H$ matrices $W$. Furthermore, in the full-rank space of $d^2\times d^2$ matrices, optimization of $W$ can be performed via linear regression in time polynomial in the inverse target error and the size of the dataset. Finally, decomposing an optimal $W$ via SVD recovers an MHLA model with no more than $d^2$ heads that is then guaranteed to compete against the best MHLA parameters—establishing our agnostic learning result (the learned model competes against the best choice of parameters in the hypothesis class).
Next we define a certificate of identifiability for MHLAs—an efficiently checkable condition for any dataset that, if true, ensures every empirical risk minimizer in the class of MHLAs computes the same function on all possible inputs. More specifically, let $\Sigma = \frac{1}{n}\sum_{i=1}^{n}\phi(Z_i)\,\phi(Z_i)^\top$ be the second moment of the flattening of the data in the $\phi$-feature space. We show that, if $\Sigma$ is full rank, it is guaranteed that MHLA is identifiable with respect to the data. We call this second moment condition “certifiable identifiability”. When combined with existing results on realizability, this leads to a polynomial-time algorithm for learning computations expressible as linear attention, with checkable certificates of identifiability. Given a dataset satisfying certifiable identifiability, if a computational procedure can fit the dataset and can be realized by an MHLA, then all empirical risk minimizers will be equivalent to that procedure. This applies to any function realizable by single-layer linear Transformers.
In the experimental section, we validate our theoretical findings beyond their initial scope. In Section 6.1, we train multiple models using stochastic gradient descent on a dataset generated by a single linear attention network’s output. Our results demonstrate that multi-head linear attention outperforms both single-layer linear attention and multi-layer linear attention, achieving comparable results to our polynomial-time algorithm. In Section 6.2, we show that our proposed certificate correlates with generalization error, even for models trained using stochastic gradient descent. In our final set of experiments (Section 6.3), we examine the data requirements for learning a specific universal Turing machine (UTM) capable of executing deterministic finite automata, in which we observe sub-exponential data requirements.
In summary, we show:
• We provide a polynomial-time algorithm that, given any dataset, finds the best-fit parameters for single-layer linear Transformers (MHLAs) and generalizes with polynomial data, i.e., strong agnostic PAC learning (Sections 2.1 and 3).
• We give an efficiently checkable certificate of identifiability: a condition on the dataset under which every empirical risk minimizer in the class of MHLAs computes the same function on all inputs (Sections 2.2 and 4).
• MHLAs are an expressive model of computation that includes universal Turing machines with polynomially bounded computation histories (Section 2.3). Combined with our identifiability results, we conclude that given a certifiably identifiable dataset of Turing machines and computation histories on input words, empirical risk minimization, and in particular Algorithm 1, will learn the universal Turing machine in a strong sense: at test time it will run any TM and input word up to a given size for a bounded number of steps (Section 5).
Our results shed new light on the learnability of instruction following procedures, and indeed the learnability of a broad class of simple attention-based models.
2 Technical Overview
We start with basic definitions of a multi-head linear attention (MHLA) module, a stackable attention-based neural module simplified from the standard Transformer module by removing the softmax activation. MHLA has been a standard subject of study for expressivity and learning theory.
Definition (Multi-Head Linear Attention).
Let $Z \in \mathbb{R}^{d \times N}$ be a matrix of input data. Let $\theta = \{(V_h, Q_h)\}_{h=1}^{H}$ be a set of parameters, where each $(V_h, Q_h)$ denotes the value and key-query matrices of head $h \in [H]$, with $V_h, Q_h \in \mathbb{R}^{d \times d}$. We say $\theta \in \Theta_H$, where $\Theta_H$ is the space of sets of $H$ ordered tuples of pairs of $d \times d$ matrices. We define multi-head linear attention (MHLA) to be the function $\mathrm{MHLA}_\theta : \mathbb{R}^{d \times N} \to \mathbb{R}^{d \times N}$,
$\mathrm{MHLA}_\theta(Z) \;=\; \sum_{h=1}^{H} V_h Z \big(Z^\top Q_h Z\big), \qquad (1)$
where $\mathrm{MHLA}_\theta(Z) \in \mathbb{R}^{d \times N}$ is the output of the one-layer linear attention. We will primarily be interested in the rightmost column vector output by $\mathrm{MHLA}_\theta$ (e.g., as in autoregressive language models), which is:
$\mathrm{MHLA}_\theta(Z)_{:,N} \;=\; \sum_{h=1}^{H} V_h Z Z^\top Q_h z_N, \qquad (2)$
where $z_N$ is the $N$-th column of $Z$.
Note that $\mathrm{MHLA}_\theta$ is a uniform circuit family: it can take inputs of any length $N$ and fixed embedding dimension $d$. It is uniform in the sense that there is a polynomial-time algorithm that maps the parameters $\theta$ to the circuit that processes inputs of length $N$. We will occasionally abuse terminology and refer to MHLA as a function rather than as a family of functions.
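To make the definition concrete, here is a minimal NumPy sketch of a one-layer MHLA forward pass under the convention of equations (1)–(2) above (value matrices $V_h$ and combined key-query matrices $Q_h$); names and dimensions are illustrative, not taken from any released implementation.

```python
import numpy as np

def mhla_forward(Z, params):
    """One-layer multi-head linear attention, eq. (1): sum_h V_h Z (Z^T Q_h Z)."""
    d, N = Z.shape
    out = np.zeros((d, N))
    for V, Q in params:                 # params = [(V_1, Q_1), ..., (V_H, Q_H)]
        out += V @ Z @ (Z.T @ Q @ Z)    # each term is d x N
    return out

def mhla_last_column(Z, params):
    """Rightmost output column, eq. (2): sum_h V_h Z Z^T Q_h z_N."""
    z_N = Z[:, -1]
    return sum(V @ Z @ (Z.T @ (Q @ z_N)) for V, Q in params)

# Tiny usage example with random parameters (illustrative dimensions).
rng = np.random.default_rng(0)
d, N, H = 4, 6, 2
Z = rng.standard_normal((d, N))
params = [(rng.standard_normal((d, d)), rng.standard_normal((d, d))) for _ in range(H)]
assert np.allclose(mhla_forward(Z, params)[:, -1], mhla_last_column(Z, params))
```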
2.1 Polynomial-time learnability
Our main result is that MHLA is learnable in polynomial time. Colloquially, Algorithm 1 returns an MHLA that attains the global minimum of the training loss, and requires only polynomially many samples to achieve generalization error $\epsilon$ with probability $1 - \delta$. Here our algorithmic guarantees do not require the data to be “realizable” (that is, there need be no underlying MHLA that generates the data).
Theorem 2.1 (Learnability of Linear Attention).
Let $\{(Z_i, y_i)\}_{i=1}^{n}$ be a dataset drawn i.i.d. from a distribution $\mathcal{D}$, where each $Z_i \in \mathbb{R}^{d \times N_i}$ and $y_i \in \mathbb{R}^{d}$. Here the embedding dimension $d$ is fixed across the dataset, whereas the sequence length $N_i$ can be different for each datapoint. Let $N$ be the maximum sequence length, and let $\Theta$ be the space of sets of pairs of value and key-query matrices $\{(V_h, Q_h)\}_{h=1}^{H}$ for any number of heads $H$. Then there is an algorithm (Algorithm 1) that runs in time polynomial in $n$, $d$, and $N$ and that, given the input–output pairs $\{(Z_i, y_i)\}_{i=1}^{n}$, returns parameters $\hat\theta \in \Theta$ such that with probability $1 - \delta$,
$\mathbb{E}_{(Z, y) \sim \mathcal{D}}\Big[\big\|\mathrm{MHLA}_{\hat\theta}(Z)_{:,N} - y\big\|_2^2\Big] \;\le\; \min_{\theta \in \Theta}\ \mathbb{E}_{(Z, y) \sim \mathcal{D}}\Big[\big\|\mathrm{MHLA}_{\theta}(Z)_{:,N} - y\big\|_2^2\Big] + \epsilon, \qquad (3)$
with sample complexity polynomial in $d$, $1/\epsilon$, and $\log(1/\delta)$.
Below we describe the high-level intuition behind this proof; a formal statement is given in Appendix A. Note additionally that, if we are purely concerned with guaranteeing that we can find a global minimum of the training loss, we may remove the i.i.d. assumption: Algorithm 1 always comes within $\epsilon$ of the optimal training loss. This is also detailed in Appendix A. Specific issues related to generalization over autoregressive sequences rather than i.i.d. data are handled in the UTM learning result: see Section C.2.
The main idea behind Algorithm 1 is to construct a feature mapping $\phi$ from the data covariates to a feature space of dimension $O(d^3)$. The map is defined as:
$\phi(Z)_{(k,l,m)} \;=\; \big(Z Z^\top\big)_{kl}\,(z_N)_m. \qquad (4)$
Here, we index the entries of $\phi(Z)$ by all tuples $(k, l, m)$ such that $k \le l$ and $m \in [d]$ (only $k \le l$ is needed because $ZZ^\top$ is symmetric); the last output column of an MHLA is then a linear function of $\phi(Z)$. At a high level, Algorithm 1 is a kernel method defined by the feature mapping $\phi$. The learned kernel predictor (a regressor) can be mapped back onto a set of parameters for an MHLA with no more than $d^2$ heads via SVD.
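The sketch below illustrates this reduction under the reconstruction of $\phi$ used here; the function names are hypothetical. For simplicity it regresses on all ordered index triples $(k, l, m)$, which is redundant (since $ZZ^\top$ is symmetric) but equivalent for prediction; the deduplicated variant with $k \le l$ appears in the identifiability sketch in Section 2.2, and the mapping back to attention heads is sketched after the proof idea in Section 3.

```python
import numpy as np

def phi(Z):
    """Cubic feature map (a reconstruction of eq. (4)): phi(Z)[(k,l,m)] = (Z Z^T)_{kl} (z_N)_m."""
    A = Z @ Z.T                                  # d x d
    z_N = Z[:, -1]                               # d
    return np.einsum('kl,m->klm', A, z_N).reshape(-1)   # length d^3

def fit_linear_predictor(dataset):
    """Ordinary least squares in phi-feature space: returns W (d x d^3) with y ~= W phi(Z)."""
    Phi = np.stack([phi(Z) for Z, _ in dataset])         # n x d^3 design matrix
    Y = np.stack([y for _, y in dataset])                # n x d targets
    coef, *_ = np.linalg.lstsq(Phi, Y, rcond=None)       # d^3 x d (min-norm solution)
    return coef.T                                        # d x d^3, rows indexed by output coord
```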
2.2 Identifiability
Our algorithmic result also sheds light on a closely related question about generalization in MHLAs: Is there an efficiently checkable condition on any given dataset, such that empirical risk minimization is guaranteed to learn the true data-generating process and generalize even to out-of-distribution examples? Without any qualifiers, “out of distribution” generalization is ill-defined. Nevertheless, if a model class is identifiable—if the empirical risk minimizers of the class all compute the same function on all inputs—we can at least know that the empirical risk minimization algorithm produces models with the same behavior (in- and out-of-distribution) on held-out data. Furthermore, if we have a specific computational procedure (e.g., a Universal Turing Machine) that can fit the data and can be realized by some setting of model parameters, then, assuming the dataset is identifiable, all empirical risk minimizers will be equivalent to that procedure. (For a more formal definition, see the Realizability definition in Section 4.)
We show that, as a direct implication of our algorithmic result, it is possible to produce such an efficiently checkable condition (a certificate) on the data that guarantees every empirical risk minimizer in a family of MHLAs computes the same function. Let $\Sigma$ be the second moment of the flattening of the data, denoted $\phi(Z_i)$, in feature space:
$\Sigma \;=\; \frac{1}{n}\sum_{i=1}^{n} \phi(Z_i)\,\phi(Z_i)^\top. \qquad (5)$
Then if $\Sigma$ is full rank, it is guaranteed that MHLA is identifiable with respect to the data.
Lemma 2.1 (Certificate of Identifiability—Informal).
Let the dataset $\{(Z_i, y_i)\}_{i=1}^{n}$ be realizable (see the Realizability definition in Section 4) by an $H$-head MHLA for any $H$. Let $\{\phi_N\}_{N \in \mathbb{N}}$ be the uniform family of polynomials defined as in Algorithm 2, and for convenience write $\phi$ for $\phi_N$ (there is one map for each input length $N$). Finally, define $\Sigma$ to be the second moment of the data features:
$\Sigma \;=\; \frac{1}{n}\sum_{i=1}^{n} \phi(Z_i)\,\phi(Z_i)^\top. \qquad (6)$
Then if the minimum eigenvalue $\lambda_{\min}(\Sigma) > 0$, we say that MHLA is certifiably identifiable with respect to the data. That is, for every pair of empirical risk minimizers $\theta_1, \theta_2$,
$\mathrm{MHLA}_{\theta_1}(Z) \;=\; \mathrm{MHLA}_{\theta_2}(Z), \qquad (7)$
i.e., the two models have the same outputs on all inputs.
Corollary 1.
There is a polynomial map $\psi$ such that for any pair of parameters $\theta_1, \theta_2$ we have $\psi(\theta_1) = \psi(\theta_2)$ if and only if $\mathrm{MHLA}_{\theta_1}$ and $\mathrm{MHLA}_{\theta_2}$ compute the same function.
Clearly MHLAs with very different parameters can compute the same function, due to simple symmetries like reciprocal scaling of the $V_h$ and $Q_h$ matrices. The polynomial $\psi$ defines the equivalence class of parameters that compute the same function. For a formal statement of Lemma 2.1 see Lemma 4.1. For handling of errors for approximate empirical risk minimization see Lemma 4.4. Moreover, the certificate given by Algorithm 2 is not the only choice of feature mapping that would certify identifiability; for a general lemma on certifiable identifiability see Lemma B.1. One way to interpret Corollary 1 is that two MHLA models parameterized by $\theta_1$ and $\theta_2$ compute the same function if and only if they are the same linear function in feature space (akin to matching coefficients in polynomial regression), which in turn is true if and only if $\psi(\theta_1) = \psi(\theta_2)$ for the polynomial $\psi$ given in Lemma 4.1. Comparing the distance between coefficients in $\psi$-space is essentially the only meaningful metric of distance that is agnostic to the choice of dataset.
Finally, we answer a few natural questions related to identifiability, which we briefly summarize here. Firstly, perfectly noisy inputs yield identifiable data under weak assumptions on the moments of the noise (see Lemma 4.2). Secondly, the model class of MHLA with at least $d^2$ heads is certifiably identifiable from the second moment condition alone, and does not require realizability of the data (see Lemma 4.3). Finally, we empirically verify that these certificates predict the training behavior of SGD for MHLA on the problem of learning key–value memories (see Figure 2).
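A minimal sketch of how this certificate can be checked numerically, assuming the feature map reconstructed here (entries of $ZZ^\top$ with $k \le l$ crossed with entries of $z_N$); the deduplication matters because with all ordered pairs the second moment would be trivially singular.

```python
import numpy as np

def phi_sym(Z):
    """Deduplicated cubic features: (Z Z^T)_{kl} (z_N)_m for k <= l (Z Z^T is symmetric,
    so only the upper triangle is kept)."""
    d = Z.shape[0]
    A = Z @ Z.T
    z_N = Z[:, -1]
    iu = np.triu_indices(d)                         # index pairs with k <= l
    return np.outer(A[iu], z_N).reshape(-1)         # length d(d+1)/2 * d

def certificate(dataset, features=phi_sym):
    """lambda_min of the empirical second moment of the features; a strictly positive value
    certifies identifiability (Lemma 2.1 / Lemma 4.1), under this reconstruction."""
    F = np.stack([features(Z) for Z, _ in dataset])  # n x D feature matrix
    Sigma = F.T @ F / len(dataset)                   # D x D second moment
    return float(np.linalg.eigvalsh(Sigma)[0])       # smallest eigenvalue
```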
2.3 Realizability of Universal Automata in MHLA
We also include an application of our theory on learnability and identifiability to the problem of learning a universal Turing machine (UTM) with polynomially bounded computation length. We prove such a UTM is expressible via MHLA in Lemma 2.2, and show that for certifiably identifiable data the learned MHLA generalizes to any TM and input word in Lemma 5.2.
Lemma 2.2 (UTM Expressibility).
Let $\mathcal{L}$ be the set of Turing machines and words with number of states, size of alphabet, size of input, and number of steps in the computation history all bounded. For any $(M, x) \in \mathcal{L}$, let $\mathrm{CH}_t(M, x)$ be the computation history of the UTM on $(M, x)$ after $t$ steps. Let the autoregressive computation history (see the definition in Section 5) of an MHLA with parameters $\theta$ on input $\mathrm{Embedding}(M, x)$ at step $t$ be denoted $\widehat{\mathrm{CH}}_t(\theta, M, x)$. Then there exists a set of parameters $\theta^*$ for an MHLA, with embedding dimension polynomial in the above bounds, such that for all $(M, x) \in \mathcal{L}$ and all time steps $t$, the TM computation history at time step $t$ is equivalent to the autoregressive computation history at time step $t$, i.e., $\mathrm{CH}_t(M, x) = \widehat{\mathrm{CH}}_t(\theta^*, M, x)$. Furthermore, this can be achieved with 2 bits of precision.
Our construction bears similarities to (Pérez et al., 2019; Hahn, 2020; Merrill & Sabharwal, 2023; Merrill et al., 2022; 2021; Liu et al., 2022; Feng et al., 2023); the high-level idea is to write down every letter in the computation history of $M$ on $x$. If we use orthogonal vectors to encode every letter, state, and positional embedding, we arrive at a natural construction involving a few basic primitives: copy, lookup, and if-then-else. For details see the discussion in Appendix C and the proof of Lemma C.1.
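As a toy illustration of the lookup primitive (not the paper's exact UTM construction), the snippet below stores key–value pairs as stacked one-hot vectors and shows that a single linear-attention head, with $Q$ projecting onto key coordinates and $V$ reading out value coordinates, retrieves the value associated with a queried key.

```python
import numpy as np

# Toy lookup: the first m columns store (key, value) pairs as stacked one-hot vectors;
# the last column holds a query key. The layout is an assumption for illustration.
d_k = d_v = 5
m = 4
d = d_k + d_v
keys = np.eye(d_k)[:m]                      # m mutually orthogonal keys
vals = np.eye(d_v)[1:m + 1]                 # m mutually orthogonal values
Z = np.zeros((d, m + 1))
Z[:d_k, :m] = keys.T
Z[d_k:, :m] = vals.T
j = 2
Z[:d_k, m] = keys[j]                        # query = key of entry j

# One head suffices: Q projects onto the key coordinates, V reads out the value coordinates.
Q = np.zeros((d, d))
Q[:d_k, :d_k] = np.eye(d_k)
V = np.zeros((d, d))
V[d_k:, d_k:] = np.eye(d_v)
out = V @ Z @ (Z.T @ (Q @ Z[:, -1]))        # single-head linear attention, last column
assert np.allclose(out[d_k:], vals[j])      # the stored value is retrieved
```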
We now proceed to a more detailed discussion of the main technical result Theorem 2.1.
3 Polynomial Algorithm for Learning MHLA
We first show that, given any dataset $\{(Z_i, y_i)\}_{i=1}^{n}$, there exists a learning algorithm that can recover the optimal parameters of an MHLA with a fixed latent dimension $d$, in the sense of empirical risk minimization:
$\min_{\theta \in \Theta_H}\ \frac{1}{n}\sum_{i=1}^{n}\big\|\mathrm{MHLA}_\theta(Z_i)_{:,N_i} - y_i\big\|_2^2. \qquad (8)$
Algorithm 1 solves the least-squares relaxation of this problem in the cubic feature space,
$\hat{W} \;=\; \arg\min_{W}\ \frac{1}{n}\sum_{i=1}^{n}\big\|W\,\phi(Z_i) - y_i\big\|_2^2, \qquad (9)$
and then converts $\hat{W}$ back into MHLA parameters via an SVD of (a reshaping of) $\hat{W}$:
$\hat\theta \;=\; \{(\hat{V}_r, \hat{Q}_r)\}_{r \le d^2}. \qquad (10)$
Recall Theorem 2.1 (Learnability of Linear Attention), stated in Section 2.1.
Proof Idea:
First we write down the loss, and observe that a one-layer attention network is a quadratic polynomial in the parameters applied to cubic features of the input:
$L(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n}\Big\|\sum_{h=1}^{H} V_h Z_i Z_i^\top Q_h\,(z_i)_{N_i} - y_i\Big\|_2^2. \qquad (11)$
Here the prediction depends on $\theta$ only through a single matrix:
$\Big(\sum_{h=1}^{H} V_h Z Z^\top Q_h z_N\Big)_j \;=\; \sum_{k,l,m \in [d]} (W_\theta)_{(j,k),(l,m)}\,\big(ZZ^\top\big)_{kl}\,(z_N)_m, \qquad W_\theta = \sum_{h=1}^{H} \mathrm{vec}(V_h)\,\mathrm{vec}(Q_h)^\top. \qquad (12)$
Now we relax this objective by replacing $W_\theta$ with an unconstrained matrix $W \in \mathbb{R}^{d^2 \times d^2}$. Where $W_\theta$ is a rank-$H$ matrix, we allow $W$ to be a general matrix, so this relaxation is guaranteed to have a smaller loss. Furthermore, the relaxed loss can be optimized via ordinary least squares. Finally, if we apply SVD to the optimal $\hat{W}$ we obtain a set of left and right singular vectors scaled by the square root of the magnitude of the corresponding singular value. Here the scaled left singular vectors correspond to $\mathrm{vec}(\hat{V}_r)$ and the scaled right singular vectors correspond to $\mathrm{vec}(\hat{Q}_r)$ for $r \le d^2$. Since the rank of $\hat{W}$ is no greater than $d^2$, the resulting MHLA has at most $d^2$ heads. The sample complexity follows from classical results in VC theory (Kearns & Vazirani, 1994). For the full proof see Appendix A.
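Under the same reconstruction, the following sketch shows the SVD step: any $d \times d^3$ coefficient matrix, reshaped to $d^2 \times d^2$, splits into at most $d^2$ heads that reproduce the linear predictor exactly. The helper name is hypothetical.

```python
import numpy as np

def heads_from_W(W, d, tol=1e-10):
    """Split a d x d^3 coefficient matrix into MHLA heads (V_r, Q_r) via SVD, absorbing the
    square roots of the singular values into the singular vectors (sketch of the step above)."""
    W4 = W.reshape(d, d, d, d).reshape(d * d, d * d)        # rows (j,k), columns (l,m)
    U, S, Vt = np.linalg.svd(W4, full_matrices=False)
    return [((np.sqrt(s) * u).reshape(d, d), (np.sqrt(s) * v).reshape(d, d))
            for s, u, v in zip(S, U.T, Vt) if s > tol]

# Sanity check: the recovered heads reproduce the linear predictor W phi(Z) exactly.
rng = np.random.default_rng(1)
d, N = 3, 5
W = rng.standard_normal((d, d ** 3))
Z = rng.standard_normal((d, N))
A, z_N = Z @ Z.T, Z[:, -1]
phi_Z = np.einsum('kl,m->klm', A, z_N).reshape(-1)
pred_linear = W @ phi_Z
pred_heads = sum(V @ A @ Q @ z_N for V, Q in heads_from_W(W, d))
assert np.allclose(pred_linear, pred_heads)
```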
4 Certificate for identifiability of linear attention
We begin by defining identifiability of a model class with respect to a dataset.
Definition (Identifiability).
Let $D = \{(Z_i, y_i)\}_{i=1}^{n}$ be a dataset. Let $f_\theta$ denote a model class which is a uniform circuit family parameterized by parameters $\theta \in \Theta$. Let $\ell$ be a loss function and $\Theta^*$ be the set of empirical risk minimizers:
$\Theta^* \;=\; \arg\min_{\theta \in \Theta}\ \frac{1}{n}\sum_{i=1}^{n} \ell\big(f_\theta(Z_i),\, y_i\big). \qquad (13)$
We say the model class is identifiable with respect to the dataset $D$ if for all pairs of empirical risk minimizers $\theta_1, \theta_2 \in \Theta^*$ we have that $f_{\theta_1}$ and $f_{\theta_2}$ compute the same function, i.e., they agree on all inputs (are the same uniform circuit family):
$f_{\theta_1}(Z) \;=\; f_{\theta_2}(Z) \quad \text{for all } Z. \qquad (14)$
In establishing conditions for identifiability, it will be useful to refer to another condition relating models to datasets.
Definition (Realizability).
Let $\theta^* \in \Theta$ be an MHLA parameterization. We say a dataset $\{(Z_i, y_i)\}_{i=1}^{n}$ is realizable by the parameterization $\theta^*$ if $y_i = \mathrm{MHLA}_{\theta^*}(Z_i)_{:,N_i}$ for all $i \in [n]$.
The definition of realizability can be modified to include independent noise at the expense of adding some terms to our analyses. See Lemma 4.4 for details.
Next, we prove that for the model class MHLA there is an efficiently checkable condition (certificate) of the data that guarantees the model class is identifiable with respect to the data. Our results follow by reinterpreting the results of Section 3 with a focus on data conditions that uniquely determine the optimal regressor. In this section we denote the mapping from data to feature space by $\phi$ and the mapping from parameters to feature space by $\psi$, which are analogous to the feature map and coefficient matrix of Section 3. The overall structure is the following:
1. In Lemma 4.1, we instantiate the certificate of identifiability: if the data is realizable and the second moment of the data features has minimum eigenvalue bounded away from zero, then MHLA is identifiable with respect to the data.
2. In Corollary 2, we establish that there exists a polynomial $\psi$ mapping parameter space to feature space such that $\psi(\theta_1) = \psi(\theta_2)$ if and only if $\mathrm{MHLA}_{\theta_1} = \mathrm{MHLA}_{\theta_2}$ for any pair of parameters. In other words, the mapping $\psi$ determines the equivalence class of parameters that compute the same function. We crucially use this fact in our empirical demonstration, where we measure the functional distance between models with learned parameters and ground-truth parameters by comparing their distance in feature space.
We instantiate the feature mapping $\phi$ and the parameter mapping polynomial $\psi$ as follows.
Lemma 4.1 (Certificate of Identifiability).
Let $\{(Z_i, y_i)\}_{i=1}^{n}$ be a realizable dataset. Let $\{\phi_N\}_{N \in \mathbb{N}}$ be a family of polynomials defined as follows. We index the entries of $\phi(Z)$ by taking the Kronecker product between all unordered pairs $(k, l)$ with $k \le l$ (indexing entries of $ZZ^\top$) and all $m \in [d]$ (indexing entries of the last column $z_N$). We define $\phi$ as in Algorithm 2 to be
$\phi(Z)_{(k,l,m)} \;=\; \big(Z Z^\top\big)_{kl}\,(z_N)_m, \qquad k \le l,\ m \in [d]. \qquad (15)$
Then if $\lambda_{\min}\big(\frac{1}{n}\sum_{i=1}^{n} \phi(Z_i)\phi(Z_i)^\top\big) > 0$, we have that MHLA is identifiable with respect to the data.
Next we construct a mapping that partitions parameter space into equivalence classes of parameters that compute the same function. This is akin to matching coefficients in polynomial regression. While not necessary to prove identifiability, this mapping defines a meaningful notion of “distance” between different transformer parameters by constructing a feature space in which computationally equivalent models have the same representation. We denote the $j$'th row of $\psi(\theta)$ by $\psi_j(\theta)$ and define it as follows.
Corollary 2.
Let $\psi = (\psi_1, \dots, \psi_d)$ be a collection of polynomials defined as follows. Each $\psi_j(\theta)$ is indexed by pairs $(k, l)$ with $k \le l$ together with $m \in [d]$, and defined to be
$\psi_j(\theta)_{(k,l,m)} \;=\; \sum_{h=1}^{H} \Big[(V_h)_{jk}\,(Q_h)_{lm} + \mathbb{1}[k \neq l]\,(V_h)_{jl}\,(Q_h)_{km}\Big]. \qquad (16)$
Let $\psi(\theta) = (\psi_1(\theta), \dots, \psi_d(\theta))$ be the stacked polynomial map. Then for any pair of parameters $\theta_1, \theta_2$ we have that $\mathrm{MHLA}_{\theta_1}$ and $\mathrm{MHLA}_{\theta_2}$ compute the same function if and only if $\psi(\theta_1) = \psi(\theta_2)$.
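The sketch below implements the parameter-to-feature map under the reconstruction of equation (16), together with the induced functional distance; as a usage example, the trivial rescaling symmetry $(V_h, Q_h) \mapsto (cV_h, Q_h/c)$ leaves $\psi$ unchanged.

```python
import numpy as np

def psi(params, d):
    """Parameter-to-feature map (a reconstruction of eq. (16)): symmetrized coefficients
    sum_h (V_h)_{jk}(Q_h)_{lm} + (V_h)_{jl}(Q_h)_{km} over output coordinate j and pairs k <= l."""
    W4 = sum(np.einsum('jk,lm->jklm', V, Q) for V, Q in params)   # d x d x d x d
    sym = W4 + W4.transpose(0, 2, 1, 3)                           # symmetrize in (k, l)
    sym[:, np.arange(d), np.arange(d), :] /= 2.0                  # diagonal (k == l) counted once
    iu = np.triu_indices(d)
    return sym[:, iu[0], iu[1], :].reshape(-1)

def functional_distance(params_a, params_b, d):
    """Zero iff the two parameterizations compute the same function on all inputs (Corollary 2)."""
    return float(np.linalg.norm(psi(params_a, d) - psi(params_b, d)))

# Trivial symmetry check: rescaling V by c and Q by 1/c leaves the function, hence psi, unchanged.
rng = np.random.default_rng(2)
d = 3
V, Q = rng.standard_normal((d, d)), rng.standard_normal((d, d))
assert functional_distance([(V, Q)], [(2.0 * V, 0.5 * Q)], d) < 1e-10
```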
We give an overview of a few results using our certifiable identifiability machinery:
1. Data that is drawn from independent noise is certifiably identifiable. If the data matrices $Z_i$ are drawn with each entry being standard normal noise, then MHLA is identifiable with respect to the data. The statement holds beyond standard normals, for distributions satisfying weak moment conditions. See Lemma 4.2.
2. If the model class is MHLA with at least $d^2$ heads, then even if the data is not realizable, as long as the minimum eigenvalue of $\Sigma$ is greater than zero, MHLA remains identifiable with respect to the data. See Lemma 4.3.
3. This identifiability condition is robust, and can be generalized to $\epsilon$-approximate empirical risk minimizers. See Lemma 4.4.
More formally:
Lemma 4.2 (Independent input noise yields identifiability).
Let $\{(Z_i, y_i)\}_{i=1}^{n}$ be a realizable dataset. Let each $Z_i$ be drawn from a distribution where the $(j, k)$-th entry of $Z_i$ is drawn i.i.d. from a distribution $\mathcal{Q}$ over $\mathbb{R}$ for all $j \in [d]$ and $k \in [N_i]$. Let the second and fourth moments of $\mathcal{Q}$ be denoted $m_2$ and $m_4$ respectively, and assume $m_2 > 0$ and $m_4$ is finite and non-degenerate. Then MHLA is identifiable with respect to the data. That is to say, any pair of population risk minimizers $\theta_1, \theta_2$ satisfies
$\mathrm{MHLA}_{\theta_1}(Z) \;=\; \mathrm{MHLA}_{\theta_2}(Z) \quad \text{for all } Z. \qquad (17)$
When specialized to the case of multi-head linear attention with at least $d^2$ heads, we can avoid the realizability assumption entirely. This is because the class of MHLA with an arbitrary number of heads is linear in the feature space given in Lemma 4.1.
Lemma 4.3 (Identifiability without realizability for MHLA with arbitrarily many heads).
Let $\{(Z_i, y_i)\}_{i=1}^{n}$ be any dataset drawn i.i.d. from a distribution $\mathcal{D}$. Let $\Sigma$ be defined as in Lemma 4.1. Then if $\lambda_{\min}(\Sigma) > 0$, the class of MHLA models with at least $d^2$ heads is identifiable with respect to the data. That is,
$\mathrm{MHLA}_{\theta_1}(Z) \;=\; \mathrm{MHLA}_{\theta_2}(Z) \quad \text{for all } Z, \qquad (18)$
for all pairs of empirical risk minimizers $\theta_1, \theta_2$.
We also add a quantitative version of identifiability with precise treatment of issues related to error. (For a corresponding statement of realizability with noise see Lemma B.2.)
Lemma 4.4 (Identifiability with Error).
Let $\Theta^*_\epsilon$ be the set of $\epsilon$-approximate empirical risk minimizers:
$\Theta^*_\epsilon \;=\; \Big\{\theta \;:\; \frac{1}{n}\sum_{i=1}^{n}\big\|\mathrm{MHLA}_\theta(Z_i)_{:,N_i} - y_i\big\|_2^2 \;\le\; \min_{\theta'} \frac{1}{n}\sum_{i=1}^{n}\big\|\mathrm{MHLA}_{\theta'}(Z_i)_{:,N_i} - y_i\big\|_2^2 + \epsilon\Big\}. \qquad (19)$
Then we have for any $\theta_1, \theta_2 \in \Theta^*_\epsilon$ that for all inputs $Z$,
$\big\|\mathrm{MHLA}_{\theta_1}(Z)_{:,N} - \mathrm{MHLA}_{\theta_2}(Z)_{:,N}\big\|_2^2 \;\le\; \frac{O(\epsilon)}{\lambda_{\min}(\Sigma)} \cdot \mathrm{poly}\big(d, N, \max_{j,k}|Z_{jk}|\big). \qquad (20)$
Proof of all of the above statements is given in Appendix B.
5 Application to Learning Universal Turing Machines
We apply our algorithmic and identifiability machinery to show that an important computational procedure is representable and learnable as an MHLA: namely, a restricted class of universal Turing machines (UTMs) with bounded computation history. We must first generalize our previous MHLA definition to enable multi-step computation:
Definition (Autoregressive MHLA).
Let $Z^{(0)} \in \mathbb{R}^{d \times N}$ be an input matrix. We define the iterative process of $T$-step autoregressive MHLA as follows: starting from $Z^{(0)}$, let the next token be:
$z^{(t+1)} \;=\; \mathrm{MHLA}_\theta\big(Z^{(t)}\big)_{:,N+t}, \qquad (21)$
and, for all $t \ge 0$, let $Z^{(t+1)}$ be the concatenation:
$Z^{(t+1)} \;=\; \big[\, Z^{(t)} \;\; z^{(t+1)} \,\big]. \qquad (22)$
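A minimal sketch of the rollout in equations (21)–(22), reusing the single-step last-column computation from Section 2; names are illustrative.

```python
import numpy as np

def mhla_step(Z, params):
    """Last output column of one-layer MHLA, eq. (2)."""
    return sum(V @ Z @ (Z.T @ (Q @ Z[:, -1])) for V, Q in params)

def mhla_autoregressive(Z0, params, T):
    """T-step autoregressive MHLA (eqs. (21)-(22)): append the generated column each step."""
    Z = Z0.copy()
    for _ in range(T):
        z_next = mhla_step(Z, params)
        Z = np.concatenate([Z, z_next[:, None]], axis=1)
    return Z          # the autoregressive computation history, of width N + T
```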
Next we define the computation history of an autoregressive model analogously to the computation history of a Turing machine.
Definition (Autoregressive Computation History).
We refer to $Z^{(T)}$ as the computation history of the $T$-step autoregressive MHLA. We denote the $t$-th step of the computation history as $Z^{(t)}$.
We will often use shorthand notation for the last $t$ tokens of $Z^{(t)}$. Often, the columns of the input will be the embeddings corresponding to a word in a language, in which case we will use the word and its embedding interchangeably. For a pedagogical discussion of how to map embeddings to letters in an alphabet, see Appendix D.
Although the theory derived in this paper applies to all functions expressible by MHLAs, we are particularly interested in the task of learning universal Turing machines (UTMs). Let $A$ be an alphabet. Let $S$ be a set of states that includes a start, accept, and reject state. Let $\delta$ be a transition function that takes a state and a tape symbol and maps them to a next state, an output symbol, and a head movement (left or right). Typically there is also a tape alphabet of which the input alphabet is a subset.
Definition (Accept TM).
Let $M$ be a TM. Let $A^*$ be all strings over the alphabet $A$. Then let $L(M)$ be the language $\{x \in A^* : M \text{ accepts } x\}$.
The UTM constructed in Turing’s 1936 paper recognizes the language $\{(M, x) : x \in L(M)\}$. In practice, we are most often interested in the behavior of TMs that run in polynomial time, and focus below on implementing a universal simulator for this restricted class:
Definition (Polynomially Bounded Universal Turing Machine).
In general, a UTM is a recognizer for the language $\{(M, x) : x \in L(M)\}$. That is, if $x$ is in $L(M)$, the UTM accepts; otherwise, the UTM rejects or does not halt. Let the language of interest be the set of input pairs $(M, x)$, for a TM $M$ and word $x$, such that $M$ decides $x$ in polynomial time. Here, we consider the UTM to be a polynomial-time decider for this language.
To define what it means for an autoregressive MHLA to perform the same computation as a TM, our main idea is to construct parameters for the MHLA such that it executes the computation history of TM $M$ on input $x$. Let the UTM computation history at step $t$ include the contents of the tape after $t$ transition steps of the Turing machine $M$, the current state, and the current head position. Here $n_t$ denotes the number of tokens written at timestep $t$. Then, there is a single-layer MHLA capable of simulating a UTM, as stated in Lemma 2.2 above.
We include the full proof of the existence of such parameters in the appendix. For simplicity, we adopt a naive embedding scheme that represents different letters in an alphabet as orthogonal unit vectors. This makes it easy to contrive embedding schemes that incorporate arbitrary polynomial-sized circuits, which could compute whether $x \in L(M)$. Moreover, we adopt positional encodings that are simply orthogonal unit vectors. Thus, in order to give each of $T$ tokens a unique ID, we would require $T$-dimensional positional embeddings.
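A toy sketch of this naive embedding scheme: letters and positions are encoded as orthogonal one-hot vectors and concatenated, so the embedding dimension grows linearly with the alphabet size and maximum length. The layout is an assumption for illustration.

```python
import numpy as np

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def embed_token(letter, position, alphabet_size, max_len):
    """Concatenate an orthogonal letter code with an orthogonal position code, so every
    token gets a unique ID (illustrative layout; the paper's exact layout may differ)."""
    return np.concatenate([one_hot(letter, alphabet_size), one_hot(position, max_len)])

# The embedding dimension grows linearly with the alphabet size and maximum sequence length.
word = [0, 2, 1]                                   # a word over a 3-letter alphabet
Z0 = np.stack([embed_token(a, t, alphabet_size=3, max_len=8) for t, a in enumerate(word)], axis=1)
```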
This can be combined with the learnability results above to yield a specialized result for UTMs:
Lemma 5.1 (Learning a UTM).
Let $\theta^*$ be the MHLA parameters of Lemma 2.2. Let $\{(M_i, x_i)\}_{i=1}^{n}$ be pairs of TMs and words of bounded size drawn i.i.d. from a distribution $\mathcal{D}$. For each TM/word pair, let the supervision be the $T$-step autoregressive computation history of $\mathrm{MHLA}_{\theta^*}$ on $\mathrm{Embedding}(M_i, x_i)$, and let the training set consist of the resulting prefix/next-token pairs. Then Algorithm 1 applied to this training set returns parameters $\hat\theta$ such that with probability $1 - \delta$,
$\mathbb{E}_{(M,x) \sim \mathcal{D}}\Big[\big\|\mathrm{MHLA}_{\hat\theta}(Z)_{:,N} - \mathrm{MHLA}_{\theta^*}(Z)_{:,N}\big\|_2^2\Big] \;\le\; \epsilon, \qquad (23)$
with sample complexity polynomial in the problem parameters, $1/\epsilon$, and $\log(1/\delta)$. Then with probability $1 - \delta$ over the randomness in the data, the probability over $(M, x) \sim \mathcal{D}$ that the $T$-step autoregressive computation histories of $\mathrm{MHLA}_{\hat\theta}$ and $\mathrm{MHLA}_{\theta^*}$ differ is upper bounded by
$O\big(T \cdot \mathrm{poly}(d) \cdot \sqrt{\epsilon}\big). \qquad (24)$
Finally, if the dataset is certifiably identifiable, then generalization holds out-of-distribution. For proof see Section C.2.
Lemma 5.2 (Learning UTM from Certifiably Identifiable Data).
Let $\{(Z_i, y_i)\}_{i=1}^{n}$ be a dataset realizable by $\theta^*$, the expressibility parameters of Lemma 2.2 for the set of TMs/words under consideration. If the dataset is certifiably identifiable, i.e., $\lambda_{\min}(\Sigma) > 0$, then there is a polynomial-time algorithm that outputs a set of parameters $\hat\theta$ such that for all TMs $M$ and input words $x$ in the set, we have
$\widehat{\mathrm{CH}}_t(\hat\theta, M, x) \;=\; \mathrm{CH}_t(M, x) \quad \text{for all } t. \qquad (25)$
That is, the $t$-th step of the autoregressive computation history of $\mathrm{MHLA}_{\hat\theta}$ is equal to the $t$-th step of the computation history of $M$ on $x$.
6 Experiments
The theoretical analysis establishes three key results rephrased in the context of empirical validation:
• An overparameterized family of linear attention networks can learn linear attention in polynomial time and samples (Theorem 2.1).
• In the realizable setting, there are sufficient and checkable conditions under which empirical risk minimization recovers the equivalence class of the ground-truth parameter values (Lemma 4.1).
• Linear attention networks can realize a restricted class of universal Turing machines with polynomial hidden size, using polynomially bounded computation histories for state tracking (Lemma 5.1).
In our experiments, we validate these theoretical predictions in practical settings where Transformers are trained using stochastic gradient descent (SGD), as follows:
1. One interpretation of Theorem 2.1 is that relaxing MHLA learning into an “easy” linear regression problem corresponds to adding extra attention heads, and suggests that adding extra heads might provide optimization benefits even when learning MHLA models in their native form. We investigate the role of over-parameterization in multi-head and multi-layer linear attention networks. For random data generated from linear attention networks, we observe that adding more heads achieves faster convergence of training loss than adding more layers. This suggests that while depth is important for expressiveness, the number of heads is important for optimization (Figure 1).
2. We empirically verify the certificate of identifiability provided by Lemma 4.1 on datasets for associative memory (Bietti et al., 2023; Cabannes et al., 2024) with different choices of embeddings, demonstrating convergence to the equivalence class of the true parameters when $\lambda_{\min}(\Sigma) > 0$ and convergence to spurious solutions when $\lambda_{\min}(\Sigma) \approx 0$ (Figure 2).
3. We test the practical data requirements for learning universal DFA executors, testing the polynomial complexity predictions of Lemma 5.1. We provide evidence that the sample requirement for learning DFA execution with $S$ states, alphabet size $A$, and length-$L$ words is polynomial in the relevant parameters (Figure 3).
6.1 Do extra heads help independent of the learning algorithm?
Algorithm 1 returns a $d^2$-head MHLA which competes with the optimal model on the training data. If the data is generated by a single-head linear attention, our method can be viewed as learning with an over-parameterized model. This raises the question: Are other forms of over-parameterization equally effective in learning linear attention networks? To address this, we train three types of over-parameterized models with SGD on data generated from a single-layer linear attention network: (1) multi-layer linear attention networks, (2) multi-head linear attention networks, (3) a full Transformer layer. The results are presented in Figure 1.
Experimental Setup:
We initialize a single-layer linear attention network with parameters $V$ and $Q$ sampled from a Gaussian distribution. Input sequences are likewise sampled from a Gaussian, with embedding dimension $d$, a maximum number of time steps, and a fixed dataset size. We generate outputs by running the ground-truth network autoregressively on each input sequence, creating our dataset of input–output pairs.
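A sketch of this data-generation procedure with illustrative (not the paper's) dimensions: a random single-head linear attention network is rolled out autoregressively, and each generated column is paired with the prefix that produced it.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, T, n = 4, 6, 3, 1000                     # illustrative sizes, not the paper's hyperparameters
V_star, Q_star = rng.standard_normal((d, d)), rng.standard_normal((d, d))

def rollout(Z, V, Q, steps):
    """Run the ground-truth single-head linear attention autoregressively."""
    for _ in range(steps):
        z = V @ Z @ (Z.T @ (Q @ Z[:, -1]))
        Z = np.concatenate([Z, z[:, None]], axis=1)
    return Z

dataset = []
for _ in range(n):
    Z0 = rng.standard_normal((d, N))
    traj = rollout(Z0, V_star, Q_star, T)
    # Supervise each autoregressive step with (prefix, next generated column) pairs.
    dataset += [(traj[:, : N + t], traj[:, N + t]) for t in range(T)]
```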
In addition to learning with Algorithm 1, we train three types of models on this data using SGD:
• Multi-head linear attention as in Equation 1.
• Multi-layer linear attention with a single head.
• An ordinary Transformer network (Vaswani et al., 2017) with softmax attention, multi-layer perceptron blocks, and layer normalization.
For detailed hyperparameters and optimization procedures, please refer to Section D.1.
Multi-head attention outperforms both multi-layer attention and the full transformer:
We observe that multi-head attention scales effectively with an increasing number of heads, resulting in improved performance. Notably, at the input dimension used in this experiment, the largest number of heads (Figure 1(a)) yields the best performance and is comparable to Algorithm 1, approaching floating-point error precision. In contrast, multi-layer attention models show diminishing returns and perform worse than single-layer attention. Interestingly, adding more layers can sometimes degrade performance. The full transformer model, which incorporates softmax attention, MLP layers, and layer normalization, does not significantly outperform the single-layer linear attention model on this task.
These findings suggest that the type of over-parameterization matters significantly in learning linear attention networks. Multi-head architectures appear to be particularly effective, in line with the structure of Algorithm 1.
6.2 Does certifiable identifiability predict generalization?
In Lemma 4.1, we developed a certificate that provides a sufficient condition for identifiability. However, it gives a sufficient, not necessary, condition for generalization. To assess the practical relevance of this certificate, we conducted an empirical analysis of convergence in cases where the condition is not satisfied. The results of this analysis are presented in Figure 2.
Associative Memory
Associative Memory (Bietti et al., 2023; Cabannes et al., 2024) is the task of looking up a value in a table with a query. As a single-head, one-layer linear attention it can be represented with ground-truth parameters $(V^*, Q^*)$, where $Q^*$ matches the query against the stored keys and $V^*$ reads out the associated value.
The data is drawn as follows: let $k_1, \dots, k_m$ be random variables corresponding to keys in a lookup table, let $v_1, \dots, v_m$ be random variables corresponding to values in the lookup table, let $q$ be a random variable corresponding to a query to the lookup table, and let $\xi$ be some random noise, with:
$Z \;=\; \big[\, (k_1; v_1),\ \dots,\ (k_m; v_m),\ (q + \xi;\, 0) \,\big], \qquad (26)$
where $(a; b)$ denotes vertical stacking. We set the output vector to be
$y \;=\; (0;\, v_j), \quad \text{where the query } q \text{ is a noisy copy of } k_j. \qquad (27)$
Mixture of distributions:
We generate two datasets: one that is identifiable, with $\lambda_{\min}(\Sigma) > 0$, and one that is nonidentifiable, with $\lambda_{\min}(\Sigma) \approx 0$. The identifiable dataset is generated with keys and values drawn i.i.d. from a Gaussian. The query is chosen to be one of the keys uniformly at random. The unidentifiable dataset is drawn such that the keys form a random unitary matrix, i.e., $k_i^\top k_j = 0$ for all $i \neq j$ and $\|k_i\| = 1$ for all $i$. Similarly, the values are also drawn from a randomly generated unitary matrix. We draw new random unitary matrices for each datapoint, where the query is again chosen to be one of the keys uniformly at random. We use the same dimensions and number of samples for both datasets. We mix the two datasets together with a mixing probability ranging from 95% unidentifiable to 100% unidentifiable. In this manner we generate a spread of datasets with different values of $\lambda_{\min}(\Sigma)$ that tend to zero.
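A sketch of the two distributions under the layout assumed in this reconstruction (stacked key/value columns, a noisy query in the last column, and the matching value as the target); the paper's exact dimensions and noise level may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d_kv, m = 8, 4                                   # key/value dimension and table size (illustrative)
d = 2 * d_kv

def make_example(identifiable):
    if identifiable:
        K = rng.standard_normal((m, d_kv))       # i.i.d. Gaussian keys and values
        Vl = rng.standard_normal((m, d_kv))
    else:
        K = np.linalg.qr(rng.standard_normal((d_kv, d_kv)))[0][:m]   # rows of a random unitary
        Vl = np.linalg.qr(rng.standard_normal((d_kv, d_kv)))[0][:m]
    j = rng.integers(m)
    q = K[j] + 0.01 * rng.standard_normal(d_kv)  # noisy query for one of the stored keys
    Z = np.zeros((d, m + 1))
    Z[:d_kv, :m], Z[d_kv:, :m] = K.T, Vl.T
    Z[:d_kv, m] = q
    return Z, np.concatenate([np.zeros(d_kv), Vl[j]])   # target: read out the matching value

mix = 0.95                                        # fraction of unidentifiable examples
data = [make_example(identifiable=(rng.random() > mix)) for _ in range(2000)]
```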
Certifiable Identifiability for Algorithm 1:
For each dataset, we run Algorithm 1, which returns parameters $\hat\theta$. We compare $\hat\theta$ to the ground truth $\theta^*$ in feature space via the distance metric
$\mathrm{dist}(\hat\theta, \theta^*) \;=\; \big\|\psi(\hat\theta) - \psi(\theta^*)\big\|_2. \qquad (28)$
Here, $\psi$ is the polynomial given in Lemma 4.1 (see also Corollary 2). Recall from Corollary 2 that $\psi$ defines the equivalence class of parameters that compute the same function, i.e., $\psi(\theta_1) = \psi(\theta_2)$ if and only if $\mathrm{MHLA}_{\theta_1} = \mathrm{MHLA}_{\theta_2}$. On each dataset, we measure the certificate value $\lambda_{\min}(\Sigma)$ on the x-axis vs. $\mathrm{dist}(\hat\theta, \theta^*)$ on the y-axis. In Figure 2, we see that as the certificate value increases, the distance decreases, indicating that $\hat\theta$ and $\theta^*$ compute the same function.
Certifiable Identifiability for MHLA:
Our notion of certifiable identifiability in Lemma 4.1 applies to any empirical risk minimizer. Therefore, it applies to popular optimizers like SGD and Adam if they achieve the minimum of the loss, which in our synthetic case is equal to zero. In Figure 2(b), we train MHLA models via SGD with varying numbers of heads. For identifiable data, with minimum eigenvalue bounded away from zero, we see that the learned parameters and ground-truth parameters are the same in feature space. However, for unidentifiable data, with minimum eigenvalue near zero, learned parameters and ground-truth parameters are far apart in feature space and therefore compute different functions.
6.3 Learning the Computation History of Deterministic Finite Automata
Universal automata (like the universal Turing machine discussed in Section C.2) receive descriptions of other automata as input, and simulate them to produce an output. Here we empirically evaluate the ability of MHLA models to perform universal simulation of deterministic finite automata (DFAs). We limit our study to DFAs with a bounded number of states ($S$), alphabet size ($A$), and input length ($L$). While recent work on in-context learning (Akyürek et al., 2024) has focused on inferring DFA behavior from input–output examples, here we aim to simulate DFAs given explicit descriptions of their state transitions as input—a task somewhat analogous to instruction following in large-scale language models.
The construction in Lemma 5.1 shows that a linear attention layer can output the polynomially bounded computation history of any TM (and therefore any DFA). Our construction requires embedding size linear in the maximum length of the computation history, the number of states, and the alphabet size. Therefore, we predict that the data requirements are polynomial in each of $S$, $A$, and $L$.
Dataset
Our dataset consists of strings containing three components: the input DFA's transition function $\delta$, the input word $x$, and the computation history, which is the sequence of states visited by the DFA as it decides whether $x$ is in its language. The first two components are the input to the model, while the computation history is the target output. We adopt the following schema for representing $\delta$ and $x$:
We encode each input–output relation in the transition function as a sequence of three tokens $(q, a, q')$ where $\delta(q, a) = q'$. We also include two parentheses to separate each triplet of tokens, for a total of five tokens per input–output relation. The total description length of $\delta$ is then $5\,S\,A$ tokens. We encode a word $x$ of length $L$ as a sequence of $L$ tokens. Finally, we encode the computation history as the sequence of state transitions the DFA visits when deciding whether $x$ is in its language. Here we designate a distinguished start state. Each state transition is again represented by a triplet $(q, a, q')$. We train an autoregressive Transformer model using cross-entropy loss to predict the computation history tokens given the transition function and word. Please refer to Section D.2 for hyperparameter details.
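A small sketch of this serialization; the token spellings and helper names are illustrative assumptions, not the paper's exact vocabulary.

```python
# Sketch of the DFA-executor data schema described above: transitions as parenthesized
# (state, letter, next_state) triplets, then the word, then the history of visited transitions.
def encode_example(delta, word, start_state=0):
    """delta: dict mapping (state, letter) -> next_state."""
    toks = []
    for (q, a), q2 in sorted(delta.items()):
        toks += ["(", f"q{q}", f"a{a}", f"q{q2}", ")"]        # 5 tokens per transition
    toks += [f"a{a}" for a in word]
    history, q = [], start_state
    for a in word:
        q2 = delta[(q, a)]
        history += [f"q{q}", f"a{a}", f"q{q2}"]               # visited transition triplets
        q = q2
    return toks, history                                       # model input, target output

# A 2-state parity DFA over {0, 1}: tracks whether the number of 1s seen so far is odd.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
inp, target = encode_example(delta, [1, 0, 1])
```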
Results
In Figure 3, we vary each of the parameters $S$, $A$, and $L$, while the other two parameters are fixed to a constant (in this case we fix them to be 4). Then, on the vertical axis, we display the minimum number of tokens (number of examples times the word length) required to reach 99% accuracy on next-token prediction. The plots are suggestive of a sub-exponential dependence on DFA complexity.
7 Related Work
7.1 Formal Expressivity of Transformers
A large body of work has tackled the problem of quantifying what algorithmic tasks a Transformer can perform, in terms of various kinds of circuit families (Pérez et al., 2019; Edelman et al., 2022b; Hahn, 2020; Merrill & Sabharwal, 2023; Merrill et al., 2022; 2021; Liu et al., 2022; Feng et al., 2023). In particular, researchers have studied how Transformers can realize specific DSLs (Weiss et al., 2021), logic expressions (Dong et al., 2019; Barceló et al., 2020; 2024), Turing machines (Dehghani et al., 2018; Giannou et al., 2023; Pérez et al., 2021), formal language recognition (Hao et al., 2022; Chiang et al., 2023), as well as automata and universal Turing machines (Liu et al., 2022; Li et al., 2024). However, while these works primarily focus on determining the types of problems whose solutions a Transformer can express, they often overlook the crucial question of how these solutions can be learned from data. Moreover, there is limited discussion on the sufficiency of the dataset itself—whether the data available can identify the underlying “true” function or algorithm that we aim to capture.
7.2 Learning Transformers
We break down the literature on learning transformers. First, there is the literature on statistical learnability, where the focus is on the amount of data required to learn without considering whether there is a tractable algorithm for learning (Edelman et al., 2022a; Wei et al., 2021; Zhang et al., 2024; Trauger & Tewari, 2023).
Second, there are learnability results for single head transformers for data distributions under a variety of assumptions. In particular, Zhang et al. (2023) provide learnability results for in-context linear regression; Jelassi et al. (2022) show that data with spatial structure can be learned; the work of Tian et al. (2023) analyzes SGD training dynamics for a toy model for data; and the work of Oymak et al. (2023) studies the prompt attention model.
Third, the literature on provable guarantees for learning multi-head attention is rather sparse. Fu et al. (2023) give learnability results in a regime where attention matrices are fixed and only the projection matrices are trained. The work of Tarzanagh et al. (2024) shows connections between single-layer attention optimization and SVM learning. Under a good gradient initialization, an overparameterization condition, and a condition on the scores of optimal tokens, global convergence of gradient descent to a particular SVM solution can be established. Deora et al. (2023) analyze a setting of learning multi-head attention with gradient descent under their Assumption 2. In the words of the authors, “these conditions are related to the realizability condition, which guarantees obtaining small training error near initialization”, which they instantiate with the separability of the data in an NTK space and a proximity of initialization to realizable parameters. Interestingly, they find that multi-head attention has benign optimization properties. Finally, Chen & Li (2024) study learning for multi-head attention with well-structured data drawn from independent Bernoulli or Gaussian distributions. They provide an extensive discussion of lower bounds for learning multi-head attention.
Acknowledgments
We gratefully acknowledge support from NSF grants IIS-2214177, IIS-2238240, CCF-2112665 and DMS-2134108; from AFOSR grant FA9550-22-1-0249; from ONR MURI grant N00014-22-1-2740; and from ARO grant W911NF-23-1-0034; from the OpenPhilanthropy Foundation; from MIT Quest for Intelligence; from the MIT-IBM Watson AI Lab; from ONR Science of AI; from Simons Center for the Social Brain; and from an Alexander von Humboldt fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.
References
- Ahn et al. (2024) Kwangjun Ahn, Xiang Cheng, Minhak Song, Chulhee Yun, Ali Jadbabaie, and Suvrit Sra. Linear attention is (maybe) all you need (to understand transformer optimization), 2024. URL https://arxiv.org/abs/2310.01082.
- Akyürek et al. (2024) Ekin Akyürek, Bailin Wang, Yoon Kim, and Jacob Andreas. In-context language learning: Architectures and algorithms, 2024. URL https://arxiv.org/abs/2401.12973.
- Barceló et al. (2020) Pablo Barceló, Egor V Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan-Pablo Silva. The logical expressiveness of graph neural networks. In ICLR, 2020.
- Barceló et al. (2024) Pablo Barceló, Alexander Kozachinskiy, Anthony Widjaja Lin, and Vladimir Podolskii. Logical languages accepted by transformer encoders with hard attention. 2024.
- Bietti et al. (2023) Alberto Bietti, Vivien Cabannes, Diane Bouchacourt, Herve Jegou, and Leon Bottou. Birth of a transformer: A memory viewpoint, 2023. URL https://arxiv.org/abs/2306.00802.
- Cabannes et al. (2024) Vivien Cabannes, Berfin Simsek, and Alberto Bietti. Learning associative memories with gradient descent, 2024. URL https://arxiv.org/abs/2402.18724.
- Chen & Li (2024) Sitan Chen and Yuanzhi Li. Provably learning a multi-head attention layer, 2024. URL https://arxiv.org/abs/2402.04084.
- Chiang et al. (2023) David Chiang, Peter Cholak, and Anand Pillay. Tighter bounds on the expressivity of transformer encoders. arXiv preprint arXiv:2301.10743, 2023.
- Dehghani et al. (2018) Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. Universal transformers. arXiv preprint arXiv:1807.03819, 2018.
- Deora et al. (2023) Puneesh Deora, Rouzbeh Ghaderi, Hossein Taheri, and Christos Thrampoulidis. On the optimization and generalization of multi-head attention, 2023. URL https://arxiv.org/abs/2310.12680.
- Dong et al. (2019) Honghua Dong, Jiayuan Mao, Tian Lin, Chong Wang, Lihong Li, and Denny Zhou. Neural logic machines. In ICLR, 2019.
- Edelman et al. (2022a) Benjamin L. Edelman, Surbhi Goel, Sham Kakade, and Cyril Zhang. Inductive biases and variable creation in self-attention mechanisms, 2022a. URL https://arxiv.org/abs/2110.10090.
- Edelman et al. (2022b) Benjamin L Edelman, Surbhi Goel, Sham Kakade, and Cyril Zhang. Inductive biases and variable creation in self-attention mechanisms. In International Conference on Machine Learning, pp. 5793–5831. PMLR, 2022b.
- Feng et al. (2023) Guhao Feng, Yuntian Gu, Bohang Zhang, Haotian Ye, Di He, and Liwei Wang. Towards revealing the mystery behind chain of thought: a theoretical perspective. arXiv preprint arXiv:2305.15408, 2023.
- Fu et al. (2023) Hengyu Fu, Tianyu Guo, Yu Bai, and Song Mei. What can a single attention layer learn? a study through the random features lens, 2023. URL https://arxiv.org/abs/2307.11353.
- Giannou et al. (2023) Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D Lee, and Dimitris Papailiopoulos. Looped transformers as programmable computers. arXiv preprint arXiv:2301.13196, 2023.
- Hahn (2020) Michael Hahn. Theoretical limitations of self-attention in neural sequence models. Transactions of the Association for Computational Linguistics, 8:156–171, 2020.
- Hao et al. (2022) Yiding Hao, Dana Angluin, and Robert Frank. Formal language recognition by hard attention transformers: Perspectives from circuit complexity. Transactions of the Association for Computational Linguistics, 10:800–810, 2022.
- Jelassi et al. (2022) Samy Jelassi, Michael E. Sander, and Yuanzhi Li. Vision transformers provably learn spatial structure, 2022. URL https://arxiv.org/abs/2210.09221.
- Katharopoulos et al. (2020) Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention, 2020. URL https://arxiv.org/abs/2006.16236.
- Kearns & Vazirani (1994) Michael J. Kearns and Umesh Vazirani. An Introduction to Computational Learning Theory. The MIT Press, 08 1994. ISBN 9780262276863. doi: 10.7551/mitpress/3897.001.0001. URL https://doi.org/10.7551/mitpress/3897.001.0001.
- Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Li et al. (2024) Zhiyuan Li, Hong Liu, Denny Zhou, and Tengyu Ma. Chain of thought empowers transformers to solve inherently serial problems. arXiv preprint arXiv:2402.12875, 2024.
- Liu et al. (2022) Bingbin Liu, Jordan T Ash, Surbhi Goel, Akshay Krishnamurthy, and Cyril Zhang. Transformers learn shortcuts to automata. arXiv preprint arXiv:2210.10749, 2022.
- Loshchilov & Hutter (2018) Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in adam, 2018. URL https://openreview.net/forum?id=rk6qdGgCZ.
- Luo et al. (2023) Zhezheng Luo, Jiayuan Mao, Joshua B Tenenbaum, and Leslie Pack Kaelbling. On the expressiveness and generalization of hypergraph neural networks. In Learning on Graphs Conference, 2023.
- Merrill & Sabharwal (2023) William Merrill and Ashish Sabharwal. The parallelism tradeoff: Limitations of log-precision transformers. Transactions of the Association for Computational Linguistics, 11:531–545, 2023.
- Merrill et al. (2021) William Merrill, Yoav Goldberg, and Noah A Smith. On the power of saturated transformers: A view from circuit complexity. arXiv preprint arXiv:2106.16213, 2021.
- Merrill et al. (2022) William Merrill, Ashish Sabharwal, and Noah A Smith. Saturated transformers are constant-depth threshold circuits. Transactions of the Association for Computational Linguistics, 10:843–856, 2022.
- Oymak et al. (2023) Samet Oymak, Ankit Singh Rawat, Mahdi Soltanolkotabi, and Christos Thrampoulidis. On the role of attention in prompt-tuning, 2023. URL https://arxiv.org/abs/2306.03435.
- Pérez et al. (2019) Jorge Pérez, Javier Marinković, and Pablo Barceló. On the turing completeness of modern neural network architectures. In ICLR, 2019.
- Pérez et al. (2021) Jorge Pérez, Pablo Barceló, and Javier Marinkovic. Attention is turing complete. The Journal of Machine Learning Research, 22(1):3463–3497, 2021.
- Strobl et al. (2024) Lena Strobl, William Merrill, Gail Weiss, David Chiang, and Dana Angluin. What formal languages can transformers express? a survey. Transactions of the Association for Computational Linguistics, 12:543–561, 2024.
- Tarzanagh et al. (2024) Davoud Ataee Tarzanagh, Yingcong Li, Christos Thrampoulidis, and Samet Oymak. Transformers as support vector machines, 2024. URL https://arxiv.org/abs/2308.16898.
- Tian et al. (2023) Yuandong Tian, Yiping Wang, Beidi Chen, and Simon Du. Scan and snap: Understanding training dynamics and token composition in 1-layer transformer, 2023. URL https://arxiv.org/abs/2305.16380.
- Touvron et al. (2023) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
- Trauger & Tewari (2023) Jacob Trauger and Ambuj Tewari. Sequence length independent norm-based generalization bounds for transformers, 2023. URL https://arxiv.org/abs/2310.13088.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
- Wei et al. (2021) Colin Wei, Yining Chen, and Tengyu Ma. Statistically meaningful approximation: a case study on approximating turing machines with transformers. CoRR, abs/2107.13163, 2021. URL https://arxiv.org/abs/2107.13163.
- Weiss et al. (2021) Gail Weiss, Yoav Goldberg, and Eran Yahav. Thinking like transformers. In International Conference on Machine Learning, pp. 11080–11090. PMLR, 2021.
- Zhang et al. (2023) Ruiqi Zhang, Spencer Frei, and Peter L. Bartlett. Trained transformers learn linear models in-context, 2023. URL https://arxiv.org/abs/2306.09927.
- Zhang et al. (2024) Yufeng Zhang, Boyi Liu, Qi Cai, Lingxiao Wang, and Zhaoran Wang. An analysis of attention via the lens of exchangeability and latent variable models, 2024. URL https://arxiv.org/abs/2212.14852.
Appendix A Proof of Main Theorem
We restate and prove Theorem 2.1 (Learnability of Linear Attention).
Proof.
First we write down the loss.
$L(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n}\Big\|\sum_{h=1}^{H} V_h Z_i Z_i^\top Q_h\,(z_i)_{N_i} - y_i\Big\|_2^2. \qquad (29)$
Observe that the one-layer attention network is a quadratic polynomial in the parameters, and its prediction depends on them only through a single matrix:
$\Big(\sum_{h=1}^{H} V_h Z Z^\top Q_h z_N\Big)_j \;=\; \sum_{k,l,m \in [d]} (W_\theta)_{(j,k),(l,m)}\,\big(ZZ^\top\big)_{kl}\,(z_N)_m. \qquad (30)$
Here
$W_\theta \;=\; \sum_{h=1}^{H} \mathrm{vec}(V_h)\,\mathrm{vec}(Q_h)^\top \;\in\; \mathbb{R}^{d^2 \times d^2}. \qquad (31)$
Now we relax the objective by replacing $W_\theta$ with an unconstrained matrix $W$. Another way to put it is that $W_\theta$ is rank-$H$, but $W$ can be a general matrix. Because the space of general matrices is larger, we have written down a relaxation guaranteed to have a smaller loss. Furthermore, the relaxed loss can be optimized via ordinary least squares:
$\hat{W} \;=\; \arg\min_{W}\ \frac{1}{n}\sum_{i=1}^{n}\big\|g_W(Z_i) - y_i\big\|_2^2, \qquad (32)$
where $g_W(Z)$ denotes the linear predictor with coefficient matrix $W$ as in equation (30). Thus the optimum of the regression with respect to the data achieves the optimum of the loss up to error $\epsilon$ in time polynomial in $n$, $d$, and $1/\epsilon$. The sample complexity to achieve error $\epsilon$ is then polynomial in $d$, $1/\epsilon$, and $\log(1/\delta)$, with probability $1 - \delta$ over the data distribution. Furthermore, if we take the SVD of $\hat{W}$, absorbing the square roots of the singular values into the left and right singular vectors, we have $\hat{W} = \sum_{r} u_r v_r^\top$ with at most $d^2$ terms. Let $\mathrm{vec}(\hat{V}_r) = u_r$ and $\mathrm{vec}(\hat{Q}_r) = v_r$; then
$\mathrm{MHLA}_{\hat\theta}(Z)_{:,N} \;=\; \sum_{r \le d^2} \hat{V}_r Z Z^\top \hat{Q}_r z_N \;=\; g_{\hat{W}}(Z), \qquad (33)$
as desired. ∎
Appendix B Proofs from Identifiability Section
First, we start with a general lemma (Lemma B.1) which states a sufficient condition for identifiability of any model class that can be written as an inner product of a polynomial $\psi$ of the parameters with a polynomial feature mapping $\phi$ of the data. If the data is realizable by the model class and the second moment of the features is full rank, then the model class is identifiable with respect to the data.
The following is the certificate of identifiability written in an abstract form involving polynomials $\psi$ to map parameters to feature space and polynomials $\phi$ to map data to feature space. The proof does not require the model to be an MHLA, but we state it in MHLA terms for the sake of concreteness.
Lemma B.1 (General Certificate of Identifiability).
Let $\{(Z_i, y_i)\}_{i=1}^{n}$ be a dataset realizable by $\theta^*$. Let $\psi = (\psi_1, \dots, \psi_d)$ be a collection of polynomials mapping the parameters to a feature space of fixed dimension $D$. Let $\{\phi_N\}$ be a uniform family of polynomials mapping inputs to the same feature space. Let $\psi$ and $\phi$ satisfy
$f_\theta(Z)_j \;=\; \big\langle \psi_j(\theta),\ \phi(Z) \big\rangle \qquad (34)$
for all $\theta$, all inputs $Z$, and all $j \in [d]$. Then if $\lambda_{\min}\big(\frac{1}{n}\sum_{i=1}^{n}\phi(Z_i)\phi(Z_i)^\top\big) > 0$, we have
$f_{\theta_1}(Z) \;=\; f_{\theta_2}(Z) \quad \text{for all } Z \qquad (35)$
for all empirical risk minimizers $\theta_1, \theta_2$. That is, all empirical risk minimizers compute the same function.
Proof.
We construct a map $\psi$ such that $\psi(\theta_1) = \psi(\theta_2)$ if and only if $f_{\theta_1} = f_{\theta_2}$. Then we show that any empirical risk minimizer $\hat\theta$ and the ground truth $\theta^*$ satisfy $\psi(\hat\theta) = \psi(\theta^*)$.
In more detail, we construct polynomials $\psi$ and a family of polynomials $\phi$ such that
$f_\theta(Z)_j \;=\; \big\langle \psi_j(\theta),\ \phi(Z) \big\rangle. \qquad (36)$
We construct a linear model class $g_w$ that takes as parameters a vector $w = (w_1, \dots, w_d)$ in feature space and data features $\phi(Z)$, such that
$g_w(Z)_j \;=\; \big\langle w_j,\ \phi(Z) \big\rangle. \qquad (37)$
Let $w^*$ be defined as
$w^* \;=\; \psi(\theta^*). \qquad (38)$
Let $\hat{w}$ be defined as
$\hat{w} \;=\; \arg\min_{w}\ \frac{1}{n}\sum_{i=1}^{n}\big\|g_w(Z_i) - y_i\big\|_2^2. \qquad (39)$
Observe that for every empirical risk minimizer $\theta$ of the original class, $\psi(\theta)$ is an empirical risk minimizer of the linear class. Here we use the fact that the data is realizable by the ground truth $\theta^*$. Therefore, if we show that $\hat{w}$ is unique, i.e., the set of linear minimizers is comprised of a single element, then $\psi(\theta)$ is also unique. Therefore, $f_\theta$ is the same function for any empirical risk minimizer $\theta$.
To show $\hat{w}$ is unique, all we need is that the second moment of the features is positive definite (the covariance has a minimum eigenvalue bounded away from zero). ∎
Next we prove the main certifiable identifiability lemma by instantiating the polynomials $\psi$ and $\phi$ from Lemma B.1. (See Lemma 4.1.)
Proof.
First we construct a polynomial $\psi$ and a feature map $\phi$ such that
$\big(\mathrm{MHLA}_\theta(Z)_{:,N}\big)_j \;=\; \big\langle \psi_j(\theta),\ \phi(Z) \big\rangle. \qquad (40)$
We begin by rewriting the last output column. We index the entries of $\phi(Z)$ by all pairs $(k, l)$ with $k \le l$ and all $m \in [d]$:
$\big(\mathrm{MHLA}_\theta(Z)_{:,N}\big)_j \;=\; \sum_{k,l,m \in [d]} \Big(\sum_{h=1}^{H} (V_h)_{jk}\,(Q_h)_{lm}\Big)\big(ZZ^\top\big)_{kl}\,(z_N)_m. \qquad (41)$
We define the entries of $\phi(Z)$ as follows:
$\phi(Z)_{(k,l,m)} \;=\; \big(ZZ^\top\big)_{kl}\,(z_N)_m, \qquad k \le l,\ m \in [d]. \qquad (42)$
Similarly, we define $\psi_j(\theta)$ to be the following features, split into pairs ($k < l$) and singles ($k = l$):
$\psi_j(\theta)_{(k,l,m)} \;=\; \sum_{h=1}^{H} \Big[(V_h)_{jk}\,(Q_h)_{lm} + (V_h)_{jl}\,(Q_h)_{km}\Big], \qquad k < l, \qquad (43)$
and
$\psi_j(\theta)_{(k,k,m)} \;=\; \sum_{h=1}^{H} (V_h)_{jk}\,(Q_h)_{km}. \qquad (44)$
Thus we rewrite the $j$-th coordinate of the prediction as
$\big(\mathrm{MHLA}_\theta(Z)_{:,N}\big)_j \;=\; \big\langle \psi_j(\theta),\ \phi(Z) \big\rangle. \qquad (45)$
Here we use the symmetry of $ZZ^\top$ to collect the coefficients over unordered pairs. We have constructed a polynomial $\psi$ such that any $\theta_1, \theta_2$ in the same equivalence class, i.e., with $\psi(\theta_1) = \psi(\theta_2)$, compute the same function. Furthermore, if $\lambda_{\min}\big(\frac{1}{n}\sum_i \phi(Z_i)\phi(Z_i)^\top\big) > 0$, then OLS over the feature space returns a unique solution for the coefficients. Since the data is realizable, every empirical risk minimizer has coefficients equal to that unique solution, and we conclude $\mathrm{MHLA}_{\theta_1}(Z) = \mathrm{MHLA}_{\theta_2}(Z)$ for all $Z$ and all empirical risk minimizers $\theta_1, \theta_2$. ∎
Next we present the proof that realizability is not necessary to identify the function learned by MHLA with at least $d^2$ heads. (See Lemma 4.3.)
Proof.
We know from the main algorithmic result (Theorem 2.1) that there exists a surjective map that takes the parameters of MHLAs with at least $d^2$ heads onto the space of coefficient matrices $W$. This implies that for every $W$ there exists a right inverse, given by SVD, mapping $W$ back to a set of MHLA parameters. Therefore, optimizing over coefficient matrices does no better than optimizing over MHLA parameters. To prove this, suppose for contradiction that there exists a $W$ whose empirical risk is achieved by no MHLA parameterization. However, the SVD preimage of $W$ is such a parameterization, and we have a contradiction. The key point is that we avoid the assumption of realizability and replace it with surjectivity of the polynomials $\psi$. ∎
Finally we prove that data drawn from independent noise is certifiably identifiable. A subtlety in the proof is that we use a somewhat different set of polynomials than Lemma 4.1, as we center and normalize our features, which still satisfies the assumptions of the general certificate Lemma B.1. (See Lemma 4.2.)
Proof.
We give the entries of the feature map the following naming convention: features whose quadratic factor comes from an off-diagonal entry of $ZZ^\top$ are referred to as 'pairs', and features whose quadratic factor comes from a diagonal entry are referred to as 'singles':
(46)
We give entries of the second moment matrix $\Sigma$ between two single features the name 'singles to singles':
(47)
For the case of $Z$ drawn with each entry i.i.d., we can proceed via casework on the blocks of $\Sigma$.
Case 1: Pairs to Pairs.
1. Subcase 1: (48)
2. Subcase 2: (49)
Case 2: Singles to Singles.
1. Subcase 1: (50)
2. Subcase 2: (51)
Case 3: Singles to Pairs.
1. Subcase 1: (52)
Finally, for the remaining feature, the corresponding entries of $\Sigma$ are nonzero on the main diagonal and near zero everywhere else.
Therefore we have concluded that $\Sigma$ is approximately a block-diagonal matrix, because the cross blocks are near zero. All that remains is to verify that the diagonal blocks are full rank.
1. Pairs to Pairs: the block is full rank, with minimum eigenvalue bounded away from zero in terms of the second moment of the entry distribution.
2. Singles to Singles: the block is full rank, with minimum eigenvalue bounded away from zero in terms of the second and fourth moments of the entry distribution.
∎
Finally we provide a simple error bound for approximate empirical risk minimizers to demonstrate the robustness of the conclusions in Lemma 4.1. (See Lemma 4.4.)
Proof.
$\big\|\mathrm{MHLA}_{\theta_1}(Z)_{:,N} - \mathrm{MHLA}_{\theta_2}(Z)_{:,N}\big\|_2 \;=\; \big\|\big\langle \psi(\theta_1) - \psi(\theta_2),\ \phi(Z)\big\rangle\big\|_2 \;\le\; \big\|\psi(\theta_1) - \psi(\theta_2)\big\|_2\,\big\|\phi(Z)\big\|_2 \;\le\; \big\|\psi(\theta_1) - \psi(\theta_2)\big\|_2 \cdot \mathrm{poly}\big(d, N, \max_{j,k}|Z_{jk}|\big). \qquad (53)$
Here the first equality follows from the linearization exhibited in Lemma B.1. The first inequality is Cauchy–Schwarz. In the second inequality we apply the crude upper bound that the entries of $\phi(Z)$ are polynomials of degree at most three in the entries of $Z$, so that $\|\phi(Z)\|_2^2$ involves only sixth-degree products of entries of $Z$. Next,
$\big\|\psi(\theta_1) - \psi(\theta_2)\big\|_2^2 \;\le\; \frac{1}{\lambda_{\min}(\Sigma)}\,\big(\psi(\theta_1) - \psi(\theta_2)\big)^\top \Sigma\,\big(\psi(\theta_1) - \psi(\theta_2)\big). \qquad (54)$
The last quantity is controlled by the fact that $\theta_1, \theta_2$ are $\epsilon$-approximate empirical risk minimizers. Therefore we know
$\big(\psi(\theta_1) - \psi(\theta_2)\big)^\top \Sigma\,\big(\psi(\theta_1) - \psi(\theta_2)\big) \;=\; \frac{1}{n}\sum_{i=1}^{n}\big\|\big\langle \psi(\theta_1) - \psi(\theta_2),\ \phi(Z_i)\big\rangle\big\|_2^2 \;\le\; O(\epsilon), \qquad (55)$
which implies
$\big\|\psi(\theta_1) - \psi(\theta_2)\big\|_2^2 \;\le\; \frac{O(\epsilon)}{\lambda_{\min}(\Sigma)}, \qquad (56)$
which concludes the proof. ∎
Lemma B.2 (Identifiability with Error and Noise in Realizability).
Let $\{(Z_i, y_i)\}_{i=1}^{n}$ be a dataset such that $y_i = \mathrm{MHLA}_{\theta^*}(Z_i)_{:,N_i} + \xi_i$ for i.i.d. and bounded noise $\xi_i$. Let $\Theta^*_\epsilon$ be the set of $\epsilon$-approximate empirical risk minimizers:
$\Theta^*_\epsilon \;=\; \Big\{\theta \;:\; \frac{1}{n}\sum_{i=1}^{n}\big\|\mathrm{MHLA}_\theta(Z_i)_{:,N_i} - y_i\big\|_2^2 \;\le\; \min_{\theta'} \frac{1}{n}\sum_{i=1}^{n}\big\|\mathrm{MHLA}_{\theta'}(Z_i)_{:,N_i} - y_i\big\|_2^2 + \epsilon\Big\}. \qquad (57)$
Let $\sigma^2$ denote the variance of the noise. Then we have for any $\theta_1, \theta_2 \in \Theta^*_\epsilon$ that for all inputs $Z$,
$\big\|\mathrm{MHLA}_{\theta_1}(Z)_{:,N} - \mathrm{MHLA}_{\theta_2}(Z)_{:,N}\big\|_2^2 \;\le\; \frac{O(\epsilon + \sigma^2)}{\lambda_{\min}(\Sigma)} \cdot \mathrm{poly}\big(d, N, \max_{j,k}|Z_{jk}|\big). \qquad (58)$
Proof.
The proof follows directly from Lemma 4.4, but we incorporate the noise terms as is standard in analyses of linear regression. ∎
Appendix C Programs Expressible as Fixed Depth Linear Transformer
In this section we build out examples of programs that can be expressed as fixed-depth linear Transformers. Expressibility results can be carried out in a variety of equivalent ways. The main takeaway is that the computation history of a TM $M$ on a word $x$, when written down step by step, can be captured by next-token prediction with linear attention. This is because the key-query-value structure naturally implements a table lookup, sometimes referred to as “associative memory” or, in the linear case, “in-context linear regression”.
The notion of an Autoregressive MHLA Program is useful for condensing the proofs of expressibility. We write such programs in an object-oriented syntax, with each token representing an object with multiple attributes. Attributes can be updated and looked up from other objects using a generalized lookup akin to associative memory.
Lemma C.1.
For any program written in the form of algorithm 6, there exist corresponding MHLA parameters such that .
Proof.
We set some matrices to implement lookup tables. For any function between finite sets and , there is a canonical representation of the input domain as a set of orthogonal unit vectors and of the output domain as another set of orthogonal unit vectors . Therefore, there is a matrix that maps input vectors to output vectors, satisfying for all and .
For functions and we associate matrices and respectively.
Then we form as follows. Let be the matrix that is all zeros except for in the rows associated with and the columns associated with . Let be the matrix that is all zeros except for in the rows associated with and the columns associated with .
In each layer we have multiple heads, each of which performs the lookup operation for one pair of attributes in the class. ∎
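A minimal sketch of the matrix construction used in this proof (the sets, the function `f`, and all variable names are illustrative choices of ours): any function between finite sets becomes a linear map once inputs and outputs are encoded as orthogonal unit vectors.

```python
# Sketch: realize a finite function f : A -> B as a matrix W with W u_a = v_{f(a)},
# where {u_a} and {v_b} are orthogonal unit-vector encodings of A and B.
import numpy as np

A = ["q0", "q1", "q2"]
B = ["left", "right"]
f = {"q0": "left", "q1": "right", "q2": "left"}   # an arbitrary finite function

U = np.eye(len(A))   # u_a = standard basis vectors for A
V = np.eye(len(B))   # v_b = standard basis vectors for B

# W = sum_a v_{f(a)} u_a^T maps each input vector to the vector of its image.
W = sum(np.outer(V[B.index(f[a])], U[A.index(a)]) for a in A)

for a in A:
    assert np.allclose(W @ U[A.index(a)], V[B.index(f[a])])
```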
C.1 Construction of UTM
Now we proceed with our construction of an Autoregressive MHLA-Program for a UTM. The UTM requires only a small number of operations, all of which can be captured by an Autoregressive MHLA-Program.
We define an embedding function that takes as input a TM and word such that
Definition (Embedding).
Let be a TM with state space , alphabet , and transition function . Then
(59)
Let be "positional encodings" that assign a unique ID to every letter of the word .
(60)
Then we define Embedding(M, x) to be
(61)
Henceforth we will write the construction in the syntax of an Autoregressive MHLA-Program instead of matrices with blocks of zeros and token embeddings to save space.
See 2.2
The construction is given in the language of Autoregressive MHLA-Programs in algorithm 6, which provides the instruction set for writing the next letter of the computation history onto the output tape.
Proof.
Proof Idea: A few elementary operations can be captured by an MHLA-program, and these can be composed to output the computation history of on . We begin by introducing some notation for the "Lookup" operation, which we build into copy, move, and if-then: all the operations required to construct the UTM.
General Lookup: Each lookup involves three objects. Let Token be the "source", which is always the rightmost token. An attribute of the source object, known as AttrSource, is linearly transformed to form a "query". The lookup involves a table in which the query is matched against an AttrKey in order to look up an AttrValue from an object we denote the "target". Note that if obj[i] has an AttrKey of zero, this is the same as not being in the table; in the pseudocode of algorithm 6 these zero attributes are denoted "None".
Given a query, we copy the associated AttrValue from the lookup table and update AttrDest in an object NextToken, which we denote the "destination". Multiple lookup operations can be performed in parallel by many heads, with each head responsible for a single lookup.
To output each letter of the computation history, we increase the number of tokens by a constant . We refer to the set of contiguous tokens involved in the computation of a single letter as a "block"; here block[i] = . We construct a different set of heads to act on each token and enforce that the nonzero rows occupied by each block of tokens are disjoint. Furthermore, within a block, the state of each token occupies a disjoint set of rows, except when the tokens are used to construct a table; tables are the only case where we want tokens to occupy the same rows. In this manner the following abstraction can be made.
At the beginning of each block starting with obj[r], we can look up attributes from anywhere in OBJ and load them into different attributes of obj[r]. Then we can apply any sequence of if-then statements involving the attributes of obj[r] to update those attributes (or create new ones). To run the UTM we need a few simple primitives, denoted Lookup and If-Then.
Construction of Primitives: We give the construction by building a sufficient set of primitives, Lookup and If-Then. We also include Copy, a special case of Lookup that is used frequently.
Lookup:
When the transforms and are the identity, we denote the lookup operation for table , in which we query an attribute of to update the attribute of obj[r+1], as
Copy:
A special case of lookup is copy, where we need to copy attributes from tokens at an offset for . This can be done by setting to permute the positional encoding by positions, so that the query matches the key given by the positional encoding of the target object. Let and be the target and destination attributes. We denote the operation of copying the attribute of the object at offset from into the attribute of the destination object by .
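The following small sketch (toy dimensions; all variable names are ours) illustrates the offset-copy idea: a cyclic permutation of one-hot positional encodings makes the rightmost token's query match the positional key of the token k positions back, and linear attention then copies that token's value attribute.

```python
# Sketch of Copy at offset k: a permutation P shifts one-hot positional
# encodings so that the current token's query matches the positional key of
# the token k positions earlier, and attention copies that token's value.
import numpy as np

T, k = 6, 2
pos = np.eye(T)                             # positional encodings, pos[t] = e_t
P = np.roll(np.eye(T), -k, axis=0)          # cyclic shift: P @ e_t = e_{(t-k) mod T}

vals = np.arange(10.0, 10.0 + T)[:, None]   # one scalar "value" attribute per token

query = P @ pos[T - 1]                      # rightmost token's query after the shift
scores = pos @ query                        # 1 at position T-1-k, 0 elsewhere
copied = scores @ vals                      # value copied from the offset token
assert np.isclose(copied[0], vals[T - 1 - k, 0])
```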
If-Then:
We write down an If-Then Program (algorithm 4) and a corresponding Autoregressive MHLA-Program (algorithm 5) to implement If-Then. An If-Then program checks whether an attribute is equal to any of the attributes and, if so, sets attribute to respectively. This is achieved by copying the attributes and into dummy attributes and , for all in , across a series of consecutive tokens. This creates a table with keys and values . Then we use attribute as the query, which looks up the corresponding value, which in turn is used to update an attribute .
∎
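As an informal illustration of the If-Then primitive (a toy sketch with our own encodings, not the construction in algorithm 5): the key–value table built from dummy attributes can be queried so that a matching key sets the destination attribute, while a non-matching query returns the zero vector, playing the role of "None".

```python
# Toy If-Then: a small key-value table queried with attribute c.
# If c matches a key a_i, the destination is set to b_i; otherwise the lookup is 0.
import numpy as np

d = 5
e = np.eye(d)                        # orthogonal encodings of possible attribute values

table_keys = np.stack([e[0], e[2], e[4]])   # keys a_i of the if-then rules
table_vals = np.stack([e[1], e[3], e[0]])   # values b_i to assign when a_i matches

def if_then(c):
    scores = table_keys @ c          # 1 on the matching row, 0 elsewhere
    return scores @ table_vals       # matched value, or the zero vector if no match

assert np.allclose(if_then(e[2]), e[3])          # condition holds: destination set to b
assert np.allclose(if_then(e[1]), np.zeros(d))   # no key matches: lookup returns 0
```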
C.2 Proofs For Learning UTM
See 5.1
Corollary 3.
In particular, for sample complexity , by Lemma 2.2 we have, with probability over the randomness in the data, that the probability that the step of the computation history of is equal to is
(62)
where . That is, the computation history of the MHLA returned by algorithm 1 is equal to the computation history of on .
Proof.
We have from Theorem 2.1 that algorithm 1 returns such that
(63)
Then, to obtain an error bound on the step computation history, which involves tokens, we observe that by a union bound each step rounds to an incorrect set of tokens with probability less than . Therefore, over steps the error probability is upper bounded by . Equivalently,
(64)
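Written with placeholder symbols (ours, standing in for the quantities fixed earlier in the paper), the union-bound step has the generic form:

```latex
% Generic form of the union-bound step; T and \varepsilon are placeholder symbols
% for the number of steps and the per-step error probability, respectively.
\Pr\Big[\exists\, t \le T :\ \text{step } t \text{ of the history is incorrect}\Big]
  \;\le\; \sum_{t=1}^{T} \Pr\big[\text{step } t \text{ incorrect}\big]
  \;\le\; T\varepsilon .
```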
Proving Corollary 3 is then a simple exercise. For a larger sample complexity , by Lemma 2.2, we have that the probability that every token of the autoregressive computation history of is equal to is
(65)
∎
See 5.2
Proof.
The proof follows from the quantitative version of Lemma 4.4. Using the assumption that , we conclude that for any and for all inputs ,
(66)
If we select a sufficiently small , then we can ensure
(67)
The runtime then scales with as desired. ∎
Appendix D Additional Definitions
Definition (Orthogonal Embeddings).
Let Embed be a function . Let be an alphabet and let be a basis of orthogonal unit vectors. Then for each letter in the alphabet , we define , associating a different unit vector with each letter.
We adopt a naive "rounding" scheme for converting vectors into tokens. This can be done in a variety of ways; we choose to simply round each output vector to the nearest token embedding.
Definition (Rounding).
For any vector , let for . Since we use orthogonal unit vectors for token embeddings, we will refer to as a token. We will often say that a matrix is equivalent to a series of tokens to mean that for all .
(68)
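A minimal sketch of this rounding scheme (our own implementation; `round_to_token` is an illustrative name): each predicted vector is mapped to the nearest token embedding by a nearest-neighbor rule.

```python
# Sketch of the rounding scheme: map each predicted vector to the nearest
# token embedding (here, orthogonal unit vectors).
import numpy as np

def round_to_token(y, token_embeddings):
    # token_embeddings: rows are the orthogonal unit-vector token embeddings
    dists = np.linalg.norm(token_embeddings - y, axis=1)
    return token_embeddings[np.argmin(dists)]

E = np.eye(4)                               # toy vocabulary of 4 orthogonal tokens
y_hat = np.array([0.1, 0.9, -0.05, 0.2])    # noisy prediction from the network
assert np.allclose(round_to_token(y_hat, E), E[1])
```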
D.1 Training details of attention networks
We use the Adam optimizer (Kingma & Ba, 2014) to train the linear attention model of Equation 1 and the full Transformer models (Vaswani et al., 2017).
hyperparameter | search space
---|---
input dimension | [2, 4, 8, 16]
number of heads | [1, 2, 4, 8, 16]
number of layers | [1, 2, 4]
learning rate | [0.01, 0.001]
batch size | [32, 64]
optimizer | AdamW (Loshchilov & Hutter, 2018)
D.2 Training details in DFA Execution
We use the Llama variant of the Transformer architecture from Touvron et al. (2023). We run each setting with the following different numbers of training examples: . The other hyperparameters are given in the table below.
hyperparameter | search space
---|---
input dimension | [2048]
number of heads | [16]
number of layers | [4]
learning rate | [0.00025]
epochs | 100
optimizer | AdamW (Loshchilov & Hutter, 2018)