A Polynomial Degree Bound on Equations for Non-rigid Matrices and Small Linear Circuits
Abstract
We show that there is an equation of degree at most poly(n) for the (Zariski closure of the) set of the non-rigid matrices: that is, we show that for every large enough field F, there is a non-zero n^2-variate polynomial P of degree at most poly(n) such that every matrix M which can be written as a sum of a matrix of rank at most n/100 and a matrix of sparsity at most n^2/100 satisfies P(M) = 0. This confirms a conjecture of Gesmundo, Hauenstein, Ikenmeyer and Landsberg [GHIL16] and improves the best upper bound known for this problem down from exp(n^2) [KLPS14, GHIL16] to poly(n).
We also show a similar polynomial degree bound for the (Zariski closure of the) set of all matrices M such that the linear transformation represented by M can be computed by an algebraic circuit with at most n^2/200 edges (without any restriction on the depth). As far as we are aware, no such bound was known prior to this work when the depth of the circuits is unbounded.
Our methods are elementary and short and rely on a polynomial map of Shpilka and Volkovich [SV15] to construct low degree “universal” maps for non-rigid matrices and small linear circuits. Combining this construction with a simple dimension counting argument to show that any such polynomial map has a low degree annihilating polynomial completes the proof.
As a corollary, we show that any derandomization of the polynomial identity testing problem will imply new circuit lower bounds. A similar (but incomparable) theorem was proved by Kabanets and Impagliazzo [KI04].
1 Introduction
1.1 Equations for varieties in algebraic complexity theory
Let V be a (not necessarily irreducible) affine variety and let I(V) denote its ideal (for completeness, we provide the formal, standard definitions of these notions in Section 1.4). A non-zero polynomial P ∈ I(V) is called an equation for V. An equation for V may serve as a “proof” that a point x is not in V, by showing that P(x) ≠ 0.
A fundamental observation of the Geometric Complexity Theory program is that many important circuit lower bound problems in algebraic complexity theory fit naturally into the setting of showing that a point lies outside a variety [MS01, BIL+19]. In this formulation, one considers V to be the closure of a class of polynomials of low complexity, and the point x is the coefficient vector of the candidate hard polynomial.
Let d(V) denote the minimal degree of a non-zero polynomial in I(V). The quantity d(V) can be thought of as a measure of complexity for the geometry of the variety V, albeit a very coarse one. A recent line of work regarding algebraic natural proofs [FSV18, GKSS17] suggests studying the arithmetic circuit complexity of equations for varieties that correspond to polynomials with small circuit complexity. Having d(V) grow like a polynomial in the dimension of the ambient space is a necessary (but not a sufficient) condition for a variety to have an algebraic natural proof of non-containment.
1.2 Rigid matrices
A matrix A is (r, s)-rigid if A cannot be written as a sum A = R + S, where R has rank at most r and S contains at most s non-zero entries. Valiant [Val77] proved that if A is (εn, n^{1+δ})-rigid for some constants ε, δ > 0, then the linear transformation represented by A cannot be computed by arithmetic circuits of size O(n) and depth O(log n), and posed the problem of explicitly constructing rigid matrices with these parameters, which is still open. It is easy to prove that most matrices have much stronger rigidity parameters: over algebraically closed fields, a generic matrix is (r, (n − r)^2)-rigid for any target rank r.
Let F be an algebraically closed field, and consider the set of n × n matrices over F which are not (r, s)-rigid, together with its Zariski closure. A geometric study of this variety was initiated by Kumar, Lokam, Patankar and Sarma [KLPS14]. Among other results, they prove, for every r, an upper bound on the minimal degree of an equation for it which is exponential in n. A slightly improved (but still exponential) upper bound was obtained by Gesmundo, Hauenstein, Ikenmeyer and Landsberg [GHIL16], who also conjectured that for some r = Ω(n), the minimal degree grows like a polynomial function in n. The following theorem, which we prove in this note, confirms this conjecture.
Theorem 1.1.
Let F be a field of size at least poly(n). For every large enough n, there exists a non-zero polynomial P in n^2 variables, of degree at most poly(n), which is a non-trivial equation for the n × n matrices which are not (n/100, n^2/100)-rigid. That is, for every such matrix A, P(A) = 0.
In fact, the conjecture of [GHIL16] was slightly weaker: they conjectured that the minimal degree of an equation is polynomial in n for every irreducible component of the variety of non-rigid matrices. As shown by [KLPS14], the irreducible components are in one-to-one correspondence with the subsets of [n] × [n] of size s corresponding to possible supports of the sparse matrix S.
As we observe in Remark 2.3, it is somewhat simpler to show that each of these irreducible components has an equation with a polynomial degree bound. However, since the number of such irreducible components is exponentially large, it is not clear if there is a single equation for the whole variety which is of polynomially bounded degree. We do manage to reverse the order of quantifiers and prove such an upper bound in Theorem 1.1. This suggests that the set of non-rigid matrices is much less complex than what one may suspect given the results of [KLPS14, GHIL16].
1.3 Circuits for linear transformations
The original motivation for defining rigidity was in the context of proving lower bounds for algebraic circuits [Val77]. If A is an (εn, n^{1+δ})-rigid matrix, for any constants ε, δ > 0, then the linear transformation represented by A cannot be computed by an algebraic circuit of depth O(log n) and size O(n).
Every algebraic circuit computing a linear transformation is, without loss of generality, a linear circuit. A linear circuit is a directed acyclic graph that has n inputs labeled x_1, …, x_n and n output nodes. Each edge is labeled by a scalar from F. Each node computes a linear function in x_1, …, x_n, defined inductively. An internal node v with k children u_1, …, u_k, connected to it by edges labeled λ_1, …, λ_k, computes the linear function λ_1·ℓ_1 + ⋯ + λ_k·ℓ_k, where ℓ_i is the linear function computed by u_i, for i ∈ [k]. The size of the circuit is the number of edges in the circuit.
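As a toy illustration of this inductive definition, the following Python sketch (our own illustration, not from the paper; the dictionary-based circuit encoding and gate names are hypothetical) computes the matrix of the linear transformation computed by a linear circuit by propagating coefficient vectors through the DAG:

```python
def linear_circuit_matrix(n, gates, outputs):
    """Compute the matrix of the linear transformation computed by a linear
    circuit.  `gates` maps a gate name to its incoming edges, a list of
    (child, scalar) pairs; a child is either an input index in 0..n-1 or
    another gate name.  Row k of the result is the coefficient vector
    (over the inputs) computed by the k-th output gate."""
    memo = {}

    def coeffs(v):
        if isinstance(v, int):                 # input node x_v
            unit = [0] * n
            unit[v] = 1
            return unit
        if v not in memo:
            acc = [0] * n
            for child, scalar in gates[v]:     # v computes sum of scalar * child
                acc = [a + scalar * c for a, c in zip(acc, coeffs(child))]
            memo[v] = acc
        return memo[v]

    return [coeffs(out) for out in outputs]

# g computes 2*x0 + 3*x1, h computes g - x0; the circuit has 4 edges
gates = {"g": [(0, 2), (1, 3)], "h": [("g", 1), (0, -1)]}
print(linear_circuit_matrix(2, gates, ["g", "h"]))  # [[2, 3], [1, 3]]
```

The memoization mirrors the fact that a gate's linear function is computed once and reused by all of its parents, so the evaluation takes time proportional to the circuit size.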
It is possible to use techniques similar to those used in the proof of Theorem 1.1 in order to prove a polynomial upper bound on the degree of an equation for a variety containing all matrices whose corresponding linear transformation can be computed by an algebraic circuit of size at most n^2/200 (even without restriction on the depth). Note that this is nearly optimal, as any such linear transformation can be computed by a circuit of size O(n^2). More formally, we show the following.
Theorem 1.2.
Let F be a field of size at least poly(n). For every large enough n, there exists a non-zero polynomial of degree at most poly(n), which is a non-trivial equation for the n × n matrices which can be computed by algebraic circuits of size at most n^2/200.
Our proofs are based on a dimension counting argument, and are therefore non-constructive and do not give explicit equations for the relevant varieties. It thus remains a very interesting open problem to provide explicit low-degree equations for any of the varieties considered in this paper. Here “explicit” means a polynomial which has arithmetic circuits of size poly(n) (although one may consider other, informal notions of explicitness which could nevertheless be helpful). The question of whether such equations exist has a win-win flavor: if they do, this can aid in explicit constructions of rigid matrices; on the other hand, if all equations are hard, we have identified a family of polynomials which requires super-polynomial arithmetic circuits. Assuming the existence of a polynomial time algorithm for polynomial identity testing, we are able to make this connection formal.
Let PIT denote the set of strings which describe arithmetic circuits (say, over the rationals) which compute the zero polynomial. It is well known that PIT ∈ coRP. Kabanets and Impagliazzo [KI04] proved that certain circuit lower bounds follow from the assumption that PIT ∈ P. As a corollary to Theorem 1.2, we are able to prove a theorem of a similar kind.
Corollary 1.3.
Suppose PIT ∈ P. Then at least one of the following is true:
1. There exists a family {P_n} of n^2-variate polynomials of degree poly(n), which can be computed (as its list of coefficients, given the input 1^n) in PSPACE, which does not have polynomial size constant free arithmetic circuits.
2. There exists a family of matrices, constructible in polynomial time with an NP oracle (given the input 1^n), which requires linear circuits of size Ω(n^2).
A constant free arithmetic circuit is an arithmetic circuit which is only allowed to use the constants 0, 1 and −1.
A different way to interpret Corollary 1.3 is as saying that at least one of the following three lower bound results holds: either PIT ∉ P, or (at least) one of the two circuit lower bounds stated in the corollary. We emphasize that the result holds under any (even so-called white box) derandomization of PIT.
Our statement is similar to, but incomparable with, the result of Kabanets and Impagliazzo [KI04], who proved that if PIT ∈ P then either the permanent does not have polynomial size constant free arithmetic circuits, or NEXP ⊄ P/poly.
Since matrices which are not (r, s)-rigid have linear circuits of size O(nr + s), the last item of Corollary 1.3 in particular implies a conditional construction of rigid matrices (it is also possible to directly use Theorem 1.1 instead of Theorem 1.2 to deduce this result). Unconditional constructions of rigid matrices in polynomial time with an NP oracle were recently given in [AC19, BHPT20]. However, the rigidity parameters in these papers are not enough to imply circuit lower bounds (furthermore, even optimal rigidity parameters are not enough to imply lower bounds for general linear circuits).
Since it is widely believed that PIT ∈ P, the answer to the question of which of the last two items of Corollary 1.3 holds boils down to the question of whether there exists an equation for non-rigid matrices of polynomial degree and polynomial circuit size. If determining whether a matrix is rigid is NP-hard (as is known for some restricted ranges of parameters [MS10]), it is tempting to also believe that the equations should not be easily computable, as they provide a “proof” of rigidity which can be verified in randomized polynomial time. However, it could still be the case that those equations that have polynomial size circuits only prove the rigidity of “easy” instances.
1.4 Some basic notions in algebraic geometry
For completeness, in this section we define some basic notions in algebraic geometry. A reader who is familiar with this topic may skip to the next section.
Let F be an algebraically closed field. A set V ⊆ F^n is called an affine variety if there exist polynomials f_1, …, f_m ∈ F[x_1, …, x_n] such that V = {a ∈ F^n : f_1(a) = ⋯ = f_m(a) = 0}. For convenience, in this paper we often refer to affine varieties simply as varieties.
For each variety V there is a corresponding ideal I(V), which is defined as
I(V) = {f ∈ F[x_1, …, x_n] : f(a) = 0 for every a ∈ V}.
Conversely, for an ideal I ⊆ F[x_1, …, x_n] we may define the variety
V(I) = {a ∈ F^n : f(a) = 0 for every f ∈ I}.
Given a set S ⊆ F^n, we may similarly define the ideal I(S). The (Zariski) closure of a set S is the set V(I(S)). In words, the closure of S is the set of common zeros of all the polynomials that vanish on S. It is also the smallest variety, with respect to inclusion, which contains S. By construction, the closure of S is a variety, and a polynomial which vanishes everywhere on S also vanishes on its closure.
Over the complex numbers, it is instructive to think of the Zariski closure of S as the usual Euclidean closure. In fact, for the various sets we consider in this paper (which correspond to sets of “low complexity” objects, e.g., non-rigid matrices or matrices which can be computed by a small circuit), it can be shown that these two notions of closure coincide (see, e.g., Section 4.2 of [BI17]).
A variety V is called irreducible if it cannot be written as a union of two varieties that are properly contained in V. Every variety can be uniquely written as a finite union of irreducible varieties, none containing another. These varieties are then called the irreducible components of V.
2 Degree Upper Bound for Non-Rigid Matrices
In this section, we prove Theorem 1.1. A key component of the proof is the following construction, due to Shpilka and Volkovich, of an explicit low-degree polynomial map on a small number of variables whose image contains all sparse vectors. For completeness, we provide the construction and prove its basic property.
Lemma 2.1 ([SV15]).
Let F be a field such that |F| ≥ n. Then for all k ≤ n, there exists an explicit polynomial map G_k : F^{2k} → F^n of degree at most n such that for any subset T = {i_1, …, i_k} ⊆ [n] of size k and any b_1, …, b_k ∈ F, there exists a setting of the inputs such that G_k is identically zero in every coordinate i ∉ T, and equals b_j in coordinate i_j for all j ∈ [k].
Proof.
Arbitrarily pick distinct field elements a_1, …, a_n ∈ F, and let L_1, …, L_n be their corresponding Lagrange interpolation polynomials, i.e., univariate polynomials of degree at most n − 1 such that L_i(a_j) = 1 if i = j and L_i(a_j) = 0 otherwise (more explicitly, L_i(y) = ∏_{j ≠ i} (y − a_j)/(a_i − a_j)).
Let y = (y_1, …, y_k) and z = (z_1, …, z_k) be tuples of variables, and finally let the i-th coordinate of G_k be
G_i(y, z) = Σ_{j=1}^{k} L_i(y_j) · z_j.
It readily follows that given T and b_1, …, b_k as in the statement of the lemma, we can set y_j = a_{i_j} and z_j = b_j for every j ∈ [k] to derive the desired conclusion. The upper bound on the degree follows by inspection. ∎
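The construction is easy to make concrete. The following Python sketch (our own illustration, not from the paper; rationals via `fractions.Fraction` stand in for a general field with at least n elements) evaluates the map and exhibits the support-hitting property:

```python
from fractions import Fraction

def lagrange_basis(points, i):
    """L_i: the unique polynomial of degree len(points)-1 with
    L_i(points[i]) = 1 and L_i(points[j]) = 0 for j != i."""
    def L(y):
        val = Fraction(1)
        for j, a in enumerate(points):
            if j != i:
                val *= Fraction(y - a, points[i] - a)
        return val
    return L

def sv_map(n, ys, zs):
    """Evaluate the Shpilka-Volkovich style generator G : F^{2k} -> F^n,
    whose i-th coordinate is sum_j L_i(y_j) * z_j, at the point (ys, zs)."""
    points = list(range(n))  # n distinct field elements
    return [sum((lagrange_basis(points, i)(y) * z for y, z in zip(ys, zs)),
                Fraction(0))
            for i in range(n)]

# setting y_j to the j-th support point makes coordinate i_j equal z_j and
# every other coordinate vanish: here support {1, 3} with values 5 and 7
out = sv_map(6, [1, 3], [Fraction(5), Fraction(7)])
print([int(v) for v in out])  # [0, 5, 0, 7, 0, 0]
```

Each coordinate has degree n − 1 in the y variables and degree 1 in the z variables, matching the degree bound of the lemma.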
As a step toward the proof of Theorem 1.1, we show that there is a polynomial map on much fewer than n^2 variables, with degree polynomially bounded in n, such that its image contains every non-rigid matrix. In the next step, we show that the image of every such polynomial map has an equation of degree poly(n).
Lemma 2.2.
There exists an explicit polynomial map G on n^2/25 variables, of degree at most O(n^2), such that every n × n matrix which is not (n/100, n^2/100)-rigid lies in its image.
Proof.
Let k = n^2/100, and let y, z denote disjoint tuples of k variables each.
Let X be an n × (n/100) symbolic matrix whose entries are labeled by distinct variables, and similarly let Y be an (n/100) × n symbolic matrix labeled by a disjoint set of variables. Let G′ be the degree 2 polynomial map defined by the matrix multiplication X · Y, and let G be the sum of G′ and the map G_k of Lemma 2.1, with the latter's n^2 output coordinates arranged as an n × n matrix.
Suppose now M = R + S is a non-rigid matrix, i.e., rank(R) ≤ n/100 and S is n^2/100-sparse. Decompose R = A · B for an n × (n/100) matrix A and an (n/100) × n matrix B. Let T denote the support of S. For convenience we may assume |T| = k (otherwise, pad T arbitrarily). Let α denote the setting for the y, z variables under which G_k maps to the matrix S, i.e., is supported on T with values equal to the non-zero entries of S. Then, setting X = A, Y = B and (y, z) = α, we get G = A · B + S = M. ∎
To complete the proof of Theorem 1.1, we now argue that the image of any polynomial map with parameters as in Lemma 2.2 has an equation of degree at most poly(n).
Proof of Theorem 1.1.
Let W denote the subspace of polynomials over F in the n^2 matrix variables of degree at most D, for a suitable D = poly(n). Let U denote the subspace of polynomials over F in the variables of G of degree at most D · deg(G). Let G be as in Lemma 2.2, and consider the linear transformation Φ : W → U given by Φ(P) = P ∘ G, where P ∘ G denotes the composition of the polynomial P with the map G (indeed, observe that since deg(P) ≤ D and every coordinate of G has degree at most deg(G), it follows that deg(P ∘ G) ≤ D · deg(G)).
We have that dim(W) > dim(U) by the choice of D (W is a space of polynomials of degree D in n^2 variables, whereas U is a space of polynomials of polynomially larger degree but in far fewer variables), so that there exists a non-zero polynomial P in the kernel of Φ, that is, such that P ∘ G ≡ 0.
It remains to be shown that for every non-rigid matrix M, P(M) = 0. Indeed, let M be a non-rigid matrix. By Lemma 2.2, there exists a point v such that G(v) = M. Thus, P(M) = P(G(v)) = 0, as P ∘ G ≡ 0. ∎
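The dimension counting step can be seen in action on a toy example. The Python sketch below (our own illustration, not from the paper) finds an annihilating polynomial for the small map G(t) = (t^2, t^3) by exactly the argument above: monomials x^a y^b of degree at most D = 4 span a 15-dimensional space, while their compositions with G live in the 13-dimensional space of univariate polynomials of degree at most 3D = 12, so Gaussian elimination must find a non-trivial kernel element, and that element vanishes identically on the image of G.

```python
from fractions import Fraction

def left_null_vector(rows):
    """Gaussian elimination returning a nonzero rational vector c with
    sum_i c[i] * rows[i] == 0, or None if the rows are independent."""
    m, n = len(rows), len(rows[0])
    # augment with an identity block to track how rows are combined
    aug = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(m)]
           for i, row in enumerate(rows)]
    pivot = 0
    for col in range(n):
        r = next((r for r in range(pivot, m) if aug[r][col] != 0), None)
        if r is None:
            continue
        aug[pivot], aug[r] = aug[r], aug[pivot]
        for r2 in range(m):
            if r2 != pivot and aug[r2][col] != 0:
                f = aug[r2][col] / aug[pivot][col]
                aug[r2] = [a - f * b for a, b in zip(aug[r2], aug[pivot])]
        pivot += 1
    for row in aug:
        if all(x == 0 for x in row[:n]):
            return row[n:]  # a nontrivial vanishing combination of the rows
    return None

# toy map G(t) = (t^2, t^3): the monomial x^a y^b composes to t^(2a+3b)
D = 4
mons = [(a, b) for a in range(D + 1) for b in range(D + 1 - a)]  # 15 monomials
rows = []
for a, b in mons:
    row = [0] * (3 * D + 1)          # coefficient vector in 13 dimensions
    row[2 * a + 3 * b] = 1
    rows.append(row)
annihilator = left_null_vector(rows)
assert annihilator is not None
# the kernel element vanishes identically on the image of G
for t in [Fraction(2), Fraction(-3), Fraction(5, 7)]:
    x, y = t ** 2, t ** 3
    assert sum(c * x ** a * y ** b for c, (a, b) in zip(annihilator, mons)) == 0
```

The proof of Theorem 1.1 is this computation writ large: the map G of Lemma 2.2 plays the role of (t^2, t^3), and the dimension gap guarantees the kernel without ever performing the elimination explicitly.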
Remark 2.3.
If the support of the sparse matrix is fixed a priori to some set T of cardinality at most n^2/100, then it is easier to come up with a universal map such that every matrix whose rank can be reduced to at most n/100 by changing entries in the set T is contained in its image. Just consider X · Y + S_T, where S_T is a matrix such that for all (i, j), if (i, j) ∈ T then the (i, j) entry of S_T is w_{i,j}, and it is zero otherwise. Here, each w_{i,j} is a distinct formal variable. Combined with the dimension comparison argument we used in the proof of Theorem 1.1, it can be seen that there is a non-zero low degree polynomial vanishing on every such matrix. This argument provides a (different) equation of polynomial degree for each irreducible component of the variety of non-rigid matrices.
Remark 2.4.
It is possible to use the equation given in Theorem 1.1, together with the methods of [KLPS14], to construct “semi-explicit” rigid matrices. These are matrices whose entries are algebraic numbers with a short description, which are non-explicit from the computational complexity point of view. However, such constructions are also known using different methods (see Section 2.4 of [Lok09]).
3 Degree Upper Bound for Matrices with a Small Circuit
In this section, we prove Theorem 1.2. Our strategy, as before, is to observe that all matrices with a small circuit lie in the image of a polynomial map on a small number of variables and of small degree. Circuits of size s can have many different topologies, and thus we first construct a “universal” linear circuit, of size polynomial in s and n, that contains as subcircuits all linear circuits of size s. Importantly, the universality will affect the degree of the resulting map Ψ but not its number of variables. We note that this construction of universal circuits is slightly different from similar constructions in earlier work, e.g., in [Raz10]; the key difference being that a naive use of the ideas in [Raz10] to obtain the map Ψ seems to incur an asymptotic increase in the number of variables of Ψ, which is unacceptable in our current setting.
3.1 A construction of a universal map for small linear circuits
We now define a map Ψ which is “universal” for size s linear circuits, i.e., it contains in its image all matrices whose corresponding linear transformation can be computed by a linear circuit of size at most s.
We first define a universal graph Γ_s for size s circuits. Γ_s has a set of n input nodes labeled x_1, …, x_n and a set of n designated output nodes. In addition, Γ_s is composed of s disjoint sets of vertices V_1, …, V_s, each containing s vertices.
Each vertex v ∈ V_i, for i ∈ [s], has as its children all input nodes and all vertices in V_j for all j < i, and each output node has all vertices of Γ_s as its children. It is clear that every directed acyclic graph with at most s edges (and hence at most s internal vertices, and depth at most s) can be (perhaps non-uniquely) embedded in Γ_s as a subgraph.
We now describe the edge labeling. Let N be the number of edges in Γ_s and let e_i denote the i-th edge, for i ∈ [N]. The edge e_i is labeled by the i-th coordinate of the map G_s : F^{2s} → F^N given in Lemma 2.1.
Thus, the graph Γ_s with this labeling computes a linear transformation (over the field of rational functions in the y, z variables) in the variables x_1, …, x_n. More explicitly, the (i, j)-th entry of the matrix representing this linear transformation is given by the sum, over all paths from x_j to the i-th output node, of the product of the edge labels on that path. Each such entry is a polynomial in the y, z variables, so that we can think of the resulting map Ψ as a polynomial map from F^{2s} to F^{n×n}.
Lemma 3.1.
The map Ψ defined above contains in its image all matrices whose corresponding linear transformation can be computed by a linear circuit of size at most s. The degree of Ψ is at most poly(s, n).
Proof.
Let M be a matrix whose linear transformation is computed by a size s circuit C. The graph of C can be embedded as a subgraph in the graph Γ_s constructed above (if the embedding is not unique, pick one arbitrarily). Let f_1, …, f_t, with t ≤ s, be the edges of this subgraph, and let λ_1, …, λ_t be their corresponding labels in C. By the properties of the map given in Lemma 2.1, it is possible to set the tuples of y, z variables to field elements such that the i-th coordinate of G_s equals λ_j if e_i = f_j for some j ∈ [t], and 0 otherwise. Observe that under this labeling of the edges, the circuit Γ_s computes the same transformation as the circuit C. Hence M is in the image of Ψ.
To upper bound the degree of Ψ, note that each edge label in Γ_s is a polynomial of degree at most N, and each path is of length at most s + 2. ∎
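The path-sum description of the computed matrix (the sum, over all input-to-output paths, of the product of edge labels) can be sanity-checked on a small layered graph, where it coincides with the product of the per-layer weight matrices. A minimal Python sketch (our own illustration, not the construction Γ_s of this section):

```python
from itertools import product

def matmul(A, B):
    """Plain matrix product of two lists-of-rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def path_sum(layers, i, j):
    """Entry (i, j) of the transformation computed by a layered labeled DAG:
    the sum over all paths from input i to output j of the product of the
    edge labels along the path.  layers[k][u][v] is the label of the edge
    from vertex u of layer k to vertex v of layer k+1."""
    choices = [range(len(layers[k + 1])) for k in range(len(layers) - 1)]
    total = 0
    for mid in product(*choices):        # pick one vertex per middle layer
        verts = (i,) + mid + (j,)
        weight = 1
        for k, M in enumerate(layers):
            weight *= M[verts[k]][verts[k + 1]]
        total += weight
    return total

# two layers of edges: the path sums agree with the matrix product
A = [[1, 2, 0], [0, 3, 1]]           # 2 inputs -> 3 middle vertices
B = [[1, 1], [0, 2], [5, 0]]         # 3 middle vertices -> 2 outputs
AB = matmul(A, B)
assert all(path_sum([A, B], i, j) == AB[i][j]
           for i in range(2) for j in range(2))
print(AB)  # [[1, 5], [5, 6]]
```

With the edge labels replaced by the coordinates of the map of Lemma 2.1, each entry becomes a polynomial in the y, z variables, which is precisely how Ψ arises.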
3.2 Low degree equations for small linear circuits
Analogous to the proof of Theorem 1.1, we now observe via a dimension counting argument that the image of the polynomial map Ψ has an equation of degree at most poly(n), which completes the proof of Theorem 1.2.
Proof of Theorem 1.2.
As before, let W denote the subspace of polynomials over F in n^2 variables of degree at most D, for a suitable D = poly(n), and let U denote the subspace of polynomials over F in 2s variables of degree at most D · deg(Ψ). Let Ψ be the map given by Lemma 3.1 for s = n^2/200, so that Ψ is a map on 2s variables, and the degree of Ψ is at most poly(n). Now, consider the linear transformation Φ : W → U given by Φ(P) = P ∘ Ψ.
Once again, we compute that dim(W) > dim(U) by the choice of D, so that there exists a non-zero polynomial P in the kernel of Φ, that is, such that P ∘ Ψ ≡ 0.
By Lemma 3.1, if M has a circuit of size at most s, it is in the image of Ψ, so that P(M) = 0. ∎
4 Degree Upper Bound for Three Dimensional Tensors
Another algebraic object which is closely related to proving circuit lower bounds is the set of three dimensional tensors of high rank. An n × n × n tensor of rank at least r implies a lower bound of Ω(r) on the size of an arithmetic circuit computing the bilinear map associated with the tensor. Our arguments also provide polynomial degree upper bounds on equations for the set of tensors of (border) rank at most r.
Lemma 4.1.
Let F be any field. There is a polynomial map G : F^{3nr} → F^{n×n×n} of degree at most 3 such that every three dimensional n × n × n tensor of rank at most r lies in its image.
Proof.
This follows immediately from the definition of tensor rank.
Indeed, let u_1, …, u_r, v_1, …, v_r, w_1, …, w_r be disjoint tuples of n variables each. Let T be the tensor of rank at most r over the polynomial ring in these variables defined as T = Σ_{l=1}^{r} u_l ⊗ v_l ⊗ w_l.
From the definition of T, it can be readily observed that for every tensor T_0 of rank at most r, there is a setting of the variables in u_l, v_l, w_l respectively such that T evaluates to T_0. Moreover, each of the coordinates of T is a polynomial of degree equal to three in these variables. Let G be the degree three polynomial map which maps the variables u_l, v_l and w_l to the coordinates of T. ∎
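This parametrization is easy to make concrete. The following Python sketch (our own illustration; plain integer entries stand in for elements of F) builds the tensor Σ_l u_l ⊗ v_l ⊗ w_l from its parameter vectors; note that each coordinate is a sum of degree-3 monomials in the parameters:

```python
def rank_r_tensor(us, vs, ws):
    """The three dimensional tensor sum_l u_l (x) v_l (x) w_l.  Every
    coordinate is a polynomial of degree three in the entries of the
    parameter vectors, and every tensor of rank at most r arises this way."""
    r = len(us)
    n1, n2, n3 = len(us[0]), len(vs[0]), len(ws[0])
    return [[[sum(us[l][i] * vs[l][j] * ws[l][k] for l in range(r))
              for k in range(n3)]
             for j in range(n2)]
            for i in range(n1)]

# a rank-2 combination of two rank-1 tensors
T = rank_r_tensor([[1, 0], [0, 1]], [[1, 2], [3, 4]], [[1, 1], [1, -1]])
# T[i][j][k] = u1[i]*v1[j]*w1[k] + u2[i]*v2[j]*w2[k]
assert T[0][1][0] == 1 * 2 * 1 + 0 * 4 * 1   # == 2
print(T[1][0][1])  # 0*1*1 + 1*3*(-1) = -3
```

The map uses only 3nr parameters to cover the rank-at-most-r tensors inside the n^3-dimensional ambient space, which is what makes the dimension counting argument go through.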
We now argue that the image of any polynomial map given by Lemma 4.1 has an equation of not too large degree.
Theorem 4.2.
Let F be any field and let r ≤ n^2/100. There exists a non-zero polynomial P in n^3 variables, of degree at most poly(n), which is a non-trivial equation for three dimensional n × n × n tensors of rank at most r.
Proof.
As before, let W denote the subspace of polynomials over F in n^3 variables of degree at most D, for a suitable D = poly(n), and let U denote the subspace of polynomials over F in the 3nr variables of G of degree at most 3D. Let G be the map given by Lemma 4.1. Now, consider the linear transformation Φ : W → U given by Φ(P) = P ∘ G.
Observe that dim(W) > dim(U) by the choice of D, so that there exists a non-zero polynomial P in the kernel of Φ, that is, such that P ∘ G ≡ 0.
By Lemma 4.1, if T is a tensor of rank at most r, then it is in the image of G, and thus P(T) = 0. ∎
The arguments here also generalize to tensors of higher dimension. In particular, the following analog of Lemma 4.1 is true.
Lemma 4.3.
Let F be any field. Then, for all d ≥ 3, there is a polynomial map G : F^{dnr} → F^{n^d} of degree at most d such that every d dimensional tensor of side length n and rank at most r lies in its image.
Combining this lemma with a dimension comparison argument analogous to that in the proof of Theorem 4.2 gives the following theorem. We skip the details of the proof.
Theorem 4.4.
For every field F and for all d ≥ 3, there exists a non-zero polynomial on n^d variables and degree at most poly(n, d), which is a non-trivial equation for d dimensional tensors of side length n and rank at most r.
5 Applications to Circuit Lower Bounds
In this section we prove Corollary 1.3. The strategy of the proof is simple: the proof of Theorem 1.2 implies a PSPACE algorithm which produces a sequence of polynomials which are equations for the sets of matrices with small linear circuits. If those equations require large circuits, we are done; if not, then there exists an equation with small circuits which (assuming PIT ∈ P) can be found using an NP oracle. Using, once again, the assumption that PIT ∈ P, we can also deterministically find a matrix on which the equation evaluates to a non-zero value, which implies that the matrix requires large linear circuits.
There are some technical difficulties involved with this plan which we now describe. The first problem is that even arithmetic circuits of small size can have a large description as bit strings, due to the field constants appearing in the circuits. To prevent this issue, we only consider constant free arithmetic circuits, which are only allowed inputs labeled by variables and the constants 0, 1, −1 (but can still compute other constants in the circuit using arithmetic operations).
The second problem is that, in order to be able to find a non-zero point of the equation in the last step of the algorithm (using the mere assumption that PIT ∈ P), we need not only the size of the circuit but also its degree to be polynomially bounded. Of course, by Theorem 1.2 there exists such a circuit, but we need to be able to prevent a malicious prover from providing us with a small circuit of exponential degree, and it is not known how to compute the degree of a circuit in deterministic polynomial time, even assuming PIT ∈ P. To solve this issue, we use an idea of Malod and Portier [MP08], who showed that any polynomial with a circuit of size s and degree d also has a multiplicatively disjoint (MD) circuit of size polynomial in s and d. An MD circuit is a circuit in which every multiplication gate multiplies two disjoint subcircuits. This is a syntactic notion which is easy to verify efficiently and deterministically, and an MD circuit of size s is guaranteed to compute a polynomial of degree at most s.
A final technical issue is that the notion of MD circuits does not fit perfectly within the framework of constant free circuits. Therefore we use the notion of “almost MD” circuits, which allows for the case in which the inputs to a multiplication gate are not disjoint, as long as at least one of them is the root of a subcircuit in which only constants appear.
Definition 5.1.
We say a gate v in a circuit is constant producing (CP) if in the subcircuit rooted at v, all input nodes are labeled by field constants.
An almost-MD circuit is a circuit in which every multiplication gate either multiplies two disjoint subcircuits, or at least one of its children is constant producing.
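The plain (not “almost”) MD condition is purely syntactic and easy to check, as the following Python sketch illustrates (our own illustration with a hypothetical dictionary encoding of circuits; for simplicity it ignores the constant-producing relaxation):

```python
def gates_below(circuit, v, memo):
    """The set of gates appearing in the subcircuit rooted at v."""
    if v not in memo:
        op = circuit[v]
        below = {v}
        if op[0] in ("add", "mul"):
            below |= gates_below(circuit, op[1], memo)
            below |= gates_below(circuit, op[2], memo)
        memo[v] = below
    return memo[v]

def is_md(circuit):
    """Check multiplicative disjointness: every multiplication gate must
    multiply two subcircuits that share no gates."""
    memo = {}
    return all(not (gates_below(circuit, op[1], memo)
                    & gates_below(circuit, op[2], memo))
               for v, op in circuit.items() if op[0] == "mul")

# (x0 + x1) * x2 is MD; g * g, which reuses the subcircuit g, is not
md = {"x0": ("var", 0), "x1": ("var", 1), "x2": ("var", 2),
      "s": ("add", "x0", "x1"), "m": ("mul", "s", "x2")}
not_md = {"x0": ("var", 0), "g": ("add", "x0", "x0"), "sq": ("mul", "g", "g")}
print(is_md(md), is_md(not_md))  # True False
```

Note how the non-MD example computes (2·x0)^2, of degree 2, with a single multiplication; iterating this squaring trick is exactly how a small circuit can reach exponential degree, which the MD condition rules out.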
Lemma 5.2.
Suppose f is an n-variate polynomial of degree d which has a constant free arithmetic circuit of size s. Then f has a constant free almost-MD circuit of size polynomial in s, d and n.
Proof.
Let C be a constant free arithmetic circuit for f. We first homogenize the circuit to obtain a circuit C_1 (a homogeneous circuit is a circuit in which every gate computes a homogeneous polynomial [SY10]). Since C_1 is homogeneous, all the gates which compute non-zero field constants are CP gates. We then eliminate all gates which compute constants by allowing the edges entering sum gates to be labeled by field scalars, and interpreting a sum gate as computing a linear combination whose coefficients are given by the edge labels. We call this circuit C_2. This step does not maintain constant-freeness. However, every label appearing on the edges of C_2 was computed in C_1, so it can be computed by a constant-free arithmetic circuit of polynomial size.
We now apply the transformation detailed in [MP08] to C_2 to obtain an MD circuit C_3, which has labels on its edges. This step does not produce new constants. Finally, we convert C_3 to an almost-MD constant free circuit C_4, by re-computing every label appearing on an edge using a fresh subcircuit for each label, and rewiring the circuit (which converts the circuit from an MD circuit to an almost-MD circuit). These labels are guaranteed to have polynomial size constant free circuits since the corresponding constants were all computed in C_1, which keeps the total size polynomial. ∎
For circuits which compute low-degree polynomials, the mere existence of a polynomial time algorithm for the decision version of PIT allows one to construct an algorithm for the search version.
Lemma 5.3.
Suppose PIT ∈ P. Then there is an algorithm that, given a non-zero almost-MD arithmetic circuit C of size s computing an n-variate polynomial, finds in time poly(s, n) an element a such that C(a) ≠ 0.
Proof.
We abuse notation by denoting by C also the polynomial computed by the circuit C. Note that since C is almost-MD, the degree of C is at most s. Thus, there exists α ∈ {0, 1, …, s} such that C(α, x_2, …, x_n) is a non-zero polynomial in x_2, …, x_n. By iterating over those values from 0 to s and using the assumption that PIT ∈ P, we can find such a value for x_1 in time poly(s, n). We then continue in the same manner with the rest of the variables. ∎
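The variable-by-variable search in this proof can be sketched in Python (our own illustration; the polynomial is stored explicitly as a dictionary of monomials, and explicit nonzeroness testing after substitution stands in for the assumed deterministic PIT algorithm):

```python
def substitute(poly, values):
    """Fix the first len(values) variables of a polynomial stored as a
    dictionary {exponent_tuple: coefficient}; returns the resulting
    polynomial over the remaining variables, dropping zero terms."""
    m = len(values)
    out = {}
    for exps, c in poly.items():
        for e, v in zip(exps[:m], values):
            c *= v ** e
        rest = exps[m:]
        out[rest] = out.get(rest, 0) + c
    return {e: c for e, c in out.items() if c != 0}

def find_nonzero_point(poly, n, d):
    """Search-to-decision reduction for PIT: fix the variables one by one.
    A non-zero polynomial of individual degree at most d cannot vanish under
    all d+1 substitutions x_i = 0, 1, ..., d, so some value keeps the
    restricted polynomial non-zero.  The truthiness test below plays the
    role of the assumed deterministic PIT algorithm."""
    point = []
    for _ in range(n):
        for v in range(d + 1):
            if substitute(poly, point + [v]):   # decision oracle: non-zero?
                point.append(v)
                break
        else:
            raise ValueError("the polynomial is identically zero")
    return point

# p(x, y) = x*y - 2*x, encoded by exponent tuples over (x, y)
p = {(1, 1): 1, (1, 0): -2}
pt = find_nonzero_point(p, 2, 2)
x, y = pt
assert x * y - 2 * x != 0
print(pt)  # [1, 0]
```

In the actual proof, the polynomial is given by an almost-MD circuit rather than an explicit list of monomials, and the degree bound d = s comes from the almost-MD guarantee.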
As we noted above, the assumption that C is almost-MD was used in Lemma 5.3 to bound the degree of the circuit. It is also useful because it is easy to decide in deterministic polynomial time whether a circuit is almost-MD. We now complete the proof of Corollary 1.3.
Proof of Corollary 1.3.
For every n, the proof of Theorem 1.2 provides an equation P_n for the set of matrices with small linear circuits. This polynomial can be found by solving a system of linear equations in a linear space whose dimension is exponential in n. Using standard small-space algorithms for linear algebra [BvzGH82, ABO99], this implies that there exists a fixed PSPACE algorithm which, on input 1^n, outputs the list of coefficients of the polynomial P_n.
Consider now the family {P_n}. If for every constant c there exist infinitely many n such that P_n requires circuits of size at least n^c, it follows (by definition) that the algorithm above outputs a family of polynomials which does not have polynomial size constant free arithmetic circuits.
We are thus left to consider the case that there exists a constant c such that for all large enough n, P_n can be computed by circuits of size n^c. By Lemma 5.2, we may assume without loss of generality that these circuits are almost-MD circuits. Further suppose PIT ∈ P. We will show how to construct, in polynomial time with an NP oracle, a matrix which requires large linear circuits.
Consider the language of pairs (1^n, y) such that there exists a string z of length at most poly(n) such that yz describes an almost-MD circuit C such that C is non-zero and C ∘ Ψ ≡ 0, where Ψ is the polynomial map given in the proof of Theorem 1.2.
Assuming PIT ∈ P, the language is in NP, and by assumption, for every large enough n there exists such a circuit. Thus, we can use the NP oracle to construct such a circuit bit by bit. Finally, using Lemma 5.3, we can output a matrix M such that C(M) ≠ 0.
By the properties of the circuit C and the map Ψ, M does not have linear circuits of size at most n^2/200. ∎
Many variations of Corollary 1.3 can be proved as well, with virtually the same proof. By slightly modifying the language used in the proof, it is possible to prove the same result even under weaker derandomization assumptions. A similar statement also holds over finite fields of polynomial size, in which case the proof is simpler since there are no issues related to the bit complexity of the field constants. Finally, an analog of Corollary 1.3 also holds for tensor rank, by using Theorem 4.2 instead of Theorem 1.2: that is, assuming PIT ∈ P, either there exists a construction of a hard polynomial in PSPACE, or an efficient construction with an NP oracle of a 3-dimensional tensor of high rank. We remark that for tensors of large rank there are no analogs of [AC19, BHPT20], i.e., there do not exist even constructions with an NP oracle of tensors with slightly super-linear rank.
References
- [ABO99] Eric Allender, Robert Beals, and Mitsunori Ogihara. The Complexity of Matrix Rank and Feasible Systems of Linear Equations. Comput. Complex., 8(2):99–126, 1999.
- [AC19] Josh Alman and Lijie Chen. Efficient Construction of Rigid Matrices Using an NP Oracle. In Proceedings of the 60th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2019), pages 1034–1055. IEEE Computer Society, 2019.
- [BHPT20] Amey Bhangale, Prahladh Harsha, Orr Paradise, and Avishay Tal. Rigid Matrices From Rectangular PCPs. Electronic Colloquium on Computational Complexity (ECCC), 27:75, 2020.
- [BI17] Markus Bläser and Christian Ikenmeyer. Introduction to geometric complexity theory. Lecture notes, 2017.
- [BIL+19] Markus Bläser, Christian Ikenmeyer, Vladimir Lysikov, Anurag Pandey, and Frank-Olaf Schreyer. Variety Membership Testing, Algebraic Natural Proofs, and Geometric Complexity Theory. CoRR, abs/1911.02534, 2019. Pre-print available at arXiv:1911.02534.
- [BvzGH82] Allan Borodin, Joachim von zur Gathen, and John E. Hopcroft. Fast Parallel Matrix and GCD Computations. Inf. Control., 52(3):241–256, 1982.
- [FSV18] Michael A. Forbes, Amir Shpilka, and Ben Lee Volk. Succinct Hitting Sets and Barriers to Proving Lower Bounds for Algebraic Circuits. Theory of Computing, 14(1):1–45, 2018.
- [GHIL16] Fulvio Gesmundo, Jonathan D. Hauenstein, Christian Ikenmeyer, and J. M. Landsberg. Complexity of Linear Circuits and Geometry. Foundations of Computational Mathematics, 16(3):599–635, 2016.
- [GKSS17] Joshua A. Grochow, Mrinal Kumar, Michael E. Saks, and Shubhangi Saraf. Towards an algebraic natural proofs barrier via polynomial identity testing. CoRR, abs/1701.01717, 2017. Pre-print available at arXiv:1701.01717.
- [KI04] Valentine Kabanets and Russell Impagliazzo. Derandomizing Polynomial Identity Tests Means Proving Circuit Lower Bounds. Computational Complexity, 13(1-2):1–46, 2004. Preliminary version in the 35th Annual ACM Symposium on Theory of Computing (STOC 2003).
- [KLPS14] Abhinav Kumar, Satyanarayana V. Lokam, Vijay M. Patankar, and Jayalal Sarma. Using Elimination Theory to Construct Rigid Matrices. Computational Complexity, 23(4):531–563, 2014.
- [Lok09] Satyanarayana V. Lokam. Complexity Lower Bounds using Linear Algebra. Foundations and Trends in Theoretical Computer Science, 4(1-2):1–155, 2009.
- [MP08] Guillaume Malod and Natacha Portier. Characterizing Valiant’s algebraic complexity classes. J. Complex., 24(1):16–38, 2008.
- [MS01] Ketan Mulmuley and Milind A. Sohoni. Geometric Complexity Theory I: An Approach to the P vs. NP and Related Problems. SIAM J. Comput., 31(2):496–526, 2001.
- [MS10] Meena Mahajan and Jayalal Sarma. On the Complexity of Matrix Rank and Rigidity. Theory Comput. Syst., 46(1):9–26, 2010.
- [Raz10] Ran Raz. Elusive Functions and Lower Bounds for Arithmetic Circuits. Theory of Computing, 6(7):135–177, 2010.
- [SV15] Amir Shpilka and Ilya Volkovich. Read-once polynomial identity testing. Computational Complexity, 24(3):477–532, 2015. Preliminary version in the 40th Annual ACM Symposium on Theory of Computing (STOC 2008).
- [SY10] Amir Shpilka and Amir Yehudayoff. Arithmetic Circuits: A survey of recent results and open questions. Foundations and Trends in Theoretical Computer Science, 5:207–388, March 2010.
- [Val77] Leslie G. Valiant. Graph-Theoretic Arguments in Low-Level Complexity. In Proceedings of the 2nd International Symposium on the Mathematical Foundations of Computer Science (MFCS 1977), volume 53 of Lecture Notes in Computer Science, pages 162–176. Springer, 1977.