The Computational Complexity of Quantum Determinants
Abstract
In this work, we study the computational complexity of quantum determinants, a q-deformation of matrix permanents: given a complex number q on the unit circle in the complex plane and an n×n matrix A, the q-permanent of A is defined as
$$\operatorname{per}_q(A) \;=\; \sum_{\sigma\in S_n} q^{\iota(\sigma)} \prod_{i=1}^{n} a_{i,\sigma(i)},$$
where ι(σ) is the inversion number of permutation σ in the symmetric group S_n on n elements. The function family generalizes the determinant and the permanent, which correspond to the cases q = −1 and q = 1, respectively.
For worst-case hardness, using Liouville's approximation theorem and facts from algebraic number theory, we show that for a primitive m-th root of unity q with m an odd prime power, exactly computing the q-permanent is #P-hard. This implies that an efficient algorithm for computing the q-permanent would result in a collapse of the polynomial hierarchy. Next, we show that exactly computing the q-permanent can be achieved using an oracle that approximates it to within a polynomial multiplicative error together with a membership oracle for a finite set of algebraic integers. Hence an efficient approximation algorithm would also imply a collapse of the polynomial hierarchy. By random self-reducibility, computing the q-permanent remains hard for a wide range of distributions satisfying a property called the strong autocorrelation property. Specifically, this is proved via reductions between q-permanents at points on the unit circle. Since the family of q-permanent functions shares a common algebraic structure, various techniques developed for the hardness of the permanent can be generalized to q-permanents.
1 Introduction
The study of matrix permanents has played an essential role in theoretical computer science and combinatorics. In a seminal work, Valiant proved that computing the matrix permanent is #P-hard, even if the matrix is over the integers, the non-negative integers, or the set {0,1} [Val79a]. This is considered one of the most influential results in computational complexity theory. These results have been used to justify the hardness of other computational tasks in graph theory [Bro86, CX21] and matrix analysis [HJ12]. By contrast, the matrix determinant is known to be polynomial-time computable using standard tools from elementary linear algebra [M+97].
An interesting observation is that both the permanent and the determinant of a matrix can be written as a polynomial in the matrix elements with phase coefficients, i.e., coefficients on the unit circle in the complex plane. More specifically, both functions can be written as a summation of $q^{\iota(\sigma)}\prod_{i=1}^{n} a_{i,\sigma(i)}$ over permutations σ on n elements, for a phase q. Here ι(σ) is the inversion number of the permutation σ, defined as the number of transpositions of adjacent elements needed to restore σ to its natural order [Yan91]. In the case of the permanent, q = 1, whereas in the case of the determinant, q = −1. However, given the expression of both functions in a unified form, what makes their computational complexities so different remains elusive.
A natural idea to explore this question is to study a q-deformation of the matrix polynomial, connecting the cases q = 1 and q = −1. In the mathematics literature, this q-deformation has been studied for q on the unit circle and on the real line, and was given different names including q-permanent, μ-permanent, q-determinant, or quantum determinant [Yan91, BL94, Lal98, dF18, DF10, dS18, TAG93]. However, to our knowledge, none of these results discussed the complexity of computing this analogue. In this work, we focus on the unit circle and study the computational complexity of the function defined as
$$\operatorname{per}_q(A) \;=\; \sum_{\sigma\in S_n} q^{\iota(\sigma)} \prod_{i=1}^{n} a_{i,\sigma(i)} \qquad (1)$$
for $S_n$ the symmetric group on n elements and a complex number q such that |q| = 1. In the rest of the paper, for a matrix A, we will call $\operatorname{per}_q(A)$ the q-permanent of A.
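To make the definition concrete, here is a minimal brute-force sketch in Python (our own illustration, not code from the paper); the function names are ours, and the inversion number is computed as the number of pairs i < j with σ(i) > σ(j), equivalent to the definition above.

```python
import itertools
import cmath

def inversion_number(sigma):
    """Number of pairs i < j with sigma[i] > sigma[j]."""
    return sum(1 for i in range(len(sigma))
                 for j in range(i + 1, len(sigma))
                 if sigma[i] > sigma[j])

def q_permanent(A, q):
    """Brute-force q-permanent of a square matrix A (list of lists); O(n! * n^2) time."""
    n = len(A)
    total = 0
    for sigma in itertools.permutations(range(n)):
        term = 1
        for i in range(n):
            term *= A[i][sigma[i]]
        total += (q ** inversion_number(sigma)) * term
    return total

A = [[1, 1], [1, 1]]
print(q_permanent(A, 1))                              # permanent: 2
print(q_permanent(A, -1))                             # determinant: 0
print(q_permanent(A, cmath.exp(2j * cmath.pi / 3)))   # q-permanent at a cube root of unity
```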
The q-permanent is not the only q-analogue of matrix permanents. In particular, immanants [Har85, Bür00, MM11, dRA13, Spi20, Cur21] and fermionants [MM11, BKW19, RA13, dRA13] are two families of functions that have been widely studied. They are both defined with class functions, i.e., for some function f invariant under conjugation on the symmetric group, these functions can be written in the form
$$\sum_{\sigma\in S_n} f(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)} \qquad (2)$$
For immanants, f is the character of an irreducible representation of the symmetric group indexed by a partition. The permanent corresponds to the special case of the trivial representation, and the determinant corresponds to the sign representation. In a recent breakthrough, Curticapean classified the complexity of computing immanants for partition families. The complexity of immanants is settled by proving that, under plausible assumptions, for every partition family, matrix immanants are efficiently computable if and only if a quantity which counts the boxes to the right of the first column in the Young diagrams of the family is bounded [Cur21]. For fermionants, f(σ) depends only on the number of cycles in σ. On the hardness side, Mertens and Moore showed that fermionants are #P-hard under Turing reductions for a wide range of parameters, and ⊕P-hard in the boundary case [MM11]. On the other hand, Björklund, Kaski, and Williams gave a faster-than-brute-force algorithm for fermionants of matrices over a finite field, with running time expressed in terms of the cost of multiplication and division over the field [BKW19]. These results justify that these functions are computationally intractable in general, except in special cases.
However, to our knowledge, no classification of the complexity of q-permanents has been given for any q other than ±1, and none of the aforementioned results seems to obviously imply hardness for this family, partly because the inversion number is not a class function. In fact, no known complexity result for deformations of matrix permanents has been given beyond class functions.
In addition to the fact that the coefficients are not class functions, one may wonder why -permanent (also called quantum determinant) is interesting object to study? First, in representation theory, the standard Lie algebra can be generalized into the so-called -deformed algebra (also known as quantum group [Maj00]). Quantum determinants show up naturally in such contexts. More specifically, consider the -Grassmann algebra which is the associative algebra generated by satisfying the for every , and for . It is straightforward to see that
(3) |
which is by matrix such that . Quantum determinants also serve as an important object in the representation of such -deformed algebra [ER98], and are closely related to the representation theory over different characteristics. Second, it is known that there are many applications of quantum determinants in various areas of physics, including string theory [FK06] and conformal field theory [GRASRA96, PS90]. In quantum information theory, quantum determinants also play an essential role in the Fock representations of -deformed commutation relations [BLW17] or anyonic relation [GM04], generalizing the standard bosonic/fermionic commutation relation [BS91].
Motivated by the above observations, we initiate the study of the computational complexity of q-permanents by giving general hardness results for complex numbers q on the unit circle. In the following sections, we give an overview of our contributions.
1.1 Our Contributions
In this paper, we study the computational complexity of the q-permanent for q an m-th primitive root of unity for integer m. Our contribution is built upon tools from algebraic number theory. We summarize our results in Table 1. From Section 3 to Section 6, we prove one theorem in each section.
| | Exact | Approximate within multiplicative error |
|---|---|---|
| Worst case | Theorem 3.10 | Theorem 4.2 |
| Average case | Theorem 5.3 (for a constant fraction of instances) | Theorem 6.4 |

Table 1: Summary of our results.
1.1.1 Worst-case Hardness
First, we study the hardness of computing the q-permanent of a binary matrix for an m-th primitive root of unity q. By (1), we can write the q-permanent as a polynomial in q whose coefficients can be written as multivariate polynomials in the elements of the matrix. For such q, the q-permanent is in general a complex number which can be written as an integral polynomial in q. Since complex-number arithmetic can only be carried out with finite precision, we first clarify what it means for an algorithm to compute the q-permanent exactly.
While we do not know how to give a sensible definition for every q on the unit circle, applying facts about cyclotomic fields yields a natural definition for a primitive root of unity and a binary matrix. More specifically, the q-permanent of a binary matrix can be written as an integral polynomial in q of degree less than that of the m-th cyclotomic polynomial. This implies that, representing the q-permanent as such a polynomial, the tuple of coefficients is unique. Thus we define an algorithm which exactly computes the q-permanent to be one that outputs the tuple of coefficients of this polynomial.
Given this definition, we show that for odd prime power , any oracle that computes -permanent implies that .
Theorem 1.1 (Hardness of -permanent, informal).
For odd prime p, a prime power m of p, and a primitive m-th root of unity q, it is #P-hard to compute the q-permanent of 0/1-matrices. More explicitly, for every oracle that on input a binary matrix outputs the coefficients of its q-permanent, it holds that
(4) |
The first inequality in (4) follows from the generalized Toda's theorem [TO92, Tod91]. To prove the second, we rely on the fact that for a prime power m, the cyclotomic polynomial evaluated at 1 equals the prime p. This means that the summation of the coefficients is congruent to the permanent modulo p. Then the second inequality follows from the fact that computing the permanent modulo an odd prime is known to be hard by Valiant [Val79a].
While our definition of exact computation seems to be natural, one may wonder whether the requirement to output all the coefficients correctly would be too strong to allow the existence of efficient classical algorithms. We answer this question by studying hardness of approximating in -norm. In particular, we show that finding a close enough complex number to already suffices to extract all the coefficients using an oracle.
1.1.2 Hardness of Approximation
Next, we show that an oracle that approximates to within polynomial multiplicative error is also -hard.
Theorem 1.2 (Hardness of approximation, informal).
For odd prime p, a prime power m of p, and a primitive m-th root of unity q, it is #P-hard to approximate the q-permanent of 0/1-matrices. More explicitly, for every algorithm that on input an n×n binary matrix outputs an approximation of its q-permanent to within a polynomial multiplicative error, it holds that
(5) |
Theorem 1.2 implies that an efficient simulation of such an approximation oracle leads to a collapse of the polynomial hierarchy. The first inequality, again, follows from the generalized Toda's theorem [TO92, Tod91]. We then prove the second inequality in the following two steps. First, we show that there is a deterministic algorithm using polynomially many queries to the approximation oracle to decrease the error to exponentially small in ℓ2-norm. This algorithm always outputs a complex number close to the q-permanent. However, this step alone does not yield a tuple of coefficients that satisfies our definition of exact computation. We overcome this issue by invoking a variant of Liouville's approximation theorem. From the theorem, we show that two distinct algebraic integers, when represented as polynomials in q with suitably bounded coefficients, must be far apart in the complex plane. Using an oracle that determines whether there is a close algebraic integer, we can perform a binary search to identify all the coefficients. Finally, we show that determining whether there is an algebraic integer that is close enough to our approximation, and whose coefficients are suitably bounded, can be solved using a non-deterministic Turing machine.
As a bonus, in Theorem 3.11, we give evidence that there is no polynomial algorithm for computing most of the roots of unity.
1.1.3 Average-Case Hardness of Exactly Computing -Permanent
For the average-case hardness of the 1-permanent (i.e., the permanent), Aaronson and Arkhipov [AA11] showed that under reasonable assumptions, exactly computing the permanent of most Gaussian matrices is #P-hard. Haferkamp, Hangleiter, Eisert, and Gluza [HHEG20] proved the hardness for most matrices drawn from a truncated uniform distribution.
In Section 5, we adapt their techniques to show that for every q, if computing the q-permanent is hard in the worst case, then via random self-reducibility, computing the q-permanent of Gaussian matrices is hard on average. This proof utilizes the Berlekamp-Welch algorithm and the fact that the q-permanent is a low-degree polynomial in its matrix elements.
Moreover, we extend the hardness results to a family of probability distributions that satisfy a property called the strong autocorrelation property. Specifically, a random matrix is said to satisfy the strong autocorrelation property if the total variation distance between a slightly shifted or scaled version of the original distribution and the original one is small (see Definition 5.2 for a detailed definition). We show that computing the q-permanent of matrices drawn from distributions satisfying this property remains #P-hard. The hardness result can thus be applied to, for example, a uniform distribution centered around zero and truncated at some chosen threshold.
1.1.4 Average-Case Hardness of Approximating -Permanent
In Section 6, we show that if one can compute particular points of in time , then one can determine for sampled from a distribution that satisfies the strong autocorrelation property. This implies that given a distribution and , approximating is hard. Our results work for multiplicative and additive errors.
In addition, we also show that approximating is as hard as approximating for any distribution such that are identically distributed. The proof works for as well.
1.2 Related Work
As we described in the introduction, permanent and determinant have completely different computational complexity. Any path that connects both cases is interesting from the point of computational complexity. More generally, one can consider such
(6) |
functions and ask about the complexity of functions of the form (6). A few function families have already been investigated, including immanants [MM11, Har85, Bür00, dRA13, Spi20] and fermionants [MM11, BKW19, RA13, dRA13]. However, our work differs from them in the following respects:
-
•
Unlike immanants and fermionants, -permanent is not a class function. To our best knowledge, the complexity of such matrix function (6) has only been investigated when is a class function. So one can not use the standard immanant results.
-
•
Since -permanent is not a class function, we provide completely new techniques involving algebraic number theory.
-
•
To the best of our knowledge, all existing results on approximate hardness, in both the worst case and the average case, concern the 1-permanent. For other points on the unit circle, no approximate hardness results were previously given in the literature. In Section 4, Section 5, and Section 6, we provide technical tools which lead to approximate hardness results for the general function.
1.3 Organization
The rest of the paper is structured as follows. Preliminary background is presented in Section 2. In Section 3, for worst-case hardness, we show that for a primitive m-th root of unity q with m an odd prime power, exactly computing the q-permanent is #P-hard. In Section 4, we show that efficiently approximating the q-permanent of binary matrices implies a collapse of the polynomial hierarchy. In Section 5, based on the worst-case hardness, we give average-case hardness of exactly computing the q-permanent for a family of distributions satisfying the strong autocorrelation property. In Section 6, we move to the average-case hardness of approximation. Finally, in Section 7, we summarize our results and discuss a few open questions related to the q-permanent.
2 Preliminaries
2.1 Introduction to Algebraic Number Theory
Here we introduce some facts about field extension [Ben87] and algebraic number theory [Lan13, Neu13, Wei98] relevant to our work, although in this paper, we will only use facts about cyclotomic fields.
First, we recall basic definitions in abstract algebra and algebraic number theory. An abelian group is a set of elements, closed under a commutative group operation often denoted + (for an additive group) or × (for a multiplicative group), with an identity element and an inverse for every element. A commutative ring is an additive abelian group that also has an associative, commutative multiplication operation and a multiplicative identity 1. A field is a commutative ring whose non-zero elements form an abelian multiplicative group. For example, the set of rational numbers ℚ is a field and the set of integers ℤ forms a ring.
A field K is said to be an extension field (or simply an extension), denoted K/F, of a field F if K contains F as a subfield. The extension field K can be viewed as a vector space over F. The degree of the field extension K/F, denoted [K : F], is the dimension of K as a vector space over F. Given a field F and some value α, let F(α), called F adjoined α, be the smallest field containing F and α. The operation of adjoining an element to a field is a field extension. For example, adjoining the root of an irreducible quadratic polynomial to ℚ gives an extension field of ℚ of degree two, since every element of it can be written as a vector of two components over ℚ. We also denote by ℤ[x], for an indeterminate x, the ring of polynomials with integer coefficients. For a complex number α, the set ℤ[α] forms a ring.
A number field is an extension field of of finite degree. We recall the definition of number fields and algebraic numbers.
Definition 2.1.
An algebraic number field (or simply number field) is an extension field of the field of rational numbers such that the field extension has finite degree (hence is an algebraic field extension).
Definition 2.2.
A complex number is an algebraic number if there exists a nonzero polynomial with rational (equivalently, integer) coefficients of which it is a root. The set of algebraic numbers forms a subfield of ℂ.
Definition 2.3.
A complex number is an algebraic integer if there exists a nonzero monic polynomial (whose leading coefficient equals one) with integral coefficients such that is a root of .
The set of algebraic integers forms a subring of ℂ. Equivalently, a complex number is an algebraic integer if and only if its minimal monic polynomial over ℚ has integer coefficients [Lan13]. Using Gauss's lemma, we have the following very simple fact about the rational field and algebraic integers.
Theorem 2.4 ([Lan13]).
Let 𝔸 be the set of algebraic integers. The intersection of the algebraic integers and ℚ is ℤ, i.e., 𝔸 ∩ ℚ = ℤ.
Definition 2.5 (Ring of integers).
The ring of integers of an algebraic number field is the ring of all algebraic integers contained in , i.e., .
The cyclotomic field is a number field obtained by adjoining a primitive root of unity to ℚ. In this paper, we denote by ζ_m an m-th primitive root of unity for integer m. Note that the choice of a primitive root of unity is not unique. Indeed, ζ_m^k is a primitive m-th root of unity if and only if gcd(k, m) = 1.
Definition 2.6 (Cyclotomic fields).
The th cyclotomic field is the extension of generated by . More explicitly:
(7) |
While there can be multiple choices of ζ_m, the field ℚ(ζ_m) is unique. This follows from a simple observation: let ζ and ζ′ be two distinct m-th primitive roots of unity. Since there exist integers k and k′ such that ζ′ = ζ^k and ζ = (ζ′)^{k′}, we have ℚ(ζ) = ℚ(ζ′), and thus they are equal. Similarly, we consider the ring
(8) |
For cyclotomic field , the ring of integers is .
Theorem 2.7 ([Lan13]).
For integer and , .
We will need some useful properties about Euler’s totient function [Lan13].
Definition 2.8 (Euler’s totient function [Lan13]).
Euler's totient function φ(n) counts the number of positive integers up to a given integer n that are relatively prime to n. More explicitly, it can be computed through the following formula:
$$\varphi(n) \;=\; n \prod_{p \mid n} \left(1 - \frac{1}{p}\right) \qquad (9)$$
where the product is over the distinct prime numbers p dividing n.
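As a quick illustration of formula (9), the following is a small sketch of our own (not from the paper) computing φ(n) by trial-division factorization:

```python
def euler_phi(n):
    """Euler's totient via the product formula phi(n) = n * prod_{p | n} (1 - 1/p)."""
    assert n >= 1
    result = n
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p   # multiply result by (1 - 1/p), exactly in integers
        p += 1
    if m > 1:                        # one prime factor may remain
        result -= result // m
    return result

# euler_phi(9) == 6, euler_phi(12) == 4, euler_phi(7) == 6
```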
We will need the following lower bound of [RS62].
Theorem 2.9 ([RS62]).
For ,
(10) |
simply speaking: the order of is nearly . So this implies that we always have for any for large enough Here is Euler–Mascheroni constant.
For an m-th primitive root of unity ζ_m, the m-th cyclotomic polynomial Φ_m is the minimal polynomial of ζ_m over ℚ.
Definition 2.10 ([Lan12]).
For integer m, the m-th cyclotomic polynomial Φ_m is defined as
$$\Phi_m(x) \;=\; \prod_{\substack{1 \le k \le m \\ \gcd(k,m)=1}} \left(x - e^{2\pi i k/m}\right) \qquad (11)$$
The polynomial Φ_m has integer coefficients, i.e., Φ_m ∈ ℤ[x].
We include some basic properties of cyclotomic polynomials useful in our work. Since Φ_m is irreducible over ℚ, it is the minimal polynomial of ζ_m over ℚ, and its degree is φ(m). The minimal polynomial of every element in ℚ(ζ_m) over ℚ has degree at most φ(m). An equivalent statement is that the following set is linearly independent over ℚ:
$$\{\,1,\ \zeta_m,\ \zeta_m^{2},\ \dots,\ \zeta_m^{\varphi(m)-1}\,\} \qquad (12)$$
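For concreteness, the sketch below (ours, not from the paper) computes the coefficient list of Φ_m by exact integer polynomial division, using the identity x^m − 1 = ∏_{d | m} Φ_d(x); the helper names are ours.

```python
def poly_div_exact(num, den):
    """Exact division of integer polynomials given as coefficient lists
    (lowest degree first); assumes the remainder is zero and den is monic."""
    num = num[:]
    out = [0] * (len(num) - len(den) + 1)
    for k in range(len(out) - 1, -1, -1):
        coeff = num[k + len(den) - 1] // den[-1]
        out[k] = coeff
        for i, d in enumerate(den):
            num[k + i] -= coeff * d
    return out

def cyclotomic(m, _cache={}):
    """Coefficients of the m-th cyclotomic polynomial, lowest degree first."""
    if m not in _cache:
        poly = [-1] + [0] * (m - 1) + [1]          # x^m - 1
        for d in range(1, m):
            if m % d == 0:
                poly = poly_div_exact(poly, cyclotomic(d))
        _cache[m] = poly
    return _cache[m]

# cyclotomic(1) == [-1, 1]      (x - 1)
# cyclotomic(3) == [1, 1, 1]    (x^2 + x + 1)
# cyclotomic(6) == [1, -1, 1]   (x^2 - x + 1)
```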
Theorem 2.11 (Bounds for the coefficients of cyclotomic polynomials [Vau75]).
Let be the th cyclotomic polynomial. The maximum absolute value of coefficient of satisfies
(13) |
where d(m) is the divisor function which counts the number of divisors of m (including 1 and m). For instance, the divisors of 6 are 1, 2, 3, and 6, and thus d(6) = 4.
For , . Since for prime and integer , , we can get the trivial bound . By Theorem 2.11, we can represent using at most only bits.
The following theorem will be useful.
Theorem 2.12 (Liouville’s approximation theorem [LW12]).
Let be an algebraic number with minimal polynomial of degree . Then for any that are relatively prime,
(14) |
where , and the maximum complex modulus of the algebraic conjugates of in , i.e., .
In particular, since and are relatively prime, applying Theorem 2.12, for every algebraic number whose minimal polynomial is , . For two algebraic integers , , and .
Theorem 2.13 (Fermat’s little theorem [Wei04]).
Let p be a prime number. Then for any integer a, the number a^p − a is an integer multiple of p. In the notation of modular arithmetic, this is expressed as
$$a^{p} \equiv a \pmod{p} \qquad (15)$$
Theorem 2.14.
For a field F of characteristic p and x, y ∈ F, we have (x + y)^p = x^p + y^p.
Proof.
By the binomial theorem,
$$(x+y)^{p} \;=\; \sum_{k=0}^{p} \binom{p}{k} x^{k} y^{p-k} \qquad (17)$$
We just need the following lemma: the prime p divides each binomial coefficient $\binom{p}{k}$ for $1 \le k \le p-1$. The proof of the lemma is simple by definition:
$$\binom{p}{k} \;=\; \frac{p\,(p-1)\cdots(p-k+1)}{k!} \qquad (18)$$
The result follows since p divides the numerator but not the denominator. ∎
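A quick numerical check of both the lemma and Fermat's little theorem above (our own illustration):

```python
from math import comb

p = 7  # any prime
assert all(comb(p, k) % p == 0 for k in range(1, p))   # p divides C(p, k) for 0 < k < p
assert all(pow(a, p, p) == a % p for a in range(50))   # a^p = a (mod p)
print("checks passed for p =", p)
```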
2.2 Properties of -permanent
Let 𝕋 be the unit circle in the complex plane. For q ∈ 𝕋 and an n×n matrix A, the q-permanent of A is defined as
$$\operatorname{per}_q(A) \;=\; \sum_{\sigma\in S_n} q^{\iota(\sigma)} \prod_{i=1}^{n} a_{i,\sigma(i)} \qquad (19)$$
where S_n is the symmetric group on n elements and ι(σ) is the inversion number of σ. The inversion number is defined as the number of pairs (i, j) such that i < j and σ(i) > σ(j). It is clear that 0 ≤ ι(σ) ≤ n(n−1)/2. This implies that we can write a q-permanent as a polynomial in q:
$$\operatorname{per}_q(A) \;=\; \sum_{k=0}^{\binom{n}{2}} c_k\, q^{k}, \qquad c_k = \sum_{\substack{\sigma\in S_n\\ \iota(\sigma)=k}} \prod_{i=1}^{n} a_{i,\sigma(i)} \qquad (20)$$
Note that by (20), the q-permanent is a polynomial of degree at most $\binom{n}{2}$ in q.
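The coefficient tuple in (20) is exactly what our notion of exact computation refers to later on. The following sketch (ours, for illustration) extracts the coefficients (c_0, …, c_{n(n−1)/2}) by grouping permutations according to their inversion number:

```python
import itertools

def q_permanent_coeffs(A):
    """Coefficients c_0, ..., c_{n(n-1)/2} of per_q(A) as a polynomial in q, where
    c_k sums prod_i A[i][sigma(i)] over the permutations sigma with k inversions."""
    n = len(A)
    coeffs = [0] * (n * (n - 1) // 2 + 1)
    for sigma in itertools.permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if sigma[i] > sigma[j])
        term = 1
        for i in range(n):
            term *= A[i][sigma[i]]
        coeffs[inv] += term
    return coeffs

# For the all-ones 3x3 matrix the coefficients count permutations by inversion
# number, giving [1, 2, 2, 1]; evaluating at q = 1 recovers the permanent 3! = 6.
```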
Here we list basic properties of the q-permanent [Yan91, DS] that hold for every matrix. Most of them are exactly the same as for the permanent and the determinant.
-
1.
is a multilinear function of the rows and columns.
-
2.
For block triangular , is the product of the -permanent of its diagonal blocks.
-
3.
where is the transpose of .
-
4.
(Expansion Theorem) Let denote the minor of and denote the element of Then
(21) for .
-
5.
For integer matrix and -th root of unity , . Thus, the minimal polynomial of over has leading coefficient and its degree is at most .
We give the explicit formula for the simplest cases n = 2 and n = 3. For n = 2,
$$\operatorname{per}_q(A) = a_{11}a_{22} + q\,a_{12}a_{21}.$$
For n = 3,
$$\operatorname{per}_q(A) = a_{11}a_{22}a_{33} + q\,(a_{11}a_{23}a_{32} + a_{12}a_{21}a_{33}) + q^{2}\,(a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32}) + q^{3}\,a_{13}a_{22}a_{31}.$$
2.3 Common Tasks Related to Permanent
We recall the following task defined by Aaronson and Arkhipov [AA11]. We will discuss some of their generalizations in our paper.
Problem 1 ().
Given as input a matrix of i.i.d. Gaussians (complex normal distribution) and , estimate the permanent to within error with probability at least over .
Conjecture 1.
is -hard. In other words, if is any oracle that solves , then .
Conjecture 2.
There exists a polynomial such that for positive integer , real number ,
(22) |
3 Hardness of -permanent for -matrices
In this section, we investigate the hardness of the q-permanent. We only consider the case that q is a root of unity, i.e., there is a positive integer m such that q^m = 1, for the following two reasons. First, the roots of unity are dense on the unit circle, so for every point on the unit circle, there exists an arbitrarily close element in this set. Second, from the computational point of view, as we will see, it is more relevant and convenient to give a definition of exact computation when the phase of q is a rational multiple of 2π. Thus, in this paper, we always assume that q is a root of unity, and write ζ_m for a primitive m-th root of unity for integer m.
We start with the cube roots of unity, showing that computing the q-permanent for q satisfying q^3 = 1 is hard. Thus the existence of an efficient algorithm for computing the q-permanent implies the collapse of the polynomial hierarchy by Toda's theorem.
3.1 Cube Roots of Unity
Recall that from algebraic number theory, for and binary matrix , can be written as a polynomial in :
(23) |
where . Furthermore, for , , and thus is an algebraic integer in the number field . We will link the observation to hardness of computing -permanent and the complexity , defined as follows.
Definition 3.1 ().
For any integer k ≥ 2, let Mod_kP be the class of decision problems solvable by a polynomial-time non-deterministic Turing machine which rejects if the number of accepting paths is divisible by k, and accepts otherwise. When k = 2, Mod_2P is also known as parity P, and denoted ⊕P.
It is well known due to Valiant that computing is -complete for prime [Val79a]. Interestingly, we can reduce the problem of computing to the problem of computing in the special case .
Theorem 3.2 (Valiant [Val79a]).
The problem of computing for a square matrix is -complete for any prime .
Now we state the theorem for this section.
Theorem 3.3.
For every cube roots of unity , computing and are both -hard.
Proof.
For , the theorem holds due to Valiant [Val79a], and thus it suffices to prove the theorem for . For , we can write as
(24) |
If one can compute , then one can compute by taking the real part of since
(25) |
and is a unit in the ring . The same reasoning holds for by interchanging and .
It was shown in Valiant’s original paper [Val79b] that the permanent modulo can be computed in time , but for all , it is -hard under randomized reductions [CX15, DJ21].
For , we observe that
Thus we can obtain if can be exactly computed. ∎
The generalized Toda’s theorem [TO92] states that a few counting classes are as hard as the polynomial hierarchy.
Theorem 3.4 (Toda theorem [Tod91, TO92]).
For prime , let be one of the counting classes or . Then .
By Toda’s theorem, we immediately have the following theorem.
Theorem 3.5.
For a cube root of unity ω, let O be any oracle that on input a binary matrix outputs its ω-permanent. Then
(26) |
In other words, if there is an efficient classical simulation of , then the polynomial hierarchy collapses to the second level.
If the polynomial hierarchy is infinite, Theorem 3.5 excludes the existence of an efficient classical algorithm for exactly computing -permanent.
3.2 Primitive th roots of unity for prime power
Next, we turn our attention to any prime number p. Since in general ζ_p is an irrational number in the complex plane, any truncation to finite precision only results in an approximation. In this section, we focus on the hardness of computing the q-permanent exactly, and thus the first question we must clarify is how we define “computing exactly” for a binary matrix.
For binary matrix , (see (23)). Thus can be written as a polynomial in of degree at most with integer coefficients. We can define an exact representation of as a set of coefficients.
Definition 3.6 (Representation of an algebraic integer).
A representation of algebraic integer is a tuple of integers satisfying
(27) |
Note that the representation is not unique since is a root of , and thus for any integer .
In general, one can define a representation of algebraic integers for general in Definition 4.5. However, in this section, the Definition 3.6 is sufficient for showing the hardness. Notice that one can also define such representation by a tuple of integers satisfying since one has
An algorithm which computes exactly outputs a tuple of coefficients such that (27) holds. This definition is without loss of generality for our purpose. In Section 4.2, we will construct an algorithm that maps an approximation in the complex plane to an exact representation using an oracle.
Theorem 3.7.
For prime , let be a primitive -th root of unity and be any oracle that on input a binary matrix , outputs a representation of . Then .
Proof.
For a binary matrix , let the output of be . Also, let be defined as in (23). By the correctness of ,
(28) |
Note that it does not necessarily hold that for every , , but the polynomials are identical congruent to the minimal polynomial of . In particular, for prime , since ,
(29) |
where . Since the left side of the equality
(30) |
is an integer, . This means that divides , and
(31) |
The last equality holds from Fermat’s little theorem (Theorem 2.13).
In summary, given access to an oracle which exactly computes for binary , the following algorithm computes :
-
•
Run .
-
•
Output .
∎
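As a small self-contained sanity check of this reduction (our own illustration, with the oracle replaced by brute force), one can compute the coefficient tuple of the q-permanent, reduce it modulo Φ_p(x) = 1 + x + … + x^{p−1}, and confirm that the sum of the reduced coefficients equals the permanent modulo p:

```python
import itertools, random

def perq_coeffs(A):
    n = len(A)
    c = [0] * (n * (n - 1) // 2 + 1)
    for s in itertools.permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])
        t = 1
        for i in range(n):
            t *= A[i][s[i]]
        c[inv] += t
    return c

def reduce_mod_phi_p(c, p):
    """Remainder of sum_k c_k x^k modulo Phi_p(x) = 1 + x + ... + x^{p-1},
    using x^p = 1 and x^{p-1} = -(1 + x + ... + x^{p-2})."""
    r = [0] * (p - 1)
    for k, ck in enumerate(c):
        e = k % p
        if e < p - 1:
            r[e] += ck
        else:
            for j in range(p - 1):
                r[j] -= ck
    return r

p, n = 3, 4
A = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
c = perq_coeffs(A)
perm_mod_p = sum(c) % p                       # per_1(A) = permanent of A
assert sum(reduce_mod_phi_p(c, p)) % p == perm_mod_p
print("permanent of A mod", p, "=", perm_mod_p)
```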
By Toda’s theorem Theorem 3.4, we have the following corollary.
Corollary 3.8.
For prime , let denote any oracle which computes exactly for binary matrix . Then
(32) |
We can easily extend our result to a prime power for prime .
Theorem 3.9.
For prime and prime power , let be a primitive -th root of unity and be any oracle that on input a binary matrix , outputs an exact representation of . Then .
Proof.
The idea basically follows from the proof of Theorem 3.7. For a prime power m and a binary matrix, we have
(33) |
where . We then use Fermat's little theorem (Theorem 2.13), which implies that , and
(34) |
So then we have (using finite field property Theorem 2.14)
(35) |
which finishes the proof. ∎
We end this section by summarizing the classical hardness of for such particular .
Theorem 3.10.
For odd prime p and a prime power m of p, let ζ_m be a primitive m-th root of unity. Exactly computing the q-permanent of binary matrices at q = ζ_m cannot be done in polynomial time unless the polynomial hierarchy collapses. More precisely, for any oracle which computes the q-permanent of binary matrices, the following containments hold:
Unfortunately, our method does not generalize to a primitive -th root of unity for divisible by more than one prime. While our result does not rule out the existence of an efficient solver for non-prime powers, it is interesting to see whether there exists a polynomial-time algorithm for any of these points. Since it is not so relevant to our result, we leave the complexity of computing general composite numbers as future work.
For powers of two, we notice that our method with an oracle which computes the q-permanent implies one can compute the permanent modulo a power of two. However, we do not have hardness results for computing the permanent modulo powers of two. Indeed, it is known that the permanent modulo a power of two admits an efficient algorithm [DJ21, CX15]. Hence any hardness result for the q-permanent at such points cannot be implied by the hardness of the permanent modulo a power of two.
Unfortunately, our method cannot be used to establish hardness for rational numbers with a non-prime-power denominator.
3.3 Hardness of Rational Phases
Recall that for every , the -permanent of a binary matrix is always an integral polynomial in of degree at most . For a primitive -th root of unity , we can further reduce the degree by taking the congruence to the minimal polynomial of the degree of , when . In the previous sections, we have applied these facts to reduce the problem of computing -permanent modulo to -permanent for prime power .
While we do not have a general result for every non-prime-power rational phase, in this section we consider a special case where the order m is sufficiently large compared to n. In this case, since the monomials 1, q, …, q^{n(n−1)/2} are linearly independent, the permanent can be directly obtained from the q-permanent.
Theorem 3.11.
Let O denote any oracle which, given as input a binary matrix and such a q, outputs the q-permanent exactly. Then computing the permanent of binary matrices reduces to a single query to O.
Proof.
Let . Computing yields a tuple of coefficients satisfying
(36) |
where . Since the monomials are linearly independent for such q, the output coefficients coincide with the true coefficients. This implies that the permanent, which equals the sum of the coefficients, can be recovered. ∎
However, for cases with non-prime-power m, neither of the methods seems to work. We leave the hardness of primitive m-th roots of unity for fixed non-prime-power m as an open question.
4 Hardness of Approximating -permanent
In Section 3, we have shown that for prime power for odd prime , exactly computing is as hard as computing . The task is known to be -hard. In this section, we show that approximating for integer and binary matrices is as hard as exactly computing it.
In Section 4.1, we show that approximating to multiplicative error within is as hard as approximating to multiplicative error , up to a polynomial multiplicative overhead. In Section 4.2, we show that with an oracle, exactly computing is as hard as approximating it to error . Combining the results in these two sections, we conclude that an efficient approximation algorithm would also imply a collapse of the polynomial hierarchy.
4.1 Reducing the Error
In this section, we show that for a prime number and an -th primitive root of unity , approximating to multiplicative error is as hard as approximating to multiplicative error .
First we prove the following lemma which will be useful later.
Lemma 4.1.
For integer and , let be complex numbers and satisfying for ,
(37) |
Then it holds that
(38) |
Proof.
We prove the lemma by induction. For , the lemma obviously holds. Suppose that the lemma holds for . Then for , let and . By the inductive hypothesis, . Then
(39) |
The first inequality holds by (37) and . The last inequality holds since . ∎
We can decrease the error of our approximation algorithm from a multiplicative factor to using a polynomial overhead.
Theorem 4.2.
For , prime number , and a primitive -th root of unity , if there is an algorithm which on input , approximates to within a multiplicative factor of , then there exists an algorithm which outputs an estimate of satisfying
(40) |
using queries to .
Proof.
Let . We describe the algorithm as follows. First decides if . If , then returns a value
(41) |
for . This implies that . Otherwise, . This implies that there is an algorithm which decides if . Furthermore, if , outputs .
It suffices to consider the case . Since is binary-valued, we can write
(42) |
where each is a non-negative integer, provided that is a primitive -th root of unity.
The algorithm finds a sequence of matrices such that each and is a submatrix of obtained by deleting the first column and one of the rows determined as follows: starting from , repeat incrementing until the submatrix obtained by deleting the -th row and the first column satisfies that . Finally set . Since , it is guaranteed that there exists such that . Computing the sequence of matrices takes queries.
The algorithm inductively approximates to error using queries to from to and the telescoping product
(43) |
Computing the base case is trivial. Applying Lemma 4.1, this implies that an approximation to error takes queries.
In the following proof, we will only consider the case from to , and the same argument applies to . For simplicity, we only prove the case where is obtained from by deleting the first row and the first column. Other cases (i.e., deleting the -th row of for ) can be applied using the same argument.
We consider the quantity . Note that for a primitive root of unity , both and are algebraic integers. This implies that the ratio is an algebraic number. To see there is a lower bound between two algebraic numbers, we apply a variant of the Liouville’s approximation theorem, stated in Theorem 2.12.
Since and are both algebraic integers, their minimal polynomials are monic. Furthermore, for , , and similarly the same upper bound holds for . Setting and , and , where , and
(44) |
The quantity , where , so its norm square can be approximated to multiplicative error using one call to . To find an estimate of , we recursively find complex numbers such that the following invariant holds:
(45) |
for every . To see how close is to ,
(46) |
Also let
(47) |
Setting the initial guess ,
(48) |
Recall that . Then we have
(49) |
Our goal is to get since it implies that . Thus it suffices to set and .
Now we give the algorithm that finds an that satisfies (45) from . By definition, . We consider the disk
(50) |
and a grid of points inside the disk for integer determined later:
(51) |
The algorithm computes for every and outputs for which is minimized (with ties broken arbitrarily).
By definition, , and thus there exists such that . For such ,
(52) |
Thus for , the ratio is at most . ∎
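The grid-search step above can be illustrated generically. In the sketch below (ours), approx_abs_diff is a hypothetical stand-in for the oracle calls in the proof: it returns the distance from a query point to the unknown target, up to a multiplicative factor g. Each round scans a grid inside the current disk, recenters on the point whose reported distance is smallest, and halves the search radius.

```python
def refine(approx_abs_diff, center, radius, g, rounds):
    """Locate an unknown complex number z_target lying within `radius` of `center`,
    given a black box returning |z_target - z| up to a multiplicative factor g.
    The grid is fine enough that recentering on the best grid point lets the
    radius shrink by half each round while still containing z_target."""
    steps = int(8 * g * g) + 2
    for _ in range(rounds):
        best, best_val = center, float("inf")
        for a in range(-steps, steps + 1):
            for b in range(-steps, steps + 1):
                z = center + complex(a, b) * (radius / steps)
                val = approx_abs_diff(z)
                if val < best_val:
                    best, best_val = z, val
        center, radius = best, radius / 2
    return center

# toy usage with a noiseless black box (g = 1):
z_target = 0.3 - 0.7j
est = refine(lambda z: abs(z_target - z), center=0j, radius=1.0, g=1.0, rounds=20)
# est is now within about 1e-6 of z_target
```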
As a corollary, we recover a result of Aaronson and Arkhipov for -permanent (with slightly worse query complexity) by Theorem 4.2, but the same theorem can apply to a primitive root of unity .
Corollary 4.3 (cf. [AA11, Theorem 28]).
For , if there is an algorithm which on input approximates to within multiplicative factor of , then there is an algorithm which exactly computes using queries to .
Proof.
Since is an integer, we apply Theorem 4.2 with . The algorithm takes queries. ∎
Furthermore, obtaining an approximation to additive error within takes queries.
Corollary 4.4.
For , if there is an algorithm which on input , approximates to within a multiplicative factor of , then there exists an algorithm which outputs an estimate of satisfying
(53) |
using queries to .
Proof.
To approximate to additive error within , it suffices to approximate to multiplicative error since the norm is at most , and then by the Theorem 4.2 and it takes queries. ∎
Here we see that the proof of approximate hardness relies on the fact that -permanent of given matrix can be computed from the -permanent of its submatrix (see equation (21)). Other matrix functions such as immanants do not have this property.
4.2 A Search-to-Decision Reduction
In the previous section, we have shown how to approximate to multiplicative error for an -th primitive root of unity using an oracle that approximates to multiplicative error . In this section, we show how to search for a representation of from the approximation (see Definition 3.6).
Recall that by (27), for a binary matrix, the q-permanent is an element of the ring ℤ[ζ_m] and can be written as a polynomial in ζ_m. Since the cyclotomic polynomial is monic with integer coefficients, we can represent any algebraic integer in this ring as a polynomial of degree at most φ(m) − 1. In fact, such a representation is unique. More detailed background can be found around Definition 2.10.
Definition 4.5 (Representation of ).
For integer , the representation of an algebraic integer is a tuple such that
(54) |
As a polynomial in , the representation is the remainder of divided by the cyclotomic polynomial . A representation is said to be bounded in for integers if for every , .
To extract a representation from an approximation, we rely on the following lemma which states that the coefficients can be represented using bits.
Theorem 4.6 (Bounds on the coefficients of cyclotomic polynomials [Vau75]).
For integer m, let Φ_m be the m-th cyclotomic polynomial. The maximum absolute value of the coefficients of Φ_m is at most , where d(m) is the divisor function which counts the number of divisors of m (including 1 and m). For instance, d(6) = 4.
For , . Thus we can get the trivial bound . By Theorem 4.6, we can represent using at most only bits. Then the following lemma shows that every element of the representation of can be represented using bits.
Lemma 4.7 ([BP86, Lemma 2]).
Let be two polynomials of degrees . Also let the quotient and the remainder be such that . Then
(55) |
where and if
Corollary 4.8.
For integer and binary , let be the representation of . Then
(57) |
Proof.
Let
(58) |
By Theorem 4.6, the coefficients of the cyclotomic polynomial is at most . By Lemma 4.7 and the fact that is monic, the summation of the coefficients of the quotient satisfies that
(59) |
The representation is the remainder of the division by the cyclotomic polynomial. Let . We have
(60) |
Thus
(61) |
Since , . ∎
Let denote the maximum of the coefficients in the representation of . By Corollary 4.8, , and thus the coefficients can be represented using bits, provided .
By Liouville’s approximation theorem (Theorem 2.12), the distance of two arbitrary algebraic integers whose representation is bounded in are at least in the complex plane.
Corollary 4.9.
For integer , let be two algebraic integers in whose representation is bounded in . Then it holds that .
Proof.
Since is an additive group, whose minimal polynomial is monic and whose representation is bounded in . By Theorem 2.12 with , , and . These facts give
(62) |
∎
Now let . If we already know a complex number that is -close to an algebraic integer for , then for every other algebraic integer whose representation is also bounded in , . Given the observation, we define the following problem.
Problem 2 (Close algebraic integer problem (caip)).
The close algebraic integer problem is defined as follows: given , where , are non-negative integers (all represented using bits), determine if
-
•
there is an algebraic integer such that , or
-
•
every satisfies ,
where the set consists of algebraic integers in whose representation satisfies for . It is promised that the given instance is in one of the cases.
Indeed, caip can be solved using a non-deterministic Turing machine (NTM).
Theorem 4.10.
.
Proof.
We can use sufficiently many bits (but still polynomial) to represent as a complex number such that
(63) |
For every approximated by satisfying for every and integer .
If the NTM non-deterministically finds satisfying , then . Otherwise, every satisfies . ∎
The problem caip only considers the case where each coefficient is non-negative. To extend it to the general case where a coefficient may also be negative, we can shift the coefficient range and solve caip with the corresponding parameter shifted accordingly. Thus it suffices to only consider the former setting.
Given an oracle for caip, we can search for a satisfying algebraic integer.
Theorem 4.11.
Let be an oracle which solves caip. Then there exists an algorithm which makes queries to and finds a satisfying algebraic integer, or returns if there is not.
Proof.
The algorithm is basically a binary search for each coefficient.
First runs ; if the output , then returns . Otherwise, there must be a satisfying algebraic integer . Then runs the following steps:
-
1.
Set .
-
2.
For ,
-
(a)
Set and .
-
(b)
Repeat the following steps until :
-
i.
Compute .
-
ii.
If , set and .
-
iii.
Set .
-
i.
-
(c)
Set .
-
(a)
-
3.
Output .
We prove the algorithm is correct for ; for other coefficients, the correctness analysis applies similarly. First, the algorithm must terminate since in each iteration gets decreased by a factor of two. For the correctness, we note that holds after each iteration. In each step, let a satisfying algebraic integer denote , written as
(64) |
before Step 2(b)ii is run. It suffices to show that after Step 2(b)ii, the leading coefficient is no more than by induction. For the base case, since the leading coefficient of is no more than ; the assertion holds. If , then , and ; otherwise, there exists a satisfying such that . Thus after Step 2(b)ii, there is a satisfying algebraic integer whose leading coefficient is no more than . ∎
5 Average-case Hardness
In the context of average-case hardness, Aaronson and Arkhipov [AA11] considered the hardness of -permanent for i.i.d. Gaussian matrices. More recently, Haferkamp, Hangleiter, Eisert, and Gluza [HHEG20] considered the average-case hardness of the truncated uniform distribution.
In this section, we extend their results both to -permanent and to a wide range of distributions, provided that the distribution satisfies a property called the strong autocorrelation property.
Our idea follows from Aaronson and Arkhipov [AA11]: If one can compute a low-degree polynomial for sufficiently many points, then there is an algorithm that recovers the whole polynomial with high probability. More precisely, we use the following algorithm for noisy polynomial interpolation.
Theorem 5.1 (Berlekamp-Welch algorithm [BV]).
Let P be a univariate polynomial of degree at most d over any field F. Suppose we are given m pairs of field elements (x_i, y_i) (with the x_i all distinct), and are promised that y_i = P(x_i) for more than (m + d)/2 values of i. Then there is a deterministic algorithm to reconstruct P using poly(m, d) field operations.
Now we define the following property, under which a random self-reducibility using the Berlekamp-Welch algorithm can be applied.
Definition 5.2 (Strong autocorrelation property).
Consider a random variable with a real distribution with mean and variance We denote it as We define with mean and variance by shifting and scale the random variable We pick We can define complex random variable , and are independent. We write Now we consider the following three distributions
(65) | ||||
(66) | ||||
(67) |
for some vector . A distribution is said to satisfy the strong autocorrelation property (Definition 5.2) if there exists a constant, independent of the dimension, such that
(68) | ||||
(69) |
Both the complex Gaussian distribution and the truncated uniform distribution satisfy this criterion by direct calculation [AA11, HHEG20]. In Appendix B, we give a sufficient condition under which almost all well-behaved distributions satisfy this criterion.
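To illustrate the shift part of this property in the Gaussian case, the following small sketch (ours) compares the total variation distance between N(0,1) and N(ε,1), which equals 2Φ(ε/2) − 1 for the standard normal CDF Φ, against the shift ε; the ratio tends to 1/√(2π), so the distance is O(ε) as required.

```python
from math import erf, sqrt

def tv_gaussian_shift(eps):
    """Total variation distance between N(0,1) and N(eps,1): 2*Phi(|eps|/2) - 1."""
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return 2.0 * Phi(abs(eps) / 2.0) - 1.0

for eps in (0.1, 0.01, 0.001):
    tv = tv_gaussian_shift(eps)
    print(f"eps={eps:7.3f}  TV={tv:.6f}  TV/eps={tv / eps:.4f}")  # ratio -> 1/sqrt(2*pi) ~ 0.3989
```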
Now we are ready to show that, like the permanent, the q-permanent also has random self-reducibility.
Theorem 5.3 (Random self-reducibility of -permanent).
Assume that there is no classical efficient algorithm for computing -permanent and for all -matrices. For all the following problem is classically hard: Given an matrix drawn from which satisfies the strong autocorrelation property, output with probability at least over .
Proof.
Let be an arbitrary -matrix. We show how to compute in probabilistic polynomial time, given access to an oracle such that
(70) |
We choose a matrix from the distribution and define
(71) |
so that and , and is a univariate polynomial in of degree at most and . Now let
For each call the oracle on input matrix . Then, using the Berlekamp-Welch algorithm (Theorem 5.1), attempt to find a degree- polynomial such that
(72) |
for at least a fraction of . If no such is found, then fail; otherwise, output as the guessed value of . Now we show that the above algorithm succeeds (that is, outputs ) with probability at least over . Provided that holds, it is clear that the success probability can be boosted to (say) by simply repeating the algorithm times with different choices of and then outputting the majority result. To prove the claim, note that for each one can think of the matrix as having been drawn from the distribution
(73) |
Let
(74) |
Then by the triangle inequality together with Definition 5.2 (),
(75) |
We have
(76) |
Now let be the set of all such that . Then by the reverse Markov’s inequality [EG01],
(77) |
So we then just run the above algorithm times to amplify the success probability. After that, with high probability, we have
Then by Theorem 5.1, the Berlekamp-Welch algorithm succeeds: the polynomial it outputs will be equal to the true polynomial. This proves the claim. ∎
6 Average-case Hardness of Approximation
Since Aaronson and Arkhipov’s seminal work on Boson Sampling [AA11] appeared, approximating -permanent in the average case has been investigated for particular distributions [AA11, HHEG20]. In particular, Aaronson and Arkhipov [AA11] conjectured a problem called Gaussian Permanent Estimation (also denoted ) is -hard. The problem asks to compute to multiplicative error with probability at least over matrix whose each element is sampled independently from the normalized Gaussian distribution. For quantum supremacy from Boson Sampling, the ultimate goal was to prove the additive version of , denoted , is also hard. Based on another conjecture, called the Permanent Anti-Concentration Conjecture (PACC), one can deduce that is as hard as . In the following, we denote the distribution of an i.i.d. Gaussian matrix with zero mean and unit variance.
Problem 3 ().
Given as input a matrix of i.i.d. Gaussians (complex normal distribution) and , estimate to within error with probability at least over .
Problem 4 ().
Given as input of i.i.d. Gaussians (complex normal distribution) and , estimate additively (i.e., to within error ), with probability at least over
In this section, we consider the hardness of -permanent for matrices sampled from a general distribution whose each entry is i.i.d. sampled. We first define the multiplicative and the additive versions of Permanent Estimation (PE) for distribution .
Problem 5 ().
Given as input and , estimate to within error with probability at least over .
For convenience, is also denoted . Similarly, we can define an additive version of the above problem.
Problem 6 ().
Given as input and , estimate to within error with probability at least over . For convenience, is also denoted .
From the definition, we have and with . Let For , let , where is a primitive -th root of unity. Let be an oracle, given , outputs the approximations of -permanent of points, i.e., for . More formally, we define and to be oracles that approximate to multiplicative and additive errors, respectively:
Definition 6.1 ().
The oracles and are defined as follows:
-
•
Given is the oracle that outputs a vector such that for every , with probability at least over choices of .
-
•
Given and efficiently computable function , is the oracle that outputs a vector such that for every we have: with probability at least over choices of .
The rest of this section is devoted to showing that if can be classically simulated, then can be solved efficiently, and similarly holds for and in place of and . The proof is similar to Lipton’s proof [Lip91] for the average hardness of permanent over a finite field for . In our case, we can approximate given an approximation of points on the unit circle.
First, we show the following two lemmas for polynomial interpolation. One is for multiplicative error and one is for additive error.
Lemma 6.2.
For integer , let be a degree- complex-valued polynomial and be an algorithm that approximates to additive error on points . There exists an algorithm which, on input , makes calls to and outputs an estimate of to additive error , where is the Vandermonde matrix with nodes .
Proof.
Let be the estimate of on respectively. Let and be the associated column vectors. Since for each , , .
Let be the vector of the coefficients of . Solving the linear equation yields an estimate of the coefficients. For , let be the vector of monomials evaluated on . Our goal is to bound the distance
(78) |
The first inequality holds by Cauchy-Schwarz inequality. The second holds by the definition of matrix norm. The last holds because and . ∎
If we restrict our problem to approximating for polynomials with non-negative coefficients, we can do better. In particular, we show that if approximates for polynomial with non-negative coefficients to multiplicative error , then there is an algorithm that approximates to the multiplicative error as above.
Lemma 6.3.
For integer , let be a degree- polynomial with non-negative coefficients and be an algorithm that approximates to multiplicative error on . Then there exists an algorithm which approximates to multiplicative error , where is the Vandermonde matrix with nodes .
Proof.
Let be the estimate of on and . Solving the linear equation yields . Also let be the approximated polynomial obtained by solving the linear system.
Since has non-negative coefficients, for every . Since ,
(79) |
The first inequality holds by the Cauchy-Schwarz inequality. Thus the algorithm which outputs this estimate achieves the claimed multiplicative error. ∎
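A concrete sketch (ours) of this interpolation step, specialized to equally spaced nodes on the unit circle as used in Theorem 6.4, where the normalized Vandermonde matrix is unitary: given noisy values of a degree-d polynomial at the (d+1)-th roots of unity, solve the Vandermonde system and sum the recovered coefficients to estimate p(1).

```python
import numpy as np

def estimate_p_at_one(noisy_values):
    """Given noisy evaluations of a degree-d polynomial at the (d+1)-th roots of
    unity, recover approximate coefficients and return the estimate of
    p(1) = sum of coefficients. Since V / sqrt(d+1) is unitary, the noise is not
    amplified in l2-norm."""
    m = len(noisy_values)
    nodes = np.exp(2j * np.pi * np.arange(m) / m)
    V = np.vander(nodes, N=m, increasing=True)   # V[j, k] = nodes[j] ** k
    coeffs = np.linalg.solve(V, np.asarray(noisy_values, dtype=complex))
    return coeffs.sum()

# toy check with p(x) = 3 + 2x + x^2, noisy values at the cube roots of unity:
rng = np.random.default_rng(0)
nodes = np.exp(2j * np.pi * np.arange(3) / 3)
noisy = (3 + 2 * nodes + nodes**2) + 1e-6 * rng.standard_normal(3)
print(estimate_p_at_one(noisy))   # close to p(1) = 6
```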
Now we are ready to prove Theorem 6.4. By Lemma 6.2 and Lemma 6.3, we separate them into two cases.
Theorem 6.4.
The following statements hold:
-
1.
Suppose is -hard and is non-negative distribution, and the polynomial hierarchy is infinite, then does not admit a classical efficient algorithm, i.e., there is no simulation of running in time .
-
2.
Suppose is -hard and the polynomial hierarchy is infinite, then does not admit a classical efficient algorithm, i.e., there is no classical simulation of running in time ).
Proof.
For (1), we give non-constructive proof. Let . For , let , where is a primitive -th root of unity. We show that the number of points such that is efficiently computable cannot be more than . Suppose toward contradiction this is not the case. Then let be algorithms that runs in time and approximates to precision with probability , for . Applying the algorithm in the proof of Lemma 6.3, we apply the oracle for by the following algorithm:
-
1.
Get ;
-
2.
Solving the linear system , where .
-
3.
Output the summation of elements of as the approximation of .
By the assumption, for each with probability at least , . Since points in is equally spaced, is a unitary matrix, and so is . This implies that the matrix norm . By Lemma 6.3 and union bound, the algorithm outputs an approximation of to multiplicative error with probability at least . By the assumption is hard, we conclude and a collapse of the polynomial hierarchy.
For (2), we use the same , then let be algorithms that runs in time and approximates to precision with probability , respectively for . Applying the algorithm in the proof of Lemma 6.2, we apply the oracle for by the following algorithm:
-
1.
Get .
-
2.
Solving the linear system , where .
-
3.
Output the summation of elements of as the approximation of .
By the assumption, for each with probability at least , . Since the points are equally spaced on the unit circle, is a unitary matrix, and so is . This implies that the matrix norm . By Lemma 6.2 and a union bound, the algorithm outputs an approximation of to additive error with probability at least . By our hardness assumption, we conclude a collapse of the polynomial hierarchy. ∎
At first glance, it seems like our result does not work for an i.i.d. Gaussian matrix since it is not positive except with exponentially small probability. However, similarly as in [AA11], we prove that with can be reduced to .
Lemma 6.5.
with is polynomial-time reducible to .
Proof.
Suppose we have a polynomial-time algorithm that outputs a good multiplicative approximation to —that is, a such that:
(80) |
with probability at least over by Markov’s inequality ( see Lemma A.1):
(81) |
by the union bound,
Letting and , we have which finishes the proof. ∎
We now introduce the conjecture that generalizes the original permanent anti-concentration conjecture.
Conjecture 3 (-permanent anti-concentration conjecture (-PACC)).
There exists a polynomial such that for positive integer , real number and ,
(82) |
In particular, is proved by Aaronson [AA11] and is standard conjecture that widely believed to be True.
Corollary 6.6.
Assume that the -PACC holds. Suppose is -hard and the polynomial hierarchy is infinite, then does not admit a classical efficient algorithm, i.e., there is no simulation of running in time .
Proof.
Suppose admit a classical efficient algorithm in . Then one can solve then solving for points. By Lemma 6.5, it solves with for points which solves . Assuming -PACC [AA11], is polynomial equivalent to Hence there is classical polynomial solves which contradicts our assumption that polynomial hierarchy is infinite. ∎
Finally, we show that approximating is as hard as approximating for any distribution such that are identically distributed. A similar proof idea applies to as well.
Lemma 6.7.
For any distribution such that are identically distributed, if there is an algorithm running in time for approximating to within multiplicative (resp. additive) error with probability , then there is an algorithm which approximates to within multiplicative (resp. additive) error with probability .
Proof.
By the assumption, let be an algorithm that approximates to error with probability .
Since and are identically distributed, for computing , the algorithm runs to obtain an approximation of with probability over . Then since the absolute value of a complex number does not change by taking complex conjugate,
(83) |
Here for multiplicative error
(84) |
For additive error,
(85) |
∎
7 Open Problems and Discussion
In this paper, we studied the computational complexity of quantum determinants, a q-deformation of matrix permanents. The q-permanent of a matrix generalizes both the determinant and the permanent. We showed that computing the q-permanent is #P-hard for a primitive m-th root of unity q with m an odd prime power. This result implies that an efficient algorithm for computing the q-permanent would result in a collapse of the polynomial hierarchy. We also showed that an efficient approximation algorithm for the q-permanent would imply a collapse of the polynomial hierarchy. The hardness of computing the q-permanent continues to hold for a wide range of distributions by random self-reducibility. The techniques developed for the hardness of the permanent can be generalized to q-permanents in both the worst case and the average case.
Here we include a few interesting questions that are not solved in this paper.
-
•
It is known that there is a polynomial-time algorithm for determining whether the permanent of a 0/1-matrix is nonzero, by determining whether there is a perfect matching in the bipartite graph whose adjacency matrix is the given matrix. Is the analogous property of the q-permanent efficiently decidable, i.e., is there a polynomial-time algorithm for determining whether the q-permanent of a given matrix is nonzero?
- •
-
•
In this work, we have discussed the computational complexity of -permanent for . It is natural to consider other subsets of the complex numbers. For example, what is the computational complexity of -permanent for real ?
-
•
As explained in Section 3.2, the techniques we used for proving the hardness of -permanent can only work for the special case where is a prime power. Thus our result does not rule out the existence of an efficient algorithm for non-prime-power . Is there a polynomial-time algorithm computing -permanent where is not prime power?
• First proposed by Aaronson and Arkhipov, the permanent anti-concentration conjecture is crucial for showing the equivalence of additive and multiplicative approximation of -permanent for Gaussian matrices. What is the relation between the permanent anti-concentration conjecture and the -permanent anti-concentration conjecture (Conjecture 3)? Can one prove one from the other, or use polynomial interpolation to give a multiple-to-one reduction?
• What are the higher moments of when is drawn from the complex Gaussian ?
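As referenced in the first question above, deciding whether the ordinary permanent of a - matrix is nonzero reduces to testing for a perfect matching in the bipartite graph whose biadjacency matrix is that matrix. Below is a minimal sketch of that classical test (assumptions: SciPy's maximum_bipartite_matching is available; the brute-force permanent is included only as a cross-check on tiny matrices and is not efficient).

import itertools
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def permanent_is_nonzero(A01):
    # For a square 0/1 matrix, Per(A01) != 0 iff the bipartite graph with
    # biadjacency matrix A01 has a perfect matching (testable in polynomial time).
    matching = maximum_bipartite_matching(csr_matrix(A01), perm_type='column')
    return bool(np.all(matching >= 0))   # every row matched to some column

def brute_force_permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(1)
A = (rng.random((6, 6)) < 0.3).astype(int)
print(permanent_is_nonzero(A), brute_force_permanent(A) != 0)   # the two tests agree

Whether a comparably efficient zero test exists for the -permanent with is exactly the open question above.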
Acknowledgement
We thank Scott Aaronson, Mohammad Hafezi, and Dominik Hangleiter for useful discussions. EJ was supported by ARO W911NF-15-1-0397, National Science Foundation QLCI grant OMA-2120757, AFOSR MURI FA9550-19-1-0399, the Department of Energy QSA program, and the Simons Foundation. SHH acknowledges the support of a Simons Investigator in Computer Science award, award number 510817.
References
- [AA11] Scott Aaronson and Alex Arkhipov. The computational complexity of linear optics. In Proceedings of the forty-third annual ACM symposium on Theory of computing, pages 333–342, 2011.
- [Ben87] Georgia Benkart. Abstract Algebra, by I. N. Herstein. The American Mathematical Monthly, 94(8):804–806, 1987.
- [BKW19] Andreas Björklund, Petteri Kaski, and Ryan Williams. Generalized kakeya sets for polynomial evaluation and faster computation of fermionants. Algorithmica, 81(10):4010–4028, 2019.
- [BL94] RB Bapat and AK Lal. Inequalities for the q-permanent. Linear Algebra and its Applications, 197:397–409, 1994.
- [BLW17] Marek Bożejko, Eugene Lytvynov, and Janusz Wysoczański. Fock representations of q-deformed commutation relations. Journal of Mathematical Physics, 58(7):073501, 2017.
- [BP86] Dario Bini and Victor Pan. Polynomial division and its computational complexity. Journal of Complexity, 2(3):179–203, 1986.
- [Bro86] Andrei Z Broder. How hard is it to marry at random? (On the approximation of the permanent). In Proceedings of the eighteenth annual ACM symposium on Theory of computing, pages 50–58, 1986.
- [BS91] Marek Bożejko and Roland Speicher. An example of a generalized brownian motion. Communications in Mathematical Physics, 137(3):519–531, 1991.
- [Bür00] Peter Bürgisser. The computational complexity of immanants. SIAM Journal on Computing, 30(3):1023–1040, 2000.
- [BV] Odysseas Bakas and Marco Vitturi. The polynomial method, lecture 4: Applications in technology.
- [Cur21] Radu Curticapean. A full complexity dichotomy for immanant families. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 1770–1783, 2021.
- [CX15] Radu Curticapean and Mingji Xia. Parameterizing the permanent: Genus, apices, minors, evaluation mod 2^k. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 994–1009. IEEE, 2015.
- [CX21] Radu Curticapean and Mingji Xia. Parameterizing the permanent: Hardness for -minor-free graphs. arXiv preprint arXiv:2108.12879, 2021.
- [DF10] CM Da Fonseca. The -permanent of a tridiagonal matrix, orthogonal polynomials, and chain sequences. Linear algebra and its applications, 432(5):1258–1266, 2010.
- [dF18] Carlos M da Fonseca. The -permanent revisited. arXiv preprint arXiv:1804.02231, 2018.
- [DJ21] Samir Datta and Kishlaya Jaiswal. Parallel polynomial permanent mod powers of 2 and shortest disjoint cycles. arXiv preprint arXiv:2106.00714, 2021.
- [dRA13] Nicolas de Rugy-Altherre. Determinant versus permanent: salvation via generalization? the algebraic complexity of the fermionant and the immanant. arXiv preprint arXiv:1309.2156, 2013.
- [DS] Eduardo Marques de Sá. On q-permanent expansions and a theorem on cycle surgery.
- [dS18] Eduardo Marques de Sá. Noncrossing partitions, noncrossing graphs, and q-permanental equations. Linear Algebra and its Applications, 541:36–53, 2018.
- [EG01] Bennett Eisenberg and BK Ghosh. A generalization of markov’s inequality. Statistics & probability letters, 53(1):59–65, 2001.
- [EM18] Lior Eldar and Saeed Mehraban. Approximating the permanent of a random matrix with vanishing mean, 2018.
- [ER98] Pavel Etingof and Vladimir Retakh. Quantum determinants and quasideterminants. arXiv preprint math/9808065, 1998.
- [FK06] Jürg Fröhlich and Thomas Kerler. Quantum groups, quantum categories and quantum field theory. Springer, 2006.
- [GM04] Gerald A Goldin and Shahn Majid. On the fock space for nonrelativistic anyon fields and braided tensor products. Journal of mathematical physics, 45(10):3770–3787, 2004.
- [GRASRA96] César Gómez, Martí Ruiz-Altaba, and Germán Sierra. Quantum groups in two-dimensional physics, volume 139. Cambridge University Press, Cambridge, 1996.
- [Har85] Werner Hartmann. On the complexity of immanants. Linear and Multilinear Algebra, 18(2):127–140, 1985.
- [HHEG20] Jonas Haferkamp, Dominik Hangleiter, Jens Eisert, and Marek Gluza. Contracting projected entangled pair states is average-case hard. Physical Review Research, 2(1):013010, 2020.
- [HJ12] Roger A Horn and Charles R Johnson. Matrix analysis. Cambridge university press, 2012.
- [Lal98] AK Lal. Inequalities for the q-permanent. ii. Linear algebra and its applications, 274(1-3):1–16, 1998.
- [Lan12] Serge Lang. Cyclotomic fields I and II, volume 121. Springer Science & Business Media, 2012.
- [Lan13] Serge Lang. Algebraic number theory, volume 110. Springer Science & Business Media, 2013.
- [Lip91] Richard Lipton. New directions in testing. Distributed computing and cryptography, 2:191–202, 1991.
- [LW12] Claude Levesque and Michel Waldschmidt. Approximation of an algebraic number by products of rational numbers and units. Journal of the Australian Mathematical Society, 93(1-2):121–131, 2012.
- [M+97] Meena Mahajan et al. Determinant: Combinatorics, algorithms, and complexity. Chicago Journal of Theoretical Computer Science, 1997(5), 1997.
- [Maj00] Shahn Majid. Foundations of quantum group theory. Cambridge university press, 2000.
- [MM11] Stephan Mertens and Cristopher Moore. The complexity of the fermionant, and immanants of constant width. arXiv preprint arXiv:1110.1821, 2011.
- [Neu13] Jürgen Neukirch. Algebraic number theory, volume 322. Springer Science & Business Media, 2013.
- [Nez21] Sepehr Nezami. Permanent of random matrices from representation theory: moments, numerics, concentration, and comments on hardness of boson-sampling. arXiv preprint arXiv:2104.06423, 2021.
- [PS90] V Pasquier and H Saleur. Common structures between finite systems and conformal field theories through quantum groups. Nuclear Physics B, 330(2-3):523–556, 1990.
- [R+76] Walter Rudin et al. Principles of mathematical analysis, volume 3. McGraw-Hill, New York, 1976.
- [RA13] Nicolas de Rugy-Altherre. Determinant versus permanent: salvation via generalization? In Conference on Computability in Europe, pages 87–96. Springer, 2013.
- [RS62] J Barkley Rosser and Lowell Schoenfeld. Approximate formulas for some functions of prime numbers. Illinois Journal of Mathematics, 6(1):64–94, 1962.
- [Spi20] Dylan Spivak. Immanants and their applications in quantum optics. PhD thesis, 2020.
- [TAG93] Hiroyuki Tagawa. q-analogues of determinants and symmetric chain decompositions. Tokyo Journal of Mathematics, 16(2):311–320, 1993.
- [TO92] Seinosuke Toda and Mitsunori Ogiwara. Counting classes are at least as hard as the polynomial-time hierarchy. SIAM Journal on Computing, 21(2):316–328, 1992.
- [Tod91] Seinosuke Toda. PP is as hard as the polynomial-time hierarchy. SIAM Journal on Computing, 20(5):865–877, 1991.
- [Val79a] Leslie G Valiant. Completeness classes in algebra. In Proceedings of the eleventh annual ACM symposium on Theory of computing, pages 249–261, 1979.
- [Val79b] Leslie G Valiant. The complexity of computing the permanent. Theoretical computer science, 8(2):189–201, 1979.
- [Vau75] RC Vaughan. Bounds for the coefficients of cyclotomic polynomials. Michigan Mathematical Journal, 21(4):289–295, 1975.
- [Wei98] Edwin Weiss. Algebraic number theory. Courier Corporation, 1998.
- [Wei04] Eric W Weisstein. Fermat's little theorem. https://mathworld.wolfram.com/, 2004.
- [Yan91] Kung-Wei Yang. q–determinants and permutations. The Fibonacci Quarterly, 29:160–163, 1991.
Appendix A Moments of -permanents and Anti-Concentration Conjecture
Anti-concentration not only plays an important role in Boson Sampling [AA11] but is also of independent interest in random matrix theory [EM18, Nez21]. In this section, we discuss the moments of -permanents of matrices with i.i.d. complex Gaussian entries and the anti-concentration conjecture for the -permanent.
First, we explicitly calculate the first moment of . Interestingly, the result does not depend on .
Lemma A.1.
For ,
Proof.
By direct calculation,
(86) |
∎
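The -independence in Lemma A.1 is consistent with a direct calculation from the definition: for with i.i.d. standard complex Gaussian entries, the cross terms between distinct permutations have zero expectation, so the second moment of the -permanent equals for every on the unit circle. Below is a rough Monte Carlo sketch of this (assumptions: q_permanent is the same hypothetical brute-force helper as in the earlier sketch, and the small parameters are chosen only for illustration).

import itertools
import math
import numpy as np

def inversions(perm):
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

def q_permanent(A, q):
    # Brute-force evaluation of Per_q(A) from the definition.
    n = A.shape[0]
    return sum(q ** inversions(s) * np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(2)
n, trials = 4, 5000
for q in [1, -1, 1j, np.exp(2j * np.pi / 5)]:
    second_moment = np.mean([
        abs(q_permanent((rng.standard_normal((n, n))
                         + 1j * rng.standard_normal((n, n))) / np.sqrt(2), q)) ** 2
        for _ in range(trials)])
    print(q, second_moment, math.factorial(n))   # estimates cluster near 4! = 24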
In fact, if we replace with a random matrix each of whose entries is independently sampled from any distribution with zero mean and unit variance, the second moment is still . Now we want to calculate . We first recall the computation of in [AA11].
Lemma A.2.
For , .
Proof.
By direct calculation,
(87) |
where
(88) |
If , .
To compute , we first provide some intuition. Suppose and , where each is a cycle of length greater than 1. Then either has and has the identity, or has the identity and has . This implies that , and there are choices of . For arbitrary , we can “shift” each permutation by multiplying on the right, reducing to the simpler case above. Since and have the same cycle type, the argument is symmetric with respect to swapping the roles of and (i.e., we can start with and get the same result).
More formally, if is of cycle type with , we may write in the form
(89) |
Then if , it holds that
(90) |
where
(91) |
Clearly, for every such that is of type , since there are choices of and is uniquely determined once is fixed.
Furthermore, for each fixed point , , and thus . Therefore, we have shown that
(92) |
and
(93) |
where is the number of fixed points, and is the number of cycles in . For ,
(94) |
Here we have applied the fact that . ∎
However, we cannot obtain an explicit formula for because when , the inversion number can be nonzero, namely . Nevertheless, we can still obtain the following result:
Lemma A.3.
For ,
The proof is simple: since and must be a positive number, we have
(95) |
More generally, we have the following inequality.
Lemma A.4.
For and ,
Due to the lack of an efficient formula for computing the permanent, proving the anti-concentration conjecture for the permanent is difficult. As we show in this paper, there is no efficient formula for the general -permanent either, so we expect that proving the anti-concentration conjecture for the -permanent is also hard. However, the higher moments of the -permanent are smaller than those of the permanent (while the first moment is the same). This hints that anti-concentration also holds for the -permanent. We have the following analogue of .
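As a quick empirical illustration of the comparison above, one can estimate the fourth moment for several values of on the unit circle and observe that the case dominates. The sketch below reuses the same hypothetical brute-force helper and deliberately small parameters, so it is only indicative.

import itertools
import numpy as np

def inversions(perm):
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

def q_permanent(A, q):
    n = A.shape[0]
    return sum(q ** inversions(s) * np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(3)
n, trials = 4, 5000
samples = [(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
           for _ in range(trials)]                 # one sample set, reused for each q
for q in [1, -1, 1j, np.exp(2j * np.pi / 5)]:
    fourth_moment = np.mean([abs(q_permanent(X, q)) ** 4 for X in samples])
    print(q, fourth_moment)   # the q = 1 estimate is expected to be the largest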
Theorem A.5 (Weak Anti-Concentration of -Permanent).
For
(96) |
Proof.
By Chebyshev’s inequality using and , the bound immediately follows. ∎
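For concreteness, here is a sketch of the Chebyshev step in our own notation: write $\mu_2 := \mathbb{E}\bigl[|\mathrm{Per}_q(X)|^2\bigr]$ and $\mu_4 := \mathbb{E}\bigl[|\mathrm{Per}_q(X)|^4\bigr]$ for the moments treated in Lemmas A.1 and A.4, and suppose $\mu_4 \le C \mu_2^2$ for some constant $C$ (the symbols $\mu_2$, $\mu_4$, $C$, $\delta$ are introduced only for this remark and need not match the notation of the theorem). Then for any $0 < \delta < 1$,
\[
  \Pr\Bigl[\,|\mathrm{Per}_q(X)|^2 \le (1-\delta)\,\mu_2\,\Bigr]
  \;\le\;
  \Pr\Bigl[\,\bigl|\,|\mathrm{Per}_q(X)|^2 - \mu_2\,\bigr| \ge \delta\,\mu_2\,\Bigr]
  \;\le\;
  \frac{\mu_4 - \mu_2^2}{\delta^2\,\mu_2^2}
  \;\le\;
  \frac{C-1}{\delta^2},
\]
so a bound on the fourth moment in terms of the squared second moment immediately controls how often $|\mathrm{Per}_q(X)|^2$ falls far below its mean.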
Appendix B Strong Autocorrelation Property
Consider a random variable with a real distribution with mean and variance . We denote it by . We define , with mean and variance , by shifting and scaling the random variable . We denote this distribution by . We pick such that
(97) | ||||
(98) | ||||
(99) |
We define
(100) |
Lemma B.1.
If are both continuous when , and are both continuous when , then
where are independent of for every .
Proof.
We first prove the first inequality:
In order to bound the difference, we use the mean value theorem on variable . Notice that . For all we have:
(101) |
Here is defined to be . Its existence is guaranteed by the continuity of and the following lemma.
Lemma B.2 (Extreme value theorem [R+76]).
A continuous function from a non-empty compact space to a subset of the real numbers attains a maximum and a minimum.
This finishes the proof of the first inequality.
The second inequality again follows from the triangle inequality:
Here we use the same trick (notice that ; without loss of generality, we can assume )
(102) |
Here is defined to be . ∎
We can define a complex random variable , where and are independent, so that . It will be helpful to think of each complex coordinate as two real coordinates, so that is a vector in .
Corollary B.3.
If we consider the complex version, composed of the real and , under the same conditions:
(103) | ||||
(104) | ||||
(105) |
We have:
In the final line, we use inequalities between the -norm and the -norm. Notice that for the -norm we view as a vector, but for the -norm we pack the entries into .
Notice that in [AA11], one can use rotational invariance to obtain a tighter bound for Gaussian matrices, but the bound above is sufficient for our purpose.