Randomly punctured Reed–Solomon codes achieve list-decoding capacity over linear-sized fields
Abstract
Reed–Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed-Solomon codes, a fundamental question in coding theory is determining if Reed–Solomon codes can optimally achieve list-decoding capacity.
A recent breakthrough by Brakensiek, Gopi, and Makam, established that Reed–Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly-punctured Reed–Solomon codes over an exponentially large field size , where is the block length of the code. A natural question is whether Reed–Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed–Solomon codes are list-decodable to capacity with field size . We show that Reed–Solomon codes are list-decodable to capacity with linear field size , which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size and code length cannot be bounded by an absolute constant.
Our techniques also show that random linear codes are list-decodable up to (the alphabet-independent) capacity with optimal list-size and near-optimal alphabet size , where is the gap to capacity. As far as we are aware, list-decoding up to capacity with optimal list-size was not known to be achievable with any linear code over a constant alphabet size (even non-constructively), and it was also not known to be achievable for random linear codes over any alphabet size.
Our proofs are based on the ideas of Guo and Zhang, and we additionally exploit symmetries of reduced intersection matrices. With our proof, which maintains a hypergraph perspective of the list-decoding problem, we include an alternate presentation of ideas from Brakensiek, Gopi, and Makam that more directly connects the list-decoding problem to the GM-MDS theorem via a hypergraph orientation theorem.
1 Introduction
An (error-correcting) code is simply a set of strings (codewords). In this paper, all codes are linear, meaning our code is a space of vectors over a finite field , for some prime power . A Reed–Solomon code [RS60] is a linear code obtained by evaluating low-degree polynomials over . More formally,
(1) |
The rate of a code is , which, for a Reed–Solomon code, is . Famously, Reed–Solomon codes are optimal for the unique decoding problem [RS60]: for any rate Reed–Solomon code, for every received word , there is at most one codeword within Hamming distance of ,111The Hamming distance between two codewords is the number of coordinates on which they differ. and this error parameter is optimal by the Singleton bound [Sin64].
In this paper, we study Reed–Solomon codes in the context of list-decoding, a generalization of unique-decoding that was introduced by Elias and Wozencraft [Eli57, Woz58]. Formally, a code is -list-decodable if, for every received word , there are at most codewords of within Hamming distance of .
It is well known that the list-decoding capacity, namely the largest fraction of errors that can be list-decoded with small lists, is [GRS22, Theorem 7.4.1]. Specifically, for , there are (infinite families of) rate codes that are list-decodable for a list-size as small as . On the other hand, for , if a rate code is list decodable, the list size must be exponential in the code length . Informally, a code that is list-decodable up to radius with list size , or even list size where is the code length, is said to achieve (list-decoding) capacity.
The list-decodability of Reed–Solomon codes is important for several reasons. Reed–Solomon codes are the most fundamental algebraic error-correcting code. In fact, all of the prior explicit constructions of codes achieving list-decoding capacity are based on algebraic constructions that generalize Reed–Solomon codes, for example, Folded Reed–Solomon codes [GR08, KRZSW18], Multiplicity codes [GW13, Kop15, KRZSW18], and algebraic-geometric codes [DL12, GX12, GX13, HRZW19]. Thus, it is natural to wonder whether and when Reed–Solomon codes themselves achieve list-decoding capacity. Additionally, all Reed–Solomon codes are optimally unique-decodable, so (equivalently) they are optimally list-decodable , making them a natural candidate for codes achieving list-decoding capacity. Further, capacity-achieving Reed–Solomon codes would potentially offer advantages over existing explicit capacity-achieving codes, such as simplicity and potentially smaller alphabet sizes (which we achieve in this work). Lastly, list-decoding of Reed–Solomon codes has found several applications in complexity theory and pseudorandomness [CPS99, STV01, LP20].
For all these reasons, the list-decodability of Reed–Solomon codes is well-studied. As rate Reed–Solomon codes are uniquely decodable up to the optimal radius given by the Singleton Bound, the Johnson-bound [Joh62] automatically implies that Reed–Solomon codes are -list-decodable for error parameter and list size . Guruswami and Sudan [GS99] showed how to efficiently list-decode Reed–Solomon codes up to the Johnson radius . For a long time, this remained the best list-decodability result (even non-constructively) for Reed–Solomon codes.
Since then, several results suggested Reed–Solomon codes could not be list-decoded up to capacity, and in fact, not much beyond the Johnson radius . Guruswami and Rudra [GR06] showed that, for a generalization of list-decoding called list-recovery, Reed–Solomon codes are not list-recoverable beyond the (list-recovery) Johnson bound in some parameter settings. Cheng and Wan [CW07] showed that efficient list-decoding of Reed–Solomon codes beyond the Johnson radius in certain parameter settings implies fast algorithms for the discrete logarithm problem. Ben-Sasson, Kopparty, and Radhakrishnan [BKR10] showed that full-length Reed–Solomon codes () are not list-decodable much beyond the Johnson bound in some parameter settings.
Since then, an exciting line of work [RW14, ST20, GLS+22, FKS22, GST22, BGM23, GZ23] has shown the existence of Reed–Solomon codes that could in fact be list-decoded beyond the Johnson radius. These works all consider combinatorial list-decodability of randomly punctured Reed–Solomon codes. By combinatorial list-decodability, we mean that the code is proved to be list-decodable without providing an algorithm to efficiently decode the list of nearby codewords. By randomly punctured Reed–Solomon code, we mean a code where are chosen uniformly over all -tuples of pairwise distinct elements of . Several of these works [RW14, FKS22, GST22] proved more general list-decoding results about randomly puncturing any code with good unique-decoding properties, not just Reed–Solomon codes.
In this line of work, a recent breakthrough of Brakensiek, Gopi, and Makam [BGM23] showed, using notions of “higher-order MDS codes” [BGM22, Rot22], that Reed–Solomon codes can actually be list-decoded up to capacity. In fact, they show, more strongly, that Reed–Solomon codes can be list-decoded with list size with radius , exactly meeting the generalized Singleton bound [ST20], resolving a conjecture of Shangguan and Tamo [ST20]. However, their results require randomly puncturing Reed–Solomon codes over an exponentially large field size , where is the block length of the code.
A natural question is how small we can take the field size in a capacity-achieving Reed–Solomon code. Brakensiek, Dhar, and Gopi [BDG22, Corollary 1.7, Theorem 1.8] showed that the exponential-in- field size in [BGM23] is indeed necessary to exactly achieve the generalized Singleton bound for — under the additional assumptions that the code is linear and MDS. These assumptions were removed in followup work [AGL24], which also generalized the result to all — but smaller field sizes remained possible if one allowed a small slack in the parameters. Recently, an exciting work of Guo and Zhang [GZ23] showed that Reed–Solomon codes are list-decodable up to capacity, in fact up to (but not exactly at) the generalized Singleton bound, with alphabet size .
1.1 Our results
1.1.0.0.1 List-decoding Reed–Solomon codes.
Building on Guo and Zhang’s argument, we show that Reed–Solomon codes are list-decodable up to capacity and the generalized Singleton bound with linear alphabet size , which is evidently optimal up to the constant factor. Our main result is the following.
Theorem 1.1.
Let , and be a prime power such that . Then with probability at least , a randomly punctured Reed–Solomon code of block length and rate over is average-radius list-decodable.
As in previous works like [BGM23, GZ23], Theorem 1.1 gives average-radius list-decodability, a stronger guarantee than list-decodability: for any distinct codewords and any vector , the average Hamming distance from to is at least . Taking in Theorem 1.1, it follows that Reed–Solomon codes achieve list-decoding capacity even over linear-sized alphabets.
Corollary 1.2.
Let and be a prime power such that . Then with probability at least , a randomly punctured Reed–Solomon code of block length and rate over is average-radius list-decodable.
The alphabet size in [GZ23] is . Our main contribution is improving their alphabet size from quadratic to linear. As a secondary improvement, we also bring down the constant factor from to . We defer the proof overview of Theorem 1.1 to Section 3.1 after setting up the necessary notions in Section 2.
In our proof of Theorem 1.1, we maintain a hypergraph perspective of the list-decoding problem, which was introduced in [GLS+22]. Section 2.2 elaborates on the advantages of this perspective, which include (i) more conpact notations, definitions, and lemma statements, (ii) our improved constant factor of , (iii) an improved alphabet size in our random linear codes result below (Theorem 1.3), and (iv) an alternate presentation of ideas from Brakensiek, Gopi, and Makam [BGM23] that more directly connects the list-decoding problem to the GM-MDS theorem [DSY14, Lov18, YH19] via a hypergraph orientation theorem (see Appendix A).
1.1.0.0.2 List-decoding random linear codes.
A random linear code of rate and length over is a random subspace of of dimension . List-decoding random linear codes is well-studied [ZP81, Eli91, GHSZ02, GHK11, Woo13, RW14, RW18, LW20, MRRZ+20, GLM+21, GM22, PP23] and is an important question for several reasons. First, finding explicit codes approaching list-decoding capacity is a major challenge, and random linear codes provide a stepping stone towards explicit codes: a classic result says that uniformly random codes achieve list-decoding capacity [Eli57, Woz58], and showing list-decodability of random linear codes can be viewed as a derandomization of the uniformly random construction. Mathematically, the list-decodability of random linear codes concerns a fundamental geometric question: to what extent do random subspaces over behave like uniformly random sets? In coding theory, list-decodable random linear codes are useful building blocks in other coding theory constructions [GI01, HW18]. Lastly, the algorithmic question of decoding random linear codes is closely related to the Learning With Errors (LWE) problem in cryptography [Reg09] and Learning Parity with Noise (LPN) problem in learning theory [BKW03, FGKP06].
The list-decodability of random linear codes is more difficult to analyze than uniformly random codes, because codewords do not enjoy the same independence as in random codes. Thus the naive argument that shows that random linear codes achieve list-decoding capacity [ZP81] gives an exponentially worse list size of than for random codes ( is the gap to the “-ary capacity”, , where is the -ary entropy function). Several works have sought to circumvent this difficulty [Eli91, GHSZ02, GHK11, Woo13, RW14, RW18, LW20, GLM+21] improving the list-size bound to , matching the list-size of uniformly random codes.
However, these results are more relevant for smaller alphabet sizes , and approaching the alphabet-independent capacity of is less understood. In this setting, uniformly random codes are, with high probability, list-decodable to capacity with optimal alphabet size 222This follows from the list-decoding capacity theorem [Eli57, Woz58]. Over -ary alphabets, the list-decoding capacity is given by , which is larger than when . and optimal list size .333For codes over smaller alphabets, the list size , where is the gap to capacity, is believed to be optimal, but a proof is only known for large radius [GV10]. However, for approaching the alphabet independent capacity, the list size is known to be optimal by the generalized Singleton bound [ST20]. However, it was not known whether random linear codes (or, in general, more structured codes) could achieve similar parameters. In particular, both of the following questions were open (as far as we are aware).
-
•
Are rate random linear codes -list-decodable with high probability? Previously, this was not known for any alphabet size , even alphabet size growing with the length of the code. Previously, the best list size for random linear codes list-decodable to radius was at least [GHK11, RW18].444[GHK11] appears to give a list-size bound of , and [RW18] appears to give a list size bound that is at least , and we need
-
•
Do there exist any linear codes (even non-constructively) over constant-sized (independent of ) alphabets that are -list-decodable?
Using the same framework as the proof of Theorem 1.3, we answer both questions affirmatively. We show that, with high probability, random linear codes approach the generalized Singleton bound, and thus capacity, with alphabet size close to the optimal.
Theorem 1.3.
For all , a random linear code over alphabet size and sufficiently large is with high probability -average-radius-list-decodable.
By taking , we see that random linear codes achieve capacity with optimal list size and near optimal alphabet size .
Corollary 1.4.
For all , a random linear code over alphabet size and sufficiently large is with high probability -average-radius-list-decodable.
The techniques developed in this work for the proof of Theorem 1.1 are important for obtaining the strong alphabet size guarantees of Theorem 1.3. One could also have adapted the proof of Guo and Zhang, but doing so in the same natural way would only yield an alphabet size of (see Section 4.4 for discussions). Further, our use of the hypergraph machinery, which gives a secondary improvement over [GZ23] in constant factor in the alphabet size in Corollary 1.2, gives the primary improvement in the alphabet size in Corollary 1.4 from to .
1.1.0.0.3 Alphabet size lower bounds.
Above, we saw that random linear codes achieve list-decoding capacity with optimal list-size and near-optimal alphabet size. A natural question, asked by Guo and Zhang, is how large the alphabet size needs to be for capacity-achieving Reed–Solomon codes. We showed that suffices, and by the list-decoding capacity theorem [Eli57, Woz58], we cannot have better than an exponential-type dependence on for subconstant .
For approaching capacity with constant , Ben-Sasson, Kopparty, and Radhakrishnan [BKR10] showed that, for any , there exist full-length Reed–Solomon codes that are not list-decodable much beyond the Johnson bound with list-sizes . Thus in order to achieve list-decoding capacity, one needs in some cases. However, while full-length Reed–Solomon codes could not achieve capacity, perhaps it was possible that Reed–Solomon codes over field size, say or even , could achieve capacity in all parameter settings. We observe that, as a corollary of [BKR10], such a strong guarantee is not possible. We show that, for any , there exist a constant rate and infinitely many field sizes such that all Reed–Solomon codes of length and rate over are not list-decodable to capacity with list size . The proof is in Appendix B.
Proposition 1.5.
Let for some positive integer . There exists infinitely many such that any Reed–Solomon code of length and rate is not -list-decodable.
Follow up work
The techniques in our paper have already been influential. In follow-up work, Brakensiek, Dhar, Gopi, and Zhang [BDG+23b] used our argument to prove that Algebraic Geometry (AG) codes achieve list-decoding capacity over constant-sized alphbaets. They prove this by combining our techniques with a generalized GM-MDS theorem, proved by Brakensiek, Dhar, Gopi [BDG23a].
2 Preliminaries
2.1 Basic notation
For positive integers , let denote the set . The Hamming distance between two vectors is the number of indices where . For a finite field , we follow the standard notation that denotes the ring of multivariate polynomials with variables over , and denotes the field of fractions of the polynomial ring . By abuse of notation, we let or to denote the sequence , and we let, for example, to denote . Given a matrix over the field of fractions and field elements , let denote the matrix over obtained by setting in .
2.2 Hypergraphs and connectivity
In this work, we maintain a hypergraph perspective of the list-decoding problem, which was introduced in [GLS+22]. We describe a bad list-decoding instance with a hypergraph where the bad codewords identify the vertices and the evaluation points identify the hyperedges (Definition 2.1). While prior works described a bad list-decoding instance by sets indicating the agreements of the codewords with the received word, this hypergraph perspective gives us several advantages:
-
1.
The constraints imposed by a bad list-decoding configuration yield a hypergraph that is weakly-partition-connected. This is a natural notion of hypergraph connectivity, which is well-studied in combinatorics [FKK03b, FKK03a, Kir03] and optimization [JMS03, FK09, Fra11, CX18], and which generalizes a well-known notion (-partition-connectivity) for graphs [NW61, Tut61].555The notion of weakly-partition-connected sits between two other well-studied notions: -partition-connected implies -weakly-partition-connected implies -edge-connected [Kir03]. Each of these three notions generalizes an analogous notion on graphs. On graphs, -partition-connected and -weakly-partition-connected are equivalent. This connection allows us to have more compact notation, definitions, and lemma statements.
- 2.
-
3.
For similar reasons, for random linear codes, the hypergraph perspective saves a factor of in the alphabet size exponent, improving from to in Theorem 1.3.
-
4.
With the hypergraph perspective, we can give a new presentation of the results in [BGM23] and more directly connect the list-decoding problem to the GM-MDS theorem [DSY14, Lov18, YH19], as the heavy-lifting in the combinatorics is done using known results on hypergraph orientations. This is done in Appendix A.
A hypergraph is given by a set of vertices and a set of (hyper)edges, which are subsets of the vertices . In this work, all hypergraphs have labeled edges, meaning we enumerate our edges by distinct indices from some set, typically , in which case we may also think of as a tuple . Throughout this paper, the vertex set is typically for some positive integer . The weight of a hyperedge is , and the weight of a set of hyperedges is simply .
All hypergraphs that we will consider in this work are agreement hypergraphs for a bad list-decoding configuration. See Figure 1 for an illustration.
Definition 2.1 (Agreement Hypergraph).
Given vectors , the agreement hypergraph has a vertex set and a tuple of hyperedges where .
A key property of hypergraphs that we are concerned with is weak-partition-connectivity.
Definition 2.2 (Weak Partition Connectivity).
A hypergraph is -weakly-partition-connected if, for every partition of the set of vertices ,
(2) |
where is the number of parts of the partition, and is the number of parts of the partition that edge intersects.
To give some intuition for weak partition connectivity, we state two of its combinatorial implications. First, if a graph is -weakly-partition-connected, then it is -edge-connected [Kir03], which, by the Hypergraph Menger’s (Max-Flow-Min-Cut) theorem [Kir03, Theorem 1.11], equivalently means that every pair of vertices has edge-disjoint (hyper)paths between them.666In general the converse is not true. Second, suppose we replace every hyperedge with an arbitrary spanning tree of its vertices (which we effectively do in Definition 2.6). The resulting (non-hyper)graph is -partition-connected,777In (non-hyper)graphs, -partition-connectivity and -weak-partition-connectivity are equivalent. which, by the Nash-Williams-Tutte Tree-Packing theorem [NW61, Tut61], equivalently means there are edge-disjoint spanning trees (this connection was used in [GLS+22]).
The key reason we consider weak-partition-connectivity is that a bad list-decoding configuration yields a -weakly-partition-connected agreement hypergraph.
Lemma 2.3 (Bad list gives -weakly-partition-connected hypergraph. See also Lemma 7.4 of [GLS+22]).
Suppose that vectors are such that the average Hamming distance from to is at most . That is, . Then, for some subset with , the agreement hypergraph of is -weakly-partition-connected.
Lemma 2.3 follows from the following result about weakly-particion-connected hypergraphs
Lemma 2.4.
Let be a hypergraph with at least two vertices and with total edge weight , for some positive integer . Then there exists a subset of at least two vertices such that the hypergraph is -weakly-partition-connected.
Proof.
Let be an inclusion-minimal subset with such that
(3) |
By assumption, satisfies (3), so exists (note that singleton subsets of satisfy (3) with equality). Let be the hypergraph with edge set . By minimality of , for all , we have . Now, consider a non-trivial partition of where for all (as otherwise (2) trivially follows). We have
(4) |
This holds for all partitions of , so is -weakly-partition-connected. ∎
Proof of Lemma 2.3.
Consider the agreement hypergraph of . The total edge weight is
(5) |
By Lemma 2.4, there exists a subset of at least two vertices such that — which is exactly the agreement hypergraph of — is -weakly-partition-connected. ∎
Remark 2.5.
The condition is needed later so that the reduced intersection matrix (defined below) is not a matrix, in which case the matrix does not help establish list-decodability.
2.3 Reduced intersection matrices: definition and example
As in [GZ23], we work with the reduced intersection matrix, though our proof should work essentially the same with a different matrix called the (non-reduced) intersection matrix, which was considered in [ST20, GLS+22, BGM23].
Definition 2.6 (Reduced intersection matrix).
The reduced intersection matrix associated with a prime power , degree , and a hypergraph is a matrix over the field of fractions . For each hyperedge with vertices , we add rows to . For , we add a row of length defined as follows:
-
•
If , then
-
•
If and , then
-
•
Otherwise, .
We typically omit and and write as is typically understood.
Example 2.7.
Recall the example edges of the agreement hypergraph in Figure 1.
The edges from contribute the following length rows to its reduced intersection matrix:
(6) |
Here is a “Vandermonde row”, and denotes the length- vector . Note that each edge contributes rows to the agreement matrix, and in particular does not contribute any rows.
Reduced intersection matrices arise by encoding all agreements from a bad list-decoding configuration into linear constraints on the message symbols (the polynomial coefficients). These constraints are placed into one matrix that we call the reduced intersection matrix. The following lemma implies that, if every reduced intersection matrix arising from a possible bad list-decoding configuration has full column rank when , the corresponding Reed–Solomon code is list-decodable.
Lemma 2.8 (RIM of agreement hypergraphs are not full column rank).
Let be an agreement hypergraph for , where are codewords of , not all equal to each other. Then the reduced intersection matrix does not have full column rank.
Proof.
By definition,
(7) |
where are the vectors of coefficients of the polynomials that generate codewords . Since these vectors are not all equal to each other, does not have full column rank. ∎
Remark 2.9 (Symmetries of reduced intersection matrices).
From this definition, it should be clear that we can divide the variables into at most classes such that variables in the same class are exchangeable with respect to the reduced intersection matrix : if and are the same hyperedge, then swapping and yields the same reduced intersection matrix (up to row permutations). This observation, which was alluded to in [GZ23], turns out to be crucial in our argument that allows us to improve the alphabet size in [GZ23] from quadratic to linear.
Remark 2.10.
The pairwise distinctness requirement in the definition of average-radius-list-decodability (see Section 1.1) is nonetheless crucial in the proof of Theorem 1.1, despite the weaker requirement in Lemma 2.8. That is because we will eventually apply Lemma 2.8 on the subcollection of codewords given from Lemma 2.3, which can potentially be arbitrary. The guarantee that this subcollection of codewords is not all equal to each other would then follow from pairwise distinctness of the codewords in the original list.
2.4 Reduced intersection matrices: full column rank
The following theorem shows that reduced intersection matrices of -weakly-partition-connected hypergraphs are nonsingular when viewed as a matrix over . This was essentially conjectured by Shangguan and Tamo [ST20] and essentially established by Brakensiek, Gopi, and Makam [BGM23], who conjectured and showed, respectively, nonsingularity of the (non-reduced) intersection matrix under similar conditions. By the same union bound argument as in [ST20, Theorem 5.8], Theorem 2.11 already implies list-decodability of Reed–Solomon codes up to the generalized Singleton bound over exponentially large fields sizes, which is [BGM23, Theorem 1.5]. For completeness, and to demonstrate how the hypergraph perspective more directly connects the list-decoding problem to the GM-MDS theorem, we include a proof of Theorem 2.11 in Appendix A.
Theorem 2.11 (Full column rank. Implicit from Theorem A.2 of [BGM23]).
Let and be positive integers and be a finite field. Let be a -weakly-partition-connected hypergraph with hyperedges and at least vertices. Then has full column rank over the field .
Remark 2.12.
We note that, [BGM23] assumes throughout their paper that the alphabet size is sufficiently large, but Theorem 2.11 follows from the weaker “ sufficiently large” version: For any fixed field size , take to be a sufficiently large power of . Then, by the “ sufficiently large” version of Theorem 2.11, matrix has full column rank over the field . Hence, the determinant of some square full-rank submatrix of is a nonzero polynomial in . The entries of can all be viewed as polynomials over , so the corresponding full-rank submatrix of has a determinant that is a nonzero polynomial in — symbolically, the determinants are the same polynomials, as and have the same characteristic. Hence, the matrix has full column rank over the field .
2.5 Reduced intersection matrix: row deletions
As in [GZ23], we consider row deletions from the reduced intersection matrix. The goal of this section is to establish Lemma 2.14, that the full-column-rank-ness of reduced intersection matrices are robust to row deletions.
Definition 2.13 (Row deletion of reduced intersection matrix).
Given a hypergraph and set , define to be the submatrix of obtained by deleting all rows containing a variable with .
The next lemma appears in a weaker form in [GZ23]. It roughly says that, given a reduced intersection matrix with some constant factor “slack” in the combinatorial constraints, we can omit a constant fraction of the rows without compromising the full-column-rank-ness of the matrix. Our version of this lemma saves roughly a factor of compared to the analogous lemma [GZ23, Lemma 3.11]. The reason is that the -weakly-partition-connected condition is more robust to these row deletions (by a factor of roughly ) than the condition in [GZ23]. As such, our proof is also more direct.
Lemma 2.14 (Robustness to deletions. Similar to Lemma 3.11 of [GZ23]).
Let be a -weakly-partition-connected hypergraph with . For all sets with , we have that is nonempty and has full column rank.
Proof.
By definition of the reduced intersection matrix , the matrix with row deletions is the matrix , where is the hypergraph obtained from by deleting for . By Theorem 2.11, it suffices to prove that is -weakly-partition connected. Indeed, consider any partition of . We have
(8) |
as desired. The first inequality holds because is -weakly-partition-connected, and, trivially, any edge touches at most parts of . ∎
3 Proof of list-decodability with linear-sized alphabets
3.1 Overview of the proof
By Lemma 2.8 and Lemma 2.3, every bad list-decoding configuration admits a weakly-partition-connected agreement hypergraph whose reduced intersection matrix does not have full column rank. Thus, to prove Theorem 1.1, it suffices to show that, with high probability, every such reduced intersection matrix has full column rank. The main technical lemma for this section is the one stated below. Our main result, Theorem 1.1, follows by applying Lemma 2.3 and Lemma 2.8 with Lemma 3.1, and taking a union bound over all possible agreement hypergraphs.
Lemma 3.1.
Let be a positive integer and . For each -weakly-partition-connected hypergraph with , we have, for ,
(9) |
At the highest level, the proof of Lemma 3.1 follows the same outline as [GZ23]. For every sequence of evaluation points for which does not have full column rank, we show that there is a certificate consisting of distinct indices in (Lemma 3.8), which intuitively “attests” to the failure of the matrix to be full column rank. We then show that, for any certificate , the probability that has certificate is exponentially small. (More precisely, it will at most be . See Corollary 3.12). We then show that there are not too many certificates (Corollary 3.10), and then union bound over the number of possible certificates to obtain the desired result (Lemma 3.1).
Our argument differs from [GZ23] in how we choose our certificates. The argument of [GZ23] allowed for up to certificates. Our argument instead only needs many certificates, which is much smaller when (the parameter regime of interest here) and overall allows us to save a factor of in the alphabet size. Our savings comes from leveraging that there are at most different “types” of hyperedges (see Remark 2.9), and thus at most different types of variables in the reduced intersection matrix . This observation was alluded to in [GZ23].888Guo and Zhang [GZ23] write “It is possible that achieving an alphabet size linear in n would require establishing and exploiting other properties of intersection matrices or reduced intersection matrices, such as an appropriate notion of exchangeability.” We found this prediction to be insightful and true. With this observation in mind, we assume, without loss of generality, that the edges of are ordered by their respective type (we can relabel the edges of , which effectively permutes the rows of ).
Our method of generating a certificate for the evaluation sequence (Algorithm 2) is similar to that of [GZ23] at a high level—with each certificate , we associate a sequence of submatrices of (Algorithm 1) that are entirely specified by as follows: since evaluating forces to not be full rank, then so will all of its submatrices. Thus if we sequentially ’reveal’ , then at some point, becomes singular exactly when we set — in fact, is defined as such, so that we select in that order, but we emphasize that can be computed from without knowing . Conditioned on being non-singular with , the probability that becomes singular when setting is at most : is uniformly random over at least field elements, and the degree of in the determinant of is at most (and the determinant is nonzero by definition). Running conditional probabilities in the correct order, we conclude that the probability that a particular certificate is generated is at most , just as in [GZ23].
Whereas [GZ23] pick any matrix that is obtained after removing the variables , we do a more deliberate choice of matrices by leveraging the symmetries of (Remark 2.9). First, we ensure that we can keep a “bank” of unused variables of each of the types. Then, starting with a full column rank submatrix of devoid of all variables in the “bank,” we start sequentially applying the evaluations . Whenever turns singular, we find that the evaluation is what ’caused’ it to become singular. We then go to the “bank” to find a variable of the same type as and “re-indeterminate” by replacing all instances of in with . That way, we ensure that is, in a sense, “reused.” Furthermore, we ensure , so that the matrix is now nonsingular, so we can keep going. Of course, if we end up reaching the end (i.e. is full column rank), then in fact, is full column rank, and so the evaluations were ‘good’ after all.
Otherwise, if the evaluations were ‘bad’, then the submatrix couldn’t have reached the end, and that can only happen if some specific type was completely exhausted from the bank. However, given the size of our initial bank, that must have meant that must have been “re-indeterminated” at least times. When that happens, we collect the indices that we gathered from this round, remove them from , and repeat the process again with a refreshed bank. Since we only need indices, then we end up doing at most rounds. Because each round yields a strictly increasing sequence of indices of length at least , then we up getting a certificate consisting of at most strictly increasing runs of total length , of which there are at most .
To be more concrete, when we generate the submatrix , we ensure that any variable appearing in has the same type as variables that are not in (but still in ). This creates a “bank” of variables of each type. Then, if was the point that made singular, we can get by replacing all copies of with some that is of the same type and in the “bank.” Since variables and are of the same type, they have analogous rows in the reduced intersection matrix , so this new matrix is still a submatrix of . Therefore, we can pick up where we left off with but with instead. That is, will in fact be full rank when we apply the evaluations . Thus the next index on which turns singular will be strictly greater than . We then repeat the process in , replacing with some that is in the “bank” and of the same type, getting , and so on. We can continue this process for steps because of the size of the bank of each type, so we get an increasing run of length in our certificate. After we run out of some type in our bank, we remove the used indices from and repeat the process again with a refreshed bank. This continues for times only, as we only need indices in the end.
We now finish the proof of Theorem 1.1, assuming Lemma 3.1. The rest of this section is devoted to proving Lemma 3.1.
Proof of Theorem 1.1, assuming Lemma 3.1.
By Lemma 2.3, if is not average-radius list-decodable, then there exists a vector and pairwise distinct codewords with such that the agreement hypergraph is -weakly-partition-connected. By Lemma 2.8, the matrix is not full column rank. Now, the number of possible agreement hypergraphs is at most . Thus by the union bound over possible agreement hypergraphs with Lemma 3.1, we have, for ,
(10) |
as desired. Here, we used that . ∎
3.2 Setup for proof of Lemma 3.1
We now devote the rest of this Section to proving Lemma 3.1.
3.2.0.0.1 Types.
For a hypergraph , the type of an index (or, by abuse of notation, the type of the variable , or the edge ) is simply the set . There are types, and by abuse of notation, we identify the types by the numbers in an arbitrary fixed order with a bijection . We say a hypergraph is type-ordered if the hyperedges are sorted according to their type: . Since permuting the labels of the edges of preserves the rank of (it merely permutes the rows of ), we can without loss of generality assume in Lemma 3.1 that is type-ordered.
3.2.0.0.2 Global variables.
Throughout the rest of the section, we fix a positive integer , parameter , and , a type-ordered -weakly-partition-connected hypergraph with . We also fix
(11) |
3.3 and : Basic properties
As mentioned at the beginning of this section, we design an algorithm, Algorithm 2, that attempts to generate a certificate for evaluation points . It uses Algorithm 1, a helper function that generates the associated square submatrices of . Below, we establish some basic properties of these algorithms.
First, we establish that the matrices outputted by GetMatrixSequence are well-defined.
Lemma 3.2 (Output is well-defined).
For all sequence of indices , if is the output of the function , then are well-defined.
Proof.
Next, we observe that GetMatrixSequence is an “online” algorithm.
Lemma 3.3 (Online).
Furthermore, is a deterministic function of , and it computes “online”, meaning depends only on for all (and is always the same matrix). In particular, is a prefix of .
Proof.
By definition and Lemma 3.2. ∎
Definition 3.4 (Refresh index).
Our first lemma shows that the new indices we are receiving from are in fact new.
Lemma 3.5 (New Variable).
Proof.
To show that the probability of a particular certificate is small (Lemma 3.11, Corollary 3.12), we crucially need that are pairwise distinct. The next lemma proves that this is always the case.
Lemma 3.6 (Distinct indices).
For any sequence of evaluation points , the output of is a sequence of pairwise distinct indices.
Proof.
By definition of at Line 2 of , variable must be in , so suffices to show that never contains any variable for . We induct on . If is a refresh index, this is true by definition. If not, let be the largest refresh index less than . By induction, are not in , so we just need to show (the new index replacing in at Line 1) is not any of . It is not any of because none of those indices are in by definition. It is not any of for , because is in , but is not, by Lemma 3.5 . ∎
3.4 Bad evaluation points admit certificates
Here, we establish Lemma 3.8, that if some evaluation points make not full column rank, then outputs a certificate. To do so, we first justify our matrix constructions, showing that the matrices in are in fact submatrices of .
Lemma 3.7 ( gives submatrices of ).
For all sequence of indices , if is the output of , then are submatrices of .
Proof.
We proceed with induction on . First, if is a refresh index, then is a submatrix of by definition. In particular, is a submatrix of , so the base case holds. Now suppose is not a refresh index and is a submatrix of . Matrix is defined by replacing all copies of with . To check that is a submatrix of , it suffices to show that
-
(i)
for each row of containing , replacing all copies of with gives another row of , and
-
(ii)
the variable does not appear in .
The first item follows from the fact that indices and are of the same type, so (i) holds by definition of types and (see also Remark 2.9). The second item is Lemma 3.5. Thus, is a submatrix of , completing the induction. ∎
We now show that any -tuple of bad evaluation points admits a certificate.
Lemma 3.8 (Bad evaluations points admit certificates).
If are evaluation points such that does not have full column rank, returns a certificate (rather than ).
Proof.
Suppose for contradiction that returns at iteration in the loop. Then there is no index such that is singular, so in particular, is nonsingular and thus has full column rank. By Lemma 3.7, is a submatrix of , so we conclude has full column rank. ∎
3.5 Bounding the number of possible certificates
In this section, we upper bound the number of possible certificates. The key step is to prove the following structural result about certificates.
Lemma 3.9 (Certificate structure).
Given a sequence of evaluation points such that is not full column rank, the return value satisfies for all but at most values .
Proof.
Let be the return of , and let be the associated matrix sequence. By Lemma 3.3, we have for . Recall an index is a refresh index if is defined on Line 1 rather than Line 1. The lemma follows from two claims:
-
(i)
If is not a refresh index, then .
-
(ii)
Any two refresh indices differ by at least .
To see claim (i), let be the largest refresh index less than . By definition of a refresh index, the set stays constant between when is defined and when is defined. From the definition of at Line 2 in , we know that
-
•
For the matrix is nonsingular.
-
•
The matrix is singular.
Suppose for contradiction that . (Note that by Lemma 3.6.) We contradict the first item by showing, using the second item, that is also singular. By the definition of , since is not a refresh index, is defined in Line 1. By construction of and , we know that . Thus, not only is obtained from by replacing all copies of with , but is also obtained by replacing all copies of with in . Moreover, the variable does not appear in by Lemma 3.5. So we conclude that, as is singular, so is .
Now we show claim (ii). Suppose and are consecutive refresh indices. If a variable of type appears in the matrix , there must be exactly indices of type in (if there were fewer, then would contain all indices of type , and the corresponding variables would not appear in ). Let be the type of index . Since is a refresh index, the number of indices of type among must therefore be . In particular, this means , as desired. ∎
Corollary 3.10 (Certificate count).
The number of possible outputs to is at most .
3.6 Bounding the probability of one certificate
The goal of this section is to establish Corollary 3.12, which states that the probability of obtaining a particular certificate is at most . The argument is implicit in [GZ23], but we include a proof for completeness.
Lemma 3.11 (Implicit in [GZ23]).
Let be pairwise distinct indices, and be submatrices of . Over randomly chosen pairwise distinct evaluation points , define the following events for :
-
•
is the event that is non-singular for all .
-
•
is the event that is singular.
The probability that all the events hold is at most .
Proof.
Note that the set of evaluation points for which events and occur depends only on and . Furthermore, each of the events and depends only on , , and the evaluation points. Thus, by relabeling the index , we may assume without loss of generality that . We emphasize that we are not assuming that the output of satisfies (this is not true). We are instead just choosing how we ’reveal’ our events and : starting with the smallest index in and ending with the largest index in it.
We have
(12) |
Note that depends only on , and depends only on . For any for which holds, we have that is a matrix in whose determinant is a nonzero polynomial of degree at most in each variable (the determinant contains at most rows including , each time with maximum degree ). In particular, at most values of can make the determinant zero since, viewing the determinant as a polynomial in variables with coefficients in , any single nonzero coefficient becomes zero on at most values of . Conditioned on , the field element is uniformly random over elements. Thus, we have, for all such that ,
(13) |
Since depends only on and depends only on , we have
(14) |
Combining with (12) gives the desired result. ∎
The key result for this section is a corollary of Lemma 3.11.
Corollary 3.12 (Probability of one certficiate).
For any sequence , over randomly chosen pairwise distinct evaluation points , we have
(15) |
Proof.
By Lemma 3.6, we only need to consider pairwise distinct indices , otherwise the probability is 0. Let . By Lemma 3.7, matrices are all submatrices of . Thus, Lemma 3.11 applies. Let be the events in Lemma 3.11. If , then the definition of in Line 2 of implies that events and both occur. By Lemma 3.11, the probability that all and hold is at most , hence the result. ∎
3.7 Finishing the proof of Lemma 3.1
Proof of Lemma 3.1.
Recall (Section 3.2) that we fixed to be a type-ordered -weakly-partition-connected hypergraph. By Lemma 3.8, if the matrix does not have full column rank, then is some certificate . The probability that is at most by Corollary 3.12. By Corollary 3.10, there are at most certificates. Taking a union bound over possible certificates gives the lemma. ∎
4 Random Linear Codes
In this section, we discuss how to modify the proof of Theorem 1.1 to give Theorem 1.3, list-decoding for random linear codes (RLCs). Our proof follows the roadmap in Figure 2. The proof is identical up to a few minor modifications, which we state here for brevity. Below, we state the same lemmas as in the proof for Reed–Solomon codes, adjusted for random linear codes, and we highlight the key differences in purple. We expect that our framework could be applied even more generally to show that other families of random codes — beyond randomly punctured Reed–Solomon codes and random linear codes — achieve list-decoding capacity with small alphabet sizes, assuming such codes satisfy an appropriate GM-MDS theorem.
4.1 Preliminaries: Notation and Definitions
The generator matrix of a random linear code has independent uniformly random entries in . To transfer the proof for list-decoding Reed–Solomon codes to list-decoding random linear codes, a key analogy is to think of the generator matrix as a matrix of distinct indeterminates , evaluated at independent and uniformly random field elements .
(16) |
We note that our randomly punctured Reed–Solomon code can also be viewed as an evaluation of , where is assigned where are random distinct field elements over . In this light, one might expect our framework can also apply, and indeed it does.
Accordingly, we use similar indexing shorthand, where the notation represents the indeterminates , and similarly for field elements . For field elements , we write to denote for and .
We again use the notion of an agreement hypergraph in Section 2.2, and Lemma 2.3 still holds. For each agreement hypergraph , we consider more general reduced intersection matrix , where the -Vandermonde-rows are instead the -th row of . More precisely,
Definition 4.1 (Reduced intersection matrix, Random Linear Codes, Analogous to Definition 2.6.).
The reduced intersection matrix associated with a hypergraph is a matrix over the field of fractions . For each hyperedge with vertices , we add rows to . For , we add a row of length defined as follows:
-
•
If , then
-
•
If and , then
-
•
Otherwise, .
4.2 Preliminaries: Properties of RLC Reduced Intersection Matrices
We have similar preliminaries for reduced intersection matrices of random linear codes.
Lemma 4.2 (RIM of agreement hypergraphs are not full column rank, Analogous to Lemma 2.8).
Let be an agreement hypergraph for , where are distinct codewords of the code generated by . Then the reduced intersection matrix does not have full column rank.
Proof.
Analogous to the proof of Lemma 2.8. ∎
Lemma 4.3 (RIM have full column rank, Analogous to Theorem 2.11).
Let be a -weakly-partition-connected hypergraph with hyperedges and at least vertices. Then has full column rank over the field .
Proof.
We note that the Reed–Solomon code reduced intersection matrix can be obtained from the random linear code reduced intersection matrix by setting the indeterminates , so Lemma 4.3 immediately follows from Theorem 2.11. We emphasize that, while Reed–Solomon codes require large alphabet sizes , Theorem 2.11 still holds for constant alphabet sizes (see Remark 2.12), so we can use it here. ∎
We remark that Lemma 4.3 can be proven directly by following the proof framework of Theorem 2.11 in Appendix A.3, but instead substitute the use of Theorem A.2 with an analogous GM-MDS theorem for Random Linear Codes, which can be found in Lemma 7 of [DSY15] (Lemma 7 of [DSY15] only implies Lemma 4.3 for to be sufficiently large, but again by Remark 2.12 the sufficiently large version of Lemma 4.3 implies the lemma for all ). That way, the proof of Theorem 1.3 relies only on the proof framework of Theorem 1.1 and not on any of its lemmas.
We again define row deletions for reduced intersection matrices.
Definition 4.4 (Row deletions, Analogous to Definition 2.13).
Given a hypergraph and set , define to be the submatrix of obtained by deleting all rows containing the row with .
Now we show that, as for Reed–Solomon codes, the full-column-rankness of reduced intersection matrices is robust to deletions.
Lemma 4.5 (Robustness to deletions, Analogous to Lemma 2.14).
Let be a -weakly-partition-connected hypergraph with . For all sets with , we have that is nonempty and has full column rank.
4.3 The proof
The proof of Theorem 1.3 follows similarly to the proof of Theorem 1.1. Our key lemma, analogous to Lemma 3.1 is to show that reduced intersection matrices of weakly-partition-connected hypergraphs are full column rank with high probability.
Lemma 4.6 (Analogous to Lemma 3.1).
Let be a positive integer and . For each -weakly-partition-connected hypergraph with , we have, for ,
(17) |
We highlight that our probability bound here is better than the one in Lemma 3.1 for Reed–Solomon codes. This is because (i) all indeterminates in our generator matrix (and thus, the reduced intersection matrix) appear with degree 1 (rather than degree up to ), and (ii) our indeterminates are assigned independently uniformly at random, rather than random distinct values. Thus, the probability of any particular square submatrix matrix being made singular with an assignment is at most , rather than : item (i) improves the numerator from to , and item (ii) improves the denominator from to . This improved probability bound means we can use a smaller alphabet size for random linear codes than for Reed–Solomon codes. Other than this difference, the rest of our proof follows analogously. We include some more details for completeness.
We start with the same setup in Section 3.2, defining types in the same way, and starting with a -weakly-partition-connected hypergraph that we assume without loss of generality is type-ordered. We again fix
(18) |
To prove Lemma 4.6, we similarly find a certificate for each singular reduced intersection matrix. This certificate is generated by an analogous algorithm, GetCertificateRLC, which uses an analogous helper function GetMatrixSequenceRLC. We show this certificate has the same three properties
-
1.
A bad generator matrix, namely a generator matrix for which the reduced intersection matrix is not full column rank, must yield a certificate.
-
2.
There are few possible certificates
-
3.
The probability that a random generator matrix yields a particular certificate is small.
We generate the certificate in a similar way. This time, instead of sequentially revealing the evaluation points, we sequentially reveal rows of the generator matrix, and indicates.
The first item is captured in the following Lemma.
Lemma 4.7 (Bad generator matrix admits certificate, Analogous to Lemma 3.8).
If are entries for the generator matrix such that does not have full column rank, returns a certificate (rather than ).
Proof.
Analogous to the proof of Lemma 3.8. ∎
Just as for Reed–Solomon codes, we obtain the same bound on the number of possible certificates.
Lemma 4.8 (Certificate count, Analogous to Corollary 3.10).
The number of possible outputs to is at most .
Proof.
Analogous to the proof of Corollary 3.10. ∎
Lastly, we obtain an upper bound on the probability of obtaining a particular certificate.
Lemma 4.9 (Probability of one certficiate, Analogous to Corollary 3.12).
For any sequence , over independent uniformly random , we have
(19) |
Lemma 4.9 is slightly different from the analogous result for Reed–Solomon codes, Corollary 3.12, so we provide a little more justification here. Similar to Corollary 3.12, Lemma 4.9 follows from a lemma analogous to Lemma 3.11.
Lemma 4.10 (Analogous to Lemma 3.11).
Let be pairwise distinct indices, and be submatrices of . Over random generator matrix entries , define the following events for :
-
•
is the event that is non-singular for all .
-
•
is the event that is singular.
The probability that all the events hold is at most .
Proof of Lemma 4.10.
The proof is similar to the proof of Lemma 3.11. Lemma 3.11 follows from combining Eqaution (13) with the appropriate conditional probabilities. This lemma follows the same approach. We again assume without loss of generality .
Here, we want, analogous to Equation (13), for all such that ,
(20) |
To see (20), consider the determinant of , a matrix in . View the determinant of as a polynomial in variables with coefficients in . It is nonzero because we assume holds, so there is some coefficient of the form that is nonzero. Since matrix has at most rows containing any variables among , each appearing with total degree 1, the total degree of in the determinant of is at most . Thus, the total degree of is at most . Hence, by the Schwarz-Zippel lemma, becomes zero with probability at most over random . Thus, the probability that holds is at most , giving (20).
Combining conditional probabilities as in Lemma 3.11 gives the result. ∎
Proof of Theorem 1.3.
By Lemma 2.3, if our random linear code generated by is not average-radius list-decodable, then there exists a vector and codewords with such that the agreement hypergraph is -weakly-partition-connected. By Lemma 4.2, the matrix is not full column rank. Now, the number of possible agreement hypergraphs is at most . Thus by the union bound over possible agreement hypergraphs with Lemma 4.6, we have, for ,
(21) |
as desired. Here, we used that . ∎
4.4 Technical comparison with [GZ23]
To prove that random linear codes achieved list-decoding capacity (Theorem 1.3), we extended the framework for showing that (randomly punctured) Reed–Solomon codes achieve list-decoding capacity over linear-sized fields (Theorem 1.1). It is possible to instead use the framework of Guo and Zhang [GZ23] to show a similar result. However, using the framework of Guo and Zhang in the same way would have only worked for alphabet size that is linear in , rather than, in our case, a near-optimal constant. Below, we explain why our new ideas were necessary for obtaining our near-optimal alphabet size.
In (21), our upper bound on the non-list-decodability probability is
(22) |
where , where is roughly the gap to capacity. Here, the term comes from a union bound over the number of possible hypergraphs, the term comes from a union bound over the number of possible certificates, and the term bounds the probability of a single certificate. We saw above that this probability is as long as .
If we applied the framework of [GZ23] to random linear codes, the number of possible certificates would instead be . Our bound on the non-list-decodability probability would then be
(23) |
For this to bound to be , we need to take , giving an alphabet size of . This would still have been a new result, as, previously, the Reed–Solomon codes of [GZ23] gave the smallest known alphabet size () of any linear code achieving list-decoding capacity with optimal list size . However, using our framework allows us to achieve a near-optimal constant list size of .
Acknowledgements
We thank Mary Wootters and Francisco Pernice for helpful discussions about [BGM23] and the hypergraph perspective of the list-decoding problem. We thank Karthik Chandrasekaran for helpful discussions about hypergraph connectivity notions and for the reference of Theorem 9 in [Fra11]. We thank Nikhil Shagrithaya and Jonathan Mosheiff for pointing out a mistake in the proof of Lemma 4.3 in an earlier version of the paper. We thank an anonymous reviewer for pointing out a mistake in the proof of Corollary A.4 in an earlier version of the paper.
References
- [AGL24] Omar Alrabiah, Venkatesan Guruswami, and Ray Li. Ag codes have no list-decoding friends: Approaching the generalized singleton bound requires exponential alphabets. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1367–1378. SIAM, 2024.
- [BDG22] Joshua Brakensiek, Manik Dhar, and Sivakanth Gopi. Improved field size bounds for higher order mds codes. arXiv preprint arXiv:2212.11262, 2022.
- [BDG23a] Joshua Brakensiek, Manik Dhar, and Sivakanth Gopi. Generalized gm-mds: Polynomial codes are higher order mds. arXiv preprint arXiv:2310.12888, 2023.
- [BDG+23b] Joshua Brakensiek, Manik Dhar, Sivakanth Gopi, et al. Ag codes achieve list decoding capacity over contant-sized fields. arXiv preprint arXiv:2310.12898, 2023.
- [BGM22] Joshua Brakensiek, Sivakanth Gopi, and Visu Makam. Lower bounds for maximally recoverable tensor codes and higher order mds codes. IEEE Transactions on Information Theory, 68(11):7125–7140, 2022.
- [BGM23] Joshua Brakensiek, Sivakanth Gopi, and Visu Makam. Generic Reed-Solomon codes achieve list-decoding capacity. In STOC 2023, to appear, 2023.
- [BKR10] Eli Ben-Sasson, Swastik Kopparty, and Jaikumar Radhakrishnan. Subspace polynomials and limits to list decoding of Reed-Solomon codes. IEEE Trans. Inform. Theory, 56(1):113–120, Jan 2010.
- [BKW03] Avrim Blum, Adam Kalai, and Hal Wasserman. Noise-tolerant learning, the parity problem, and the statistical query model. Journal of the ACM (JACM), 50(4):506–519, 2003.
- [CPS99] Jin-Yi Cai, Aduri Pavan, and D Sivakumar. On the hardness of permanent. In Annual Symposium on Theoretical Aspects of Computer Science, pages 90–99. Springer, 1999.
- [CW07] Qi Cheng and Daqing Wan. On the list and bounded distance decodability of Reed-Solomon codes. SIAM J. Comput., 37(1):195–209, April 2007.
- [CX18] Chandra Chekuri and Chao Xu. Minimum cuts and sparsification in hypergraphs. SIAM Journal on Computing, 47(6):2118–2156, 2018.
- [DL12] Zeev Dvir and Shachar Lovett. Subspace evasive sets. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 351–358, 2012.
- [DSY14] Son Hoang Dau, Wentu Song, and Chau Yuen. On the existence of mds codes over small fields with constrained generator matrices. In 2014 IEEE International Symposium on Information Theory, pages 1787–1791. IEEE, 2014.
- [DSY15] Son Hoang Dau, Wentu Song, and Chau Yuen. On simple multiple access networks. IEEE Journal on Selected Areas in Communications, 33(2):236–249, 2015.
- [Eli57] Peter Elias. List decoding for noisy channels. Wescon Convention Record, Part 2, Institute of Radio Engineers, pages 99–104, 1957.
- [Eli91] Peter Elias. Error-correcting codes for list decoding. IEEE Transactions on Information Theory, 37(1):5–12, 1991.
- [FGKP06] Vitaly Feldman, Parikshit Gopalan, Subhash Khot, and Ashok Kumar Ponnuswami. New results for learning noisy parities and halfspaces. In 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS’06), pages 563–574. IEEE, 2006.
- [FK09] András Frank and Tamás Király. A survey on covering supermodular functions. Research Trends in Combinatorial Optimization: Bonn 2008, pages 87–126, 2009.
- [FKK03a] András Frank, Tamás Király, and Zoltán Király. On the orientation of graphs and hypergraphs. Discrete Applied Mathematics, 131(2):385–400, 2003.
- [FKK03b] András Frank, Tamás Király, and Matthias Kriesell. On decomposing a hypergraph into k connected sub-hypergraphs. Discrete Applied Mathematics, 131(2):373–383, 2003.
- [FKS22] Asaf Ferber, Matthew Kwan, and Lisa Sauermann. List-decodability with large radius for Reed-Solomon codes. IEEE Transactions on Information Theory, 68(6):3823–3828, 2022.
- [Fra11] András Frank. Connections in combinatorial optimization, volume 38. Oxford University Press Oxford, 2011.
- [GHK11] Venkatesan Guruswami, Johan Håstad, and Swastik Kopparty. On the list-decodability of random linear codes. IEEE Trans. Inform. Theory, 57(2):718–725, Feb 2011.
- [GHSZ02] Venkatesan Guruswami, Johan Håstad, Madhu Sudan, and David Zuckerman. Combinatorial bounds for list decoding. IEEE Trans. Inform. Theory, 48(5):1021–1034, May 2002.
- [GI01] Venkatesan Guruswami and Piotr Indyk. Expander-based constructions of efficiently decodable codes. In Proceedings 42nd IEEE Symposium on Foundations of Computer Science, pages 658–667. IEEE, 2001.
- [GLM+21] Venkatesan Guruswami, Ray Li, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters. Bounds for list-decoding and list-recovery of random linear codes. IEEE Transactions on Information Theory, 68(2):923–939, 2021.
- [GLS+22] Zeyu Guo, Ray Li, Chong Shangguan, Itzhak Tamo, and Mary Wootters. Improved list-decodability and list-recoverability of Reed-Solomon codes via tree packings. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 708–719. IEEE, 2022.
- [GM22] Venkatesan Guruswami and Jonathan Mosheiff. Punctured low-bias codes behave like random linear codes. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 36–45. IEEE, 2022.
- [GR06] Venkatesan Guruswami and Atri Rudra. Limits to list decoding Reed–Solomon codes. IEEE Trans. Inform. Theory, 52(8):3642–3649, August 2006.
- [GR08] Venkatesan Guruswami and Atri Rudra. Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy. IEEE Transactions on Information Theory, 54(1):135–150, 2008.
- [GRS22] Venkatesan Guruswami, Atri Rudra, and Madhu Sudan. Essential coding theory. Draft available at https://cse.buffalo.edu/faculty/atri/courses/coding-theory/book/, 2022.
- [GS99] Venkatesan Guruswami and Madhu Sudan. Improved decoding of Reed–Solomon and algebraic-geometry codes. IEEE Transactions on Information Theory, 45(6):1757–1767, 1999.
- [GST22] Eitan Goldberg, Chong Shangguan, and Itzhak Tamo. Singleton-type bounds for list-decoding and list-recovery, and related results. In 2022 IEEE International Symposium on Information Theory (ISIT), pages 2565–2570. IEEE, 2022.
- [GV10] Venkatesan Guruswami and Salil Vadhan. A lower bound on list size for list decoding. IEEE Transactions on Information Theory, 56(11):5681–5688, 2010.
- [GW13] Venkatesan Guruswami and Carol Wang. Linear-algebraic list decoding for variants of reed–solomon codes. IEEE Transactions on Information Theory, 59(6):3257–3268, 2013.
- [GX12] Venkatesan Guruswami and Chaoping Xing. Folded codes from function field towers and improved optimal rate list decoding. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 339–350, 2012.
- [GX13] Venkatesan Guruswami and Chaoping Xing. List decoding Reed-Solomon, algebraic-geometric, and Gabidulin subcodes up to the Singleton bound. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pages 843–852, 2013.
- [GZ23] Zeyu Guo and Zihan Zhang. Randomly punctured Reed-Solomon codes achieve the list decoding capacity over polynomial-size alphabets. arXiv preprint arXiv:2304.01403, 2023.
- [HRZW19] Brett Hemenway, Noga Ron-Zewi, and Mary Wootters. Local list recovery of high-rate tensor codes and applications. SIAM Journal on Computing, 49(4):FOCS17–157, 2019.
- [HW18] Brett Hemenway and Mary Wootters. Linear-time list recovery of high-rate expander codes. Information and Computation, 261:202–218, 2018.
- [JMS03] Kamal Jain, Mohammad Mahdian, and Mohammad R Salavatipour. Packing steiner trees. In SODA, volume 3, pages 266–274, 2003.
- [Joh62] Selmer Johnson. A new upper bound for error-correcting codes. IRE Transactions on Information Theory, 8(3):203–207, 1962.
- [Kir03] Tamás Király. Edge-connectivity of undirected and directed hypergraphs. PhD thesis, Eötvös Loránd University, 2003.
- [Kop15] Swastik Kopparty. List-decoding multiplicity codes. Theory of Computing, 11(1):149–182, 2015.
- [KRZSW18] Swastik Kopparty, Noga Ron-Zewi, Shubhangi Saraf, and Mary Wootters. Improved decoding of folded Reed-Solomon and multiplicity codes. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 212–223. IEEE, 2018.
- [Lov18] Shachar Lovett. MDS matrices over small fields: A proof of the GM-MDS conjecture. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 194–199. IEEE, 2018.
- [LP20] Ben Lund and Aditya Potukuchi. On the list recoverability of randomly punctured codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020), volume 176, pages 30:1–30:11, 2020.
- [LW20] Ray Li and Mary Wootters. Improved list-decodability of random linear binary codes. IEEE Transactions on Information Theory, 67(3):1522–1536, 2020.
- [MRRZ+20] Jonathan Mosheiff, Nicolas Resch, Noga Ron-Zewi, Shashwat Silas, and Mary Wootters. LDPC codes achieve list decoding capacity. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pages 458–469. IEEE, 2020.
- [NW61] Crispin St. J. A. Nash-Williams. Edge-disjoint spanning trees of finite graphs. Journal of the London Mathematical Society, 1(1):445–450, 1961.
- [PP23] Aaron Putterman and Edward Pyne. Pseudorandom linear codes are list decodable to capacity. arXiv preprint arXiv:2303.17554, 2023.
- [Reg09] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. Journal of the ACM (JACM), 56(6):1–40, 2009.
- [Rot22] Ron M Roth. Higher-order mds codes. IEEE Transactions on Information Theory, 68(12):7798–7816, 2022.
- [RS60] Irving S. Reed and Gustave Solomon. Polynomial codes over certain finite fields. Journal of the Society for Industrial and Applied Mathematics, 8(2):300–304, 1960.
- [RW14] Atri Rudra and Mary Wootters. Every list-decodable code for high noise has abundant near-optimal rate puncturings. In David B. Shmoys, editor, Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 764–773. ACM, 2014.
- [RW18] Atri Rudra and Mary Wootters. Average-radius list-recoverability of random linear codes. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 644–662. SIAM, 2018.
- [Sin64] Richard Singleton. Maximum distance q-nary codes. IEEE Trans. Inform. Theory, 10(2):116–118, April 1964.
- [ST20] Chong Shangguan and Itzhak Tamo. Combinatorial list-decoding of Reed-Solomon codes beyond the Johnson radius. In Proceedings of the 52nd Annual ACM Symposium on Theory of Computing, STOC 2020, pages 538–551, 2020.
- [STV01] Madhu Sudan, Luca Trevisan, and Salil Vadhan. Pseudorandom generators without the XOR lemma. Journal of Computer and System Sciences, 62(2):236–266, 2001.
- [Tut61] William T. Tutte. On the problem of decomposing a graph into n connected factors. Journal of the London Mathematical Society, 1(1):221–230, 1961.
- [Woo13] Mary Wootters. On the list decodability of random linear codes with large error rates. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC ’13, pages 853–860, New York, NY, USA, 2013. ACM.
- [Woz58] John M. Wozencraft. List decoding. Quarterly Progress Report, Research Laboratory of Electronics, MIT, 48:90–95, 1958.
- [YH19] Hikmet Yildiz and Babak Hassibi. Optimum linear codes with support-constrained generator matrices over small fields. IEEE Transactions on Information Theory, 65(12):7868–7875, 2019.
- [ZP81] Victor Vasilievich Zyablov and Mark Semenovich Pinsker. List concatenated decoding. Problemy Peredachi Informatsii, 17(4):29–33, 1981.
Appendix A Alternate presentation of [BGM23]
Here, we include alternate presentations of some ideas from [BGM23]. Algebraically, our presentation is the same, but the hypergraph perspective streamlines combinatorial aspects of their ideas.
A.1 Preliminaries
Dual of Reed–Solomon codes.
It is well known that the dual of a Reed–Solomon code is a generalized Reed–Solomon code: Given positive integers and evaluation points , there exists nonzero such that the following matrix, called the parity-check matrix,
(24) |
satisfies if and only if .
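The multipliers making the dual a generalized Reed–Solomon code admit a quick computational check. The sketch below works over the small prime field F_13 with illustrative parameters (not taken from the paper) and uses the standard formula v_j = prod_{i != j} (a_j - a_i)^{-1} for the column multipliers; it then verifies that the resulting generalized Vandermonde matrix is a parity-check matrix for the Reed–Solomon code.

```python
p = 13                            # small prime field F_p
alphas = [0, 1, 2, 3, 4, 5]       # n = 6 distinct evaluation points
n, k = len(alphas), 3

def inv(x):
    return pow(x, p - 2, p)       # inverse mod p, valid since p is prime

# generator matrix of RS(n, k): row r is the evaluation of x^r at the points
G = [[pow(a, r, p) for a in alphas] for r in range(k)]

# standard column multipliers: v_j = prod_{i != j} (a_j - a_i)^{-1} mod p
v = []
for j in range(n):
    prod = 1
    for i in range(n):
        if i != j:
            prod = prod * (alphas[j] - alphas[i]) % p
    v.append(inv(prod))

# parity-check matrix: (n - k) x n generalized Vandermonde, H[r][j] = v_j a_j^r
H = [[v[j] * pow(alphas[j], r, p) % p for j in range(n)] for r in range(n - k)]

# every row of G must be orthogonal to every row of H
print(all(sum(G[r][j] * H[s][j] for j in range(n)) % p == 0
          for r in range(k) for s in range(n - k)))
```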
Generic Zero Patterns.
Following [BGM23], we leverage the GM-MDS theorem to establish list-decodability of Reed–Solomon codes. In this work, we more directly connect the list-decoding problem to the GM-MDS theorem using a hypergraph orientation lemma (introduced in the next section). Here, we review generic zero-patterns and the GM-MDS theorem. To keep the meaning of the variable “” consistent throughout the paper, we unconventionally state the definition of zero patterns and the GM-MDS theorem with rows instead of rows.
Definition A.1.
Given positive integers , an -generic-zero-pattern (GZP) is a collection of sets such that, for all ,
(25) |
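The displayed condition (25) did not survive extraction in this copy. For reference, the sketch below checks the standard generic-zero-pattern condition from the GM-MDS literature [DSY14], namely that every nonempty subfamily of the sets intersects in few coordinates. Since the paper states the definition with unconventional indexing, treat this as the conventional form rather than a transcription of (25).

```python
from itertools import combinations

def is_gzp(zero_sets, k):
    # standard condition: for every nonempty subfamily Lambda of the k sets,
    # |intersection over Lambda| <= k - |Lambda|
    sets = [frozenset(Z) for Z in zero_sets]
    for size in range(1, len(sets) + 1):
        for idx in combinations(range(len(sets)), size):
            common = frozenset.intersection(*(sets[i] for i in idx))
            if len(common) > k - size:
                return False
    return True

# a toy zero pattern with k = 3 sets over coordinates {1, ..., 5}
print(is_gzp([{1, 2}, {2, 3}, {4}], k=3))  # True
```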
GM-MDS Theorem.
As in [BGM23], we connect the list-decoding problem to the GM-MDS theorem. Here, we make the connection more directly.
Theorem A.2 (GM-MDS Theorem [DSY14, Lov18, YH19]).
Given and any generic zero-pattern , there exist pairwise distinct evaluation points and an invertible matrix such that, if is the parity-check matrix for (as in (24)), then achieves zero-pattern , meaning that whenever .
We note that the original GM-MDS theorem shows that the generator matrix of a (non-generalized) Reed–Solomon code achieves any generic zero pattern. Here, we state that the generator matrix of a generalized Reed–Solomon code achieves any generic zero pattern, which is an immediate corollary of the former result.
A.2 Hypergraph orientations
Our new perspective on the tools from [BGM23] leverages a well-known theorem about orienting weakly-partition-connected hypergraphs, stated below. This theorem is most explicitly stated in [Fra11], but it is implicit in [Kir03, FKK03a].
A directed hyperedge is a hyperedge with one vertex assigned as the head. All the other vertices in the hyperedge are called tails. A directed hypergraph consists of directed hyperedges. In a directed hypergraph, the in-degree of a vertex is the number of edges for which is the head. A path in a directed hypergraph is a sequence such that for all , vertex is a tail of edge and vertex is the head of edge . An orientation of an (undirected) hypergraph is obtained by assigning a head to each hyperedge, making every hyperedge directed.
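These definitions translate directly into a small data structure. The sketch below encodes each directed hyperedge as a head together with its set of tails, computes in-degrees, and checks the path condition; the example hypergraph is made up for illustration.

```python
from collections import Counter

# directed hyperedges as (head, tails); vertices are labeled 1..4
edges = [(1, frozenset({2, 3})),
         (2, frozenset({3, 4})),
         (1, frozenset({4}))]

# in-degree of a vertex: the number of hyperedges it heads
in_degree = Counter(head for head, _ in edges)
print(dict(in_degree))  # {1: 2, 2: 1}

def is_path(vertices, path_edges):
    # path condition: v_i is a tail of e_i and v_{i+1} is the head of e_i
    return len(vertices) == len(path_edges) + 1 and all(
        v in tails and vertices[i + 1] == head
        for i, (v, (head, tails)) in enumerate(zip(vertices, path_edges)))

print(is_path([3, 2, 1], [edges[1], edges[0]]))  # 3 -> 2 -> 1 is a valid path
```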
Theorem A.3 (Theorems 9.4.13 and 15.4.4 of [Fra11]).
A hypergraph is -weakly-partition-connected if and only if it has an orientation such that, for some vertex (the “root”), every other vertex has edge-disjoint paths to . (In [Fra11, Theorems 9.4.13 and 15.4.4], the property of having edge-disjoint paths to is called -edge-connected.)
We note that Theorem A.3 remains true if “to ” is replaced with “from ” and -weakly-partition-connected is replaced with another hypergraph notion called -partition-connected. The following corollary essentially captures (the hard direction of) [BGM23, Lemma 2.8].
Corollary A.4.
Let be a -weakly-partition-connected hypergraph with hyperedges and . Then there exist integers summing to such that taking copies of gives an -GZP.
Proof.
Take the orientation of and root vertex given by Theorem A.3. We now take our ’s as follows: for each non-root , let be the in-degree of vertex . For the root , let . Note that any other vertex has edge-disjoint paths to , so has in-degree at least and . Since there are hyperedges, the sum of all ’s is thus . We now check the generic zero pattern condition (25). Consider any nonempty multiset such that each vertex appears at most times. We claim:
(26) |
The first equality holds by definition of . The second equality holds because . The second inequality holds because by definition of . It remains to show the first inequality. We have two cases:
Case 1: the root is in . The number of hyperedges induced by the vertices is at most the sum of the in-degrees of , which is exactly by definition of .
Case 2: the root is not in . Fix an arbitrary vertex in . By our orientation of , vertex has edge-disjoint paths to . Each of these paths has an edge that “enters” , i.e., the head is in but the edge is not induced by . Thus, the number of edges induced by is at most , which is exactly by definition of . Hence, we have the first inequality. This covers all cases, proving (26) and completing the proof. ∎
A.3 Proof of Theorem 2.11
In this section, we reprove Theorem 2.11, which we need in this work.
Proof of Theorem 2.11.
It suffices to prove that has full column rank for some evaluation of for . Furthermore, by Remark 2.12, it also suffices to prove Theorem 2.11 for when . Indeed, that would then show that there is a square submatrix of of full column rank, which means that submatrix has nonzero determinant (in ), which means the corresponding square submatrix of also has a nonzero determinant (in ), so has full column rank.
Let be the edges of our -weakly-partition-connected hypergraph . By Corollary A.4, there is a generic zero pattern where, for all , the set is for some . By Theorem A.2, there exist pairwise distinct and a nonsingular matrix such that, for the parity check matrix of , the matrix achieves the zero pattern , meaning that whenever .
Suppose for the sake of contradiction that there is a nonzero vector such that . Let be such that and . Define to be such that , where
(27) |
We next show that, for any , for all . Let . Since , we have, by definition of , for ,
(28) |
(note this is true even if , since ).
Define a vector such that, for , we have , where is an arbitrary element of hyperedge (by the previous paragraph, the choice of does not matter). For each , we must have for all such that is a copy of ; the ’th row of is supported only on , and is zero on by definition of . Since for all , we have, for all and all such that is a copy of ,
(29) |
By construction, all are copies of some set , so we conclude . Since is invertible, we must have .
This means for some , so is the evaluation of a degree-less-than- polynomial. Since is -weakly-partition-connected, by considering the partition , there are at least hyperedges containing vertex in , so in at least indices . Since and are the evaluation of degree-less-than- polynomials, we must have . This holds for all , so we have (recall ), which contradicts our initial assumption that . ∎
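The final degree-counting step rests on a basic fact worth making explicit: two distinct polynomials of degree less than k over a field agree on at most k − 1 points, since their nonzero difference has degree less than k and hence at most k − 1 roots. A quick check over a small prime field, with arbitrarily chosen values:

```python
p, k = 101, 4
f = [3, 1, 4, 1]   # coefficients of f (lowest degree first), deg(f) < 4
g = [3, 1, 4, 2]   # same as f except the top coefficient

def ev(coeffs, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

# count the points of F_p where f and g agree; must be at most k - 1 = 3
agreements = sum(ev(f, x) == ev(g, x) for x in range(p))
print(agreements, "<=", k - 1)  # here f - g = -x^3, whose only root is x = 0
```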
Appendix B Alphabet size limitations
In this section, we establish Proposition 1.5. For positive integers , view as a vector space of dimension over base field . For a set , let
(30) |
An affine subspace is a set for some subspace of .
Lemma B.1 (Proposition 3.2 of [BKR10]).
Let be a subspace of . Then has the form
(31) |
where
As an immediate corollary, we have
Lemma B.2.
Let be an affine subspace of . Then has the form
(32) |
for .
Proof.
Since is an affine subspace, there exists such that is a subspace of . By Lemma B.1, we have is of the form
(33) |
for . In particular, is -linear, so
(34) |
Setting gives the desired form for . ∎
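The linearized form asserted by Lemmas B.1 and B.2 is easy to observe computationally. The sketch below builds GF(8) = GF(2)[a]/(a^3 + a + 1) with elements encoded as 3-bit integers, takes the GF(2)-subspace spanned by {1, a}, and expands the subspace polynomial, the product of (x − v) over all v in the subspace; the nonzero coefficients land only at degrees 1, 2, and 4, i.e., at powers of 2. The field and subspace are illustrative choices, not from the text.

```python
MOD = 0b1011                       # the irreducible polynomial a^3 + a + 1

def gf_mul(x, y):
    # multiplication in GF(8), elements as 3-bit integers
    r = 0
    while y:
        if y & 1:
            r ^= x
        x <<= 1
        if x & 0b1000:             # reduce as soon as degree 3 appears
            x ^= MOD
        y >>= 1
    return r

def poly_mul(f, g):
    # multiply polynomials over GF(8); coefficient lists, lowest degree first
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] ^= gf_mul(fi, gj)
    return out

V = [0b000, 0b001, 0b010, 0b011]   # the GF(2)-subspace span{1, a} inside GF(8)
P = [1]
for v in V:
    P = poly_mul(P, [v, 1])        # multiply by (x - v) = (x + v) in char 2
print(P)                           # [0, 6, 7, 0, 1]: nonzero only at degrees 1, 2, 4
```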
Lemma B.3 (Analogous to Lemma 3.5 of [BKR10]).
Let be a subset of of size . Let and be integers such that . Then there is a family of at least affine subspaces of dimension , such that each affine subspace satisfies , and for any two affine subspaces , the difference has degree at most .
Proof.
For every subspace of dimension , there exists such that the affine subspaces partition . By pigeonhole, there exists some such that . The number of subspaces of dimension is
(35) |
so there are at least affine subspaces with . For all such affine subspaces , the polynomial has the form by Lemma B.2. Among these affine subspaces , by the pigeonhole principle, for at least a fraction of these subspaces, their subspace polynomials have the same for . Let be this family of subspaces. The number of subspaces is at least , so is the desired family of affine subspaces. ∎
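For a concrete sense of the counts entering this pigeonhole argument (the displayed formula (35) is garbled in this copy): the number of d-dimensional subspaces of an m-dimensional space over F_q is the Gaussian binomial coefficient, and each subspace has q^{m−d} cosets (affine subspaces). These are standard facts, independent of the proof text; the sketch below just evaluates them for sample parameters.

```python
def gaussian_binomial(m, d, q=2):
    # number of d-dimensional subspaces of F_q^m
    num = den = 1
    for i in range(d):
        num *= q ** (m - i) - 1
        den *= q ** (d - i) - 1
    return num // den

m, d = 6, 3
subspaces = gaussian_binomial(m, d)                 # 1395 for q = 2
print(subspaces, "subspaces;", subspaces * 2 ** (m - d), "affine subspaces")
```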
Proof of Proposition 1.5.
Let be as in the statement of Proposition 1.5. Consider a Reed–Solomon code of length and rate over , where with sufficiently large. Let be the set of evaluation points. Apply Lemma B.3 with and . This gives a family of affine subspaces for which . Furthermore, for , the subspace polynomials each have roots, and agree on all coefficients of degree larger than . Let be an arbitrary element of . Then the polynomials are each of degree at most , and each agrees with on at least values of . Thus, our Reed–Solomon code is not -list-decodable, as desired. ∎