Multiple-Error-Correcting Codes for
Analog Computing on Resistive Crossbars
Abstract
Error-correcting codes over the real field are studied that can locate outlying computational errors when performing approximate computing of real vector–matrix multiplication on resistive crossbars. Prior work has concentrated on locating a single outlying error; in this work, several classes of codes are presented that can handle multiple errors. It is first shown that one of the known constructions, which is based on spherical codes, can in fact handle multiple outlying errors. A second family of codes is then presented whose parity-check matrices are sparse and disjunct; such matrices have been used in other applications as well, especially in combinatorial group testing. In addition, a certain class of the codes obtained through this construction is shown to be efficiently decodable. As part of the study of sparse disjunct matrices, this work also contains improved lower and upper bounds on the maximum Hamming weight of the rows in such matrices.
Index Terms:
Fault-tolerant computing, linear codes over the real field, vector–matrix multiplication, sparse group testing, disjunct matrices with limited row weights
I Introduction
Vector–matrix multiplication is a computational task found in numerous applications, including machine learning (e.g., deep learning) and signal processing. Circuits for vector–matrix multiplication must deliver high computational throughput while keeping energy consumption low and the physical footprint compact. These criteria have prompted recent proposals to incorporate resistive memory technology into analog computing architectures.
Let be a row -vector and be an matrix—both with (nonnegative) entries in or . In current implementations of vector–matrix multiplication [1],[9],[10],[14],[22], the matrix is realized as a crossbar of row conductors and column conductors with programmable nano-scale resistors at the junctions. The resistor at the junction is set to have conductance that is proportional to the entry of . Each entry of is converted into a voltage level that is proportional to and fed to the corresponding row conductor. Then the product , carried out over the real field , can be computed by reading the currents at the column conductors. Negative entries in or can be accommodated by duplication of the circuit.
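As a back-of-the-envelope illustration of the readout just described (the function name and values below are ours, not taken from the cited implementations), the current read at each column conductor amounts to an inner product of the input vector with the corresponding matrix column:

```python
def crossbar_multiply(u, A):
    """Model of an analog crossbar: the current read at column j is the
    sum of (input voltage) x (junction conductance) along that column."""
    n, m = len(A), len(A[0])
    assert len(u) == n
    # current at column j = sum_i u[i] * A[i][j]
    return [sum(u[i] * A[i][j] for i in range(n)) for j in range(m)]

u = [1.0, 2.0, 0.5]
A = [[1, 0],
     [0, 1],
     [2, 2]]
print(crossbar_multiply(u, A))  # [2.0, 3.0]
```

Negative entries, as noted above, would be handled by duplicating the circuit rather than by signed conductances.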
Recently, the second author proposed two classes of coding schemes to locate computational errors under two distinct scenarios: exact integer vector–matrix multiplication [18] and approximate real vector–matrix multiplication [19]. We next describe the second scenario, as it will be the subject of this work as well.
In the model described in [19], the ideal computation may be distorted by two types of errors, which lead to a read vector
(1)
where . The entries of are all within the interval for some prescribed threshold , representing small computational errors that are tolerable, while the entries of represent outlying errors that may be caused by events such as stuck cells or short cells in the array (and may have large magnitudes). The goal is to design a coding scheme that allows to locate all the non-zero entries of that are outside an interval , for the smallest , provided that the number of outlying errors does not exceed a prescribed number . A more general setting includes the option of detecting additional errors and, as shown in [19], in this case the value plays a role when analyzing the correction capability of a coding scheme.
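A toy numerical instance of the model in (1) may clarify the two error types; all names and values here are illustrative assumptions (a tolerance threshold `delta` for the small errors and a locating threshold `Delta` for the outlying ones):

```python
delta = 0.01                            # assumed tolerance threshold
ideal = [2.0, 3.0, 1.0, 4.0]            # the exact product of input and matrix
e = [0.005, -0.01, 0.002, 0.0]          # tolerable errors, each within [-delta, delta]
f = [0.0, 0.0, 5.0, 0.0]                # one sparse outlying error (large magnitude)
y = [p + a + b for p, a, b in zip(ideal, e, f)]   # the read vector, as in (1)

assert all(abs(x) <= delta for x in e)
Delta = 1.0                             # assumed locating threshold
support = [j for j, v in enumerate(f) if abs(v) > Delta]
print(support)  # [2]
```

The decoding problem studied in the paper is to recover this support set from `y` alone, using redundancy built into the matrix.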
The encoding scheme presented in [19] can be characterized by a linear code over : we allocate columns of the matrix for redundancy so that each row of forms a codeword of . Then the result of the multiplication of any input real row vector by the matrix is also a codeword of .
In crude terms (with more details to be provided in Section II), the required condition from the linear code is that it has a decoder that locates all the outlying errors of magnitude above , whenever the Hamming weight of does not exceed ; moreover, if the decoder returns a set of locations (rather than just detects errors), then should be nonzero at all these locations. Linear codes over which satisfy this condition are referred to as analog error-correcting codes.
For the case (which includes the single error location/detection cases, i.e., ), code constructions were proposed in [19] for several trade-offs between the redundancy and the smallest attainable ratio . One of the constructions for has a sparse parity-check matrix over and attains , for every even redundancy ; another construction has a parity-check matrix that forms a spherical code and attains with .
In this work, we present several classes of codes over for a wide range of values , and compute upper bounds on the attainable ratios , in terms of , , and ; see Table I. When , our bounds coincide with those presented in [19],[20]. One of the classes is actually the spherical code scheme of [19] when constructed with redundancy : we show that these codes can still attain yet for a wide range of . In our analysis we make use of the restricted isometry property (and a variant thereof) of matrices of low coherence—a tool which is widely used in compressed sensing [2],[3].
A second class of codes to be presented is based on disjunct matrices with limited row weights—a notion that has been applied, inter alia, in combinatorial group testing [11]. Employing the known construction of disjunct matrices of [11], for any such that is a prime power and , our codes attain with .
Our study also includes a new family of disjunct matrices (which, in turn, can then be employed in our code construction mentioned above). Specifically, for any positive integer such that is a prime power, we construct optimal disjunct matrices with maximum row weight, achieving the lower bound on the number of rows as stated in [11]; formerly, such disjunct matrices were exclusively established for . Moreover, by deriving a new lower bound on the number of rows, we show that the construction in [11] of disjunct matrices with maximum row weight is asymptotically optimal.
The paper is organized as follows. We begin, in Section II, by providing notation and known results used throughout the paper. This section also contains our new lower bound on the number of rows of a disjunct matrix with limited row weights.
In Section III, we analyze the spherical code construction and establish its performance for a wide range of values .
In Section IV, we present the code construction that is based on disjunct matrices with limited row weights. Efficient decoding algorithms to locate the outlying errors are also presented.
In Section V, we present our new family of optimal disjunct matrices. Employing these matrices in our code construction, for any (fixed) rational number they attain and for infinitely many values of . Interestingly, these parameters align with those of the single-error-correcting codes in [19] (wherein can be any even integer such that and ).
II Preliminaries
For integers , we denote by the integer subset . We will use the shorthand notation for , and we will typically use to index the entries of vectors in . Similarly, the entries of an matrix will be indexed by , and and will denote, respectively, row and column in . For a subset , the notation stands for the submatrix of that is formed by the columns that are indexed by .
Unless specified otherwise, all logarithms are taken to base .
II-A Analog error-correcting codes
Given , let
be the set of all tolerable error vectors with threshold , where stands for the infinity norm . For , define
In particular, is the ordinary support of . We use to denote the Hamming weight of . The set of all vectors of Hamming weight at most in is denoted by .
Let be a linear code over . A decoder for is a function which returns a set of locations of outlying errors or an indication that errors have been detected. Given and prescribed nonnegative integers and , we say that the decoder corrects errors and detects additional errors with respect to the threshold pair , or that is a -decoder for , if the following conditions hold for every as in (1), where , , and .
- (D1) If then .
- (D2) If then .
Let be a nonzero vector in and let be a permutation on such that
Given an integer , the -height of , denoted by , is defined as
and we formally define . For a linear code over , its -height, denoted by , is defined by
The minimum Hamming distance of , denoted by , can be related to by
(2)
Theorem 1 motivated [19] to define, for every , the expression
(3)
so that is the smallest ratio for which there is a -decoder for . Equivalently, is the smallest such that there is a -decoder for . Thus, given and , our aim is to construct linear codes over with both and redundancy as small as possible.
For the case , a characterization of and was presented in [19] in terms of the parity-check matrix of . In the next proposition, we present a generalization of that characterization to any . Given a parity-check matrix of over , let
(4)
and
Note that is the set of all the syndrome vectors (with respect to ) that can be obtained when there are no outlying errors, assuming that . Also, for let
(6)
i.e., consists of all the vectors such that both and .
Proposition 2.
Given a linear code over , let be a parity-check matrix of and let . Then
(7)
Proof:
We first show that is contained in the minimand set in (7). Assume to the contrary that there is a vector such that , namely, for some . Then and, so,
This, in turn, implies
which is a contradiction.
We next show that is indeed the minimum of the set in (7). Assuming to the contrary that this set contains some , there is a nonzero codeword such that
Without loss of generality we can assume that
where and (thus) . Define the vectors as follows:
Then and , namely, . On the other hand , which means that is not in the minimand in (7), thereby reaching a contradiction.
Propositions 9 and 8 in [19] are special cases of Proposition 2 for and , respectively. Proposition 2 holds (vacuously) also when : in this case the minimand in (7) is empty (since contains nonzero codewords in with arbitrary infinity norms), while (from (2)).
We end this subsection by mentioning two of the constructions for that were presented in [19].
Theorem 3 ([19, Proposition 6]).
Let be an matrix over which satisfies the following three conditions:
1. all columns of are distinct,
2. each column in contains exactly two nonzero entries, the first of which is a 1, and
3. each row has Hamming weight or .
(In particular, these conditions require that .) The linear code over with a parity-check matrix satisfies .
When is even, the inequality is also sufficient for having a matrix that satisfies the conditions of the theorem [12].
A second construction is presented in [19] that is based on spherical codes. The construction will be recapped in Section III, and the next theorem summarizes its properties.
Theorem 4 ([20, Proposition 5]).
There exists a linear code over with , whenever is bounded away from (above) .
II-B Disjunct matrices
Let and let . An matrix over is called -disjunct if the union of the supports of any columns of does not contain the support of any other column. In other words, for any column index and a subset of additional column indexes there is a row index such that while for all . (Equivalently, every submatrix of contains rows that form the identity matrix.)
A -disjunct matrix is a -disjunct matrix whose rows all have weights bounded from above by .
Disjunct matrices play a crucial role in the area of group testing, which studies how to identify a set of at most positive items from a batch of total items. The basic strategy of group testing is to group the items into several tests, i.e., some subsets of items. In each test, a positive outcome indicates that at least one of the items included in this test is positive and a negative outcome indicates that all items included are negative. A disjunct matrix describes a nonadaptive group testing scheme: we use the tests to index the rows and use items to index the columns. Then the th test contains the th item if and only if . It is not very difficult to see that the -disjunct property ensures that this testing scheme can identify all the positive items as long as their number is at most .
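The disjunct property and the resulting naive group-testing decoder can be sketched by brute force on small parameters; the test matrix below is our own toy example (columns are all weight-2 binary words of length 4), not one from the constructions discussed next:

```python
from itertools import combinations

def is_disjunct(M, d):
    """True iff no column's support is covered by the union of the
    supports of any d other columns (the d-disjunct property)."""
    m = len(M[0])
    cols = [{i for i, row in enumerate(M) if row[j]} for j in range(m)]
    return all(not cols[j] <= set().union(*(cols[k] for k in others))
               for j in range(m)
               for others in combinations([k for k in range(m) if k != j], d))

def decode(M, outcomes):
    """Naive nonadaptive group-testing decoder: an item is declared
    positive iff every test that contains it came out positive."""
    m = len(M[0])
    return [j for j in range(m)
            if all(outcomes[i] for i, row in enumerate(M) if row[j])]

# toy 4x6 test matrix: item j is placed in the pair of tests pairs[j]
pairs = list(combinations(range(4), 2))
M = [[1 if i in pairs[j] else 0 for j in range(6)] for i in range(4)]

positives = {3}
outcomes = [1 if any(M[i][j] for j in positives) else 0 for i in range(4)]
print(is_disjunct(M, 1), decode(M, outcomes))  # True [3]
```

This matrix is 1-disjunct but not 2-disjunct, so the decoder is guaranteed to succeed only for a single positive item.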
The first explicit construction of disjunct matrices was proposed by Kautz and Singleton [13]. Their construction uses a Reed–Solomon (RS) outer code concatenated with binary unit vectors and requires tests, which matches the best known lower bound, , in [7],[8] when for some fixed . Subsequently, Porat and Rothschild [17] proposed another explicit construction, which is similar to the Kautz–Singleton construction but uses a code meeting the Gilbert–Varshamov (G–V) bound as the outer code. Their construction achieves and outperforms the Kautz–Singleton construction in the regime where .
More recently, motivated by practical applications in group testing and wireless communication, Inan et al. investigated disjunct matrices with constraints on either the maximal row weight (i.e., -disjunct matrices) or the maximal column weight [11]. In the context of this paper, we focus on -disjunct matrices and demonstrate in Section IV that -disjunct matrices can be used to construct analog error-correcting codes.
Inan et al. first examined the Kautz–Singleton construction and the Porat–Rothschild construction and computed the maximum row weight of the corresponding disjunct matrices.
Theorem 5 ([11, Theorems 2 and 3]).
The Kautz–Singleton construction yields a -disjunct matrix with constant row weight and
The Porat–Rothschild construction yields a -disjunct matrix where and .
In the Porat–Rothschild construction, the number of rows, , meets the lower bound when is fixed. The following result shows that in a -disjunct matrix with rows one must have ; so, in a regime where is fixed, both and in the Porat–Rothschild construction meet their respective lower bounds.
Lemma 6.
Let be a -disjunct matrix, where for some fixed . Then .
Proof:
Since is -disjunct, it cannot contain two identical columns and, so, . Let be such that , where is the binary entropy function. Then
Hence, there are at least columns in each of which has weight at least . By counting the number of 1s in , we get that
which implies that .
Inan et al. proved the following generic lower bound on the number of rows of a -disjunct matrix.
Theorem 7 ([11, Theorem 8]).
A -disjunct matrix must satisfy
They also modified the Kautz–Singleton construction by changing the dimension of the outer RS code and obtained the following result.
Theorem 8 ([11, Theorem 8]).
Let , let be a prime power, and set
Also, let be such that . The Kautz–Singleton construction yields a -disjunct matrix with constant row weight and
Substituting in Theorem 8 yields a construction for with , which, in view of Theorem 7, is optimal with respect to . In Section V, for any such that is a prime power, we construct optimal -disjunct matrices with number of rows .
We end this section with a new lower bound on the number of rows of -disjunct matrices; this bound, in turn, will imply that for any (fixed) , the matrices in Theorem 8 are asymptotically optimal when . We use the following terms (as defined in the proof of Theorem 4 in [11]). In an binary matrix , a row is said to be private for a column if row contains a 1 only at column . Similarly, a private set for column is defined as a subset such that for any .
Theorem 9.
Let be such that . Any -disjunct matrix must satisfy
In particular, if , then
Proof:
Let be a -disjunct matrix where . Consider the columns that have weight and denote their number by . Since is -disjunct, each of these columns must have a private row; hence, . Remove these columns along with the corresponding private rows and let be the resulting matrix. Clearly, is -disjunct and each column in has weight.
Next, consider the columns of that have weight and denote their number by . Since is -disjunct, each of these columns must have a private set of size at most . Note that these private sets cannot be nested. If , it follows from the Lubell–Yamamoto–Meshalkin inequality (see [15]) that ; if , it follows from Sperner’s theorem that . Hence, . We remove these columns from and count the number of 1s in the resulting matrix in two ways; doing so, we get
which implies that
III The Spherical-Code Construction: Locating Multiple Errors
When , the spherical code construction of [19] yields a linear code over with redundancy and with . In this section, we use Proposition 2 to analyze the multiple-error-correcting capability of . In particular, we show that for any fixed , we still have .
We first recap the construction. Let be a linear code over which satisfies the following two properties:
- (B1) contains the all-one codeword, and
- (B2) .
Let and let be the set of the codewords of whose first entry is a . Let be the matrix over whose columns are obtained from the codewords in by replacing the entries by . The code is defined as the code over with the parity-check matrix .
Remark 1.
Properties (B1)–(B2) imply that has a generator matrix with an all-one row and with columns that are all distinct. This, in turn, requires that . We will in fact assume that the latter inequality is strict (in order to have ), in which case .
Remark 2.
In what follows, we will also use codes which—in addition to satisfying properties (B1)–(B2)—attain the G–V bound, i.e.,
where is the binary entropy function. E.g., when is a power of , the construction of a generator matrix of such a code can start with the rows of the generator matrix of the first-order binary Reed–Muller code (thereby guaranteeing properties (B1)–(B2)), followed by iterations of adding rows that are within distance from the linear span of the already-selected rows. (As shown in [17], this process can be carried out by a deterministic algorithm in time .)
The property of guarantees that any two rows of are orthogonal which, in turn, implies that
Equivalently,
(8)
where and are as defined in (4)–(II-A). The minimum Hamming distance of and property (B1) jointly imply that for any two distinct columns and in ,
(9)
where is the angle between and . Then, using geometric arguments, it is shown in [19] that
As argued in [19], we can now select to be a linear code over that satisfies properties (B1)–(B2) with both and bounded away from , in which case the code has and .
Turning now to , we make use of the following concepts from the theory of compressed sensing [2],[3],[4],[6]. Let be an matrix over and let and . We say that satisfies the restricted isometry property (RIP) of order with constant , if for every ,
In what follows we concentrate on matrices whose columns are unit vectors, i.e., for all . For such matrices, we define the coherence by
Proposition 10 ([2, Proposition 1]).
Let be an matrix over with columns that are unit vectors and with coherence , and let be such that . Then satisfies the RIP of order with constant .
Under the conditions of Proposition 10, for every we then have
(10)
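The coherence of a matrix with unit-norm columns, as used in Proposition 10, can be computed directly; a minimal sketch on a hypothetical 2-by-3 matrix (our example, chosen so that two columns are orthogonal and the third is at 45 degrees to each):

```python
from math import sqrt

def coherence(H):
    """Largest absolute inner product between two distinct columns,
    assuming all columns are unit vectors."""
    m = len(H[0])
    cols = [[row[j] for row in H] for j in range(m)]
    for c in cols:  # sanity check: columns should be (close to) unit norm
        assert abs(sum(x * x for x in c) - 1.0) < 1e-9
    return max(abs(sum(a * b for a, b in zip(cols[i], cols[j])))
               for i in range(m) for j in range(i + 1, m))

s = 1 / sqrt(2)
H = [[1.0, 0.0, s],
     [0.0, 1.0, s]]
print(coherence(H))  # ~0.7071
```

By Proposition 10, a small coherence yields the RIP of correspondingly large order, which is the mechanism exploited in the proofs below.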
Theorem 11.
Let be a linear code over that satisfies properties (B1)–(B2). Denote
(11)
and let be such that . Then
In particular, if attains the G–V bound, then
for every for which the denominator under the outer square root is positive.
Proof:
Let be the parity-check matrix that was used to define and let . Each column in is a unit vector and, so, from (9) we get
(12)
Let
(13)
where the condition guarantees that . Also, let be an arbitrary vector in (see (6)). For such a vector,
(14)
and, so,
(15)
It therefore follows from (8) that
and by Proposition 2 we thus conclude that .
If attains the G–V bound, then
(16)
where . From we then get
or
Hence, in this case,
The next lemma (which is proved in the Appendix) presents an alternative to the bound (10) that leads to some improvement on Theorem 11. For and a positive integer , we introduce the notation
Remark 3.
In the range we have . Also, it is easy to verify by differentiation that in that range of (when is assumed to be fixed), the mapping is non-increasing.
Lemma 12.
Let be an matrix over with columns that are unit vectors and with coherence , and let be such that . Then for every ,
Theorem 13.
Under the conditions of Theorem 11,
Proof:
When , we have
and so, Theorem 13 is stronger than Theorem 11. The improvement of Theorem 13 is seen best when is close to . (This means that given and , we select to be close to the largest possible and analyze which values of can then be attained.) For example, when , Theorem 11 yields the upper bound
while from Theorem 13 we get:
(18)
In fact, (18) is the bound we get in Theorem 11 when we reduce (by almost half) to while, for this , Theorem 13 yields
When , the upper bounds in both theorems approach .
Corollary 14.
For any there exists a linear code over with
and
Proof:
Remark 4.
The last corollary is non-vacuous when (otherwise we have ). When , the corollary coincides with Theorem 4.
Remark 5.
In Corollary 14, we can make grow more slowly with at the expense of a faster growth with , while keeping the same upper bound . Specifically, in the proof, we take to be the dual of an extended primitive binary BCH code [16, p. 280], or a concatenation of an RS outer code with the first-order binary Reed–Muller code. In both cases we have, for a parameter ,
i.e.,
Substituting and then yields
which is non-vacuous when .
IV Code Construction Based on Disjunct Matrices
In this section, we study the relationship between analog error-correcting codes and disjunct matrices. Specifically, we consider linear codes over with parity-check matrices that are -disjunct: we first study their properties (Theorem 15) and then propose decoding algorithms for these codes.
Theorem 15.
Let be a -disjunct matrix, for some , and let be the linear code over that has as a parity-check matrix. Then
Proof:
We show that is contained in the minimand set in (7); the result will then follow from Proposition 2. Given any vector , write and let be a position at which . Since is -disjunct and , there is a row index such that contains a 1 only at position . Therefore,
(19)
On the other hand, since , for every we have , namely,
(20)
By (19) and (20) we get that , thus establishing that is contained in the minimand in (7).
Corollary 16.
Let , let be a prime power, and set . Then for any positive integer there is an explicit construction of a linear code over such that
and
In particular, by taking , for any one can obtain a linear code with
It is worth noting that when , the bound coincides with the one in Theorem 3 (although in that theorem can take multiple values, including values that are smaller than ). We also note that in Corollary 16, we have and , for certain (fixed) and infinitely many values of . In Section V, we present a construction of disjunct matrices which produce codes with similar dependence of and on and , yet for .
Next, we compare the construction of Corollary 14 with the case in Corollary 16 (as this case yields the slowest growth of with ). For the former we have , while for the latter , which is smaller since . Yet the construction of Corollary 16 requires , which can match the redundancy, , in Corollary 14 only when
(still, by Remark 4, this range partially overlaps with the range of for which the codes in Corollary 14 are realizable).
Remark 6.
In the remainder of this section, we present decoders for linear codes with parity-check matrices that are -disjunct. In Subsection IV-A we present a decoder for the generic case, yet its complexity is , i.e., polynomial only when is fixed. A much more efficient algorithm is presented in Subsection IV-B, yet under the additional assumption that the column weights in the parity-check matrix are also constrained.
IV-A Decoder for the generic disjunct construction
Our first decoder, denoted by , is presented in Algorithm 1.
Theorem 17.
Proof:
Assume a received (read) vector
where , , and .
We first show that
(21)
Take and . Then for every , since and , we have
(22)
Hence, passes the check in the while loop and, so, the set is joined into , thereby establishing (21).
Next, we assume that and show that
(23)
Write ; then . Let be a pair in that passes the check in the while loop, i.e., for all . We claim that . Otherwise, take a . Since is -disjunct and
there is a row index such that and for all . Then and, so, . On the other hand, we also have , from which we get
Yet this means that the pair does not pass the check in the while loop, thereby reaching a contradiction. We conclude that when , any set that is joined into in the while loop is a subset of , thus establishing (23).
We note that
Given a pair , checking the conditions in the while loop of Algorithm 1 can be done in time.
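Since Algorithm 1 itself is not reproduced here, the following is our own rough reconstruction of its subset-search flavor, inferred from the correctness proof above (the names `DELTA` and `brute_force_locate` are ours, and this is a sketch under stated assumptions, not the authors' exact pseudocode): a candidate location set is accepted when every check row whose support avoids it yields a syndrome entry that tolerable errors alone can explain, and accepted sets are merged into the output.

```python
from itertools import combinations

DELTA = 0.01  # assumed tolerable-error threshold (the delta of the model)

def brute_force_locate(H, y, sigma):
    """Accept a candidate set J of at most sigma error locations when,
    for every row of H whose support is disjoint from J, the syndrome
    entry is small enough to be caused by tolerable errors alone."""
    r, n = len(H), len(H[0])
    s = [sum(H[i][j] * y[j] for j in range(n)) for i in range(r)]
    wt = [sum(1 for v in row if v) for row in H]
    located = set()
    for size in range(sigma + 1):
        for J in combinations(range(n), size):
            rows = [i for i in range(r) if all(H[i][j] == 0 for j in J)]
            if all(abs(s[i]) <= DELTA * wt[i] for i in rows):
                located |= set(J)
    return located

# toy 1-disjunct 4x6 parity-check matrix (columns: weight-2 words of length 4)
pairs = list(combinations(range(4), 2))
H = [[1 if i in pairs[j] else 0 for j in range(6)] for i in range(4)]
y = [0.004, -0.002, 0.0, 5.0, 0.006, 0.0]  # one outlying error at position 3
print(brute_force_locate(H, y, 1))  # {3}
```

The enumeration over all subsets of size up to sigma is exactly what makes this decoder polynomial only for fixed sigma, as noted above.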
IV-B Decoder when columns in are also weight-constrained
Let be a code as in Theorem 15 and be a positive integer. We next present a more efficient -decoder for under the following two additional conditions on :
- (H1) Every row of has weight at least .
- (H2) Every column of has weight at most .
Condition (H1) is not really limiting: the case where contains rows of weight is degenerate, as then there are positions on which all the codewords in are identically (and, thus, these coordinates can be ignored, thereby reducing the decoding to a shorter code). In Section V, we present constructions of -disjunct matrices that satisfy conditions (H1)–(H2).
We will use the following lemma.
Lemma 18.
Let be a -disjunct matrix that satisfies condition (H1). Given any nonempty subset of size , for every column index there exist at least nonzero rows in the submatrix that contain a 1 only at column .
Proof:
The proof is by backward induction on , with the induction base, , following from the definition of a -disjunct matrix.
Turning to the induction step, suppose that and let be any column index in . By the disjunct property, there exists a row index such that contains a 1 only at position . By condition (H1), there is at least one index for which . Letting , by the induction hypothesis there are at least nonzero rows in that contain a 1 only at column ; clearly, none of these rows is indexed by since contains two 1s. Altogether there are at least nonzero rows in that contain a 1 only at column .
Remark 7.
Given and a vector (such as a syndrome that is computed with respect to ), we let be the real row vector whose entries are given by
Theorem 19.
Let be a code as in Theorem 15 and suppose that also satisfies conditions (H1)–(H2). For nonnegative integers and such that
(24)
let be defined for every by
(25)
where . Then is a -decoder for .
Proof:
Assume a received (read) vector
where , , and .
We first show that
(26)
Take and let . By Lemma 18 we get that the submatrix contains at least
(27)
rows with a 1 only at column . Denoting by the set of indexes of these rows, for every , the respective entry in the syndrome satisfies:
(similarly to (22)). It follows that the respective entry, , in equals and, so, the supports of and the column in overlap on at least positions. Hence,
i.e., . We conclude that
thereby establishing (26).
Next, we assume that and show that
(28)
Write and let . Lemma 18, now applied with , implies that the submatrix contains at least
(29)
rows with a 1 only at column . Letting be the set of indexes of these rows, for every we then have and, so, the respective entry in the syndrome satisfies:
namely, . Hence, the number of positions on which the supports of and overlap is at most
and, so,
i.e., . We conclude that when ,
thereby establishing (28).
The decoder (25) is easy to compute: it consists of a multiplication of to the right by to obtain the syndrome , and then to the left by a binary vector which is a quantized copy of . Since is a matrix whose rows and columns have limited weights (at most and , respectively), the decoding requires less than real additions.
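The decoding steps just described (syndrome computation, entrywise quantization, multiplication by the binary parity-check matrix, and thresholding) can be sketched as follows; the parameter names `delta_prime` and `tau` are our own assumptions for the quantization threshold and the acceptance threshold in (25):

```python
from itertools import combinations

def efficient_locate(H, y, delta_prime, tau):
    """Quantize the syndrome to a 0/1 vector, then flag column j when
    its support meets the quantized syndrome in at least tau rows."""
    r, n = len(H), len(H[0])
    s = [sum(H[i][j] * y[j] for j in range(n)) for i in range(r)]
    q = [1 if abs(v) > delta_prime else 0 for v in s]          # quantized syndrome
    score = [sum(q[i] for i in range(r) if H[i][j]) for j in range(n)]
    return {j for j in range(n) if score[j] >= tau}

# toy 4x6 parity-check matrix (columns: weight-2 words of length 4)
pairs = list(combinations(range(4), 2))
H = [[1 if i in pairs[j] else 0 for j in range(6)] for i in range(4)]
y = [0.004, -0.002, 0.0, 5.0, 0.006, 0.0]  # one outlying error at position 3
print(efficient_locate(H, y, 0.03, 2))  # {3}
```

Because the second multiplication involves only a binary vector and a sparse matrix, the cost is dominated by additions, matching the complexity claim above.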
We note that condition (24) (which was used in our analysis) is generally stricter than the condition which, by Theorems 15 and 17, is sufficient for having a -decoder for . These two conditions coincide when , and this case is characterized in the next lemma.
Lemma 20.
Let be a -disjunct matrix that satisfies conditions (H1)–(H2) with . Then the following holds.
- (M1) Every column of has weight (exactly) .
- (M2) The supports of every two distinct columns of intersect on at most one coordinate.
Equivalently, for every in :
(and, thus, the columns of constitute a spherical code).
Proof:
We end this section by presenting a simple decoder for the detection-only case, i.e., . In this case, we actually do not need conditions (H1)–(H2), and we can handle any .
Theorem 21.
Proof:
Condition (D1) pertains only to the error vector , in which case
Consequently, and we have , as required.
V Constructions of Disjunct Matrices with Weight-Constrained Rows and Columns
In this section, we present several constructions for disjunct matrices which satisfy conditions (H1)–(H2) with . Our constructions are based on combinatorial designs. We start by recalling several definitions.
Let be such that . A - packing design is a pair , where is a set of elements (called points) and is a collection of -subsets (called blocks) of , such that every -subset of is contained in at most one block. Furthermore, a packing design is called resolvable if its blocks can be partitioned into sets (parallel classes) such that each point is contained in exactly one block in each .
The incidence matrix of a packing design is an binary matrix whose rows and columns are indexed by the elements of and , respectively, and for each and ,
Proposition 22.
Let be a resolvable - packing design with parallel classes. Then its incidence matrix is a -disjunct matrix with constant row weight , where .
Proof:
Write and let be arbitrary blocks in . Since is a - packing design, every two blocks of have at most common points. Then for all and, so,
where the third inequality follows from . It follows that
namely, there is a point such that whereas for all . Hence, is -disjunct.
Next, we consider the row weight, , where is any point in : this weight equals the number of blocks in which contain . Since is contained in exactly one block in each parallel class and there are in total parallel classes, we get .
A transversal design is a triple , where is a set of points, is a partition of into partition elements (groups), each of size , and is a collection of -subsets (blocks) of such that every -subset of is contained either in one group or in one block, but not both. A is called resolvable if its blocks can be partitioned into parallel classes.
It is easy to see that in a , each block intersects each group in exactly one point. A direct calculation shows that there are blocks and each point is contained in blocks. So, if it is resolvable, then its blocks are partitioned into parallel classes.
It is known that the existence of a resolvable is equivalent to the existence of mutually orthogonal Latin squares of side , while the latter can be constructed by using linear polynomials (e.g., see Theorem 3.18 and Construction 3.29 in [5, Section III.3]). In the following example, we use linear polynomials to construct resolvable s directly.
Example 1.
Let be a prime power. We can construct a resolvable as follows. Take , , and , where
For each , let ; clearly, is a partition of .
Each block has size , which equals the number of groups, and every -subset of the form (which is contained in one group) cannot be contained in any block. For a -subset with , the system of equations
has a unique solution for ; hence, there is a unique block in which contains that -subset. Therefore, is a transversal design. Moreover, for each and each point , there is a unique such that . Hence, each is a parallel class.
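For a prime modulus, the construction of Example 1 can be reproduced with plain modular arithmetic (prime powers would require finite-field arithmetic, which we omit); the sketch below builds the blocks from linear polynomials and verifies resolvability, plus the transversal property on one pair of points:

```python
q = 5   # a prime; the parameter choice is ours, for illustration

points = [(x, y) for x in range(q) for y in range(q)]
groups = [[(x, y) for y in range(q)] for x in range(q)]
# block for (a, b): the graph of the linear polynomial x -> a*x + b (mod q);
# the parallel class for a collects the blocks over all b
classes = [[[(x, (a * x + b) % q) for x in range(q)] for b in range(q)]
           for a in range(q)]

# resolvability: every parallel class partitions the whole point set
for P in classes:
    assert len({pt for blk in P for pt in blk}) == q * q

# transversal property: two points from different groups share one block
blocks = [blk for P in classes for blk in P]
p1, p2 = (0, 1), (1, 3)
count = sum(1 for blk in blocks if p1 in blk and p2 in blk)
print(count)  # 1
```

Solving the two linear equations for (a, b), as in the example, is exactly what makes this block unique.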
Lemma 23.
If there is a resolvable , then for every and , there is a - packing design with parallel classes.
Proof:
From a resolvable we can form a resolvable by deleting groups. This resolvable design consists of parallel classes. We can take of them to form a - packing design.
Theorem 24.
Let be such that is a prime power and . There is an explicit construction of a -disjunct matrix with constant row weight , constant column weight , and (therefore) number of rows
thereby attaining the bound in Theorem 7.
Proof:
Since is a prime power, we can take a resolvable from Example 1. Then, according to Lemma 23, for any such that , we can construct a - packing with parallel classes. Since each parallel class consists of blocks, the total number of blocks is . So, the incidence matrix of this packing is of order , where , and has constant row weight . Moreover, according to Proposition 22, is -disjunct.
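For a small instance, the construction in the proof can be executed end to end. The parameter names q, k, m below are mine (standing in for the paper's elided symbols): q is the prime, k the number of groups kept after the deletion of Lemma 23, and m the number of parallel classes selected. The instance is small enough for a brute-force disjunctness check:

```python
from itertools import combinations
import numpy as np

q, k, m = 5, 3, 3   # my stand-in parameters for a small instance

# blocks of the resolvable TD of Example 1 (prime q), restricted to the
# first k groups and keeping only the first m parallel classes
blocks = [frozenset((x, (a * x + b) % q) for x in range(k))
          for a in range(m) for b in range(q)]
points = sorted({p for B in blocks for p in B})

# incidence matrix: rows indexed by points, columns by blocks
M = np.array([[int(p in B) for B in blocks] for p in points])
assert M.shape == (k * q, m * q)
assert (M.sum(axis=1) == m).all()   # constant row weight
assert (M.sum(axis=0) == k).all()   # constant column weight

# distinct blocks share at most one point, so by Proposition 22 the
# matrix should be (k-1)-disjunct; brute force confirms it here
def is_disjunct(M, d):
    n = M.shape[1]
    for S in combinations(range(n), d):
        union = M[:, list(S)].max(axis=1)
        if any(not np.any((M[:, j] == 1) & (union == 0))
               for j in range(n) if j not in S):
            return False
    return True

print(is_disjunct(M, k - 1))   # True
```

The row- and column-weight assertions match the constant weights stated in the theorem; the final check matches the disjunctness order claimed via Proposition 22.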
Corollary 25.
Let be such that is a prime power and . There is an explicit construction of a linear code over with
We note that the in Example 1 is equivalent to an (extended) RS code over : each block in the corresponds to a codeword whose positions are indexed by the elements of . An element contained in the block indicates that the corresponding codeword has symbol at the position indexed by .
In the Kautz–Singleton construction, the columns of the disjunct matrix are the codewords of the binary code that is obtained by concatenating an RS outer code over with the binary code which consists of the words in of Hamming weight . In light of this, it is not difficult to see that the incidence matrix of the in Example 1 is actually the disjunct matrix from the Kautz–Singleton construction with an RS outer code of dimension . Hence, the disjunct matrices in Theorem 24 can also be obtained by carefully choosing the columns of the Kautz–Singleton disjunct matrix which correspond to the selected parallel classes.
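The Kautz–Singleton construction can be sketched for a prime q (a prime power would again need finite-field arithmetic). The function name is mine; for an outer code of dimension 2 the columns coincide with the incidence vectors of the blocks of Example 1:

```python
from itertools import product
import numpy as np

def kautz_singleton(q: int, k: int) -> np.ndarray:
    """Kautz-Singleton matrix, sketched for a prime q.  Outer code: the
    q**k polynomials of degree < k over Z_q, each evaluated at the q
    points of Z_q.  Inner code: symbol y -> the weight-1 indicator
    vector e_y of length q.  Each concatenated codeword becomes one
    column of a (q*q) x q**k binary matrix."""
    cols = []
    for coeffs in product(range(q), repeat=k):
        col = np.zeros((q, q), dtype=int)   # rows indexed by (position, symbol)
        for x in range(q):
            y = sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
            col[x, y] = 1
        cols.append(col.reshape(-1))
    return np.column_stack(cols)

M = kautz_singleton(5, 2)
print(M.shape)                      # (25, 25)
print((M.sum(axis=0) == 5).all())   # True: every column has weight q
```

With k = 2, each column is the indicator vector of one block {(x, a*x + b)} of Example 1, which is the equivalence noted above; selecting the columns belonging to chosen slopes a mirrors selecting parallel classes.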
We have the following product construction of ’s, which yields more disjunct matrices. The proof of this construction is straightforward and is therefore omitted.
Proposition 26.
Let be a resolvable with a group partition and with parallel classes , , and let be a resolvable with a group partition and with parallel classes , . For any two blocks and , denote
where (respectively, ) is the unique element in (respectively, ), . Then the set of points
the group partition , and the set of blocks
form a resolvable with parallel classes
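The product construction can also be sketched in code. Since the proposition's symbols are elided above, the pairing rule below is my reading of the standard product (pair the unique points of the two blocks group by group, and multiply parallel classes pairwise); the checks only cover this one small instance:

```python
from itertools import combinations, product

def td_linear(q):
    """Resolvable TD of Example 1 (q prime): q groups, q parallel classes."""
    groups = [frozenset((x, y) for y in range(q)) for x in range(q)]
    classes = [[frozenset((x, (a * x + b) % q) for x in range(q))
                for b in range(q)] for a in range(q)]
    return groups, classes

def td_product(groups1, classes1, groups2, classes2):
    """My reading of Proposition 26: the block built from B1 and B2
    pairs, for each i, the unique point of B1 in the i-th group with
    the unique point of B2 in the i-th group."""
    k = len(groups1)
    groups = [frozenset(product(groups1[i], groups2[i])) for i in range(k)]
    classes = []
    for C1 in classes1:
        for C2 in classes2:
            classes.append([
                frozenset((next(iter(B1 & groups1[i])),
                           next(iter(B2 & groups2[i]))) for i in range(k))
                for B1 in C1 for B2 in C2])
    return groups, classes

g1, c1 = td_linear(3)
groups, classes = td_product(g1, c1, g1, c1)
points = frozenset().union(*groups)
blocks = [B for cls in classes for B in cls]
assert len(points) == 3 * 3 * 3 and len(classes) == 3 * 3

# each product class still partitions the point set (resolvability)
assert all(sorted(p for B in cls for p in B) == sorted(points)
           for cls in classes)
# every pair of points from distinct groups lies in exactly one block
for p, r in combinations(sorted(points), 2):
    if p[0][0] != r[0][0]:
        assert sum({p, r} <= B for B in blocks) == 1
```

The two assertions at the end are the two properties the proposition asserts for the product design: resolvability with pairwise-multiplied classes, and the transversal-design covering property.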
Theorem 27.
Let be positive integers such that and divides , and let be the prime factorization of . Let .
(i) For any positive integer , there is an explicit construction of a -disjunct matrix with constant row weight and constant column weight , where (thereby attaining the bound of Theorem 7).
(ii) For any positive integer such that , there is a linear code over with
Proof:
We first observe that the entries along the main diagonal of are all and that the absolute value of each off-diagonal entry is at most . Hence,
(30)
We next minimize the expression (30) over under the constraint that and is given. Assuming without loss of generality that and that for all , we claim that the minimum is attained when . Otherwise, if for some , then replacing both and by would reduce the term while keeping unchanged.
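Since expression (30) itself is not reproduced here, the averaging step can only be illustrated under an assumption, namely that the term being minimized is convex in each entry; a sum of squares is used as a stand-in below, and the helper name is mine:

```python
def average_out(s, i, j):
    """One averaging step: replace s[i] and s[j] by their common mean,
    which preserves the sum of the entries."""
    t = list(s)
    t[i] = t[j] = (s[i] + s[j]) / 2
    return t

s = [1.0, 2.0, 5.0]
t = average_out(s, 0, 2)       # [3.0, 2.0, 3.0]
assert sum(s) == sum(t)        # the constraint is kept ...

cost = lambda v: sum(x * x for x in v)   # an assumed convex stand-in for (30)
print(cost(s), cost(t))        # 30.0 22.0 -- ... while the convex cost drops
```

Iterating this step drives all entries to the common mean, which is where the minimum of any such convex, permutation-symmetric cost is attained under a fixed sum.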
References
- [1] B. E. Boser, E. Sackinger, J. Bromley, Y. L. Cun, and L. D. Jackel, “An analog neural network processor with programmable topology,” IEEE J. Solid-State Circuits, vol. 26, no. 12, pp. 2017–2025, Dec. 1991.
- [2] J. Bourgain, S. J. Dilworth, K. Ford, S. Konyagin, and D. Kutzarova, “Explicit constructions of RIP matrices and related problems,” Duke Math. J., vol. 159, no. 1, pp. 145–185, 2011.
- [3] E. Candès, “The restricted isometry property and its implications for compressed sensing,” C.R. Acad. Sci. Paris, Ser. I, vol. 346, pp. 589–592, 2008.
- [4] E. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
- [5] C. J. Colbourn and J. H. Dinitz, Eds., Handbook of Combinatorial Designs, 2nd ed. Boca Raton, FL: CRC Press, 2007.
- [6] D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
- [7] A. G. D’yachkov and V. V. Rykov, “Bounds on the length of disjunctive codes,” Probl. Peredachi Inf., vol. 18, no. 3, pp. 7–13, 1982.
- [8] Z. Füredi, “On -cover-free families,” J. Combin. Theory Ser. A, vol. 73, no. 1, pp. 172–173, 1996.
- [9] M. Hu, C. E. Graves, C. Li, Y. Li, N. Ge, E. Montgomery, N. Davila, H. Jiang, R. S. Williams, J. J. Yang, Q. Xia, and J. P. Strachan, “Memristor-based analog computation and neural network classification with a dot product engine,” in Adv. Mater., vol. 30, Mar. 2018, paper no. 1705914.
- [10] M. Hu, J. P. Strachan, Z. Li, E. M. Grafals, N. Davila, C. Graves, S. Lam, N. Ge, J. Yang, and R. S. Williams, “Dot-product engine for neuromorphic computing: Programming 1T1M crossbar to accelerate matrix-vector multiplication,” in Proc. 53rd ACM/EDAC/IEEE Design Automat. Conf. (DAC), Austin, TX, 2016, paper no. 19.
- [11] H. A. Inan, P. Kairouz, and A. Özgür, “Sparse combinatorial group testing,” IEEE Trans. Inf. Theory, vol. 66, no. 5, pp. 2729–2742, May 2020.
- [12] G. Katona and A. Seress, “Greedy construction of nearly regular graphs,” European J. of Combin., vol. 14, pp. 213–229, 1993.
- [13] W. Kautz and R. Singleton, “Nonrandom binary superimposed codes,” IEEE Trans. Inf. Theory, vol. 10, no. 4, pp. 363–377, Oct. 1964.
- [14] F. J. Kub, K. K. Moon, I. A. Mack, and F. M. Long, “Programmable analog vector-matrix multipliers,” IEEE J. Solid-State Circuits, vol. 25, no. 1, pp. 207–214, Feb. 1990.
- [15] D. Lubell, “A short proof of Sperner’s lemma,” J. Combin. Theory Ser. A, vol. 1, no. 2, p. 299, 1966.
- [16] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam: North-Holland, 1977.
- [17] E. Porat and A. Rothschild, “Explicit non-adaptive combinatorial group testing schemes,” IEEE Trans. Inf. Theory, vol. 57, no. 12, pp. 7982–7989, Dec. 2011.
- [18] R. M. Roth, “Fault-tolerant dot-product engines,” IEEE Trans. Inf. Theory, vol. 65, no. 4, pp. 2046–2057, Apr. 2019.
- [19] ——, “Analog error-correcting codes,” IEEE Trans. Inf. Theory, vol. 66, no. 7, pp. 4075–4088, Jul. 2020.
- [20] ——, “Fault-tolerant neuromorphic computing on nanoscale crossbar architectures,” in Proc. IEEE Inf. Theory Workshop (ITW), Mumbai, India, 2022, pp. 202–207.
- [21] ——, “Correction to “Analog error-correcting codes”,” IEEE Trans. Inf. Theory, vol. 69, no. 6, pp. 3793–3794, Jun. 2023.
- [22] A. Shafiee, A. Nag, N. Muralimanohar, R. Balasubramonian, J. P. Strachan, M. Hu, R. S. Williams, and V. Srikumar, “ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars,” in Proc. ACM/IEEE 43rd Annu. Int. Symp. Comput. Archit. (ISCA), Seoul, Korea, Jun. 2016, pp. 14–26.