Faster Walsh-Hadamard Transform and Matrix Multiplication over Finite Fields using Lookup Tables
Abstract
We use lookup tables to design faster algorithms for important algebraic problems over finite fields. These faster algorithms, which only use arithmetic operations and lookup table operations, may help to explain the difficulty of determining the complexities of these important problems. Our results over a constant-sized finite field $\mathbb{F}_q$ are as follows.
The Walsh-Hadamard transform of a vector of length $N$ can be computed using $O(N \log N / \log \log N)$ bit operations. This generalizes to any transform defined as a Kronecker power of a fixed matrix. By comparison, the fast Walsh-Hadamard transform (similar to the fast Fourier transform) uses $O(N \log N)$ arithmetic operations, which is believed to be optimal up to constant factors.
Any algebraic algorithm for multiplying two $n \times n$ matrices using $O(n^{\omega_0})$ operations can be converted into an algorithm using $O(n^{\omega_0} / (\log n)^{\omega_0/2 - 1})$ bit operations. For example, Strassen's algorithm can be converted into an algorithm using $O(n^{\log_2 7} / (\log n)^{0.4})$ bit operations. It remains an open problem with practical implications to determine the smallest constant $c$ such that Strassen's algorithm can be implemented to use $c \cdot n^{\log_2 7} + o(n^{\log_2 7})$ arithmetic operations; using a lookup table allows one to save a super-constant factor in bit operations.
1 Introduction
When $N$ is a power of $2$, the $N \times N$ Walsh-Hadamard transform $H_N$ is defined recursively by $H_1 = \begin{bmatrix} 1 \end{bmatrix}$ and
$$H_N = \begin{bmatrix} H_{N/2} & H_{N/2} \\ H_{N/2} & -H_{N/2} \end{bmatrix}.$$
A common task in many areas of computation is to compute the length-$N$ Walsh-Hadamard transform, i.e., given as input a length-$N$ vector $v \in \mathbb{F}^N$, compute the vector $H_N v$. The most straightforward algorithm would compute this using $O(N^2)$ arithmetic operations, but the fast Walsh-Hadamard transform (FWHT) algorithm can compute this using only $O(N \log N)$ arithmetic operations. It is widely believed that $\Theta(N \log N)$ arithmetic operations are necessary, and a substantial amount of work has gone into proving this in restricted arithmetic models (for instance, this is known to hold for arithmetic circuits with 'bounded coefficients'; see e.g., [Lok09, Section 3.3]) and into studying conjectures which would imply this (for instance, the now-refuted conjecture that the Walsh-Hadamard transform is rigid [AW17]); see, for instance, the survey [Lok09].
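For concreteness, here is a standard in-place sketch of the fast Walsh-Hadamard transform over $\mathbb{F}_q$ (our illustration of the textbook algorithm, not code from this paper; the name fwht is ours):

```python
# Fast Walsh-Hadamard transform over F_q: O(N log N) additions/subtractions.
# In-place butterfly scheme; N = len(v) must be a power of 2.
def fwht(v, q):
    N = len(v)
    h = 1
    while h < N:
        for i in range(0, N, 2 * h):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = (a + b) % q, (a - b) % q
        h *= 2
    return v

print(fwht([1, 0, 1, 1], 5))  # H_4 * (1,0,1,1) over F_5 -> [3, 1, 4, 1]
```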
A natural question arises: why restrict ourselves to arithmetic models of computation? The Walsh-Hadamard transform is commonly used in practice, and speedups using non-arithmetic operations could be impactful. Nonetheless, there has not been much work on non-arithmetic algorithms for the Walsh-Hadamard transform.
A related problem is matrix multiplication: given as input two $n \times n$ matrices, compute their product. The asymptotically fastest known algorithm is algebraic, and uses $O(n^{2.373})$ arithmetic operations [CW90, AW21]. That said, non-algebraic algorithmic techniques, especially lookup tables, have been used to design more practical, 'combinatorial' algorithms for a variant on this problem called Boolean matrix multiplication (given two matrices whose entries are from $\{0,1\}$, the goal there is to multiply them over the AND-OR semiring, or equivalently, to determine which entries of their product over $\mathbb{Z}$ are nonzero, but not necessarily what nonzero values they take on) since the work of [ADKF70]. These techniques save logarithmic factors over the straightforward $O(n^3)$-time algorithm – the best algorithm along these lines runs in time $n^3 \, \mathrm{poly}(\log \log n) / \log^4 n$ [Yu18] in the standard word RAM model – but they are considered more practical than the algebraic algorithms (which save polynomial factors in $n$). Lookup tables have also been used in practice to approximately multiply matrices [JPK+20, BG21].
In this paper, we show that lookup tables can be used to speed up the asymptotically fastest known algorithms for both the Walsh-Hadamard transform and (exact) matrix multiplication over finite fields. We will show, for instance, that only $O(N \log N / \log \log N)$ bit operations suffice to compute the length-$N$ Walsh-Hadamard transform over finite fields when we augment the arithmetic model to allow for lookup tables. This may help to explain the difficulty of proving lower bounds for this problem, and help to guide future work on arithmetic circuit lower bounds (since any lower bounds technique would need to fail when lookup tables are allowed).
We focus here on constant-sized finite fields, though our algorithms generalize to larger finite fields as well. Our algorithms are simple modifications of the usual recursive algorithms for solving these problems, and we describe below how they compare favorably to algorithms which are used in practice.
1.1 Model of computation
As discussed, these problems are typically studied in the arithmetic circuit model, wherein an algorithm may only perform arithmetic operations ($+$, $-$, $\times$, $\div$) over the field $\mathbb{F}$ applied to inputs and fixed constants, and the arithmetic complexity of the algorithm is the number of such operations. The asymptotically fastest known algorithms for the problems we study here all fit within this model. However, we would also like the ability to use lookup tables, so we consider a more general model: the bit operations model.
The bit complexity of a RAM algorithm is the number of operations on bits performed by the algorithm. This is the natural model with the most direct comparison to the arithmetic model, since an algorithm with arithmetic complexity $T$ over a constant-sized finite field naturally has bit complexity $O(T)$. This model is often used as a more realistic version of the arithmetic model (see e.g., [Pan81, Lin91, VDHL13, EDS18]).
One can see (via a simple tree data structure, for instance) that a lookup table with $b$-bit keys and values can be implemented so that values can be looked up and changed using $O(b)$ bit operations.
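As a small illustration (ours; a Python dictionary only models the pointer structure, not the bit-level cost accounting), such a table can be organized as a binary trie in which every lookup or update walks a single root-to-leaf path of length $b$:

```python
# A binary trie over b-bit keys: get/set each touch O(b) nodes, matching the
# O(b) bit-operation bound when nodes are laid out explicitly in memory.
class BitTrie:
    def __init__(self):
        self.root = {}

    def set(self, key_bits, value):
        node = self.root
        for bit in key_bits:            # one step per key bit
            node = node.setdefault(bit, {})
        node["value"] = value

    def get(self, key_bits):
        node = self.root
        for bit in key_bits:
            if bit not in node:
                return None
            node = node[bit]
        return node.get("value")

table = BitTrie()
table.set((1, 0, 1), 42)               # hypothetical 3-bit key
print(table.get((1, 0, 1)))            # -> 42
```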
We note before moving on that all the algorithms in this paper will only perform arithmetic operations and lookup table operations; one could also define a nonstandard model of computation which allows for just these two types of operations, and get the same results.
1.2 Results: Walsh-Hadamard Transform
Our first result is a faster algorithm for the Walsh-Hadamard transform.
Theorem 1.1.
Let $\mathbb{F}_q$ be a finite field of constant size $q$, let $n$ be a positive integer, and let $N = 2^n$. There is an algorithm for computing the length-$N$ Walsh-Hadamard transform over $\mathbb{F}_q$ that uses $O(N \log N / \log \log N)$ bit operations.
By comparison, the fast Walsh-Hadamard transform algorithm, which uses $O(N \log N)$ arithmetic operations, would take $O(N \log N)$ bit operations. $\Theta(N \log N)$ arithmetic operations is widely believed to be optimal over any field (whose characteristic is not 2; the problem is trivial in that case), but our algorithm improves on this using lookup tables.
Our algorithm uses the same recursive approach as the fast Walsh-Hadamard transform; our main idea is to use results from a lookup table to quickly jump forward many recursive layers at a time.
Although the Walsh-Hadamard transform is most often applied over the real or complex numbers, such as in signal processing and data compression, it has been applied over finite fields in areas including coding theory [RL01, XQ19], cryptographic protocols [HK10, MQ19], and learning algorithms [LC20].
Our algorithm also generalizes directly to any transform defined by Kronecker powers of a fixed matrix over a finite field. For instance, given as input the coefficients of a multilinear polynomial in $n$ variables over $\mathbb{F}_q$, we can compute its evaluations on all $2^n$ inputs from $\{0,1\}^n$ in $O(2^n \cdot n / \log n)$ bit operations, improving over the usual recursive algorithm by a factor of $\Theta(\log n)$. (This problem corresponds to computing the linear transform defined by Kronecker powers of the matrix $\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$.)
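As a point of comparison, here is a sketch (ours, not from the paper) of the usual recursive algorithm for this problem, which applies one factor of the Kronecker power per coordinate and uses $O(2^n \cdot n)$ field operations; it is this algorithm that the lookup-table method speeds up:

```python
# Evaluate a multilinear polynomial over F_q on all points of {0,1}^n by
# applying M = [[1, 0], [1, 1]] in each of the n coordinates (a subset-sum
# transform): coeffs[m] is the coefficient of the monomial prod_{i in m} x_i,
# where m is read as a bitmask.
def evaluate_on_hypercube(coeffs, n, q):
    v = list(coeffs)
    for bit in range(n):
        step = 1 << bit
        for i in range(1 << n):
            if i & step:                 # point with x_bit = 1: add both branches
                v[i] = (v[i] + v[i ^ step]) % q
    return v

# Example: f(x1, x2) = 1 + x1*x2 over F_3 -> evaluations [1, 1, 1, 2]
print(evaluate_on_hypercube([1, 0, 0, 1], 2, 3))
```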
1.3 Results: Matrix Multiplication
Our result for matrix multiplication shows how to convert any algebraic algorithm into one which uses a lookup table to save a superconstant factor.
Theorem 1.2.
For any finite field $\mathbb{F}_q$ of constant size $q$, suppose there is an algebraic algorithm for multiplying $n \times n$ matrices over $\mathbb{F}_q$ in $O(n^{\omega_0})$ arithmetic operations for some constant $\omega_0 > 2$. Then, there is another algorithm for multiplying $n \times n$ matrices over $\mathbb{F}_q$ which uses $O(n^{\omega_0} / (\log n)^{\omega_0/2 - 1})$ bit operations.
This speeds up the standard implementation of the algebraic algorithm by a factor of $\Theta((\log n)^{\omega_0/2 - 1})$. For instance, Strassen's algorithm [Str69] gives $\omega_0 = \log_2 7 \approx 2.81$, resulting in an algorithm using $O(n^{2.81} / (\log n)^{0.4})$ bit operations by Theorem 4.2, and the asymptotically fastest known algorithm [CW90, AW21] gives $\omega_0 < 2.373$, resulting in an algorithm using $O(n^{2.373} / (\log n)^{0.18})$ bit operations by Theorem 4.2.
Notably, much work has gone into improving the leading constant in the running time of Strassen's algorithm. Strassen's original algorithm [Str69] has leading constant $7$ (meaning, it uses $7 \cdot n^{\log_2 7} + o(n^{\log_2 7})$ operations), and Winograd [Win71] improved this to $6$. This was believed to be optimal due to lower bounds by Probert [Pro76] and Bshouty [Bsh95]. However, in a recent breakthrough, Karstadt and Schwartz [KS20] gave a new 'change of basis' approach, and used it to improve the leading constant to $5$. They showed that $5$ is optimal for their new approach, and later work showed it is also optimal for the more general 'sparse decomposition' approach [BS19]. The fact that we achieve an asymptotic speedup in this paper (which one can view as achieving leading constant $\varepsilon$ for any $\varepsilon > 0$ in bit complexity) may help to explain the difficulty of extending these lower bounds on the constant beyond restricted classes of algorithmic approaches.
Our approach for matrix multiplication is one that is commonly used in practice: use iterations of a recursive algebraic algorithm until the matrices one would like to multiply are sufficiently small, and then use a different algorithm optimized for small matrices. When this is used in practice, an optimized version of the straightforward (cubic-time) algorithm is used for small matrices, giving a constant factor improvement to the running time; see e.g., [HSHVDG16]. We implement this approach by instead using lookup tables to multiply superconstant-sized matrices very quickly, and thus get an asymptotic improvement.
Matrix multiplication over finite fields has many applications. One prominent example is Boolean matrix multiplication, which has a simple randomized reduction to matrix multiplication over any field (set each entry of one of the matrices to 0 independently with probability $1/2$, then multiply the two matrices and check which entries of the product are nonzero; repeating this $O(\log n)$ times identifies all nonzero entries with high probability). Hence our algorithm gives an asymptotic speedup for applications of Boolean matrix multiplication such as detecting triangles in graphs.
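A sketch of this standard reduction over $\mathbb{F}_2$ (our illustration; field_matmul stands in for any fast matrix multiplication routine over $\mathbb{F}_2$, such as the one from Theorem 4.2):

```python
# Randomized reduction from Boolean matrix multiplication to matrix
# multiplication over F_2: subsample one matrix, multiply over the field, and
# record nonzero entries; O(log n) trials find all 1-entries w.h.p.
import random

def boolean_matmul(A, B, field_matmul):
    n = len(A)
    trials = 4 * max(1, n.bit_length())       # O(log n) repetitions
    C = [[0] * n for _ in range(n)]
    for _ in range(trials):
        # Keep each entry of B independently with probability 1/2.
        R = [[b & random.getrandbits(1) for b in row] for row in B]
        P = field_matmul(A, R)                # product over F_2
        for i in range(n):
            for j in range(n):
                C[i][j] |= P[i][j] & 1        # any nonzero entry is a witness
    return C
```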
2 Preliminaries
2.1 Notation
Throughout this paper, we write $\log$ to denote the base-$2$ logarithm. For a positive integer $k$, we use the notation $[k] := \{1, 2, \ldots, k\}$.
2.2 Kronecker Products
If $\mathbb{F}$ is a field, and $A \in \mathbb{F}^{a \times b}$ and $B \in \mathbb{F}^{c \times d}$ are matrices, their Kronecker product $A \otimes B \in \mathbb{F}^{ac \times bd}$ is the matrix given by
$$A \otimes B = \begin{bmatrix} A[1,1] \cdot B & A[1,2] \cdot B & \cdots & A[1,b] \cdot B \\ \vdots & \vdots & \ddots & \vdots \\ A[a,1] \cdot B & A[a,2] \cdot B & \cdots & A[a,b] \cdot B \end{bmatrix}.$$
For a positive integer $k$, we write $A^{\otimes k}$ for the $k$th Kronecker power of $A$, which is the Kronecker product of $k$ copies of $A$.
2.3 Matrices of interest
For a positive integer $n$, we write $I_n$ to denote the $n \times n$ identity matrix. If $N$ is a power of $2$, we write $H_N$ to denote the $N \times N$ Walsh-Hadamard transform, given by $H_N = H_2^{\otimes \log N}$, where
$$H_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.$$
3 Walsh-Hadamard Transform
Let $\mathbb{F}_q$ be the finite field of size $q$, and let $N = 2^n$ be a power of $2$. In this section, we give an algorithm for computing the length-$N$ Walsh-Hadamard transform, $H_N v$ for a given $v \in \mathbb{F}_q^N$, over $\mathbb{F}_q$. The key idea behind our new algorithm is to pick a $k$ such that $q^{2^k} \approx \sqrt{N}$, and first create a lookup table of the length-$2^k$ Walsh-Hadamard transforms of all vectors $u \in \mathbb{F}_q^{2^k}$. We will then use this lookup table in conjunction with the following standard recursive approach for computing Kronecker powers (sometimes called Yates' algorithm), which we will apply with $M = H_{2^k}$:
Lemma 3.1.
Let $\mathbb{F}$ be any constant-sized finite field, let $m, n$ be positive integers, and let $M \in \mathbb{F}^{m \times m}$ be any matrix. Suppose we are given an algorithm $\mathcal{A}$ which, on input $u \in \mathbb{F}^m$, outputs $M u$ in time $T \geq m$. Then, there is an algorithm which, on input $v \in \mathbb{F}^{m^n}$, outputs $M^{\otimes n} v$ in time $O(T \cdot n \cdot m^{n-1})$.
Proof.
By definition of the Kronecker product, we can write $M^{\otimes n}$ as a block matrix (where each block is an $m^{n-1} \times m^{n-1}$ matrix) as
$$M^{\otimes n} = \begin{bmatrix} M[1,1] \cdot M^{\otimes (n-1)} & \cdots & M[1,m] \cdot M^{\otimes (n-1)} \\ \vdots & \ddots & \vdots \\ M[m,1] \cdot M^{\otimes (n-1)} & \cdots & M[m,m] \cdot M^{\otimes (n-1)} \end{bmatrix} = \left( M \otimes I_{m^{n-1}} \right) \cdot \left( I_m \otimes M^{\otimes (n-1)} \right).$$
Thus, we can multiply $M^{\otimes n}$ times $v \in \mathbb{F}^{m^n}$ with a two-step process (corresponding to multiplying the matrix $I_m \otimes M^{\otimes (n-1)}$ on the right times $v$, then the matrix $M \otimes I_{m^{n-1}}$ on the left times the result):
1. Partition $v$ into $m$ vectors $v_1, \ldots, v_m \in \mathbb{F}^{m^{n-1}}$. Recursively compute $u_i = M^{\otimes (n-1)} v_i$ for each $i \in [m]$.
2. For each $j \in [m^{n-1}]$, let $w_j \in \mathbb{F}^m$ be the vector consisting of, for each $i \in [m]$, entry $j$ of vector $u_i$. Use the given algorithm $\mathcal{A}$ to compute $M w_j$.
Finally, we output the appropriate interleaving of the vectors $M w_1, \ldots, M w_{m^{n-1}}$: the first block of $m^{n-1}$ output entries consists of the first entries of all these vectors, the second block consists of their second entries, and so on.
Our algorithm makes $m$ recursive calls in the first step, and calls the given algorithm $\mathcal{A}$ a total of $m^{n-1}$ times in the second step. Hence, the total running time, $S(n)$, has the recurrence
$$S(n) = m \cdot S(n-1) + O(m^{n-1} \cdot T).$$
This solves, as desired, to $S(n) = O(T \cdot n \cdot m^{n-1})$. ∎
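A Python sketch of the algorithm from Lemma 3.1 (ours, for illustration; apply_M plays the role of the given algorithm $\mathcal{A}$):

```python
# Yates' algorithm: compute (M^{⊗n}) v using a black box u -> M u,
# where M is m x m and v has length m^n.
def kronecker_power_apply(apply_M, m, n, v):
    if n == 0:
        return list(v)
    block = m ** (n - 1)
    # Step 1: recurse on the m blocks of v, computing u_i = M^{⊗(n-1)} v_i.
    us = [kronecker_power_apply(apply_M, m, n - 1, v[i * block:(i + 1) * block])
          for i in range(m)]
    # Step 2: for each coordinate j, apply M to w_j = (u_1[j], ..., u_m[j]).
    out = [0] * (m * block)
    for j in range(block):
        w = apply_M([us[i][j] for i in range(m)])
        for i in range(m):
            out[i * block + j] = w[i]  # entry j of output block i
    return out

# Example: with apply_M = H_2 over F_5, this computes the length-2^n WHT.
H2 = lambda u: [(u[0] + u[1]) % 5, (u[0] - u[1]) % 5]
print(kronecker_power_apply(H2, 2, 2, [1, 0, 1, 1]))  # -> [3, 1, 4, 1]
```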
We can now give our main algorithm for the Walsh-Hadamard transform:
Theorem 3.2.
Let $\mathbb{F}_q$ be a finite field of constant size $q$, let $n$ be a positive integer, and let $N = 2^n$. There is an algorithm for computing the length-$N$ Walsh-Hadamard transform over $\mathbb{F}_q$ that uses $O(N \log N / \log \log N)$ bit operations.
Proof.
Let $k$ be the largest integer such that $2^k \leq \frac{n}{2 \log q}$, and let $s = 2^k$, so that $s = \Theta(\log N)$, $k = \Theta(\log \log N)$, and $q^s \leq \sqrt{N}$. We begin by iterating over all vectors $u \in \mathbb{F}_q^s$, computing $H_s u$, and storing it in a lookup table. We can do this in a straightforward way: there are $q^s \leq \sqrt{N}$ such vectors, and each transform can be computed using $O(s^2)$ additions and subtractions over $\mathbb{F}_q$, so the total time to create this lookup table is at most
$$O(q^s \cdot s^2 \cdot \log q) \leq O(\sqrt{N} \cdot \log^2 N).$$
(This simple time bound could be improved, but we won’t bother here since it won’t substantially contribute to the final running time.)
Our goal now is, given $v \in \mathbb{F}_q^N$, to compute $H_N v$. Assume for now that $k$ divides $n$. We will apply Lemma 3.1 with $m = s$ (and hence $m^{n/k} = N$) and $M = H_s$, which will multiply the matrix $H_s^{\otimes (n/k)} = H_N$ times the vector $v$, as desired. Each time that algorithm needs to multiply $H_s$ times a vector of length $s$, we do so by looking up the answer from the lookup table. Hence, $T$ in Lemma 3.1 will be the time to do one lookup from this table, whose keys and values have length $s \cdot \lceil \log q \rceil = O(\log N)$ bits, so $T = O(\log N)$.
The total number of bit operations of our algorithm is thus, as desired,
$$O\left( \sqrt{N} \cdot \log^2 N \right) + O\left( T \cdot \frac{n}{k} \cdot m^{n/k - 1} \right) = O\left( \log N \cdot \frac{n}{k} \cdot \frac{N}{s} \right) = O\left( \frac{N \log N}{\log \log N} \right).$$
Finally, consider when $k$ does not divide $n$. Let $n'$ be the largest multiple of $k$ that is smaller than $n$, so $n - n' < k$. By the usual recursive approach (e.g., one recursive step of the algorithm presented in Lemma 3.1), it suffices to first perform $2^{n - n'}$ instances of a length-$2^{n'}$ Walsh-Hadamard transform, and then perform $2^{n'}$ instances of a length-$2^{n - n'}$ Walsh-Hadamard transform. We now count the number of bit operations for these two steps.
Using the same algorithm as above, since $k$ divides $n'$, a length-$2^{n'}$ Walsh-Hadamard transform can be performed using $O(2^{n'} \cdot n' / k)$ bit operations. Hence, the total number of bit operations for the first step is $O(2^{n - n'} \cdot 2^{n'} \cdot n' / k) = O(N \log N / \log \log N)$.
Using the usual fast Walsh-Hadamard transform, a length-$2^{n - n'}$ Walsh-Hadamard transform can be performed using $O(2^{n - n'} \cdot (n - n'))$ bit operations. Since $n - n' < k = O(\log \log N)$, the total number of bit operations for the second step is $O(2^{n'} \cdot 2^{n - n'} \cdot (n - n')) = O(N \log \log N)$, which is a lower-order term. We thus get the desired total running time. ∎
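An end-to-end illustration of the proof (ours; it reuses kronecker_power_apply from the sketch after Lemma 3.1, and a Python dictionary merely models the bit-level lookup table, so this shows the structure of the algorithm rather than its bit complexity):

```python
# Theorem 3.2, structurally: precompute H_s u for all u in F_q^s, then run
# Yates' algorithm with the length-s transforms answered by table lookups.
from itertools import product

def wht_table(s, q):
    def H(u):  # straightforward recursive H_s u, fine for table-building
        if len(u) == 1:
            return list(u)
        top, bot = H(u[:len(u) // 2]), H(u[len(u) // 2:])
        return [(a + b) % q for a, b in zip(top, bot)] + \
               [(a - b) % q for a, b in zip(top, bot)]
    return {u: H(list(u)) for u in product(range(q), repeat=s)}

def wht_with_lookups(v, q, s, levels):      # len(v) == s ** levels
    table = wht_table(s, q)
    lookup = lambda u: table[tuple(u)]      # plays the role of T in Lemma 3.1
    return kronecker_power_apply(lookup, s, levels, v)

print(wht_with_lookups([1, 0, 1, 1], 5, 2, 2))  # -> [3, 1, 4, 1]
```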
4 Matrix Multiplication
Fast algebraic algorithms for matrix multiplication over a field $\mathbb{F}$ critically rely on algebraic identities which take the following form, for positive integers $n_0, t$, formal variables $x_{i,j}, y_{i,j}, z_{i,j}$ for $i, j \in [n_0]$, and field coefficients $\alpha_{i,j,\ell}, \beta_{i,j,\ell}, \gamma_{i,j,\ell} \in \mathbb{F}$ for $i, j \in [n_0]$ and $\ell \in [t]$:
$$\sum_{i=1}^{n_0} \sum_{j=1}^{n_0} \sum_{k=1}^{n_0} x_{i,k} \, y_{k,j} \, z_{i,j} = \sum_{\ell=1}^{t} \left( \sum_{i,j \in [n_0]} \alpha_{i,j,\ell} \, x_{i,j} \right) \left( \sum_{i,j \in [n_0]} \beta_{i,j,\ell} \, y_{i,j} \right) \left( \sum_{i,j \in [n_0]} \gamma_{i,j,\ell} \, z_{i,j} \right). \tag{1}$$
As we will see next, an identity (1) can be used to design a matrix multiplication algorithm which runs using only $O(n^{\log_{n_0} t})$ field operations. For instance, Strassen's algorithm [Str69] gives an identity (1) with $n_0 = 2$ and $t = 7$, yielding exponent $\log_2 7 \approx 2.81$. The matrix multiplication exponent $\omega$ is defined as the infimum of all exponents achievable in this way: for every $\varepsilon > 0$, there is a sufficiently large $n_0$ for which one gets an identity (1) with $t \leq n_0^{\omega + \varepsilon}$. Indeed, it is known [Str73] that any algebraic algorithm for matrix multiplication can be converted into an identity (1) yielding the same running time in this way, up to a constant factor, so $\omega$ captures the best exponent from any possible algebraic algorithm. The fastest known matrix multiplication algorithm [CW90, AW21] shows that $\omega < 2.37286$.
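For concreteness, Strassen's identity can be written as follows (a standard presentation; the $t = 7$ products on the right-hand side are grouped first, and the $z_{i,j}$-coefficients are read off at the end):
$$\begin{aligned} M_1 &= (x_{1,1} + x_{2,2})(y_{1,1} + y_{2,2}), & M_2 &= (x_{2,1} + x_{2,2}) \, y_{1,1}, \\ M_3 &= x_{1,1}(y_{1,2} - y_{2,2}), & M_4 &= x_{2,2}(y_{2,1} - y_{1,1}), \\ M_5 &= (x_{1,1} + x_{1,2}) \, y_{2,2}, & M_6 &= (x_{2,1} - x_{1,1})(y_{1,1} + y_{1,2}), \\ M_7 &= (x_{1,2} - x_{2,2})(y_{2,1} + y_{2,2}), \end{aligned}$$
and then
$$\sum_k x_{1,k} y_{k,1} = M_1 + M_4 - M_5 + M_7, \qquad \sum_k x_{1,k} y_{k,2} = M_3 + M_5,$$
$$\sum_k x_{2,k} y_{k,1} = M_2 + M_4, \qquad \sum_k x_{2,k} y_{k,2} = M_1 - M_2 + M_3 + M_6,$$
using $7$ products of linear combinations where the definition of matrix multiplication would use $n_0^3 = 8$.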
The standard recursive algorithm for multiplying $n \times n$ matrices using identity (1) works as described in Algorithm 1 below. It recurses until it gets to a base case of multiplying $n' \times n'$ matrices with $n' \leq b$ for some parameter $b$. In the usual algorithm, one picks $b$ to be a constant, so that such matrices can be multiplied in a constant number of operations; in our improvement, we will pick a larger $b$ and multiply such matrices using a lookup table.
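Here is a Python sketch (ours, not the paper's listing of Algorithm 1) of the recursive scheme this algorithm follows; alpha, beta, and gamma hold the coefficients of identity (1), and base_mult is the base-case procedure, which the usual algorithm instantiates with the straightforward product and Theorem 4.2 below instantiates with a lookup table:

```python
# Sketch of the recursive scheme behind Algorithm 1 (our reconstruction):
# split into n0 x n0 blocks, form the t pairs of linear combinations from
# identity (1), recurse on each pair, and assemble the output blocks with the
# gamma coefficients. Assumes n0 | n at every level above the base case.
def recursive_matmul(A, B, n0, t, alpha, beta, gamma, q, base, base_mult):
    n = len(A)
    if n <= base:
        return base_mult(A, B)          # small product, e.g., a table lookup
    s = n // n0
    blk = lambda M, i, j: [row[j * s:(j + 1) * s] for row in M[i * s:(i + 1) * s]]
    C = [[0] * n for _ in range(n)]
    for l in range(t):
        # L = sum_{i,j} alpha[l][i][j] A_{i,j};  R = sum_{i,j} beta[l][i][j] B_{i,j}
        L = [[0] * s for _ in range(s)]
        R = [[0] * s for _ in range(s)]
        for i in range(n0):
            for j in range(n0):
                Aij, Bij = blk(A, i, j), blk(B, i, j)
                for r in range(s):
                    for c in range(s):
                        L[r][c] = (L[r][c] + alpha[l][i][j] * Aij[r][c]) % q
                        R[r][c] = (R[r][c] + beta[l][i][j] * Bij[r][c]) % q
        P = recursive_matmul(L, R, n0, t, alpha, beta, gamma, q, base, base_mult)
        # Add gamma[l][i][j] * P into output block C_{i,j}.
        for i in range(n0):
            for j in range(n0):
                for r in range(s):
                    for c in range(s):
                        C[i * s + r][j * s + c] = (C[i * s + r][j * s + c]
                                                   + gamma[l][i][j] * P[r][c]) % q
    return C
```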
Lemma 4.1.
Algorithm 1 correctly outputs the product $A \cdot B$.
Proof.
For a fixed pair $i, j \in [n_0]$, we can see that lines 8-15 of the algorithm will set the output block $C_{i,j}$ to be the following matrix:
$$C_{i,j} = \sum_{\ell=1}^{t} \gamma_{i,j,\ell} \cdot \left( \sum_{i',j' \in [n_0]} \alpha_{i',j',\ell} \, A_{i',j'} \right) \cdot \left( \sum_{i',j' \in [n_0]} \beta_{i',j',\ell} \, B_{i',j'} \right),$$
where $A_{i',j'}$ and $B_{i',j'}$ denote the blocks of $A$ and $B$ in their partitions into $n_0 \times n_0$ grids of blocks.
Notice in particular that this is the coefficient of the formal variable $z_{i,j}$ in the right-hand side of the identity (1), after substituting $A_{i',j'}$ for $x_{i',j'}$ and $B_{i',j'}$ for $y_{i',j'}$. Hence, it is also the coefficient of that variable in the left-hand side, namely,
$$\sum_{k=1}^{n_0} A_{i,k} \cdot B_{k,j}.$$
This is the desired output block $(A \cdot B)_{i,j}$ when multiplying the matrices $A$ and $B$. ∎
Normally one would analyze the recursion for the number of operations performed by this algorithm when $b$ is a constant, and conclude a total operation count of $O(n^{\log_{n_0} t})$. Here we will improve on this over finite fields using a lookup table:
Theorem 4.2.
Fix an identity (1) and let $\omega_0 = \log_{n_0} t$. For any finite field $\mathbb{F}_q$ of constant size $q$, and any positive integer $n$, there is an algorithm for multiplying $n \times n$ matrices over $\mathbb{F}_q$ which uses $O(n^{\omega_0} / (\log n)^{\omega_0/2 - 1})$ bit operations.
Proof.
Let $b = \lfloor \frac{1}{2} \sqrt{\log_q n} \rfloor$, so that $q^{2 b^2} \leq \sqrt{n}$. We begin by iterating over all positive integers $b' \leq b$ and all pairs of $b' \times b'$ matrices $A', B'$ over $\mathbb{F}_q$, computing their product $A' B'$, and storing it in another lookup table. Similar to before, the running time for this step won't substantially contribute to the final running time, so we give a straightforward upper bound on the time it takes. The number of pairs of matrices we need to multiply is at most $b \cdot q^{2 b^2} \leq b \sqrt{n}$, and the time to multiply each pair is at most $O(b^3)$ field operations by the straightforward algorithm. The total time to create the lookup table is hence at most
$$O(b \cdot q^{2 b^2} \cdot b^3 \cdot \log q) \leq O(\sqrt{n} \cdot \log^2 n).$$
Note that this table’s keys and values are strings of length , so lookup table operations can be performed using bit operations.
We now use Algorithm 1, with base case size $b$. The base case procedure is that, whenever we need to multiply two $b' \times b'$ matrices for some $b' \leq b$, we find the result in the lookup table.
Let $T(n')$ denote the running time of this algorithm to multiply two $n' \times n'$ matrices. We get $T(n') = O(\log n)$ if $n' \leq b$, and if $n' > b$, the recurrence
$$T(n') = t \cdot T(n'/n_0) + O(n'^2), \tag{2}$$
recalling that $n_0$ and $t$ are constants given in the identity (1). The $O(n'^2)$ term in the right-hand side of Equation (2) counts the bit operations to do a constant number of additions and scalar multiplications of $n' \times n'$ matrices. Solving this recurrence yields
$$T(n) = O\left( \frac{n^{\omega_0}}{b^{\omega_0 - 2}} + \left( \frac{n}{b} \right)^{\omega_0} \cdot \log n \right) = O\left( \frac{n^{\omega_0}}{(\log n)^{\omega_0/2 - 1}} \right),$$
as desired. ∎
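For completeness, here is the recursion-tree calculation behind the last step (a routine expansion, under our choice of $b$): level $i$ of the tree has $t^i$ subproblems of size $n / n_0^i$, and the leaves are table lookups of cost $O(\log n)$, so
$$T(n) = \sum_{i=0}^{\log_{n_0}(n/b)} t^i \cdot O\left( \frac{n^2}{n_0^{2i}} \right) + t^{\log_{n_0}(n/b)} \cdot O(\log n) = O\left( \frac{n^{\omega_0}}{b^{\omega_0 - 2}} \right) + O\left( \frac{n^{\omega_0} \log n}{b^{\omega_0}} \right),$$
where the geometric sum is dominated by its last term since $t > n_0^2$. Substituting $b^2 = \Theta(\log n)$ makes both terms $O(n^{\omega_0} / (\log n)^{\omega_0/2 - 1})$.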
5 Conclusion
We showed that for two important open problems – determining the complexity of the Walsh-Hadamard transform, and determining the leading constant of Strassen’s algorithm for matrix multiplication – asymptotic improvements over the conjectured optimal arithmetic bounds are possible if one is allowed to use bit operations rather than just arithmetic operations.
Our algorithms only made use of arithmetic operations and lookup table operations, so they could extend to other models of computation as well. One natural question is whether they extend to the standard word RAM model with word size $w = \Theta(\log N)$ on inputs of size $N$. Indeed, operations for the lookup tables we use (with keys and values of $O(\log N)$ bits) require only $O(1)$ word operations in this model.
It is not hard to see that our algorithm for matrix multiplication can be implemented to take advantage of this model, improving Theorem 4.2 to running time $O(n^{\omega_0} / (\log n)^{\omega_0/2})$ (improving by a factor of $\Theta(\log n)$).
On the other hand, our algorithm for the Walsh-Hadamard transform seemingly cannot be implemented in the word RAM model to get asymptotic savings. The main culprit is step 2 in the proof of Lemma 3.1: if we want to efficiently use the lookup table to compute the products $M w_j$, then we first have to permute the entries of the vectors so that each $w_j$ fits in a single word. In other words, we are given the vectors $u_1, \ldots, u_m$ in order, and would like to permute their entries to get the vectors $w_1, \ldots, w_{m^{n-1}}$ in order instead.
Let $p = \Theta(N \log q / w)$ be the number of words that our input fits into. In general it is known that performing a fixed permutation of the bits of a string contained in $p$ words, for $p \leq 2^w$, requires $\Theta(p \log p)$ word operations [BMM97]. However, our particular permutation can be broken up into independent permutations on $O(w)$ words each (e.g., the first $m$ words in the descriptions of $w_1, w_2, \ldots$ are together a permutation of the words containing the first entries of each of $u_1, \ldots, u_m$). It can thus be performed in only $O(p \log w)$ word operations.
Since this must be done at all $\Theta(n / k)$ levels of recursion, it incurs an additional $O(p \log w \cdot n / k)$ word operations. With the parameter setting we ultimately use in Theorem 3.2, this is $\Theta(N)$ word operations to perform all these permutations. By comparison, it is not too difficult to implement the usual fast Walsh-Hadamard transform to use a total of $O(N)$ word operations as well. Hence, the time to perform permutations (which doesn't come into play in the arithmetic or bit operation models) swamps any other computational savings in this model, and another approach is needed for a speedup.
Acknowledgements
I would like to thank Dylan McKay and Ryan Williams for invaluable discussions throughout this project, and anonymous reviewers for helpful comments. This research was supported in part by a grant from the Simons Foundation (Grant Number 825870 JA).
References
- [ADKF70] Vladimir L'vovich Arlazarov, Yefim A. Dinitz, M. A. Kronrod, and Igor Aleksandrovich Faradzhev. On economical construction of the transitive closure of an oriented graph. In Doklady Akademii Nauk, volume 194, pages 487–488. Russian Academy of Sciences, 1970.
- [AW17] Josh Alman and Ryan Williams. Probabilistic rank and matrix rigidity. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 641–652, 2017.
- [AW21] Josh Alman and Virginia Vassilevska Williams. A refined laser method and faster matrix multiplication. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 522–539. SIAM, 2021.
- [BG21] Davis Blalock and John Guttag. Multiplying matrices without multiplying. In International Conference on Machine Learning, pages 992–1004. PMLR, 2021.
- [BMM97] Andrej Brodnik, Peter Bro Miltersen, and J Ian Munro. Trans-dichotomous algorithms without multiplication - some upper and lower bounds. In Workshop on Algorithms and Data Structures, pages 426–439. Springer, 1997.
- [BS19] Gal Beniamini and Oded Schwartz. Faster matrix multiplication via sparse decomposition. In The 31st ACM Symposium on Parallelism in Algorithms and Architectures, pages 11–22, 2019.
- [Bsh95] Nader H. Bshouty. On the additive complexity of $2 \times 2$ matrix multiplication. Information Processing Letters, 56(6):329–335, 1995.
- [CW90] Don Coppersmith and Shmuel Winograd. Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation, 9(3):251–280, 1990.
- [EDS18] Mohab Safey El Din and Éric Schost. Bit complexity for multi-homogeneous polynomial system solving—application to polynomial minimization. Journal of Symbolic Computation, 87:176–206, 2018.
- [HK10] Tor Helleseth and Alexander Kholosha. New binomial bent functions over the finite fields of odd characteristic. In 2010 IEEE International Symposium on Information Theory, pages 1277–1281. IEEE, 2010.
- [HSHVDG16] Jianyu Huang, Tyler M Smith, Greg M Henry, and Robert A Van De Geijn. Strassen’s algorithm reloaded. In SC’16: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 690–701. IEEE, 2016.
- [JPK+20] Yongkweon Jeon, Baeseong Park, Se Jung Kwon, Byeongwook Kim, Jeongin Yun, and Dongsoo Lee. BiQGEMM: Matrix multiplication with lookup table for binary-coding-based quantized DNNs. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–14. IEEE, 2020.
- [KS20] Elaye Karstadt and Oded Schwartz. Matrix multiplication, a little faster. Journal of the ACM (JACM), 67(1):1–31, 2020.
- [LC20] Ting Liu and Xuechen Chen. Deep learning-based belief propagation algorithm over non-binary finite fields. In 2020 International conference on wireless communications and signal processing (WCSP), pages 164–169. IEEE, 2020.
- [Lin91] Andrzej Lingas. Bit complexity of matrix products. Information Processing Letters, 38(5):237–242, 1991.
- [Lok09] Satyanarayana V Lokam. Complexity lower bounds using linear algebra. Foundations and Trends® in Theoretical Computer Science, 4(1–2):1–155, 2009.
- [MQ19] Sihem Mesnager and Longjiang Qu. On two-to-one mappings over finite fields. IEEE Transactions on Information Theory, 65(12):7884–7895, 2019.
- [Pan81] V Pan. The bit-complexity of arithmetic algorithms. Journal of Algorithms, 2(2):144–163, 1981.
- [Pro76] Robert L Probert. On the additive complexity of matrix multiplication. SIAM Journal on Computing, 5(2):187–203, 1976.
- [RL01] B. Sundar Rajan and Moon Ho Lee. Quasicyclic dyadic codes in Walsh-Hadamard domain. In Proceedings. 2001 IEEE International Symposium on Information Theory (IEEE Cat. No. 01CH37252), page 37. IEEE, 2001.
- [Str69] Volker Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 13(4):354–356, 1969.
- [Str73] Volker Strassen. Vermeidung von divisionen. Journal für die reine und angewandte Mathematik, 264:184–202, 1973.
- [VDHL13] Joris Van Der Hoeven and Grégoire Lecerf. On the bit-complexity of sparse polynomial and series multiplication. Journal of Symbolic Computation, 50:227–254, 2013.
- [Win71] Shmuel Winograd. On multiplication of $2 \times 2$ matrices. Linear Algebra and its Applications, 4(4):381–388, 1971.
- [XQ19] Guangkui Xu and Longjiang Qu. Three classes of minimal linear codes over the finite fields of odd characteristic. IEEE Transactions on Information Theory, 65(11):7067–7078, 2019.
- [Yu18] Huacheng Yu. An improved combinatorial algorithm for boolean matrix multiplication. Information and Computation, 261:240–247, 2018.