Dither computing: a hybrid deterministic-stochastic computing framework
Abstract
Stochastic computing has a long history as an alternative method of performing arithmetic on a computer. While it can be considered an unbiased estimator of real numbers, it has a variance and MSE on the order of Θ(1/N), where N is the number of pulses in the representation. On the other hand, deterministic variants of stochastic computing remove the stochastic aspect, but cannot approximate arbitrary real numbers with arbitrary precision and are biased estimators. However, they have an asymptotically superior MSE on the order of O(1/N²). Recent results in deep learning with stochastic rounding suggest that the bias in the rounding can degrade performance. We propose an alternative framework, called dither computing, that combines aspects of stochastic computing and its deterministic variants: it can perform computing with similar efficiency, is unbiased, and has a variance and MSE on the optimal order of Θ(1/N²). We also show that it can be beneficial in stochastic rounding applications. We provide implementation details and give experimental results to comparatively show the benefits of the proposed scheme.
Index Terms:
computer arithmetic; stochastic computing; stochastic rounding; deep learning
I Introduction
Stochastic computing [1, 2, 3, 4] has a long history and is an alternative framework for performing computer arithmetic using stochastic pulses. While not as efficient as binary encoding in representing numbers, it provides a simpler implementation for arithmetic units and can tolerate errors, making it a suitable choice for computational substrates that are inherently noisy, such as analog computers and quantum systems. Stochastic computing can approximate arbitrary real numbers and perform arithmetic on them to reach the correct value in expectation, but the stochastic nature means that the result is not precise each time. Recently, Ref. [5] suggested that deterministic variants of stochastic computing can be just as efficient and do not have the random errors introduced by the random nature of the pulses. Nevertheless, in such deterministic variants the finiteness of the scheme implies that they cannot approximate general real numbers with arbitrary accuracy. This paper proposes a framework that combines these two approaches and inherits the best properties of both schemes. In the process, we also provide a more complete probabilistic analysis of these schemes. In addition to considering both the first moment of the approximation error (e.g. average error) and the variance of the representation, we also model the real numbers that are represented and processed as being drawn from a prior distribution. This allows us to provide a more complete picture of the tradeoffs among the bias and variance of the approximation, the number of pulses, and the prior distribution of the data.
II Representation of real numbers via sequences
For a real number r, ⌊r⌋ has the usual definition as the largest integer less than or equal to r. We consider two independent random variables X, Y with support in the unit interval [0, 1]. A common assumption is that X and Y are uniformly distributed. The interpretation is that X and Y generate the real numbers that we want to perform arithmetic on. In order to represent a sample x from X, the main idea of stochastic computing (and other representations such as unary coding [6]) is to use a sequence of N binary pulses. In particular, x is represented by a sequence of independent Bernoulli trials X_1, ..., X_N. We estimate x via x̂ = (1/N)∑_{i=1}^{N} X_i. Our standing assumption is that the randomness in the trials X_i, in X, and in Y are mutually independent. We are interested in how well x̂ approximates a sample of X. In particular, we define the error e = x̂ − X and are interested in the expected mean squared error (EMSE) defined as E(e²). Note that E(e²) consists of two components, bias and variance, and the bias-variance decomposition [7] is given by E(e²) = E(b(X)²) + E(Var(x̂|X)), where b(x) = E(x̂|X = x) − x is the conditional bias. The following result gives a lower bound on the EMSE:
Theorem II.1
E(e²) ≥ E(d(X)²), where d(x) = min_{0 ≤ j ≤ N} |x − j/N| is the distance from x to the nearest rational number with denominator N.
Proof: Follows from the fact that x̂ is a rational number with denominator N and thus |x̂ − X| ≥ d(X). ∎
Given the standard assumption that X is uniformly distributed, this implies that E(e²) ≥ 1/(12N²), i.e. the EMSE can decrease at a rate of at most Θ(1/N²).
In the next sections, we analyze how well x̂ approximates samples of X asymptotically as N → ∞ by analyzing the error for various variants of stochastic computing.
II-A Stochastic computing
A detailed survey of stochastic computing can be found in Ref. [2]. We give here a short description of the unipolar format. Using the notation above, the X_i are chosen to be iid Bernoulli trials with P(X_i = 1) = x. Then E(x̂) = x and x̂ is an unbiased estimator of x. Since Var(X_i) = x(1 − x) and Var(x̂) = x(1 − x)/N ≤ 1/(4N), we have E(e²|X = x) = x(1 − x)/N, which is Θ(1/N) for x ∈ (0, 1). More specifically, if X has a uniform distribution on [0, 1], then E(e²) = E(X(1 − X))/N = 1/(6N).
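As an illustration (our own sketch, not part of the original experiments), the following Python snippet empirically checks the unbiasedness and the Θ(1/N) EMSE of the unipolar representation; the helper names are ours.

```python
import numpy as np

def stochastic_encode(x, N, rng):
    """Unipolar stochastic computing: N iid Bernoulli(x) pulses."""
    return (rng.random(N) < x).astype(np.uint8)

def estimate(bits):
    """Recover the value as the fraction of 1-pulses."""
    return bits.mean()

rng = np.random.default_rng(0)
N, trials = 256, 10000
xs = rng.random(trials)                      # samples of X ~ Uniform[0,1]
errs = np.array([estimate(stochastic_encode(x, N, rng)) - x for x in xs])
print("bias ≈", errs.mean())                 # close to 0 (unbiased)
print("EMSE ≈", (errs**2).mean(), "vs 1/(6N) =", 1 / (6 * N))
```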
II-B A deterministic variant of stochastic computing
In [5], deterministic variants of stochastic computing (in the sequel, for brevity, we will sometimes refer to these schemes simply as "deterministic variants") are proposed. Several approaches, such as clock dividing and relatively prime encoding, are introduced and studied. One of the benefits of a deterministic algorithm is the lack of randomness, i.e. the representation of x via X_i does not change between runs and Var(x̂|X) = 0. However, the bias term can be nonzero. Because x is represented by counting the number of 1's in X_1, ..., X_N, only fractions with denominator N can be represented exactly. For x = m/(2N) with m odd, the error is 1/(2N), the largest possible; for such values of x, |b(x)| = |x̂ − x| = 1/(2N). If X is a discrete random variable with support only on the rational points j/N for integers 0 ≤ j ≤ N, then E(e²) = 0. However, in practice we want to represent arbitrary real numbers in [0, 1]. Assume that X is uniformly distributed on [0, 1]. By symmetry, we only need to analyze the error on an interval of length 1/(2N) around a representable point. Then |e| ≤ 1/(2N) and E(e²|X ∈ interval) = 1/(12N²). It follows that E(e²) = 1/(12N²) = Θ(1/N²), matching the lower bound of Theorem II.1.
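A minimal Python sketch of one simple deterministic variant, assuming a round-to-nearest unary encoding (this particular encoding choice and the helper names are our own, for illustration only), shows the zero variance and Θ(1/N²) EMSE:

```python
import numpy as np

def deterministic_encode(x, N):
    """One simple deterministic variant: round(N*x) leading 1-pulses (unary)."""
    ones = int(round(N * x))
    bits = np.zeros(N, dtype=np.uint8)
    bits[:ones] = 1
    return bits

N = 256
xs = np.random.default_rng(1).random(10000)
errs = np.array([deterministic_encode(x, N).mean() - x for x in xs])
print("mean |bias| ≈", np.abs(errs).mean())   # order 1/N: each value has a fixed bias
print("EMSE ≈", (errs**2).mean(), "vs 1/(12N^2) =", 1 / (12 * N**2))
```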
II-C Stochastic rounding
As the deterministic variant (Sect. II-B) has a better asymptotic EMSE than stochastic computing (Sect. II-A), one might wonder why stochastic computing is useful at all. It is instructive to consider a special case: 1-bit stochastic rounding [8], in which rounding a number x ∈ [0, 1] is given by a Bernoulli trial x̂ with P(x̂ = 1) = x. This type of rounding is equivalent to the special case N = 1 of the stochastic computing mechanism in Sect. II-A. In deterministic rounding, x̂ = 1 if x ≥ 1/2 and x̂ = 0 otherwise, and the corresponding MSE is min(x, 1 − x)² (under a uniform prior this gives an EMSE of 1/12). For stochastic rounding, x̂ has a Bernoulli distribution. If P(x̂ = 1) = p, then E((x̂ − x)²) = p(1 − x)² + (1 − p)x². Since this expression is linear in p, it follows that for x < 1/2 it is minimized when p = 0, and for x > 1/2 it is minimized when p = 1, i.e. it is minimized when x̂ is the deterministic rounding of x. Thus E((x̂ − x)²) ≥ min(x, 1 − x)², with equality exactly when x̂ is rounded deterministically. This shows that the EMSE for deterministic rounding is minimal among all stochastic rounding schemes. Thus at first glance, deterministic rounding is preferred over stochastic rounding. While deterministic rounding has a lower EMSE than stochastic rounding, it is a biased estimator. This is problematic for applications such as reduced precision deep learning, where an unbiased estimator such as stochastic rounding has been shown to provide improved performance over a biased estimator such as deterministic rounding. As indicated in [9], part of the reason is that subsequent values that are rounded deterministically are correlated, and in this case stochastic rounding prevents stagnation.
II-D Dither computing: A hybrid deterministic-stochastic computing framework
The main goal of this paper is to introduce dither computing, a hybrid deterministic-stochastic computing framework that combines the benefits of stochastic computing (Sec. II-A) and its deterministic variants (Sec. II-B): it eliminates the bias component while preserving the optimal asymptotic rate Θ(1/N²) for the EMSE. Furthermore, it converges to zero bias faster than stochastic computing. The main idea is to approximate the desired quantity deterministically as closely as possible and to approximate the remaining difference stochastically. The encoding is constructed as follows. Let σ be a permutation of {1, ..., N}.
For x ≤ 1/2, let k = ⌊Nx⌋ and r = Nx − k. Then we pick the Bernoulli trials X_i with P(X_{σ(i)} = 1) = 1 for i ≤ k and P(X_{σ(i)} = 1) = q for i > k, with q = r/(N − k). Then E(x̂) = (k + (N − k)q)/N = x. In addition, since N − k ≥ N/2 and q ≤ 2/N, this implies that Var(x̂) = (N − k)q(1 − q)/N² ≤ 1/N². E((x̂ − x)²) = Var(x̂) ≤ 1/N². Thus the bias is 0 and the EMSE is of the order O(1/N²). It is clear that this remains true whether σ is a deterministic or a random permutation, as x̂ does not depend on σ.
For x > 1/2, let k = ⌊N(1 − x)⌋ and r = N(1 − x) − k. We pick the Bernoulli trials X_i with P(X_{σ(i)} = 1) = 0 for i ≤ k and P(X_{σ(i)} = 1) = q for i > k, with q = 1 − r/(N − k). Then E(x̂) = (N − k)q/N = (N − k − r)/N = x. In addition, since N − k ≥ N/2 and 1 − q ≤ 2/N, this implies that Var(x̂) = (N − k)q(1 − q)/N² ≤ 1/N². E((x̂ − x)²) = Var(x̂) ≤ 1/N². Thus again the bias is 0 and the EMSE is of the order O(1/N²).
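A minimal Python sketch of this encoding, following the construction as reconstructed above (the helper name dither_encode and the complementing trick used for x > 1/2 are ours), illustrates the zero bias and O(1/N²) variance:

```python
import numpy as np

def dither_encode(x, N, rng, sigma=None):
    """Dither computing representation: deterministic pulses plus a stochastic residual."""
    if sigma is None:
        sigma = np.arange(N)                  # identity permutation
    flip = x > 0.5                            # encode 1-x and complement the pulses
    v = 1.0 - x if flip else x
    k = int(np.floor(N * v))                  # number of deterministic pulses
    q = (N * v - k) / (N - k)                 # residual probability, O(1/N)
    bits = np.zeros(N, dtype=np.uint8)
    bits[sigma[:k]] = 1
    bits[sigma[k:]] = rng.random(N - k) < q
    return 1 - bits if flip else bits

rng = np.random.default_rng(2)
N, x = 256, 0.3141
vals = np.array([dither_encode(x, N, rng).mean() for _ in range(5000)])
print("bias ≈", vals.mean() - x)              # close to 0
print("var ≈", vals.var(), "<= 1/N^2 =", 1 / N**2)
```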
The above analysis shows that dither computing offers a better EMSE than stochastic computing while preserving the zero bias property. In order for such representations to be useful in building computing machinery, we need to show that this advantage persists under arithmetic operations such as multiplication and (scaled) addition.
III Multiplication of values
In this section, we consider whether this advantage is maintained under multiplication, which for these schemes is implemented as a bitwise AND of the two sequences. The sequence corresponding to the product of x and y is given by Z_i = X_i Y_i and the product is estimated via ẑ = (1/N)∑_{i=1}^{N} Z_i.
III-A Stochastic computing
In this case we want to compute the product z = xy. Let X_i and Y_i be independent with P(X_i = 1) = x and P(Y_i = 1) = y. Then for Z_i = X_i Y_i, the Z_i are Bernoulli with P(Z_i = 1) = xy and E(ẑ) = xy. Var(Z_i) = xy(1 − xy) and Var(ẑ) = xy(1 − xy)/N. Thus ẑ is unbiased, and the variance and the MSE of the product maintain the suboptimal asymptotic rate Θ(1/N).
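A short Python illustration of the bitwise-AND product for the stochastic scheme (our own sketch; the encoder from the earlier snippet is repeated here for self-containedness):

```python
import numpy as np

def stochastic_encode(v, N, rng):
    """Unipolar stochastic encoding: N iid Bernoulli(v) pulses."""
    return (rng.random(N) < v).astype(np.uint8)

rng = np.random.default_rng(3)
N, x, y = 256, 0.6, 0.7
prods = np.array([(stochastic_encode(x, N, rng) & stochastic_encode(y, N, rng)).mean()
                  for _ in range(5000)])
print("bias ≈", prods.mean() - x * y)         # close to 0
print("MSE ≈", ((prods - x * y)**2).mean(), "vs xy(1-xy)/N =", x * y * (1 - x * y) / N)
```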
III-B Deterministic variant of stochastic computing
For numbers x, y ∈ [0, 1], we consider a unary encoding for x, i.e. X_i = 1 for i ≤ kₓ and X_i = 0 otherwise, where kₓ is the integer nearest to Nx. For y we have Y_i = 1 if ⌊iy⌋ > ⌊(i − 1)y⌋, which spreads the 1's of y as evenly as possible. Let t be the number of indices i such that Z_i = X_i Y_i = 1; then t = ⌊kₓ y⌋ and |t − Nxy| ≤ c for a constant c independent of N. This means that |ẑ − xy| ≤ c/N. Since the encoding is deterministic, Var(ẑ) = 0, and thus the bias is on the order of 1/N and the EMSE is on the order of 1/N².
III-C Dither computing
For numbers x, y, we consider the encoding in Section II-D with the permutation for x defined as the identity and the permutation for y defined as spreading the deterministic 1-bits of y as evenly as possible. In particular, let y be a sample of Y and m = ⌊Ny⌋; the m deterministic 1-bits are placed at the positions ⌈(j − U)N/m⌉ for j = 1, ..., m, where U is a uniformly distributed random variable on [0, 1] independent from X and Y. We will only consider the case x, y ≤ 1/2, as the other cases are similar. Let k = ⌊Nx⌋, qₓ = (Nx − k)/(N − k) and q_y = (Ny − m)/(N − m). Then Z_i = X_i Y_i = 1 for Nxy of the indices on average and Z_i = 0 otherwise. This implies that E(ẑ) = xy and the bias is 0. Similar to the deterministic variant, it can be shown that the fluctuations of ẑ around xy are of order 1/N, i.e. Var(ẑ) ≤ c/N² for a constant c, and thus E((ẑ − xy)²) is O(1/N²).
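The following Python sketch checks the bitwise-AND product of the two dither encodings under the assumptions stated above (evenly spaced deterministic 1-bits with a random offset U); it is our own illustration of the reconstruction, and the helper names are ours:

```python
import numpy as np

def dither_encode_x(x, N, rng):
    """Dither encoding with the identity permutation (x <= 1/2 assumed)."""
    k = int(np.floor(N * x))
    q = (N * x - k) / (N - k)
    bits = np.zeros(N, dtype=np.uint8)
    bits[:k] = 1
    bits[k:] = rng.random(N - k) < q
    return bits

def dither_encode_y(y, N, rng):
    """Dither encoding with the deterministic 1-bits spread out (0 < y <= 1/2 assumed)."""
    m = int(np.floor(N * y))                          # m >= 1 assumed
    q = (N * y - m) / (N - m)
    bits = (rng.random(N) < q).astype(np.uint8)       # stochastic residual
    U = rng.random()                                  # random offset ("dither")
    pos = np.ceil((np.arange(1, m + 1) - U) * N / m).astype(int) - 1
    bits[pos] = 1                                     # evenly spread deterministic 1s
    return bits

rng = np.random.default_rng(4)
N, x, y = 256, 0.31, 0.42
prods = np.array([(dither_encode_x(x, N, rng) & dither_encode_y(y, N, rng)).mean()
                  for _ in range(5000)])
print("bias ≈", prods.mean() - x * y)                 # close to 0
print("MSE ≈", ((prods - x * y)**2).mean(), "(compare 1/N^2 =", 1 / N**2, ")")
```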
IV Scaled addition (or averaging) of values
For x, y, the output of the scaled addition (or averaging) operation is z = (x + y)/2. An auxiliary control sequence of bits S_i is defined that is used to toggle between the two sequences by defining Z_i as alternating between X_i and Y_i: Z_i = S_i X_i + (1 − S_i)Y_i, and z is estimated via ẑ = (1/N)∑_{i=1}^{N} Z_i.
IV-A Stochastic computing
The control sequence S_i is defined as independent Bernoulli trials with P(S_i = 1) = 1/2. It is assumed that S_i, X_i and Y_i are independent. Then E(Z_i) = (x + y)/2 = z, i.e., E(ẑ) = z. Var(Z_i) = z(1 − z) ≤ 1/4. Thus E((ẑ − z)²) = z(1 − z)/N = O(1/N).
IV-B Deterministic variant of stochastic computing
For this case the X_i, Y_i are deterministic and we define S_i = 1 if i is even and S_i = 0 otherwise. Let N_e and N_o be the number of even and odd integers in {1, ..., N}, respectively. Then ∑_i Z_i = ∑_{i even} X_i + ∑_{i odd} Y_i and ẑ = (∑_{i even} X_i + ∑_{i odd} Y_i)/N. If N is even, N_e = N_o = N/2. If N is odd, N_e = (N − 1)/2 and N_o = (N + 1)/2. In either case, the unary encodings contribute Nx/2 + O(1) and Ny/2 + O(1) ones to the two sums, so |ẑ − z| ≤ c/N for a constant c and Var(ẑ) = 0, i.e. the bias is O(1/N) and E((ẑ − z)²) = O(1/N²).
IV-C Dither computing
We set σₓ and σ_y both equal to the identity permutation and define the sequence S_i with S_i = 1 for i odd and S_i = 0 otherwise. With probability 1/2, Z_i = X_i for all odd i and Z_i = Y_i for all even i, and with probability 1/2 the roles are reversed. Thus the 2 interleaved sequences are each chosen with probability 1/2. Note that the Z_i are correlated. ẑ can be written as ẑ = TA + (1 − T)B, where T is a Bernoulli trial with P(T = 1) = 1/2 independent of the X_i and Y_i, A = (∑_{i odd} X_i + ∑_{i even} Y_i)/N and B = (∑_{i even} X_i + ∑_{i odd} Y_i)/N. Since A + B = x̂ + ŷ and E(x̂ + ŷ) = x + y, this means that E(ẑ) = (E(A) + E(B))/2 = z and the bias is 0. The 2 sequences for the Z_i select 2 disjoint sets of random variables; since each takes about half of all the X_i and Y_i, alternating over the indices i, and the dither encodings place their stochastic pulses with probabilities that are O(1/N), it is easy to show that Var(A), Var(B), (E(A) − z)² and (E(B) − z)² are all O(1/N²). Furthermore, decomposing Var(ẑ) = Var((A + B)/2) + E((A − B)²)/4 and using the independence of T from A and B shows that Var(ẑ) = O(1/N²), and thus E((ẑ − z)²) = O(1/N²).
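A Python sketch of the dither averaging scheme (our own illustration of the construction described above, reusing the identity-permutation dither encoder; helper names are ours):

```python
import numpy as np

def dither_encode(v, N, rng):
    """Identity-permutation dither encoding (v <= 1/2 assumed for brevity)."""
    k = int(np.floor(N * v))
    q = (N * v - k) / (N - k)
    bits = np.zeros(N, dtype=np.uint8)
    bits[:k] = 1
    bits[k:] = rng.random(N - k) < q
    return bits

def dither_average(xbits, ybits, rng):
    """Scaled addition: interleave the two sequences, choosing the parity at random."""
    N = len(xbits)
    odd = (np.arange(N) % 2 == 1)
    z = np.where(odd, xbits, ybits) if rng.random() < 0.5 else np.where(odd, ybits, xbits)
    return z.mean()

rng = np.random.default_rng(5)
N, x, y = 256, 0.23, 0.41
z = (x + y) / 2
est = np.array([dither_average(dither_encode(x, N, rng), dither_encode(y, N, rng), rng)
                for _ in range(5000)])
print("bias ≈", est.mean() - z)               # close to 0
print("MSE ≈", ((est - z)**2).mean(), "(compare 1/N^2 =", 1 / N**2, ")")
```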
V Numerical results
In Figs. 1-6 we show the EMSE and the bias for the computing schemes above, obtained by generating independent pairs (x, y) from the uniform distributions of X and Y. For each pair (x, y), 1000 trials of dither computing and stochastic computing are used to represent them and to compute the product xy and the average (x + y)/2. (The set of pairs is the same for the 3 schemes. For the deterministic variant, only 1 trial is performed, as x̂ and ŷ are deterministic.)
We see that the sample estimates of the bias for the representation, the product and the scaled addition are lower for both the stochastic computing scheme and the dither computing scheme as compared with the deterministic variant. On the other hand, the dither computing scheme has an EMSE on the order of O(1/N²), similar to the deterministic scheme, whereas the stochastic computing scheme has a higher EMSE on the order of O(1/N).
Even though both stochastic computing and dither computing have zero bias, the sample estimate of this bias is lower for dither computing than for stochastic computing. This is because the standard error of the mean (SEM) is proportional to the standard deviation, and dither computing has standard deviation O(1/N) vs O(1/√N) for stochastic computing; this is observed in the slopes of the curves in Figs. 2, 4 and 6.
Furthermore, even though the dither computing representations of x and y have worse EMSE than the deterministic variant, the dither computing representations of both the product and the scaled addition have better EMSE. The asymptotic behavior of the bias and EMSE for these different schemes is listed in Table I.
Table I: Asymptotic behavior of the bias, variance and EMSE of the three schemes.

| | Stoch. Comp. | Determ. Variant | Dither Comp. |
|---|---|---|---|
| Bias (repr.) | 0 | O(1/N) | 0 |
| Variance (repr.) | Θ(1/N) | 0 | O(1/N²) |
| EMSE (repr.) | Θ(1/N) | Θ(1/N²) | O(1/N²) |
| Bias (mult.) | 0 | O(1/N) | 0 |
| Variance (mult.) | Θ(1/N) | 0 | O(1/N²) |
| EMSE (mult.) | Θ(1/N) | O(1/N²) | O(1/N²) |
| Bias (average) | 0 | O(1/N) | 0 |
| Variance (average) | Θ(1/N) | 0 | O(1/N²) |
| EMSE (average) | Θ(1/N) | O(1/N²) | O(1/N²) |
VI Asymmetry in operands
In the dither computing scheme (and in the deterministic variant of stochastic computing as well), the encodings of the two operands x and y are different. For instance, for multiplication x is encoded as a unary-style number with its 1-bits grouped together (denoted as Format 1) and y has its 1-bits spread out as much as possible (denoted as Format 2), while for scaled addition both x and y are encoded in Format 1. For multilevel arithmetic operations, this asymmetry requires additional logic to convert the output of multiplication and scaled addition into one of these 2 formats, depending on which operand of which operation consumes it next. On the other hand, there are several applications where the need for this additional step is reduced. For instance:
1. In memristive crossbar arrays [10], the sequence of pulses in the product is integrated and converted to digital via an A/D converter, and thus the product sequence of pulses is not used in subsequent computations.
2. When stochastic computing is used to implement the matrix-vector multiply-and-add in neural networks [11], one of the operands is always a weight or a bias and is thus fixed throughout the inference operation. The weights can therefore be precoded in Format 2 for multiplication and the bias values precoded in Format 1 for addition, whereas the data to be operated on is always in Format 1 and the result is recoded to Format 1 for the next operation.
VII Dither rounding: stochastic rounding revisited
In order to reduce power while increasing throughput, there has been much research activity in using reduced precision hardware, in particular in deep learning, both for training and for inference [12, 13]. In these applications, the data and operations are performed with far lower precision than traditional fixed point or floating point arithmetic, going as low as a single bit [14]. To address the loss of precision in such reduced precision arithmetic units, stochastic rounding has emerged as an alternative to deterministic rounding in applications such as solving differential equations [15] and deep learning [16]. As mentioned in Sec. II-C, 1-bit stochastic rounding can be considered as the special case of stochastic computing with N = 1. For k-bit stochastic rounding the situation is similar, as only the least significant bit is stochastic. An alternative interpretation is that stochastic computing is stochastic rounding in time, i.e. x̂ = (1/N)∑_i X_i can be considered as applying stochastic rounding N times. Since the standard error of the mean of dither computing is asymptotically superior to that of stochastic computing, we expect this advantage to persist for rounding as well when it is applied over time.
Thus we introduce dither rounding as follows. We assume v ≥ 0, as the case v < 0 can be handled similarly. We define the dither rounding of a real number v as d_i(v) = ⌊v⌋ + X_i(v − ⌊v⌋), where X_i(·) denotes the i-th bit of the dither computing representation (as defined in Sect. II-D) of the fractional part v − ⌊v⌋. Note that there is an index i in the definition of d_i, which is an integer 1 ≤ i ≤ N. In practice we will compute i as i = σ((c mod N) + 1), where c counts how many times the dither rounding operation has been applied so far and σ is a fixed permutation, one for the left operand and one for the right operand of the scalar multiplier. The implementation complexity of dither rounding is similar to stochastic rounding, except that we need to keep track of the index i, whereas in stochastic rounding the rounding of the elements is done independently.
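A minimal Python sketch of a dither-rounding helper (our own illustration; the class name, the counter-based index and the use of the identity permutation are simplifying assumptions):

```python
import numpy as np

class DitherRounder:
    """Integer rounding whose average over N successive calls is unbiased."""
    def __init__(self, N, seed=0):
        self.N = N
        self.count = 0                        # how many roundings have been applied
        self.rng = np.random.default_rng(seed)

    def round(self, v):
        i = self.count % self.N               # index (identity permutation assumed)
        self.count += 1
        f = v - np.floor(v)                   # fractional part in [0,1)
        if f <= 0.5:
            k = int(np.floor(self.N * f))
            q = (self.N * f - k) / (self.N - k)
            bit = 1 if i < k else int(self.rng.random() < q)
        else:
            k = int(np.floor(self.N * (1 - f)))
            q = 1 - (self.N * (1 - f) - k) / (self.N - k)
            bit = 0 if i < k else int(self.rng.random() < q)
        return float(np.floor(v)) + bit

r = DitherRounder(N=16)
outs = [r.round(3.3) for _ in range(16)]
print(outs, "mean =", np.mean(outs))          # expected mean is 3.3 over one period
```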
To illustrate the performance of these different rounding schemes, consider the problem of matrix-matrix multiplication, a workhorse of computational science and of deep learning algorithms. Let A and B be n×m and m×p matrices with elements in [0, 1]. We denote the (i, j)-th element of A as a_{ij}. The goal is to compute the matrix C = AB. A straightforward algorithm for computing C requires nmp (scalar) multiplications. Although there are asymptotically faster algorithms for square matrices, such as Strassen [17], Coppersmith-Winograd [18] and Alman-Williams [19], they are not widely used in practice. Let us assume that we have at our disposal only k-bit fixed point digital multipliers, and thus floating point real numbers are rounded to k bits before using the multiplier. We use the following standard definition of a k-bit quantizer; for simplicity, we will only deal with nonnegative numbers. The quantized value is simply Q(v) = ⌊2^k v + 1/2⌋/2^k when 0 ≤ ⌊2^k v + 1/2⌋ ≤ 2^k − 1. If ⌊2^k v + 1/2⌋ < 0, then Q(v) = 0 (underflow), and if ⌊2^k v + 1/2⌋ > 2^k − 1, then Q(v) = (2^k − 1)/2^k (overflow). We will also refer to this as traditional rounding. We want to compare the performance of computing C using traditional rounding, stochastic rounding and dither rounding. In particular, since each element of A is used p times and each element of B is used n times, for dither rounding we set N = p for matrix A and N = n for matrix B. For dither rounding the computation of each of the partial products a_{il}b_{lj} is illustrated in Fig. 7, and the other schemes are obtained by simply replacing the rounding scheme. We measure the error by computing the Frobenius matrix norm e_F = ‖C − Ĉ‖_F, where Ĉ is the product matrix computed using the specified rounding method and the k-bit fixed point multiplier. In our case this is implemented by rescaling the interval [0, 1] to [0, 2^k − 1] and rounding to fixed point k-bit integers. Note that the Frobenius matrix norm is equivalent to the vector 2-norm when the matrix is flattened into a vector.
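The following Python sketch (our own illustration, not the code used for the figures) implements the k-bit quantizer described above and the per-partial-product rounding loop, and compares the Frobenius error of traditional and stochastic rounding on matrices whose entries are narrow compared with the quantization step; dither rounding can be plugged into the same loop using the hypothetical DitherRounder from the previous sketch. All function names are ours.

```python
import numpy as np

def quantize(v, k, rng=None):
    """Traditional round-to-nearest k-bit quantizer on [0, 1]."""
    levels = 2**k
    return np.clip(np.floor(levels * v + 0.5), 0, levels - 1) / levels

def stochastic_round(v, k, rng):
    """Stochastic rounding to the k-bit grid: unbiased away from the clipping boundary."""
    levels = 2**k
    scaled = np.clip(levels * v, 0, levels - 1)
    low = np.floor(scaled)
    return (low + (rng.random(np.shape(v)) < (scaled - low))) / levels

def rounded_matmul(A, B, k, rounder, rng):
    """C_ij = sum_l round(a_il) * round(b_lj), with a fresh rounding per partial product."""
    n, m = A.shape
    p = B.shape[1]
    C = np.zeros((n, p))
    for l in range(m):
        a = rounder(np.repeat(A[:, l][:, None], p, axis=1), k, rng)   # shape (n, p)
        b = rounder(np.repeat(B[l, :][None, :], n, axis=0), k, rng)   # shape (n, p)
        C += a * b
    return C

rng = np.random.default_rng(6)
n = m = p = 100
A = rng.random((n, m)) * 0.03                 # entries below half a quantization step
B = rng.random((m, p)) * 0.03                 # for k = 4, so traditional rounding -> 0
C = A @ B
k = 4
C_trad = rounded_matmul(A, B, k, quantize, rng)
C_stoc = rounded_matmul(A, B, k, stochastic_round, rng)
print("e_F traditional:", np.linalg.norm(C - C_trad))
print("e_F stochastic :", np.linalg.norm(C - C_stoc))
```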
In Sect. II-C it was shown that deterministic rounding has the lowest EMSE, but Ref. [9] argues that correlation in subsequent data makes an unbiased estimator such as stochastic rounding a better choice for deep learning. We propose another reason why dither or stochastic rounding can be superior to deterministic rounding in deep learning. If the prior distribution of the data is not uniform, the output levels of the quantizer are not used uniformly, and some output levels may be wasted by rarely being used. As an extreme example, if the input is in the range [0, 2^{−(k+1)}), the output of a k-bit quantizer with deterministic rounding will always be 0 and all information about the input is lost. On the other hand, the output from dither or stochastic rounding will take on both the values 0 and 2^{−k}.
If we assume that the range of the matrix elements is narrow compared with the quantization interval, then we expect dither rounding (and stochastic rounding) to outperform traditional rounding. For example, take the special case A = aJ and B = bJ, where J is the square m×m matrix of all 1's and 0 < a, b < 2^{−(k+1)}. When we use traditional rounding to round the elements of A and B, the corresponding Ĉ is the zero matrix, so e_F = ‖C − Ĉ‖_F = ‖C‖_F = abm², which is nonzero. The analysis in Section III shows that for both dither rounding and stochastic rounding the resulting Ĉ is an unbiased estimate of C, with a smaller variance, and hence a smaller expected error, for dither rounding than for stochastic rounding.
To ensure there is no overflow or underflow in the quantization process, in practical applications such as deep learning training the numbers are conservatively scaled to lie well within the range of the quantizer, so the above assumption is reasonable. As another example, we generate 100 pairs of 100 by 100 matrices A and B whose elements are randomly chosen from a range that is narrow compared with the quantization interval. The average e_F for traditional rounding, stochastic rounding and dither rounding is shown in Fig. 8. (Note that for traditional rounding with small k, A and B are both rounded to the zero matrix, and e_F = ‖C‖_F in this case.) We see that dither rounding has a smaller e_F than stochastic rounding, and that for small k both dither rounding and stochastic rounding have significantly lower error in computing C than traditional rounding. There is a threshold above which traditional rounding outperforms dither or stochastic rounding as k grows, and we expect this threshold to increase as the matrix dimensions increase.
Next, we apply this approach to a simple neural network solving the MNIST handwritten digit recognition task [20]. The input images are grayscale images with pixel values in the range [0, 1]. A single-layer neural network with a softmax output attains a reference accuracy on the test set, which we denote as the baseline accuracy. The network has a single weight matrix and a single bias vector, and inference includes a single matrix-matrix multiplication of the input data matrix with the weight matrix. We scale the weight matrix so that its entries also lie in [0, 1]. In order to use a k-bit fixed point multiplier, we rescale both the weights and the input from [0, 1] to [0, 2^k − 1] and apply the standard k-bit quantizer. Note that since the input is restricted to a sub-interval of this range, it does not fully utilize the range of the quantizer. We apply the 3 rounding methods discussed earlier to compute the partial products in the matrix multiplication (Fig. 9). We see in Fig. 9 that dither rounding has similar classification accuracy as stochastic rounding, and both are significantly better than deterministic rounding for small k. Note that the rounding methods sometimes have slightly better accuracy than using full precision. Furthermore, dither rounding has less variance in the classification accuracy than stochastic rounding (Fig. 10).
VIII Other dither rounding schemes for matrix multiplication
In the matrix multiplication scheme above, the dither (or stochastic) rounding operation is performed on each of the partial products, with 2 rounding operations per partial product (Fig. 7), resulting in 2nmp rounding operations. We next consider other variants of dither rounding for matrix multiplication. For instance, one can compute the partial product d_i(a_{il})d_j(b_{lj}) where a_{il} is rounded once for each l and the result reused for each j, whereas b_{lj} is rounded afresh for each partial product. For the MNIST task this corresponds to the input being quantized only once. We find that this dither rounding variant results in slightly better performance than stochastic rounding, as shown in Figs. 11 and 12. The number of rounding operations is reduced to nm + nmp.
Another variant is to apply the rounding to A and B separately and then perform the matrix multiplication on the rounded matrices. This requires only nm + mp rounding operations, which can be much less than 2nmp. We show that this variant (for both dither and stochastic rounding) can still provide benefits when compared with deterministic rounding. The results are shown in Figs. 13-14. For stochastic and dither rounding, the mean accuracy over multiple trials is plotted. We see that the results are similar to Figs. 9-10.
We also performed the same experiment on the Fashion-MNIST clothing image recognition task [21]. Since this is a harder task than MNIST, we use a 3-layer MLP (multi-layer perceptron) with ReLU activation functions between layers and a softmax output function. Each of the 3 weight matrices, the input data matrix and the intermediate result matrices are rounded separately before applying the matrix multiplication operations. The results are shown in Figs. 15-16, which illustrate similar trends for this task as well, although the range of k where dither and stochastic rounding are beneficial is much narrower. We plan to provide further results on the various variants of dither rounding and explore the corresponding tradeoffs in a future paper.
IX Conclusions
We present a hybrid stochastic-deterministic scheme that encompasses the best features of stochastic computing and its deterministic variants: it achieves the optimal asymptotic rate Θ(1/N²) for the EMSE of the deterministic variant while inheriting the zero-bias property of stochastic computing, with faster convergence to zero bias. We also show how it can be beneficial in stochastic rounding applications, where dither rounding has similar mean performance as stochastic rounding but with lower variance, and is superior to deterministic rounding, especially when the range of the data is smaller than the full range of the quantizer.
References
- [1] B. R. Gaines, “Stochastic computing,” in Proc. of the AFIPS Spring Joint Computer Conference, pp. 149–156, 1967.
- [2] A. Alaghi and J. P. Hayes, “Survey of stochastic computing,” ACM Trans. on Embedded Computing Syst., vol. 12, no. 2s, pp. 1–19, 2013.
- [3] T.-H. Chen and J. P. Hayes, “Analyzing and controlling accuracy in stochastic circuits,” in IEEE 32nd International Conference on Computer Design (ICCD), 2014.
- [4] R. P. Duarte, M. Vestias, and H. Neto, “Enhancing stochastic computations via process variation,” in 25th International Conference on Field Programmable Logic and Applications (FPL), 2015.
- [5] D. Jenson and M. Riedel, “A deterministic approach to stochastic computation,” in ICCAD, 2016.
- [6] M. D. Davis, R. Sigal, and E. J. Weyuker, Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science. Academic Press, 1994.
- [7] G. James, D. Witten, T. Hastie, and R. Tibshirani, An Introduction to Statistical Learning. Springer, 2013.
- [8] M. Höhfeld and S. E. Fahlman, “Probabilistic rounding in neural network learning with limited precision,” Neurocomputing, vol. 4, no. 6, pp. 291–299, 1992.
- [9] M. P. Connolly, N. J. Higham, and T. Mary, “Stochastic rounding and its probabilistic backward error analysis,” Tech. Rep. MIMS EPrint 2020.12, The University of Manchester, 2020.
- [10] T. Gokmen, M. Onen, and W. Haensch, “Training deep convolutional neural networks with resistive cross-point devices,” Frontiers in Neuroscience, vol. 11, 10 2017.
- [11] Y. Liu, S. Liu, Y. Wang, F. Lombardi, and J. Han, “A survey of stochastic computing neural networks for machine learning applications,” IEEE Trans. on Neural Networks and Learning Systems, pp. 1–16, 2020.
- [12] P. Colangelo, N. Nasiri, E. Nurvitadhi, A. K. Mishra, M. Margala, and K. Nealis, “Exploration of low numeric precision deep learning inference using Intel FPGAs,” in Proc. of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA 2018, Monterey, CA, USA (J. H. Anderson and K. Bazargan, eds.), p. 294, 2018.
- [13] J. Choi, S. Venkataramani, V. Srinivasan, K. Gopalakrishnan, Z. Wang, and P. Chuang, “Accurate and efficient 2-bit quantized neural networks,” in Proc. of Machine Learning and Systems 2019, MLSys 2019, Stanford, CA, USA (A. Talwalkar, V. Smith, and M. Zaharia, eds.), 2019.
- [14] H. Qin, R. Gong, X. Liu, X. Bai, J. Song, and N. Sebe, “Binary neural networks: A survey,” Pattern Recognition, vol. 105, p. 107281, 2020.
- [15] M. Hopkins, M. Mikaitis, D. R. Lester, and S. Furber, "Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations," Philosophical Transactions of the Royal Society A, vol. 378, p. 20190052, 2020.
- [16] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan, “Deep learning with limited numerical precision,” in Proc. of the 32nd International Conference on Machine Learning, pp. 1737–1746, Feb. 2015.
- [17] V. Strassen, “Gaussian elimination is not optimal,” Numer. Math., vol. 13, pp. 354–356, 1969.
- [18] D. Coppersmith and S. Winograd, “Matrix multiplication via arithmetic progressions,” J. Symb. Comput., vol. 9, no. 3, pp. 251–280, 1990.
- [19] J. Alman and V. V. Williams, “A refined laser method and faster matrix multiplication,” in Proc. of the 2021 ACM-SIAM Symposium on Discrete Algorithms, SODA 2021 (D. Marx, ed.), pp. 522–539, 2021.
- [20] Y. LeCun, C. Cortes, and C. Burges, "MNIST handwritten digit database," AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, vol. 2, 2010.
- [21] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms,” arXiv:1708.07747 [cs, stat], Aug. 2017. arXiv: 1708.07747.