Training Transformers with 4-bit Integers
Abstract
Quantizing the activations, weights, and gradients to 4 bits is a promising way to accelerate neural network training. However, existing 4-bit training methods require custom numerical formats which are not supported by contemporary hardware. In this work, we propose a training method for transformers with all matrix multiplications implemented in INT4 arithmetic. Training with an ultra-low INT4 precision is challenging. To achieve this, we carefully analyze the specific structures of activations and gradients in transformers and propose dedicated quantizers for them. For forward propagation, we identify the challenge of outliers and propose a Hadamard quantizer to suppress them. For backpropagation, we leverage the structural sparsity of gradients by proposing bit splitting and leverage score sampling techniques to quantize gradients accurately. Our algorithm achieves competitive accuracy on a wide range of tasks including natural language understanding, machine translation, and image classification. Unlike previous 4-bit training methods, our algorithm can be implemented on the current generation of GPUs. Our prototypical linear operator implementation is up to 2.2 times faster than its FP16 counterpart and speeds up training by up to 35.1%. Our code is available at https://github.com/xijiu9/Train_Transformers_with_INT4.
1 Introduction
Training neural networks is computationally demanding. Training with low-precision arithmetic (a.k.a. fully quantized training, or FQT) is a promising way to improve computational and memory efficiency. FQT methods add quantizers and dequantizers to the original full-precision computational graph and replace expensive floating-point operations with cheap low-precision ones.
Research in FQT aims to reduce the training numerical precision, without sacrificing much convergence speed or accuracy. The required numerical precision has been reduced from FP16 [32] to FP8 [53, 45], INT32+INT8 [3] and INT8+INT5 [7]. FP8 training is implemented in Nvidia’s H100 GPU with Transformer Engine [34], achieving impressive speedup for the training of large-scale transformers. Recently, the training numerical precision has been pushed down to 4 bits. Sun et al. [46] successfully trained several modern networks with INT4 activation/weights and FP4 gradients; and Chmiel et al. [8] propose a custom 4-bit logarithmic numerical format to further improve the accuracy. However, these 4-bit training methods cannot be directly utilized for acceleration as they require custom numerical formats which are not supported on contemporary hardware.
There are significant optimization challenges in training neural networks at an extremely low 4-bit level. First, the non-differentiable quantizers in forward propagation make the loss landscape rugged, where gradient-based optimizers can easily get stuck in local optima [30]. Second, gradients are only computed approximately in low precision. Such imprecise gradients slow down the training process and can even cause training to become unstable or diverge.
In this work, we propose a novel INT4 training algorithm for a class of popular neural networks, transformers [51]. All the costly linear operations for training transformers can be written in a matrix multiplication (MM) form. This MM form allows us to design more flexible quantizers, which better approximate FP32 matrix multiplications by utilizing specific structures of the activations, weights, and gradients in transformers. Our quantizers leverage advances in the field of randomized numerical linear algebra (RandNLA) [14].
For forward propagation, we find that outliers in the activation are the main reason for accuracy degradation. To suppress outliers, we propose a Hadamard quantizer, which quantizes a transformed version of the activation matrix. The transformation is a block diagonal Hadamard matrix, which spreads the information carried in outliers to its nearby entries of the matrix and thus reduces the numerical range of the outliers.
For backpropagation, we exploit the structural sparsity of activation gradients. We find that the gradients of a few tokens are extremely large, while the gradients of the vast majority of the remaining tokens are very small, even smaller than the quantization residuals of the larger gradients. Rather than computing these small gradients, it is better to spend the computational resources on calculating the residuals of the larger gradients. To exploit this sparsity, we propose bit splitting, which splits the gradient of each token into higher 4 bits and lower 4 bits. Then, we choose the most informative gradients by leverage score sampling, an importance sampling technique from RandNLA.
Combining quantization techniques for forward and backward propagation, we propose an algorithm that uses INT4 MMs for all linear operations in transformers. We evaluate our algorithm for training transformers on a wide variety of tasks, including natural language understanding, question answering, machine translation, and image classification. Our algorithm achieves competitive or superior accuracy compared with existing works on 4-bit training [46, 8]. Moreover, our algorithm is compatible with contemporary hardware like GPUs, since it does not require custom numerical formats like FP4 or logarithm formats. Our prototypical quantization + INT4 MM operator implementation is up to 2.2 times faster than the FP16 MM baseline, and it speeds up the training by up to 35.1%.
2 Related Work
Fully Quantized Training
Fully quantized training (FQT) [32, 53, 45, 3, 15, 1, 56, 64, 28, 29, 58, 67] methods accelerate training by quantizing the activations, weights, and gradients to low precision, so that linear and nonlinear operators during training can be implemented with low-precision arithmetic. Research on FQT designs novel numerical formats and quantization algorithms that better approximate full-precision tensors. The current research frontier is 4-bit FQT. FQT is challenging due to the vast numerical range of the gradient and the optimization issues of training quantized networks from scratch. Due to these challenges, existing 4-bit FQT algorithms [46, 8] still suffer a 1–2.5% accuracy drop on several tasks, and they cannot run on contemporary hardware.
Other Efficient Training Methods
Mixture-of-experts [42] improves the model capacity without increasing the training budget. Structural dropout [21, 17] exploits computationally efficient ways to regularize the model. Efficient attention [26, 10] reduces the quadratic time complexity of computing attention. Distributed training systems [38, 22] reduce training time by leveraging more computational resources. Our work on reducing numerical precision is orthogonal to these directions.
3 Forward Propagation
Neural network training is an iterative optimization procedure with stochastic gradients computed by forward and back propagation. We accelerate forward and back propagation with 4-bit integer (INT4) arithmetic. We first describe the forward propagation of our training procedure. The forward propagation can be formulated as a composition of linear and non-linear (GeLU, normalization, softmax, etc.) operators. In our training procedure, we accelerate all the linear operators with INT4 arithmetic and leave all the less-computationally-intensive non-linear operators in the 16-bit floating-point (FP16) format. All linear operations in transformers can be written in a matrix multiplication (MM) form. For ease of presentation, we consider the acceleration of the following simple matrix multiplication throughout this paper:
$$\mathbf{Y} = \mathbf{X}\mathbf{W}^\top, \qquad \mathbf{Y}\in\mathbb{R}^{N\times D},\ \mathbf{X}\in\mathbb{R}^{N\times C},\ \mathbf{W}\in\mathbb{R}^{D\times C}. \qquad (1)$$
The predominant use case of such an MM is the fully-connected layer. Consider a transformer with an input shape of (batch size $S$, sequence length $T$, dimensionality $C$). The fully-connected layer can be written as Eq. (1), where $\mathbf{X}$ is the activation for the $N = S T$ tokens and $\mathbf{W}$ is the weight matrix. For attention layers, batch matrix multiplications (BMMs) may be required. Our proposed techniques can be applied to BMMs, and we leave the discussion of BMMs to Appendix A.1.
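As a concrete shape check, the following minimal PyTorch sketch illustrates this MM form; the example sizes and variable names are our own, and the layout $\mathbf{W}\in\mathbb{R}^{D\times C}$ follows Eq. (1) above.

```python
import torch

S, T, C, D = 16, 128, 768, 3072   # batch size, sequence length, in/out dims (example values)
x = torch.randn(S, T, C)          # activations
w = torch.randn(D, C)             # fully-connected layer weight

x2d = x.reshape(S * T, C)         # flatten batch and sequence into N = S*T token rows
y = x2d @ w.t()                   # Eq. (1): Y = X W^T, shape (N, D)
print(y.shape)                    # torch.Size([2048, 3072])
```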
3.1 Learned Step Size Quantization
To accelerate training, the forward propagation must be computed with integer arithmetic. We leverage the learned step size quantizer (LSQ) [16] for this purpose. LSQ is a static quantization method whose quantization scale does not depend on the input, and is thus cheaper than dynamic quantization methods [23], which need to compute the quantization scale dynamically per iteration.
Given an FP matrix $\mathbf{X}$, LSQ quantizes it to integers with
$$\text{int}_{s_X}(\mathbf{X}) = \mathrm{clamp}\!\left(\left\lfloor \mathbf{X}/s_X \right\rceil, -Q_N, Q_P\right), \qquad (2)$$
where $s_X$ is a learnable scalar parameter, clamp restricts its input to the range $[-Q_N, Q_P]$, $\lfloor\cdot\rceil$ is a rounding operation, and the division is computed elementwise. The resultant matrix takes values from $\{-Q_N, -Q_N+1, \dots, Q_P\}$. Since we aim to perform INT4 MMs, we set $Q_N = Q_P = 7$. The integer matrix can be dequantized back to FP through $\hat{\mathbf{X}} = s_X\, \text{int}_{s_X}(\mathbf{X})$.
With LSQ, Eq. (1) can be computed approximately as $\mathbf{Y} \approx s_X s_W\, \text{int}_{s_X}(\mathbf{X})\, \text{int}_{s_W}(\mathbf{W})^\top$, where the INT4 MM $\text{int}_{s_X}(\mathbf{X})\, \text{int}_{s_W}(\mathbf{W})^\top$ can be implemented efficiently on hardware.
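As an illustration, a minimal PyTorch-style sketch of such a static quantizer and the resulting approximate MM is given below. The function names, the symmetric range $Q_N = Q_P = 7$, and the int32 matmul that stands in for a hardware INT4 GEMM are our own assumptions, not the paper's actual kernels.

```python
import torch

Q_N, Q_P = 7, 7  # assumed symmetric INT4 range: values in {-7, ..., 7}

def lsq_quantize(x: torch.Tensor, step: float) -> torch.Tensor:
    """Static LSQ-style quantizer: scale by the learned step size, round,
    and clamp to the INT4 range."""
    return torch.clamp(torch.round(x / step), -Q_N, Q_P).to(torch.int8)

def lsq_dequantize(q: torch.Tensor, step: float) -> torch.Tensor:
    """Map the integer matrix back to floating point."""
    return step * q.float()

def lsq_mm(x: torch.Tensor, w: torch.Tensor, s_x: float, s_w: float) -> torch.Tensor:
    """Approximate Y = X W^T as s_x * s_w * int(X) @ int(W)^T."""
    qx = lsq_quantize(x, s_x).to(torch.int32)
    qw = lsq_quantize(w, s_w).to(torch.int32)
    return (s_x * s_w) * (qx @ qw.t()).float()   # int32 matmul simulates the INT4 GEMM
```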
Remark:
Quantization-aware training (QAT) [9, 62, 66, 23, 12, 11, 43, 59, 44, 48, 63, 2, 18, 54] is an inference acceleration technique which trains networks with quantizers inserted in the forward propagation graph, so the trained network can perform efficiently during inference. QAT can compress activation/weights to extremely low precision (e.g. 1-2 bits). It is tempting to think that directly applying a quantizer for QAT to FQT can lead to similar low activation/weights bit-width. However, even only quantizing the forward propagation for FQT is much more challenging than QAT because: (1) QAT requires a converged full-precision model as initialization [16] and/or as a teacher model for knowledge distillation [2]; (2) QAT can adopt expensive multi-stage training pipelines without worrying about the convergence speed [31], while FQT algorithm must converge as fast as full-precision training algorithms to be useful; (3) QAT may approximate the discrete quantizer with continuous functions during training [19], which cannot be implemented with integer arithmetic. Due to these challenges, it is still an open problem to do FQT with 4-bit activations/weights.
3.2 Activation Outliers
Simply applying LSQ for FQT with 4-bit activations/weights leads to accuracy degradation due to activation outliers [57]. As shown in Fig. 2, activations have some outlier entries, which are much larger in magnitude than the other entries. In this case, the step size $s_X$ poses a trade-off between quantization granularity and the representable numerical range. If $s_X$ is large, we can represent the outliers well at the expense of representing most other entries very coarsely. On the other hand, if $s_X$ is small, we have to truncate the entries outside the range $[-Q_N s_X, Q_P s_X]$. Unfortunately, transformers tend to store information in these outliers, and such truncation seriously harms accuracy (see Sec. 5.2 for details). The outlier problem is particularly significant when the training task is to fine-tune a pre-trained model on new downstream tasks, since the pre-trained model contains more outliers [57] than a randomly initialized one.
There exist some works that handle activation outliers for post-training quantization (PTQ). Outlier Suppression [55] discovers that LayerNorms amplify outliers and proposes Gamma Migration and Token-Wise Clipping to solve this issue, achieving 6-bit BERT PTQ without much degradation. SmoothQuant [57] migrates the quantization difficulty of activation outliers to the weights and achieves 8-bit PTQ for large language models such as OPT-175B. Outlier Channel Splitting [65] duplicates channels containing outliers with a small overhead on the size of the network. However, these methods mainly focus on PTQ or QAT, and seldom successfully deal with ultra-low 4-bit training.
3.3 Hadamard Quantization
We propose a Hadamard quantizer (HQ) to solve the outlier problem. Its main idea is to quantize the matrices in another linear space which has fewer outliers.
The outliers in activation matrices have a feature-wise structure [57]: they are typically concentrated in a few dimensions, i.e., only a few columns of $\mathbf{X}$ are significantly larger than the others. The Hadamard transform [47] is a linear transformation that can amortize the outliers into other entries. Specifically, the Hadamard transform $\mathbf{H}_k$ is a $2^k\times 2^k$ matrix, where
$$\mathbf{H}_0 = [1], \qquad \mathbf{H}_k = \frac{1}{\sqrt{2}}\begin{bmatrix}\mathbf{H}_{k-1} & \mathbf{H}_{k-1}\\ \mathbf{H}_{k-1} & -\mathbf{H}_{k-1}\end{bmatrix}.$$
Hadamard matrices are orthogonal and symmetric: $\mathbf{H}_k = \mathbf{H}_k^\top = \mathbf{H}_k^{-1}$, so $\mathbf{H}_k\mathbf{H}_k = \mathbf{I}$. Consider any coordinate row vector $\mathbf{e}_i^\top$ (a vector whose $i$-th dimension is 1 and all other dimensions are 0); we have $\mathbf{e}_i^\top\mathbf{H}_k = \frac{1}{\sqrt{2^k}}[\pm 1, \dots, \pm 1]$, a vector whose $2^k$ entries all share the same magnitude (for $i = 1$ it is exactly the scaled all-one vector $\frac{1}{\sqrt{2^k}}\mathbf{1}^\top$). This demonstrates the extreme case in which a single outlier dominates all the other dimensions: the Hadamard transform turns such a spike into a quantization-friendly, evenly spread vector. The practical effect of the Hadamard transform on suppressing activation outliers is demonstrated in Fig. 2.
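The smoothing effect can be checked numerically with a small sketch; the recursive construction below is the standard normalized Hadamard matrix, and the helper name is ours.

```python
import torch

def hadamard(k: int) -> torch.Tensor:
    """Normalized 2^k x 2^k Hadamard matrix: H_0 = [1] and
    H_k = (1/sqrt(2)) [[H_{k-1}, H_{k-1}], [H_{k-1}, -H_{k-1}]]."""
    H = torch.ones(1, 1)
    for _ in range(k):
        H = torch.cat([torch.cat([H, H], dim=1),
                       torch.cat([H, -H], dim=1)], dim=0) / 2 ** 0.5
    return H

k = 3
H = hadamard(k)
e1 = torch.zeros(2 ** k)
e1[0] = 1.0        # a single-spike ("outlier") vector
print(e1 @ H)      # every entry equals 1/sqrt(2^k): the spike is spread evenly
```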
HQ uses a block-diagonal transformation matrix $\mathbf{H} = \mathrm{blkdiag}(\mathbf{H}_k, \dots, \mathbf{H}_k) \in \mathbb{R}^{C\times C}$, where $C$ is a multiple of $2^k$. To suppress outliers, we quantize a transformed version of $\mathbf{X}$ and $\mathbf{W}$: $\text{int}_{s_X}(\mathbf{X}\mathbf{H})$ and $\text{int}_{s_W}(\mathbf{W}\mathbf{H})$. Combining the quantized matrices, we get
$$\mathbf{Y} = \mathbf{X}\mathbf{H}\mathbf{H}^\top\mathbf{W}^\top \approx s_X s_W\, \text{int}_{s_X}(\mathbf{X}\mathbf{H})\, \text{int}_{s_W}(\mathbf{W}\mathbf{H})^\top, \qquad (3)$$
where the inverse transformations cancel each other out, and the MM can be implemented as:
Procedure HQ-MM
1. Compute $\mathbf{X}\mathbf{H}$ and $\mathbf{W}\mathbf{H}$ in FP16.
2. Quantize the resultant matrices to INT4 by LSQ.
3. Multiply the two INT4 matrices.
4. Dequantize the resultant INT32 matrix to FP16 by multiplying $s_X s_W$.
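Building on the `hadamard` and `lsq_quantize` helpers from the sketches above, the HQ-MM procedure could be simulated roughly as follows; the default block-size exponent `k=5` is just a placeholder choice, and the int32 matmul again stands in for the hardware INT4 GEMM.

```python
import torch

def block_hadamard(c: int, k: int) -> torch.Tensor:
    """Block-diagonal transform H = blkdiag(H_k, ..., H_k) of size c x c;
    c is assumed to be a multiple of 2^k."""
    return torch.block_diag(*([hadamard(k)] * (c // 2 ** k)))

def hq_mm(x, w, s_x, s_w, k=5):
    """HQ-MM: transform, quantize, (simulated) INT4 GEMM, dequantize."""
    H = block_hadamard(x.shape[1], k).to(x.dtype)
    xh, wh = x @ H, w @ H                        # Step 1: FP transform
    qx = lsq_quantize(xh, s_x).to(torch.int32)   # Step 2: LSQ to INT4
    qw = lsq_quantize(wh, s_w).to(torch.int32)
    y_int = qx @ qw.t()                          # Step 3: INT4 GEMM (simulated in int32)
    return (s_x * s_w) * y_int.to(x.dtype)       # Step 4: dequantize
```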
For time complexity, Step 1 takes $2^k(NC + DC)$ FP16 multiply-accumulates (MACs); Steps 2 and 4 take $O(NC + DC + ND)$ FP16 MACs in total; and Step 3 takes $NCD$ INT4 MACs. Compared with plain LSQ (Eq. (2)), the amount of FP16 MACs increases by roughly a factor of $2^k$. However, our HQ-MM is still much cheaper than an FP16 MM given $2^k \ll D$ and $2^k \ll N$.
The block size $2^k$ controls a trade-off between the ability to suppress outliers and the computational overhead: a larger $2^k$ amortizes the outliers over a larger horizon, at the cost of being more expensive. We propose an adaptive algorithm to choose $k$ for each activation depending on the outlier scale, as discussed in Appendix A.5. The typical value of $2^k$ is small compared with the dimensionalities $C$ and $D$, which range from 768 to 4096.
4 Backpropagation
We now consider accelerating the backpropagation of the linear layer with INT4 operations. The linear operator HQ-MM defined in Eq. (3) has four inputs: the activation $\mathbf{X}$, the weight $\mathbf{W}$, and the step sizes $s_X$ and $s_W$. Given the output gradient $\nabla_{\mathbf{Y}}\mathcal{L}$ w.r.t. some loss function $\mathcal{L}$, we need to compute the gradients of all four inputs. We discuss the computation of the activation/weight gradients in this section and leave the discussion of the step size gradients to Appendix A.3. For simplicity, we omit $\mathcal{L}$ and simply use $\nabla_{\mathbf{Y}}$ to denote the gradient in the following text.
By the straight-through estimator [5] and the chain rule, we have
(4) |
where we define , , , and . For computing the gradients, three types of matrix multiplications are required:
1. The element-wise multiplication of a matrix with another INT4 (or INT32) matrix. This operation has low time complexity.
2. The multiplication of an INT32 matrix with the FP16 block-wise Hadamard matrix $\mathbf{H}$, which also has low time complexity, as discussed in Sec. 3.3.
3. The multiplication of the FP16 gradient $\nabla_{\mathbf{Y}}$ with one of the INT4 matrices $\hat{\mathbf{X}} = \text{int}_{s_X}(\mathbf{X}\mathbf{H})$ or $\hat{\mathbf{W}} = \text{int}_{s_W}(\mathbf{W}\mathbf{H})$, which we will accelerate by quantizing $\nabla_{\mathbf{Y}}$ to INT4.
In the rest of this section, we discuss quantization methods to compute the “type 3” MMs $\nabla_{\mathbf{Y}}^\top\hat{\mathbf{X}}$ and $\nabla_{\mathbf{Y}}\hat{\mathbf{W}}$. We quantize $\nabla_{\mathbf{Y}}$ dynamically for each MM, while $\hat{\mathbf{X}}$ and $\hat{\mathbf{W}}$ have already been calculated during forward propagation in Sec. 3. We start by discussing the structure of the gradient.
4.1 Structural Sparsity of Gradients
We note that the gradient matrix $\nabla_{\mathbf{Y}}$ tends to be very sparse along the training process. Furthermore, the sparsity has a structure: a few rows (i.e., tokens) of $\nabla_{\mathbf{Y}}$ have large entries, while most other rows are close to an all-zero vector. We illustrate this by plotting the histogram of per-row norms for all the rows in Fig. 2.
Such structural sparsity arises from the heavy overparameterization [61] of modern neural networks. During almost the entire training process, the network operates in the overparameterized regime [33], where it can fit most training data well, except for a few hard examples. Therefore, the (activation) gradient is close to zero for well-fitted data points. We find that for pretraining tasks, such structural sparsity emerges after only a few training epochs; for fine-tuning tasks, the gradient is sparse during the whole training process.
4.2 Bit Splitting and Leverage Score Sampling
Here, we discuss how to design gradient quantizers to accurately compute the MMs during backpropagation by leveraging structural sparsity. The high-level idea is that many rows of the gradient are so small that they have little impact on the parameter gradient, yet they waste abundant computation. On the other hand, the large rows cannot be accurately represented with INT4. We drop some small rows and use the saved computation to represent large rows more accurately.
First, we propose bit splitting (BS), which splits a full-precision matrix into its higher and lower 4 bits:
$$\nabla_{\mathbf{Y}} \approx s^{\text{hi}}\, \nabla_{\mathbf{Y}}^{\text{hi}} + s^{\text{lo}}\, \nabla_{\mathbf{Y}}^{\text{lo}}, \qquad (5)$$
where $s^{\text{hi}}$ and $s^{\text{lo}}$ are two floating-point scalars, and $\nabla_{\mathbf{Y}}^{\text{hi}}$ and $\nabla_{\mathbf{Y}}^{\text{lo}}$ are INT4 matrices representing the higher and lower 4 bits, respectively. BS can be implemented by first quantizing $\nabla_{\mathbf{Y}}$ to INT4 as $s^{\text{hi}}\nabla_{\mathbf{Y}}^{\text{hi}}$ and then quantizing the residual $\nabla_{\mathbf{Y}} - s^{\text{hi}}\nabla_{\mathbf{Y}}^{\text{hi}}$ to INT4 as $s^{\text{lo}}\nabla_{\mathbf{Y}}^{\text{lo}}$. BS can be viewed as an INT8 representation of a matrix, where $\nabla_{\mathbf{Y}}^{\text{hi}}$ and $\nabla_{\mathbf{Y}}^{\text{lo}}$ are the higher and lower 4 bits of the INT8 representation. Next, we discuss how to compute the weight and activation gradients.
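A minimal sketch of BS on a floating-point gradient is shown below; the per-tensor, max-based scales are our own simplifying assumption and may differ from the scale choice used in the actual implementation.

```python
import torch

Q_N, Q_P = 7, 7  # assumed symmetric INT4 range, as in the forward-pass sketch

def bit_split(g: torch.Tensor):
    """Split a gradient matrix as g ~= s_hi * g_hi + s_lo * g_lo,
    with g_hi and g_lo taking INT4 values (stored in int8 containers)."""
    s_hi = g.abs().max().clamp_min(1e-12) / Q_P          # scale for the higher 4 bits
    g_hi = torch.clamp(torch.round(g / s_hi), -Q_N, Q_P)
    residual = g - s_hi * g_hi                           # what the higher 4 bits missed
    s_lo = residual.abs().max().clamp_min(1e-12) / Q_P   # scale for the lower 4 bits
    g_lo = torch.clamp(torch.round(residual / s_lo), -Q_N, Q_P)
    return s_hi, g_hi.to(torch.int8), s_lo, g_lo.to(torch.int8)
```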
Weight Gradient
As discussed earlier, weight gradient involves the matrix multiplication , where and is an INT4 matrix. By Eq. (5):
(6) |
where the two bit-split parts are INT4 matrices. Eq. (6) represents the product of an INT8 gradient with an INT4 $\hat{\mathbf{X}}$, and can be implemented by two INT4 MMs, $\nabla_{\mathbf{Y}}^{\text{hi}\top}\hat{\mathbf{X}}$ and $\nabla_{\mathbf{Y}}^{\text{lo}\top}\hat{\mathbf{X}}$. Such an MM is rather accurate since the gradient is represented with 8 bits.
However, compared to a naïve quantization of $\nabla_{\mathbf{Y}}$ to INT4, BS doubles the amount of INT4 operations for the MM. We propose leverage score sampling (LSS) to cut these operations by half, to the same amount as the naïve MM. Notice that the MM in Eq. (6) can be written as the sum of rank-1 matrices:
(7) |
where . Due to the sparsity of , the matrices differ in magnitude and small matrices can be discarded without having a big influence on the result.
Our proposed LSS assigns each a probability , that satisfies . We define random masks and mask matrix , and approximate it as
which is an unbiased approximation since
In expectation, there are only nonzero s. Therefore, LSS reduces the cost of MM by half. For LSS to be accurate, we minimize its variance. We have:
Proposition 4.1.
(LSS variance for weight gradient)
The coefficient of each term is called its leverage score, which can be computed easily with low time complexity. By the Cauchy–Schwarz inequality, the variance attains its minimum when the sampling probabilities are proportional to the leverage scores. Intuitively, LSS can approximate the MM in Eq. (7) well at a significantly lower computational cost when the leverage scores are diverse, which is indeed the case, as shown in Fig. 2.
Define to be the top-left submatrix of and to be the bottom-right one, we have
which can be implemented by two INT4 MMs with sampled rows/columns. Putting everything together, we propose the following MM procedure to compute the weight gradient:

Procedure LSS-MM
1. Quantize $\nabla_{\mathbf{Y}}$ with BS to obtain $\nabla_{\mathbf{Y}}^{\text{hi}}$ and $\nabla_{\mathbf{Y}}^{\text{lo}}$ in INT4.
2. Compute the leverage scores in FP16.
3. Sample the masks.
4. Sample rows of $\nabla_{\mathbf{Y}}^{\text{hi}}$ and $\nabla_{\mathbf{Y}}^{\text{lo}}$ given the masks.
5. Compute the two INT4 MMs with the sampled rows.
6. Dequantize and sum up the resultant INT32 matrices to obtain the FP16 result.

As the masks have only about $N$ nonzero entries in expectation, the two matrix multiplications in Step 5 take about $NCD$ INT4 MACs, which aligns with the cost of the naïve MM $\nabla_{\mathbf{Y}}^\top\hat{\mathbf{X}}$. The overhead of all the other steps is $O(N(C+D))$ in total.
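Building on the `bit_split` helper above, the sampling logic can be simulated in floating point as in the sketch below; the names, the simple clamping of probabilities at 1 (the full renormalization is described in Appendix A.2), and the single fused product are our own simplifications, whereas a real kernel would gather the kept rows and run two INT4 GEMMs.

```python
import torch

def lss_weight_grad(x_hat, s_hi, g_hi, s_lo, g_lo):
    """Leverage-score-sampled estimate of grad_W ~ grad_Y^T @ X_hat,
    where grad_Y has been bit-split into (s_hi, g_hi) and (s_lo, g_lo)."""
    n = x_hat.shape[0]
    rows = torch.cat([s_hi * g_hi.float(), s_lo * g_lo.float()], dim=0)  # 2N gradient rows
    acts = torch.cat([x_hat.float(), x_hat.float()], dim=0)              # matching rows of X_hat
    lev = rows.norm(dim=1) * acts.norm(dim=1)            # leverage score of each rank-1 term
    p = torch.clamp(n * lev / lev.sum().clamp_min(1e-12), max=1.0)       # keep ~N terms
    mask = torch.bernoulli(p)
    weights = mask / p.clamp_min(1e-12)                  # 1/p_i rescaling keeps the estimate unbiased
    # sum_i (mask_i / p_i) * g_i^T x_i, written as a single matrix product
    return (rows * weights.unsqueeze(1)).t() @ acts
```

In a real implementation, the rows with `mask == 1` would be gathered, packed, and fed to two INT4 GEMMs, so only about half of the $2N$ rank-1 terms are ever computed.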
Activation Gradient
Similar to the previous discussion, the gradient of input can be written as
(8) |
where the bit-split parts are INT4 matrices and $\mathbf{I}$ is an identity matrix. The original product can also be implemented by two INT4 MMs. But different from the weight gradient, we now do leverage score sampling on the MM in Eq. (8). A detailed discussion can be found in Appendix B.2, and we only present the leverage score here. Similarly, we write the MM as the sum of smaller multiplications:
where we associate a probability and a Bernoulli mask with each multiplication. The leverage score for the activation gradient is defined analogously, and the variance attains its minimum when the sampling probabilities are proportional to the leverage scores. More details about the algorithm can be found in Appendix A.3. On the implementation side, once the mask is known, we can decompose the MM in Eq. (8) into two INT4 MMs.
Baselines | 4-bit training methods | ||||||
Dataset | Train type | Model | Metric name | FP | INT8 | LSQ+LUQ | HQ+LSS |
GLUE-dev | FT | Bert-base | Avg | ||||
Bert-large | Avg | ||||||
SQUAD v1 | FT | Bert-base | F1 | ||||
SQUAD v2 | FT | Bert-base | F1 | ||||
Adversarial QA | FT | Bert-base | F1 | ||||
SWAG | FT | Bert-base | Acc | ||||
CONLL | FT | Bert-base | Acc | ||||
WMT | PT | Transformer-base | BLEU | 27.5 | 25.4 (Ultra-low) | 27.17 | -
SacreBLEU | 26.5 | - | - | 25.57 | |||
CIFAR10 | FT | ViT-B/32 | Top1 Acc | ||||
ViT-L/32 | 98.98 | 98.76 | 98.38 | 98.47 | |||
CIFAR100 | FT | ViT-B/32 | Top1 Acc | ||||
ViT-L/32 | 93.07 | 92.2 | 90.97 | 91.13 | |||
ImageNet1k | FT | ViT-B/32 | Top1 Acc | 81.88 | 80.42 | 77.25 | 79.18 |
ViT-L/32 | 81.62 | 81.3 | 77.41 | 80.06 | |||
ViT-L/16 | 84.55 | 83.05 | 82.4 | 82.61 | |||
PT | Deit-small | Top1 Acc | 73.1 | 70.95 | 69.96 | 69.18 |
5 Experiments
We evaluate our INT4 training algorithm on a wide variety of tasks including language model fine-tuning, machine translation, and image classification. We implement our proposed HQ-MM and LSS-MM algorithms with CUDA and CUTLASS (https://github.com/NVIDIA/cutlass); the implementation details can be found in Appendix A. We replace all floating-point linear operators with our INT4 implementation, except that we simply use LSQ for the embedding layers and leave the last classifier layer in full precision. We adopt the default architectures, optimizers, schedulers, and hyper-parameters for all the evaluated models.
5.1 Converged Model Accuracy
We compare the accuracy of the converged models on various tasks in Table 1. The compared methods include full-precision training (FP), INT8 training [3] (INT8), FP4 training [46] (“Ultra-low”), 4-bit logarithmic quantization [8] with LSQ for activations and weights (LSQ+LUQ), and our algorithm, which utilizes HQ for forward propagation and LSS for backpropagation (HQ+LSS). Ultra-low does not have a public implementation, so we only report its performance from its original paper on the machine translation task. Except for the large machine translation task and the large vision transformers, we repeat each run three times and report the standard deviation as subscripts in tables. We do not include any kind of knowledge distillation or data augmentation.
Language model fine-tuning:
We use the pretrained BERT-base-uncased and BERT-large-uncased [24] models, and evaluate the performance of our method on the GLUE dev set [52], SQuAD [40], SQuAD 2.0 [39], Adversarial QA [4], CoNLL-2003 [41], and SWAG [60] datasets. We present the average result of the BERT-base-uncased and BERT-large-uncased models on the GLUE dataset; the full results are listed in Appendix C.2. Compared with LSQ+LUQ, our method improves the average accuracy for both the BERT-base and the BERT-large model. We further show the results on the SQuAD, SQuAD 2.0, Adversarial QA, CoNLL-2003, and SWAG datasets. On all of these tasks, our method achieves better performance than LSQ+LUQ: on SQuAD and SQuAD 2.0, on the more difficult Adversarial QA (F1 score), and on SWAG and CoNLL-2003 (accuracy).
Machine translation:
We also apply our method to pretraining. We train a Transformer-base [51] model on the WMT 14 En-De dataset [6] for machine translation. Note that we reproduce this experiment with Fairseq's recipe (https://github.com/facebookresearch/fairseq), which reports the SacreBLEU score (26.5 for FP) [36], while Ultra-low and LUQ report the more optimistic original BLEU score (27.5 for FP) [35]. Our HQ+LSS has a small BLEU degradation, which is smaller than that of Ultra-low and higher than that reported in the LUQ paper. Nevertheless, HQ+LSS still performs comparably with existing methods on this pretraining task, and it supports contemporary hardware.
Image Classification:
We load ViT checkpoints pretrained on ImageNet21k [13], and fine-tune them on CIFAR-10, CIFAR-100 [27], and ImageNet1k. We use ViT-B/32 and ViT-L/32 for the CIFAR datasets and ViT-B/32, ViT-L/32, and ViT-L/16 for ImageNet1k. On CIFAR10 we achieve little accuracy degradation, while LSQ+LUQ has larger degradation for both ViT-B/32 and ViT-L/32. On CIFAR100, INT8 already shows accuracy degradation, which demonstrates its difficulty; we improve over LSQ+LUQ for both ViT-B/32 and ViT-L/32. On ImageNet1k, we improve over LSQ+LUQ for ViT-B/32, ViT-L/32, and ViT-L/16. We further test the effectiveness of our algorithm for pretraining a DeiT-small model [50] on ImageNet1k, where HQ+LSS still converges to a similar accuracy level compared to LSQ+LUQ, while being more hardware-friendly.
5.2 Ablation Study
Here, we conduct ablation studies to show the effectiveness of our forward and backward methods independently on the challenging CoLA dataset. To study the effectiveness of different quantizers for forward propagation, we leave backpropagation in FP16. The results are shown in Fig. 5. We first validate the claim in Sec. 3.2 that outliers are the main cause of accuracy degradation in quantized forward propagation. We test an “outlier” method which keeps the largest activation entries in FP. The “outlier” method achieves good performance, which confirms that outliers are indeed the most significant challenge for the transformer's forward quantization. The hardware-unfriendly “outlier” method serves as an upper bound for methods that handle outliers. Our HQ outperforms LSQ by better handling the outliers and achieves results comparable to keeping the outliers in FP.
We also investigated whether more granular quantizers, such as per-token or per-channel quantization, could be used to quantize outliers, or whether existing methods like SmoothQuant [57] could be used for INT4 FQT. The results are listed in Appendix C.3: we find that without HQ, none of these methods achieve good accuracy under 4-bit quantization, and the result of HQ is not strongly affected when more granular quantization methods are applied.
For backpropagation, we compare a simple minimax quantizer [3], LUQ [8], and our LSS, and leave forward propagation in FP16. The minimax quantizer divides the numerical range from the minimum to the maximum into equally sized quantization bins. The results are shown in Fig. 5. When the bit-width is higher than 2, our LSS achieves results that are comparable to, and even slightly better than, LUQ. Meanwhile, LSS is more hardware-friendly as it requires only INT4 arithmetic.
5.3 Computational and Memory Efficiency
Finally, we demonstrate the potential of our method to accelerate neural network training by evaluating our prototypical implementation discussed in Appendix A.6. We emphasize that our implementation is not fully optimized. For example, the backward computation requires an INT4 MM with a transposed operand layout, while CUTLASS only supports a fixed layout, so an explicit transpose is required. We also do not fuse the linear operators with nonlinearities and normalizations. Therefore, the results cannot fully reflect the potential of INT4 training algorithms. A fully optimized implementation requires heavy engineering, which exceeds the scope of our paper.
Operator Speed:
We compare the throughput of our proposed HQ-MM (HQ), LSS for computing the weight gradient (LSSWeight), LSS for computing the activation gradient (LSSAct), and their average throughput (INT4) against a baseline tensor-core FP16 GEMM implementation (FP16) provided by CUTLASS in Fig. 5, on an Nvidia RTX 3090 GPU which has a peak throughput of 142 FP16 TFLOPS and 568 INT4 TOPS. As the matrix size grows, the overhead of quantization diminishes and our INT4 operators can be up to 2.2 times faster than FP16 MM. We further analyze the quantization overhead for each operator in Appendix C.5.
Training Throughput:
We compare the training throughput of the FP16 PyTorch AMP and our INT4 training algorithm for training BERT [24] and GPT [37]-style language models on a system of 8 Nvidia A100 GPUs. We vary the hidden layer size, intermediate fully-connected layer size, and batch size, and plot the speedup of INT4 training in Fig. 5. Our INT4 training algorithm can achieve up to 35.1% speedup for BERT-style models and up to 26.5% speedup for GPT-style models. The training time can be found in Appendix C.4.
6 Conclusions
We propose a hardware-friendly INT4 training method for transformers. By analyzing the properties of MMs in transformers, we propose HQ and LSS methods to quantize activations and gradients while preserving accuracy. On several important tasks, our method performs comparably or better than existing INT4 methods. Our work can be potentially extended beyond transformers to other MM-only architectures, such as MLP-Mixer [49], graph neural networks [25], and recurrent neural networks [20]. We leave it as a future direction.
Broader Impacts:
Our algorithm can improve efficiency and reduce the energy consumption of training neural networks, which helps reduce the carbon footprint caused by deep learning. However, our efficient training algorithm might also facilitate the development of large language models that raise safety concerns for human beings, as well as malicious AI applications such as fake content generation.
Limitations:
The main limitation of this work is that it can only accelerate models with a large portion of matrix multiplications (linear layers), but cannot accelerate convolutional layers. Moreover, the proposed method does not yet work well for extremely large models such as OPT-175B. To the best of our knowledge, even INT8 training is still an open problem for these large models.
References
- [1] Menachem Adelman and Mark Silberstein. Faster neural network training with approximate tensor operations. arXiv preprint arXiv:1805.08079, 2018.
- [2] Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King. Binarybert: Pushing the limit of bert quantization. arXiv preprint arXiv:2012.15701, 2020.
- [3] Ron Banner, Itay Hubara, Elad Hoffer, and Daniel Soudry. Scalable methods for 8-bit training of neural networks. In Advances in Neural Information Processing Systems, pages 5145–5153, 2018.
- [4] Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp. Beat the ai: Investigating adversarial human annotation for reading comprehension. Transactions of the Association for Computational Linguistics, 8:662–678, 2020.
- [5] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
- [6] Ondřej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the ninth workshop on statistical machine translation, pages 12–58, 2014.
- [7] Jianfei Chen, Yu Gai, Zhewei Yao, Michael W Mahoney, and Joseph E Gonzalez. A statistical framework for low-bitwidth training of deep neural networks. In Advances in neural information processing systems, 2020.
- [8] Brian Chmiel, Ron Banner, Elad Hoffer, Hilla Ben Yaacov, and Daniel Soudry. Logarithmic unbiased quantization: Practical 4-bit training in deep learning. arXiv preprint arXiv:2112.10769, 2021.
- [9] Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018.
- [10] Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. In International Conference on Learning Representations, 2020.
- [11] Zhen Dong, Zhewei Yao, Yaohui Cai, Daiyaan Arfeen, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. Hawq-v2: Hessian aware trace-weighted quantization of neural networks. arXiv preprint arXiv:1911.03852, 2019.
- [12] Zhen Dong, Zhewei Yao, Amir Gholami, Michael Mahoney, and Kurt Keutzer. Hawq: Hessian aware quantization of neural networks with mixed-precision. ICCV, 2019.
- [13] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
- [14] Petros Drineas and Michael W Mahoney. Randnla: randomized numerical linear algebra. Communications of the ACM, 59(6):80–90, 2016.
- [15] Mario Drumond, LIN Tao, Martin Jaggi, and Babak Falsafi. Training dnns with hybrid block floating point. In Advances in Neural Information Processing Systems, pages 453–463, 2018.
- [16] Steven K Esser, Jeffrey L McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S Modha. Learned step size quantization. In International Conference on Learning Representations, 2019.
- [17] Angela Fan, Edouard Grave, and Armand Joulin. Reducing transformer depth on demand with structured dropout. In International Conference on Learning Representations, 2019.
- [18] Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. arXiv preprint arXiv:2010.01412, 2020.
- [19] Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4852–4861, 2019.
- [20] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
- [21] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In European conference on computer vision, pages 646–661. Springer, 2016.
- [22] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32, 2019.
- [23] Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2704–2713, 2018.
- [24] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186, 2019.
- [25] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
- [26] Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In International Conference on Learning Representations, 2019.
- [27] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, 2009.
- [28] Hamed F Langroudi, Zachariah Carmichael, and Dhireesha Kudithipudi. Deep learning training on the edge with low-precision posits. arXiv preprint arXiv:1907.13216, 2019.
- [29] Hamed F Langroudi, Zachariah Carmichael, David Pastuch, and Dhireesha Kudithipudi. Cheetah: Mixed low-precision hardware & software co-design framework for dnns on the edge. arXiv preprint arXiv:1908.02386, 2019.
- [30] Zechun Liu, Zhiqiang Shen, Shichao Li, Koen Helwegen, Dong Huang, and Kwang-Ting Cheng. How do adam and training strategies help bnns optimization. In International Conference on Machine Learning, pages 6936–6946. PMLR, 2021.
- [31] Zechun Liu, Zhiqiang Shen, Marios Savvides, and Kwang-Ting Cheng. Reactnet: Towards precise binary neural network with generalized activation functions. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16, pages 143–159. Springer, 2020.
- [32] Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. Mixed precision training. In International Conference on Learning Representations, 2018.
- [33] Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment, 2021(12):124003, 2021.
- [34] Nvidia. Transformer Engine. https://github.com/NVIDIA/TransformerEngine, 2023. Online; accessed 23 January 2023.
- [35] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318, 2002.
- [36] Matt Post. A call for clarity in reporting bleu scores. arXiv preprint arXiv:1804.08771, 2018.
- [37] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.
- [38] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–16. IEEE, 2020.
- [39] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018.
- [40] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
- [41] Erik F Sang and Fien De Meulder. Introduction to the conll-2003 shared task: Language-independent named entity recognition. arXiv preprint cs/0306050, 2003.
- [42] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
- [43] Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. Q-bert: Hessian based ultra low precision quantization of bert. arXiv preprint arXiv:1909.05840, 2019.
- [44] Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. Q-bert: Hessian based ultra low precision quantization of bert. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8815–8821, 2020.
- [45] Xiao Sun, Jungwook Choi, Chia-Yu Chen, Naigang Wang, Swagath Venkataramani, Vijayalakshmi Viji Srinivasan, Xiaodong Cui, Wei Zhang, and Kailash Gopalakrishnan. Hybrid 8-bit floating point (hfp8) training and inference for deep neural networks. In Advances in Neural Information Processing Systems, pages 4901–4910, 2019.
- [46] Xiao Sun, Naigang Wang, Chia-Yu Chen, Jiamin Ni, Ankur Agrawal, Xiaodong Cui, Swagath Venkataramani, Kaoutar El Maghraoui, Vijayalakshmi Viji Srinivasan, and Kailash Gopalakrishnan. Ultra-low precision 4-bit training of deep neural networks. In Advances in Neural Information Processing Systems, volume 33, 2020.
- [47] James Joseph Sylvester. LX. Thoughts on inverse orthogonal matrices, simultaneous sign-successions, and tessellated pavements in two or more colours, with applications to Newton's rule, ornamental tile-work, and the theory of numbers. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 34(232):461–475, 1867.
- [48] Hanlin Tang, Xipeng Zhang, Kai Liu, Jianchen Zhu, and Zhanhui Kang. Mkq-bert: Quantized bert with 4-bits weights and activations. arXiv preprint arXiv:2203.13483, 2022.
- [49] Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. Mlp-mixer: An all-mlp architecture for vision. Advances in neural information processing systems, 34:24261–24272, 2021.
- [50] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International conference on machine learning, pages 10347–10357. PMLR, 2021.
- [51] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
- [52] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018.
- [53] Naigang Wang, Jungwook Choi, Daniel Brand, Chia-Yu Chen, and Kailash Gopalakrishnan. Training deep neural networks with 8-bit floating point numbers. In Advances in Neural Information Processing Systems, pages 7675–7684, 2018.
- [54] Zheng Wang, Juncheng B Li, Shuhui Qu, Florian Metze, and Emma Strubell. Squat: Sharpness-and quantization-aware training for bert. arXiv preprint arXiv:2210.07171, 2022.
- [55] Xiuying Wei, Yunchen Zhang, Xiangguo Zhang, Ruihao Gong, Shanghang Zhang, Qi Zhang, Fengwei Yu, and Xianglong Liu. Outlier suppression: Pushing the limit of low-bit transformer language models. arXiv preprint arXiv:2209.13325, 2022.
- [56] Shuang Wu, Guoqi Li, Feng Chen, and Luping Shi. Training and inference with integers in deep neural networks. In International Conference on Learning Representations, 2018.
- [57] Guangxuan Xiao, Ji Lin, Mickael Seznec, Julien Demouth, and Song Han. Smoothquant: Accurate and efficient post-training quantization for large language models. arXiv preprint arXiv:2211.10438, 2022.
- [58] Yukuan Yang, Lei Deng, Shuang Wu, Tianyi Yan, Yuan Xie, and Guoqi Li. Training high-performance and large-scale deep neural networks with full 8-bit integers. Neural Networks, 125:70–82, 2020.
- [59] Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. Q8bert: Quantized 8bit bert. In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing-NeurIPS Edition (EMC2-NIPS), pages 36–39. IEEE, 2019.
- [60] Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326, 2018.
- [61] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–115, 2021.
- [62] Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, and Gang Hua. LQ-Nets: Learned quantization for highly accurate and compact deep neural networks. In The European Conference on Computer Vision (ECCV), September 2018.
- [63] Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, and Qun Liu. Ternarybert: Distillation-aware ultra-low bit bert. arXiv preprint arXiv:2009.12812, 2020.
- [64] Xishan Zhang, Shaoli Liu, Rui Zhang, Chang Liu, Di Huang, Shiyi Zhou, Jiaming Guo, Yu Kang, Qi Guo, Zidong Du, et al. Adaptive precision training: Quantify back propagation in neural networks with fixed-point numbers. arXiv preprint arXiv:1911.00361, 2019.
- [65] Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Chris De Sa, and Zhiru Zhang. Improving neural network quantization without retraining using outlier channel splitting. In International conference on machine learning, pages 7543–7552. PMLR, 2019.
- [66] Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. International Conference on Learning Representations, 2017.
- [67] Feng Zhu, Ruihao Gong, Fengwei Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, and Junjie Yan. Towards unified int8 training for convolutional neural network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1969–1979, 2020.
Appendix A Implementation Details
In this section, we describe the additional work needed to actually accelerate the training process on hardware.
A.1 BMM in Attention
In attention, there are batch matrix multiplications (BMMs) that need to be dealt with. We now show that our method for MMs can be extended to BMMs.
Consider the following BMM product:
where we define The Hadamard matrix is defined as :
where In this case,
which verifies that our HQ can be applied to BMMs.
For backward, the gradient of weight and activation can be calculated by the straight-through estimator and the chain rule:
where we define being the batch step size, , , , and .
Similar to Sec. 4.2, we only focus on and , since we do leverage sampling on them.
For , we define the sample probability and sample the in the same way as for MMs. The matrix can be computed as , where is defined as , and follows the same definition as in Eq. (6), and the leverage score is for
For , similarly, can be viewed as where we define , , follows the definition of Eq. (5). So it can be computed as , where is defined as , and the leverage score is for , which verifies that our LSS can be applied to BMMs.
A.2 Computing Leverage Score
In the previous discussion, we found the optimal sample probabilities that minimize the variance of the gradient. However, the computed probability may be larger than one, which is invalid for a Bernoulli distribution. Accordingly, we propose an algorithm to solve this issue.
Define the probability array as
we first clamp the array to $[0, 1]$. After clamping, the probabilities no longer sum to the target, so we rescale the entries that are smaller than 1 to make their sum match the target again. However, this rescaling may push some more elements above 1, so we cycle through the above operations until all entries lie in $[0, 1]$. This process is guaranteed to stop: if, after the rescaling operation, no element is larger than 1, we have obtained a valid distribution; otherwise, the number of elements larger than 1 is reduced by at least one, so the process halts after a bounded number of rounds.
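A minimal sketch of this clamp-and-rescale loop is shown below; the function and variable names are ours, and `m` denotes the target expected number of kept terms.

```python
import torch

def keep_probabilities(score: torch.Tensor, m: int) -> torch.Tensor:
    """Turn nonnegative scores into Bernoulli keep-probabilities summing to m,
    iteratively clamping to [0, 1] and rescaling the remaining entries."""
    p = m * score / score.sum().clamp_min(1e-12)
    for _ in range(score.numel() + 1):                # provably terminates
        over = p >= 1.0
        p = torch.where(over, torch.ones_like(p), p)  # clamp the large entries to 1
        rest = ~over
        budget = m - over.sum()                       # probability mass left for the rest
        if rest.sum() == 0 or budget <= 0:
            break
        p[rest] = p[rest] * budget / p[rest].sum().clamp_min(1e-12)
        if (p <= 1.0).all():                          # valid distribution reached
            break
    return p
```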
A.3 Learning Quantizer Parameters
In this section, we discuss the detail of how to calculate the gradient of activation and quantization step size.
For the gradient of the activation, the coefficient of each term is the leverage score for the activation gradient, and the variance achieves its minimum, by the Cauchy–Schwarz inequality, when the sampling probabilities are proportional to the leverage scores.
Putting everything together, we propose the following MM procedure to compute the activation gradient:

Procedure LSS-MM
1. Quantize $\nabla_{\mathbf{Y}}$ with BS to obtain $\nabla_{\mathbf{Y}}^{\text{hi}}$ and $\nabla_{\mathbf{Y}}^{\text{lo}}$ in INT4.
2. Compute the leverage scores in FP16.
3. Sample the masks.
4. Sample rows of $\hat{\mathbf{W}}$ given the masks.
5. Form the sampled gradient matrices by discarding the corresponding rows.
6. Compute the two INT4 MMs.
7. Dequantize and sum up the resultant INT32 matrices to obtain the FP16 result.

The two matrix multiplications in Step 6 take about $NCD$ INT4 MACs in expectation.
For the quantization step sizes, following the chain rule, we have
where we define , , and being the number of elements of weight and activation, , and .
A.4 Cold Start Problem
There is a cold start problem. When the model is trained from scratch (i.e., from a random initialization), the distributions of weights and activations can change rapidly in the early stage of optimization. In this case, jointly optimizing the quantization step size and the weights would make training unstable. As a remedy, we do not learn the step size in the first few iterations, and instead use a heuristic rule to dynamically set the step size for each tensor at each iteration.
A.5 Choosing the Hadamard Matrix Size
For the Hadamard matrix, let the block-diagonal transform be $\mathbf{H} = \mathrm{blkdiag}(\mathbf{H}_k, \dots, \mathbf{H}_k)$, where the dimensionality is a multiple of $2^k$. We first define dequantized approximations of the transformed activation and weight, and then define the quantization error as the distance between these approximations and the original matrices. We search for the optimal $k$ that minimizes this quantization error. For fine-tuning tasks, once the Hadamard matrix size has been determined, we fix it throughout the training process. For the pre-training task, since the distribution shifts greatly as we train the model, we empirically pick a time at which we re-initialize the Hadamard matrix size and the LSQ step size; usually, we do this after the first 2 epochs.
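A minimal sketch of this search, reusing the `block_hadamard` and `lsq_quantize` helpers from the earlier sketches, is given below; using a single fixed step size and a squared-error criterion for all candidate sizes is our own simplification.

```python
import torch

def choose_hadamard_k(x: torch.Tensor, step: float, max_k: int = 6) -> int:
    """Grid-search the block-size exponent k whose Hadamard-transformed
    quantization reconstructs x with the smallest error."""
    best_k, best_err = 0, float("inf")
    for k in range(max_k + 1):
        if x.shape[1] % (2 ** k) != 0:
            break
        H = block_hadamard(x.shape[1], k).to(x.dtype)
        x_rec = (step * lsq_quantize(x @ H, step).to(x.dtype)) @ H.t()  # H is orthogonal
        err = (x_rec - x).pow(2).mean().item()
        if err < best_err:
            best_k, best_err = k, err
    return best_k
```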
A.6 GPU Implementation
The previous discussion describes HQ-MM and LSS-MM at the algorithm level, but that is not enough to actually implement them on hardware. In this section, we delve deeper into the hardware implementation details as well as extra limitations.
HQ-MM can be divided into 5 parts: Hadamard matrix multiplication, Quantize, Data Pack, INT4 GEMM, and Dequantize.
For the Hadamard matrix multiplication process, since it can be interpreted as a half-precision float matrix multiplication whose two operands are the input/weight matrix and the Hadamard matrix, respectively, we implement it in Python, because PyTorch MM uses the cuBLAS GEMM, which is more efficient than the CUTLASS GEMM.
In the quantize process, we quantize the input and weight into INT4 data, and also preserve a corresponding FP16 version for the LSQ backpropagation process to use.
In the previous discussion, we assume the quantize part of HQ-MM quantizes the resultant matrices to INT4; however, the smallest addressable unit of data is INT8. As a result, we actually use the INT8 data type to hold the quantized values and pack two adjacent values into one byte in the data packing process, i.e., one INT8 value represents two adjacent INT4 values. With both input matrices packed in this way, we then use the CUTLASS tensor-core INT4 GEMM to do the matrix multiplication.
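A minimal sketch of this packing step (and its inverse) is shown below; the helper names and the nibble layout (even index in the low nibble, odd index in the high nibble) are our own assumptions.

```python
import torch

def pack_int4_pairs(q: torch.Tensor) -> torch.Tensor:
    """Pack pairs of INT4 values (range [-8, 7], stored one per int8)
    into single bytes."""
    assert q.shape[-1] % 2 == 0
    nib = q.to(torch.int32) & 0xF                          # two's-complement nibbles
    return (nib[..., 0::2] | (nib[..., 1::2] << 4)).to(torch.uint8)

def unpack_int4_pairs(packed: torch.Tensor) -> torch.Tensor:
    """Inverse of pack_int4_pairs, sign-extending each nibble back to int8."""
    p = packed.to(torch.int32)
    nib = torch.stack([p & 0xF, (p >> 4) & 0xF], dim=-1).flatten(-2)
    return torch.where(nib >= 8, nib - 16, nib).to(torch.int8)
```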
For the GEMM process, we choose the Nvidia CUTLASS GEMM because it is the most efficient open-source operator library we could find. We use the INT4 tensor-core GEMM for our implementation; it requires the two input matrices A and B to be row-major and column-major, respectively. Since the default PyTorch tensor is row-major, we have to use transpose + contiguous operations to make one operand column-major, which is very time-consuming and needs further optimization in the future.
Finally, we dequantize the INT GEMM result back into FP16 output using a dequantize kernel, which is the final output of the forward kernel.
By comparison, LSS-MM is more complicated and can be divided into 7 parts: quantization of the higher and lower 4 bits, leverage score calculation, sampling, data packing, INT4 GEMM, dequantization, and LSQ backpropagation.
In the quantize process, we fuse the quantization of the higher 4 bits and the lower 4 bits into a single kernel for acceleration. In the leverage score calculation process, we use the quantized INT8 data to calculate the score and rescale it at the end, because integer arithmetic is far more efficient than float arithmetic.
In the sampling process, we sample rows/columns according to the previously calculated leverage scores. Note that in Sec. A.2, we iterate our proposed algorithm for several rounds to sample specific elements, which is effective but not efficient. According to our experiments, however, simply selecting elements whose leverage score is greater than 0 also works well, and even better than our proposed algorithm in some cases. So in the actual implementation, we simply sample the rows/columns whose Euclidean norm is greater than 0 to accelerate the training process.
The packing, GEMM, and dequantization processes are similar to those above. It is worth noting that for the INT4 tensor-core GEMM, the reduction dimension of the two input matrices needs to be a multiple of 32 so that the tensor-core GEMM addresses can be aligned. We do not need to consider this in forward propagation because the input shapes always satisfy this requirement. However, in backpropagation, the matrix shape may not meet the requirement after sampling. As a result, we need to zero-pad the sampled matrix so that the padded dimension is a multiple of 32.
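A minimal sketch of the zero-padding step is shown below; the choice to pad along the sampled (reduction) dimension and the helper name are our assumptions.

```python
import torch

def pad_to_multiple(x: torch.Tensor, dim: int, multiple: int = 32) -> torch.Tensor:
    """Zero-pad dimension `dim` of x up to the next multiple of `multiple`,
    so the INT4 tensor-core GEMM alignment requirement is met; the extra
    zero rows do not change the GEMM result."""
    pad = (-x.shape[dim]) % multiple
    if pad == 0:
        return x
    pad_shape = list(x.shape)
    pad_shape[dim] = pad
    return torch.cat([x, x.new_zeros(pad_shape)], dim=dim)
```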
Finally, we use the dequantized data to perform the LSQ backpropagation. We also fuse all of its operations into a single CUDA kernel for acceleration without affecting the results.
Besides the components of HQ-MM and LSS-MM, a few more implementation details need to be mentioned.
1. We omit the quantization and leverage score calculation in LSSInput and reuse the corresponding values from LSSWeight to accelerate the training process.
2. For element-wise kernels, we set the block size to 256 and the grid size to input.numel()/256. For reduction kernels such as sum and min/max, we set the block size to 32 and the grid size to the number of rows, reducing the elements in each row to the first 32 elements. We found this setting to be the most efficient in our experiments.
Appendix B Proofs.
In this section, we present the proofs of the leverage score.
B.1 Proof of Proposition 4.1
Proposition B.1.
(LSS variance for weight gradient)
Proof.
∎
So that
(9) | ||||
(10) |
which completes the proof.
B.2 Proof of Activation Leverage Score in Sec. 4.2
We divide the matrix multiplication into the sum of smaller multiplications:
(11) |
where we define
We assign each term a probability that satisfies the sampling budget, define random masks, and form an unbiased estimate:
Define to be the top-left submatrix of and to be the bottom-right one, we have
In this case, both sampled matrices have only part of their rows being nonzero, and the remaining rows are zero since they are discarded. Then, when we multiply them, about half of the rows are zero, so there is no need to compute them, and we cut off half of the computation in this case.
Now we focus on the variance:
Proposition B.2.
(LSS variance for activation gradient)
Proof.
∎
In this way, the coefficient is the leverage score.
Appendix C Experiments.
In this section, we present more details for experiments in Sec. 5.
C.1 Experiments setup
For the GLUE, QA, SWAG, and CoNLL tasks, we implement our algorithm based on https://github.com/huggingface/transformers. For the machine translation task, we implement our algorithm based on https://github.com/facebookresearch/fairseq. For the ViT fine-tuning task, we implement our algorithm based on https://github.com/jeonsworld/ViT-pytorch. For the DeiT pretraining task, we implement our algorithm based on https://github.com/facebookresearch/deit.
We employed NVIDIA GeForce RTX 3090 for running most of the experiments, while the NVIDIA A40 was utilized to evaluate the performance of BERT-Large and ViT-L. Furthermore, we conducted runtime measurements using the NVIDIA T4, 3090, and A100 GPUs.
C.2 GLUE results
In this section, we present the detailed result of fine-tuning the GLUE dataset on BERT-base-uncased and BERT-large-uncased.
On BERT-base, on STSB, SST2, QNLI, and QQP, HQ+LSS only has accuracy degradation. On the most challenging tasks CoLA and RTE, our accuracy degradation is much smaller compared to LSQ+LUQ. On QQP and MNLI, our method achieves degradation, while LSQ + LUQ has degradation. The trend is that the more difficult the task is, the more significant our advantage over LSQ+LUQ.
On BERT-large, the improvement is significant. On CoLA, QNLI, and MNLI, the accuracy improvement compared with LSQ+LUQ . On other datasets like SST2 and QQP, the accuracy improvement is . On RTE the accuracy improvement is , and on STSB and MRPC the improvement is .
We suspect that for those challenging tasks, there is more information stored in the outliers, which results in a larger gap between our method and LSQ+LUQ.
Quantization Methods | |||||
---|---|---|---|---|---|
Model | Dataset | FP | INT8 | LSQ+LUQ | HQ+LSS |
Bert-base | CoLA | ||||
STSB | |||||
RTE | |||||
MRPC | |||||
SST2 | |||||
QNLI | |||||
QQP | |||||
MNLI | |||||
MNLI-mm | |||||
Bert-large | CoLA | ||||
STSB | |||||
RTE | |||||
MRPC | |||||
SST2 | |||||
QNLI | |||||
QQP | |||||
MNLI | |||||
MNLI-mm |
Training Methods | ||||
Model | (hidden_size, intermediate_size, batch_size) | FP16 | HQ+LSS | SpeedUp
Bert-large | (2560, 10240, 2048) | 15.094s | 18.949s | |
(4096, 16384, 1280) | 32.016s | 30.594s | ||
(5120, 20480, 960) | 47.418s | 39.482s | ||
(7680, 30720, 600) | 95.832s | 67.253s | ||
(8960, 35840, 480) | 128.441s | 83.388s | ||
(9600, 38400, 160) | 161.114s | 114.325s | ||
(12800, 51200, 100) | 326.265s | 255.966s | ||
(14400, 57600, 96) | 409.291s | 346.354s | ||
GPT2-base | (2560, 10240, 1536) | 17.253s | 22.037s | |
(4096, 16384, 960) | 35.937s | 35.694s | ~ | |
(5120, 20480, 768) | 52.723s | 46.548s | ||
(7680, 30720, 260) | 113.855s | 92.548s | ||
(8960, 35840, 200) | 150.680s | 114.881s | ||
(9600, 38400, 180) | 172.182s | 126.540s | ||
(12800, 51200, 112) | 320.757s | 236.433s |
C.3 More Granular Quantization Methods
In this section, in Table 4, we show that the more granular quantization methods, such as per-token quantization and per-channel quantization, or smoothing techniques, such as SmoothQuant, do not work under the 4-bit FQT setting. Meanwhile, combining these methods with HQ will not bring significant improvement.
We find that LSQ is beneficial for all of these more granular quantization methods under low-bit settings, which highlights the importance of LSQ. Meanwhile, we also notice that SmoothQuant can even harm the result of LSQ when the bit-width is low. Our explanation is that LSQ learns a trade-off between outliers and inliers, while SmoothQuant sacrifices the precision of inliers in order to exactly preserve the information in the outliers. When the bit-width is high, this is not a problem, since there are still enough bits to quantize the inliers. But when the bit-width is low, such a sacrifice causes severe problems because the inlier information is discarded.
Quantize Bits | |||||||
quantization methods | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
Per-tensor | 0 | 0 | 0 | 0 | 0 | 50.2 | 54.6 |
Per-token | 0 | 0 | 0 | 0 | 31.4 | 52.8 | 56 |
Per-channel | 0 | 0 | 0 | 0 | 0 | 51.9 | 56.7 |
smoothquant | 0 | 0 | 0 | 0 | 0 | 49.4 | 57.7 |
Per-token + Per-channel + smoothquant | 0 | 0 | 0 | 0 | 40.7 | 55.7 | 56.7 |
LSQ | 0 | 9.16 | 24.2 | 37.3 | 39.6 | 45.3 | 51.4 |
Per-token + LSQ | 0 | 15.3 | 27.8 | 31.6 | 42.9 | 46 | 54.4 |
Per-channel + LSQ | 0 | 8 | 23.9 | 29.3 | 40 | 45.5 | 50.7 |
smoothquant + LSQ | 0 | 0 | 0 | 0 | 49.6 | 54.9 | 57 |
Per-token + Per-channel + smoothquant + LSQ | 0 | 0 | 0 | 0 | 28.8 | 52.4 | 55.2 |
HQ | 0 | 45.2 | 54.6 | 54.2 | 56.5 | 57.4 | 58.4 |
HQ + Per-token + Per-channel | 0 | 48.4 | 54.1 | 54.9 | 55 | 56 | 56 |
HQ + Per-token + Per-channel + smoothquant | 0 | 0 | 46.6 | 54.9 | 55.9 | 55.8 | 56.5 |
C.4 Large Language Model Operator Speed
In this section, we show that our hardware-friendly INT4 training method can indeed accelerate the training process for large language models. We run distributed training on a system of 8 A100 cards; our implementation uses distributed data-parallel training with ZeRO-3, gradient checkpointing, and optimizer offloading.
We experimented with two architectures: BERT-Large and GPT2-base. We vary the network width and batch size to make full utilization of the GPU memory and show the end-to-end performance for fine-tuning these models on the SuperGLUE RTE dataset in Table 3.
C.5 More experiments on Operator Speed
Time proportion
We examine the proportion of time spent in each part of the HQ-MM and LSS-MM operators in Fig. 6 as the shapes of the input matrices vary. In HQ, hadamard means multiplying the input matrix with the Hadamard matrix, pack means packing the input data into INT4 data, and gemm means the matrix multiplication of two INT4 matrices. In LSSWeight, quantize corresponds to the quantization of the higher and lower 4 bits, leverage means computing the leverage score, sample means sampling rows/columns given the leverage score, dequantize is the process of dequantizing INT data back into FP16 data, and LSQ is the backpropagation process of the LSQ method. In LSSAct, we skip the quantize and leverage processes and reuse the values from LSSWeight to save time; the other processes have the same meaning as in LSSWeight. Note that our implementation is not fully optimized, and optimizations like operator fusion can further improve the performance.
Operator Speed on more GPUs
On an Nvidia RTX 3090 GPU with a CUDA capability of sm_86, we show the comparison of the FP16 MM, HQ, and LSS operators in Sec. 5.3, as well as the time proportion within each operator in Fig. 6. We also adapted our hardware implementation and tested its performance on an Nvidia T4 GPU and an Nvidia A100 GPU, which have CUDA capabilities of sm_75 and sm_80, respectively. The results are shown in Fig. 7 and Fig. 8.