Trainable Fixed-Point Quantization for Deep Learning Acceleration on FPGAs
Abstract.
Quantization is a crucial technique for deploying deep learning models on resource-constrained devices, such as embedded FPGAs. Prior efforts mostly focus on quantizing matrix multiplications, leaving other layers like BatchNorm or shortcuts in floating-point form, even though fixed-point arithmetic is more efficient on FPGAs. A common practice is to fine-tune a pre-trained model to fixed-point for FPGA deployment, which can degrade accuracy.
This work presents QFX, a novel trainable fixed-point quantization approach that automatically learns the binary-point position during model training. Additionally, we introduce a multiplier-free quantization strategy within QFX to minimize DSP usage. QFX is implemented as a PyTorch-based library that efficiently emulates fixed-point arithmetic, supported by FPGA HLS, in a differentiable manner during backpropagation. With minimal effort, models trained with QFX can readily be deployed through HLS, producing the same numerical results as their software counterparts. Our evaluation shows that, compared to post-training quantization, QFX can quantize element-wise layers to fewer bits while achieving higher accuracy on both the CIFAR-10 and ImageNet datasets. We further demonstrate the efficacy of multiplier-free quantization using a state-of-the-art binarized neural network accelerator designed for an embedded FPGA (AMD Xilinx Ultra96 v2). We plan to release QFX as open source.
1. Introduction
Quantization has been one of the primary techniques to improve the efficiency of deep neural network (DNN) inference. There is an active body of work focusing on quantizing the matrix multiplications in DNNs (Banner et al., 2018; Migacz, 2017). Layers such as batch normalization (BatchNorm), activation functions, and residual connections, however, typically remain floating-point during training (Pappalardo, 2023). At deployment time, on the other hand, FPGA accelerators usually convert these floating-point operations to fixed-point through post-training quantization (PTQ), since fixed-point arithmetic is more efficient (Yang et al., 2019; Zhang et al., 2021). As a result, there are two different models: one for training and one for FPGA-based inference. However, PTQ has several drawbacks: (1) PTQ requires extensive tuning of fixed-point precision. Practical FPGA DNN accelerators, for example, commonly have dozens of normalization layers, activation functions, or residual connections. Each of these layers may need a different fixed-point bitwidth in order to minimize the model accuracy loss compared to its floating-point counterpart while using as few hardware resources as possible. Tuning the bitwidths is therefore intractable due to the large design space. (2) PTQ requires a larger word length, i.e., more hardware resources, than quantization-aware training (QAT) to maintain the same model accuracy.
This paper explores fixed-point quantization-aware training to address the aforementioned problems. Specifically, we introduce QFX, a differentiable quantized fixed-point emulation technique. QFX emulates the fixed-point casting function and basic arithmetic operations, e.g., addition, subtraction, and multiplication. All operations can properly propagate gradients and can be applied anywhere in a DNN during model training. At deployment time, the fixed-point operations in the model can be directly replaced by their synthesizable counterparts supported by HLS without any numerical issues. Therefore, the trained model is exactly the one that will be deployed on an FPGA.
We leverage QFX and automatically learn the position of the binary point during model training. Fixed-point data types are determined by two hyperparameters: word length and integer length. Given the word length, the integer length affects the range of a fixed-point value and is critical for DNN model accuracy. Instead of tuning the integer length manually, QFX can automatically learn it through model training for each layer where fixed-point quantization is applied.
We further propose a differentiable K-hot multiplier-free quantization scheme. To minimize DSP usage on FPGAs, we develop a new quantization approach, where the fixed-point representation only includes a select few "1"s, allowing us to substitute multiplications with additions and shifts. We are able to apply this novel scheme to binarized neural networks (BNNs) to construct multiplier-free DNN accelerators on FPGAs.
We demonstrate the efficacy of our proposed approach on both accuracy and resource usage. We benchmark QFX against PTQ on several popular DNN models using both CIFAR-10 and ImageNet datasets. Our trainable quantization method consistently outperforms PTQ in accuracy, especially in low-bitwidth settings. We further show that the K-hot quantization scheme drastically reduces the DSP usage in FracBNN (Zhang et al., 2021), a state-of-the-art BNN accelerator on an AMD Xilinx Ultra96 v2 FPGA.
2. Background
In this section, we provide a concise introduction to the fundamental concepts of integer and fixed-point quantization, laying the groundwork for later discussions.
2.1. Integer Quantization
Earlier research in this field primarily focused on integer quantization, where DNNs’ floating-point weights and activations are quantized to low-bitwidth integer values. This allows inference to be executed efficiently through hardware-accelerated integer matrix multiplications.
Typically, quantization functions map a floating-point tensor $x$ to an integer tensor $q$ using the following formula:

$$q = \mathrm{clip}\Big(\mathrm{round}\big(\tfrac{x}{s}\big) + z,\; q_{\min},\; q_{\max}\Big)$$

Here, $s$ denotes the scaling factor, and $z$ serves as the zero point. $q_{\min}$ and $q_{\max}$ determine the range of values that can be represented by the quantized number. The clip operation cuts off values outside the range, and the round operation maps a floating-point value to an integer.
After the computation of each quantized layer, the output $q$ is dequantized back to floating-point:

$$\hat{x} = s \cdot (q - z)$$
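As a concrete illustration, the following minimal PyTorch sketch implements this quantize/dequantize pair; the function names and the 8-bit default are illustrative choices, not part of any specific library:

```python
import torch

def quantize(x: torch.Tensor, s: float, z: int, num_bits: int = 8) -> torch.Tensor:
    """Map a floating-point tensor to integers using scale s and zero point z."""
    q_min = -(2 ** (num_bits - 1))
    q_max = 2 ** (num_bits - 1) - 1
    return torch.clamp(torch.round(x / s) + z, q_min, q_max)

def dequantize(q: torch.Tensor, s: float, z: int) -> torch.Tensor:
    """Map quantized integers back to (approximate) floating-point values."""
    return s * (q - z)
```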
Fake quantization. Fake quantization is often used to simulate QAT (Jacob et al., 2018; Pappalardo, 2023). In this approach, weights and activations are represented as integer values stored in floating-point format, and all multiplications and accumulations occur in FP32 so that gradient computation is enabled. Nevertheless, HAWQ-V3 (Yao et al., 2021) reported that this method results in significant error accumulation when deployed on real hardware, primarily due to discrepancies between the integer casting in simulation and on the actual hardware.
Scaling factor quantization. Efforts (Jacob et al., 2018; Shawahna et al., 2022; Yao et al., 2021) have also been made to quantize the scaling factor to lower precision, recognizing that it is often directly cast to a fixed-point number during deployment on hardware, which exacerbates the software-hardware gap. HAWQ-V3 proposed constructing the multiplier formed by the scaling factors as a dyadic number $b/2^c$ during training. A dyadic number is essentially a fixed-point number, where $b$ is an integer and $c$ represents the number of fractional bits. This approach not only replaces the multiplication between the integer output and the floating-point scaling factor with an integer multiplication and a bit shift but also helps avoid direct casting errors.
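For instance, a floating-point scale can be approximated by a dyadic number as in the short sketch below; the function name and the 16-bit fractional width are illustrative assumptions:

```python
def to_dyadic(scale: float, c: int = 16) -> tuple[int, int]:
    """Approximate a floating-point scale as a dyadic number b / 2**c."""
    b = round(scale * (1 << c))
    return b, c

# Rescaling an integer accumulator then becomes an integer multiply and a shift:
#   y ~= (y_int * b) >> c
b, c = to_dyadic(0.0123)
```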
2.2. Fixed-point Quantization
While the computational overhead of floating-point scaling-factor calculations can be negligible compared to the matrix multiplication in convolutional layers, element-wise layers themselves involve only one or two floating-point operations. Therefore, rather than representing a floating-point tensor as a low-bitwidth integer tensor $q$ together with a floating-point scale $s$, an alternative approach is to quantize it directly to a fixed-point tensor, as fixed-point operations are also efficient on hardware (Shawahna et al., 2022). A floating-point tensor can be quantized to fixed-point with a given total bitwidth $W$ and an integer bitwidth $I$ through Algorithm 1.
Here, the round operation rounds the number to the nearest integer, removing extra bits in the fractional part, and the overflow operation ensures the number falls into the range representable with $W$ bits, removing extra bits in the integer part.
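A minimal PyTorch emulation of such a fixed-point quantizer is sketched below, assuming a signed representation, round-to-nearest, and saturating overflow; it is an illustrative sketch rather than the QFX implementation itself:

```python
import torch

def fixed_point_quantize(x: torch.Tensor, total_bits: int, int_bits: int) -> torch.Tensor:
    """Quantize x to a signed fixed-point grid with W total bits and I integer bits."""
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    # round: drop extra fractional bits
    q = torch.round(x * scale)
    # overflow (saturation): drop extra integer bits by clamping to the range
    q_max = 2.0 ** (total_bits - 1) - 1
    q_min = -(2.0 ** (total_bits - 1))
    return torch.clamp(q, q_min, q_max) / scale
```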
3. Methodology
In this section, we first introduce the differentiable fixed-point quantization emulation library (QFX), which enables fixed-point model training. We then describe how we leverage the library to automatically learn the fixed-point data type through model training. Finally, we demonstrate the construction of a multiplier-free CNN using these techniques on top of a BNN.
3.1. QFX Implementation
The fixed-point emulation in our QFX library is encapsulated in regular PyTorch layers inheriting from nn.Module. The implementation relies entirely on native PyTorch tensor operations, which allows QFX layers to be easily plugged into existing PyTorch models. Most importantly, QFX layers are backpropagation compatible, so models with QFX quantization support standard gradient-descent training.
An example flow of using the QFX library is shown in Figure 1. In contrast to the traditional practice where there are different model copies for training and deployment, users can apply the fixed-point casting, addition, or multiplication operations anywhere in a given PyTorch model at training time. Exactly the same model will be deployed, except that the data type casting will be replaced directly by the hardened circuits. Below we introduce how fixed-point casting and basic arithmetic are implemented and how they are assembled into a quantized BatchNorm layer as an example.
Fixed-point casting. Given a floating-point input $x$, the following equation converts it to a fixed-point representation with $W$ total bits and $F = W - I$ fractional bits:

$$x_q = \mathrm{overflow}\big(\mathrm{round}(x \cdot 2^{F})\big) \cdot 2^{-F}$$
For the quantization function, rounding to the nearest integer is the most common case. On FPGAs, however, various circuits break the tie differently. QFX supports seven quantization modes, whose behaviors are summarized in Table 1, and it produces the same numerical outputs as the reference AMD Xilinx ap_fixed library (XILINX, 2022). Similarly, all supported overflow modes are shown in Table 2. One may notice that the casting function, built from PyTorch native operators, is inherently differentiable except for the floor, ceil, round, and trunc functions, whose gradients are zero almost everywhere. To enable training with QFX, we redefine their gradients as the identity,

$$\frac{\partial\,\mathrm{round}(x)}{\partial x} := 1,$$
which is known as the straight-through estimator (STE) (Bengio et al., 2013). STE is commonly used to approximate gradients for non-differentiable operations (Mishra et al., 2017; Bhalgat et al., 2020; Jung et al., 2019). Users can also define customized gradient functions and integrate them under the QFX training scheme.
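In PyTorch, such an STE can be realized with the usual detach trick; the sketch below is one common way to do it and is not necessarily the exact QFX implementation:

```python
import torch

def round_ste(x: torch.Tensor) -> torch.Tensor:
    """Round in the forward pass; let gradients pass through unchanged (STE)."""
    return x + (torch.round(x) - x).detach()

def floor_ste(x: torch.Tensor) -> torch.Tensor:
    """Floor in the forward pass with a straight-through gradient."""
    return x + (torch.floor(x) - x).detach()
```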
Supporting a wide range of commonly used quantization and overflow modes guarantees that model inference during software training and FPGA deployment are aligned. DNN models, however, are typically trained on GPUs or CPUs, where rounding to the nearest even number, i.e., RND_CONV in Table 1, is the default. This subtle difference, especially when accumulated through a DNN accelerator (Zhang et al., 2021), can lead to completely different hardware outputs. To synchronize the outputs produced by the software quantization emulation and the FPGA deployment, the general QFX approach is therefore necessary.
Basic arithmetic. QFX supports a subset of fixed-point element-wise operations that are the most common in DNNs, including addition, subtraction, multiplication, and division. To enable fixed-point arithmetic during model training, we apply the casting function to both the inputs and the output of these operations. For example, the floating-point addition

$$y = a + b$$

transforms into the following for fixed-point computation within QFX:

$$y_q = \mathrm{cast}_y\big(\mathrm{cast}_a(a) + \mathrm{cast}_b(b)\big)$$

Each cast function has its own word length and fractional bitwidth. Importantly, the basic operations remain differentiable with QFX casting functions. When deploying the model, the casting functions can be seamlessly replaced by circuits by programming the data types of the input and output variables using high-level synthesis (HLS) or hardware description languages. The other three operations, i.e., subtraction, multiplication, and division, follow a similar approach.
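A minimal sketch of this cast-wrapped addition is shown below; qfx_add and the way the casting functions are passed in are illustrative and not the actual QFX API:

```python
def qfx_add(a, b, cast_a, cast_b, cast_y):
    """Fixed-point addition emulated as cast -> float add -> cast.
    Each cast_* carries its own word length and fractional bitwidth."""
    return cast_y(cast_a(a) + cast_b(b))

# Example usage with the fixed_point_quantize sketch above:
#   cast8 = lambda t: fixed_point_quantize(t, total_bits=8, int_bits=3)
#   y_q = qfx_add(a, b, cast8, cast8, cast8)
```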
Table 1. Quantization (rounding) modes supported by QFX, following the AMD Xilinx ap_fixed semantics.

Quantization Mode | Rounding behavior
---|---
RND | Round to nearest; ties toward $+\infty$, i.e., $\lfloor x + 1/2 \rfloor$
RND_ZERO | Round to nearest; ties toward zero
RND_MIN_INF | Round to nearest; ties toward $-\infty$, i.e., $\lceil x - 1/2 \rceil$
RND_INF | Round to nearest; ties away from zero
RND_CONV | Round to nearest; ties to even (also referred to as round to the nearest even)
TRN | Truncate toward $-\infty$, i.e., $\lfloor x \rfloor$
TRN_ZERO | Truncate toward zero
Table 2. Overflow modes supported by QFX. MAX and MIN denote the bounds of the representable range: for signed values, $\mathrm{MAX} = 2^{I-1} - 2^{-F}$ and $\mathrm{MIN} = -2^{I-1}$; for unsigned values, $\mathrm{MAX} = 2^{I} - 2^{-F}$ and $\mathrm{MIN} = 0$.

Overflow Mode | Behavior on overflow
---|---
SAT | Saturate to MAX or MIN
SAT_ZERO | Set the value to zero
SAT_SYM_SIGNED | Saturate symmetrically to $[-\mathrm{MAX}, \mathrm{MAX}]$
SAT_SYM_UNSIGNED | Saturate to $[0, \mathrm{MAX}]$
WRAP_SIGNED | Wrap around within $[\mathrm{MIN}, \mathrm{MAX}]$ (two's-complement wrap)
WRAP_UNSIGNED | Wrap around within $[0, \mathrm{MAX}]$ (modulo $2^{I}$)
Quantized element-wise operations. With the supported basic arithmetic, common element-wise operations, e.g., BatchNorm and residual connections, can be emulated in fixed-point during training. As an example, the floating-point BatchNorm layer is computed as

$$y = \gamma \cdot \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta,$$

where $\gamma$ and $\beta$ are learnable parameters, $\mu$ and $\sigma^2$ are the running statistics, and $\epsilon$ is a small value for numerical stability. With fixed-point quantization, it becomes

$$y_q = \mathrm{qfx.add}\big(\mathrm{qfx.mul}(x_q, w_q),\, b_q\big),$$

where $w = \frac{\gamma}{\sqrt{\sigma^2 + \epsilon}}$ and $b = \beta - \frac{\gamma \mu}{\sqrt{\sigma^2 + \epsilon}}$. Note that the casting operation is embedded within qfx.add and qfx.mul; both the inputs and the output can be specified with different word lengths and fractional bitwidths. The QFX library includes pre-implemented fixed-point BatchNorm layers and residual connections, which are commonly used in DNNs. Users can easily customize their models to fixed-point by substituting the four basic operations with QFX.
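The folding above can be sketched as a small PyTorch module; the class name, constructor signature, and the single shared cast are illustrative assumptions rather than the actual QFX layer:

```python
import torch
import torch.nn as nn

class FixedPointBatchNorm2d(nn.Module):
    """Inference-style BatchNorm folded to y = w*x + b, with fixed-point casts
    around the multiply and the add (illustrative; not the QFX API)."""
    def __init__(self, bn: nn.BatchNorm2d, cast):
        super().__init__()
        self.cast = cast  # e.g., the fixed_point_quantize sketch from Section 2.2
        std = torch.sqrt(bn.running_var + bn.eps)
        self.register_buffer("w", bn.weight.detach() / std)
        self.register_buffer("b", bn.bias.detach() - bn.weight.detach() * bn.running_mean / std)

    def forward(self, x):
        w = self.cast(self.w).view(1, -1, 1, 1)
        b = self.cast(self.b).view(1, -1, 1, 1)
        y = self.cast(self.cast(x) * w)  # qfx.mul: cast -> multiply -> cast
        return self.cast(y + b)          # qfx.add: add -> cast
```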
3.2. Learning Binary Point Position
In Section 3.1, we have introduced the QFX library. In this section, we delve into how QFX can be harnessed to automate the learning of fixed-point data types, thereby eliminating the need for extensive precision tuning in practical applications.
A fixed-point representation can be divided into two parts, the integer part and the fractional part, separated by the binary point. The number of bits allocated to each part determines the range and precision of the representation. For example, the decimal number 5.75 can be represented exactly by the binary fixed-point number 101.11, with 3 integer bits and 2 fractional bits.
Previous work (Loroch et al., 2017; Abadi et al., 2016; Shawahna et al., 2022) in fixed-point quantization primarily focused on fine-tuning PTQ with the binary point position fixed, demanding additional human effort to strike the right balance between range and precision.
In our work, QFX offers support for fixed-point QAT with different configurations for each layer and even for different weights within a single layer. Consequently, automating the binary point position searching is as straightforward as assigning a new parameter to Algorithm 1 with gradient backpropagation enabled. In this case, we are able to improve the accuracy with all quantization effects included during training.
Algorithm 2: Learning the binary point position

Init parameters:
  Set integer_bits $i$ (floating-point), grad=True
Forward:
  $I$ = round(clamp($i$))
  $x_q$ = Quant($x$, $W$, $I$)

As shown in Algorithm 2, the trainable floating-point parameter $i$ is clamped to a valid range and rounded to an integer $I$, which is then fed into the fixed-point casting function Quant together with the word length $W$. Notably, gradient backpropagation through torch.clamp is enabled via the STE, and QFX also enables gradient backpropagation through the rounding. This ensures that the whole function is differentiable and the binary point position remains trainable.
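A compact PyTorch sketch of this idea is given below; the module name, the initialization value, and the clamping range are illustrative assumptions:

```python
import torch
import torch.nn as nn

def round_ste(x: torch.Tensor) -> torch.Tensor:
    return x + (torch.round(x) - x).detach()

class LearnedBinaryPoint(nn.Module):
    """Fixed-point quantizer whose integer bitwidth (binary point position)
    is a trainable parameter."""
    def __init__(self, total_bits: int, init_int_bits: float = 4.0):
        super().__init__()
        self.total_bits = total_bits
        self.int_bits = nn.Parameter(torch.tensor(init_int_bits))  # grad=True

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # keep the integer bitwidth in a valid range, then round with an STE
        i = round_ste(torch.clamp(self.int_bits, 0.0, float(self.total_bits)))
        scale = 2.0 ** (self.total_bits - i)          # 2**frac_bits
        q = round_ste(x * scale)
        q_max = 2.0 ** (self.total_bits - 1) - 1
        return torch.clamp(q, -q_max - 1.0, q_max) / scale
```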
3.3. K-hot Fixed-point Quantization
Dedicated DSP units on FPGAs are often considered a limited resource in contrast to the ample availability of LUTs. Even for highly compressed DNN models, a nontrivial number of DSPs is typically still required for operations using fixed-point arithmetic. By profiling a state-of-the-art BNN accelerator, FracBNN, we find that more than 60% of the DSPs in its Ultra96 v2 FPGA deployment remain in use even though all the matrix multiplications in the model are binarized. This is a suboptimal scenario for edge devices. To further conserve DSP resources, this section introduces a multiplier-free fixed-point quantization scheme that incurs zero DSP usage.
Instead of solely quantizing the weights in element-wise layers (e.g., BatchNorms) to fixed-point format, we further quantize them to a $K$-hot encoding, enforcing each value to contain only $K$ "1"s in its binary representation:

$$w_q = \sum_{i=1}^{K} 2^{\,p_i - F} = 2^{-F} \sum_{i=1}^{K} \mathrm{bshift}(1, p_i), \tag{1-2}$$

where $p_i$ is the position of the $i$-th most significant "1", $F$ is the number of fractional bits, and $\mathrm{bshift}(x, p)$ shifts $x$ left by $p$ bits in binary format. A multiplication between a fixed-point number and an integer typically requires DSP units. By imposing the $K$-hot constraint with a small $K$, the multiplication is replaced by at most $K$ additions and $K$ bit shifts:

$$w_q \cdot x = \sum_{i=1}^{K} 2^{\,p_i - F} \cdot x = 2^{-F} \sum_{i=1}^{K} \mathrm{bshift}(x, p_i). \tag{3-4}$$

This $K$-hot fixed-point quantization leads to an entirely DSP-free design.
To convert a floating-point input to a $K$-hot fixed-point number, we first use the Quant casting function in QFX to quantize it to fixed-point with the corresponding integer bitwidth $I$. We then keep the $K$ most significant "1" bits to approximate the number in its binary representation. $K$ can be flexibly configured based on the accuracy-efficiency trade-off: a smaller $K$ is more efficient but less accurate. A detailed description can be found in Algorithm 3.
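The conversion can be sketched in PyTorch as below; this is an illustrative reimplementation of the idea (with saturating fixed-point quantization and no STE), not the exact Algorithm 3:

```python
import torch

def khot_quantize(x: torch.Tensor, total_bits: int, int_bits: int, k: int = 2) -> torch.Tensor:
    """Keep only the K most significant '1' bits of each fixed-point value."""
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    # quantize to the fixed-point grid first (round + saturate)
    q = torch.clamp(torch.round(x * scale),
                    -(2.0 ** (total_bits - 1)), 2.0 ** (total_bits - 1) - 1)
    sign, mag = torch.sign(q), q.abs()
    out = torch.zeros_like(mag)
    for _ in range(k):
        # position of the current most significant '1' bit
        msb = torch.floor(torch.log2(mag.clamp(min=1.0)))
        bit = torch.where(mag >= 1.0, 2.0 ** msb, torch.zeros_like(mag))
        out, mag = out + bit, mag - bit
    return sign * out / scale
```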
4. Evaluation
In this section, we first evaluate the efficacy of QFX on accuracy using four representative convolutional neural networks (CNNs), including ResNet-18 (He et al., 2016), ResNet-50 (He et al., 2016), MobileNetV2 (Sandler et al., 2018), and EfficientNet-B0 (Tan and Le, 2019). Then we present QFX results on three popular BNNs, including FracBNN-CIFAR-10 (Zhang et al., 2021), Bi-Real Net (Liu et al., 2018), and FracBNN-ImageNet. FracBNN-CIFAR-10 is trained on the CIFAR-10 dataset (Krizhevsky et al., 2009), while the others are trained on the ImageNet dataset (Deng et al., 2009). We compare them with PTQ and show QFX achieves considerable gains in accuracy. Additionally, we present the results of applying 2-hot quantization to the BNNs, which make them multiplier-free. We design an HLS accelerator for FracBNN-ImageNet to evaluate the hardware performance when employing these techniques.
4.1. Evaluation of Model Accuracy
We quantize element-wise operations to fixed-point, e.g., BatchNorm layers, activation functions, and residual connections. Both input activations and layer weights are quantized to the target bitwidth, with the binary point position of each quantized number being automatically learned during training. Subsequently, we apply 2-hot quantization in BNNs to all the weight multipliers in BatchNorm layers and activation functions.
Quantization configurations.
In the experiments, we use the RND/TRN_ZERO rounding modes and the SAT overflow mode, since they are commonly used in quantization. For the baseline PTQ, we sweep binary point positions to find the configuration that minimizes the mean squared error (MSE) of each layer output, which measures the gap between the floating-point layers and the post-training fixed-point quantized layers.
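The baseline sweep can be summarized by a short sketch like the one below, which picks the integer bitwidth minimizing the output MSE for a given quantizer (the function name and per-tensor granularity are illustrative assumptions):

```python
import torch

def best_int_bits(ref_out: torch.Tensor, total_bits: int, quantize) -> int:
    """Sweep binary point positions; return the integer bitwidth with minimal MSE."""
    best_i, best_mse = 0, float("inf")
    for i in range(total_bits + 1):
        mse = torch.mean((quantize(ref_out, total_bits, i) - ref_out) ** 2).item()
        if mse < best_mse:
            best_i, best_mse = i, mse
    return best_i
```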
Training details.
The training strategy for QFX can vary depending on the model architecture, model size, and training cost. For the CNNs, we fine-tune for only 5 epochs, since they are more resilient to low-bitwidth element-wise operations when the convolutional layers remain in floating-point. The training is executed on NVIDIA RTX 2080Ti GPUs with a learning rate of 1e-5 and a batch size of 64.
For FracBNN-CIFAR-10, we follow the two-step training strategy defined in FracBNN and train the model from scratch; QFX fixed-point quantization is applied in the second training step. For the other two BNN models, we only fine-tune with QFX quantization for 30 epochs from the pretrained checkpoints due to the long training time. The hyperparameters are the same as in the original papers, except that the learning rate of FracBNN-ImageNet is scaled linearly for a batch size of 128 due to limited GPU memory. We further fine-tune the models with 2-hot quantization for 10 more epochs with a 10× smaller learning rate. Additional fine-tuning may lead to higher accuracy but at a higher training cost. The training for BNNs is conducted on NVIDIA RTX A6000 GPUs. All experiments use the Adam optimizer (Kingma and Ba, 2014) without weight decay.
Accuracy Results.
Table 3 reports the accuracy results on CNN models. QFX outperforms PTQ significantly at 8 bits and achieves similar accuracy at larger bitwidths. This clearly shows the advantage of emulating fixed-point arithmetic during training and providing differentiable quantization functions.
Table 4 shows the results of BNNs. With a larger bitwidth, e.g., 16 bits, PTQ can easily obtain good accuracy, especially on a small dataset. However, when quantized to 8 bits, PTQ is unable to produce reasonable accuracy, while QFX can still preserve the accuracy. For FracBNN-CIFAR-10, quantizing to 8 bits using QFX only causes a marginal 0.59% loss in accuracy compared to the BNN with floating-point element-wise layers. For Bi-Real Net, the loss is only 1.23% on ImageNet. Unlike other BNNs, where the first convolutional layers are kept in high precision, FracBNN has all convolutional layers binarized and appears to be more sensitive to the precision of element-wise layers. Nevertheless, it still achieves 69.27% accuracy at 8 bits with QFX.
Table 5 shows the results of applying 2-hot quantization on the multipliers within element-wise layers. After only 10 epochs of fine-tuning, we achieve an accuracy loss of less than 0.4% on both datasets with FracBNN. Notably, the slight improvement for Bi-Real Net underscores the potential for QFX to yield even better results with extended fine-tuning, indicating the possibility of lossless performance with multiplier-free quantization.
Table 3. Accuracy (%) of CNN models with QFX vs. PTQ at word lengths W=10 and W=8.

Network | FP32 | Method | W=10 | W=8
---|---|---|---|---
ResNet-18 | 69.9 | PTQ | 66.10 | 42.78
 | | QFX | 69.86 | 67.86
ResNet-50 | 76.1 | PTQ | 47.03 | 0.12
 | | QFX | 75.67 | 67.80
MobileNetV2 | 72.2 | PTQ | 8.11 | 0.14
 | | QFX | 70.19 | 57.45
EfficientNet-B0 | 76.1 | PTQ | 74.87 | 69.90
 | | QFX | 76.92 | 76.26
Table 4. Accuracy (%) of BNN models with QFX vs. PTQ at different word lengths.

Network | FP32 | Method | W=16 | W=12 | W=10 | W=8
---|---|---|---|---|---|---
FracBNN-CIFAR-10 | 88.7 | PTQ | 88.70 | 87.40 | 80.70 | 11.90
 | | QFX | 88.34 | 88.63 | 88.65 | 88.11
Bi-Real Net | 56.4 | PTQ | 54.90 | 45.50 | 7.90 | 0.40
 | | QFX | 55.75 | 55.60 | 55.68 | 55.17
FracBNN-ImageNet | 71.8 | PTQ | 69.70 | 23.30 | 0.70 | 0.10
 | | QFX | - | - | - | 69.27
Table 5. Accuracy (%) of 8-bit QFX with and without 2-hot quantization.

Network | QFX | QFX w/ 2-hot | Diff
---|---|---|---
FracBNN-CIFAR-10 | 88.11 | 87.89 | -0.22
Bi-Real Net | 55.17 | 55.31 | +0.14
FracBNN-ImageNet | 69.27 | 68.95 | -0.32
4.2. FPGA Implementation
Table 6. Resource usage of 8-bit QFX, multiplier-free 2-hot QFX, and the BIND_OP imp=fabric pragma (percentages are changes relative to the QFX column).

Resource | QFX | QFX w/ 2-hot | BIND_OP imp=fabric
---|---|---|---
DSP | 128 | 0 (-100%) | 0 (-100%)
FF | 7222 | 9034 (+25.1%) | 15497 (+114.6%)
LUT | 2944 | 3676 (+24.9%) | 4421 (+50.2%)
To evaluate the performance and resource improvements by QFX, we implement our FracBNN-ImageNet accelerator with 8-bit QFX quantization on an AMD Xilinx Ultra96 v2 FPGA board, which has 360 DSPs, 71k LUTs, 141K FFs and 949 KB BRAMs. We preserve the existing binary and fractional convolution layers in the original FracBNN accelerator. Our efforts focus on applying QFX to the element-wise layers (including BatchNorm, BPRelu, and residual), as well as binary quantization layers, thereby achieving a DSP-free design in these regions without any degradation in throughput.
With the library provided by HLS, it is straightforward to cast all the weights and activations of different element-wise layers to 8-bit fixed-point numbers upon obtaining the learned data types from the training stage. In contrast, the fixed-point data types used in the original FPGA implementation were determined manually. Additionally, our K-hot quantization technique facilitates DSP-free implementation in BatchNorm and BPRelu layers. In our design, both layers can be represented as:
$$y = w \cdot x + b,$$

where $w$ represents a two-hot quantized fixed-point parameter with only two "1"s in its binary representation. Thus we can easily transform the multiplication into simple bit shifts and an addition. Besides replacing DSPs with 2-hot MAC units in element-wise layers, similar optimizations apply to the binary quantization layers, where the scaling factors can be quantized using K-hot encoding.
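As a plain illustration of the arithmetic (independent of the HLS implementation), multiplying an integer by a 2-hot fixed-point weight reduces to two shifts and one addition; the function below is a hypothetical helper used only to show the equivalence:

```python
def two_hot_multiply(x: int, p1: int, p2: int, frac_bits: int) -> float:
    """Multiply x by the 2-hot weight 2**(p1-frac_bits) + 2**(p2-frac_bits)
    using only shifts and one addition (no hardware multiplier needed)."""
    return ((x << p1) + (x << p2)) / (1 << frac_bits)

# e.g., w = 0.625 = 2**-1 + 2**-3 with frac_bits = 3 (p1 = 2, p2 = 0):
assert two_hot_multiply(10, p1=2, p2=0, frac_bits=3) == 10 * 0.625
```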
Figure 2 compares the hardware performance of our 8-bit QFX accelerators against the original FracBNN accelerator. By limiting the bitwidth of fixed-point numbers in both element-wise and binary quantization layers, 8-bit QFX decreases the DSP, LUT, and FF overheads by 8.9%, 4.7%, and 7.0%, respectively. Moreover, the multiplier-free design substantially reduces DSP usage from 55.3% to 10.8%, while increasing the LUT overhead by only 5.8% and 1.1% relative to 8-bit QFX without 2-hot encoding and to the original FracBNN design, respectively.
Figure 3 further provides a breakdown of the DSP usage. We observe that 8-bit QFX reduces total DSP usage from 231 to 199, while the 2-hot encoding completely eliminates DSP usage in element-wise layers and binary quantization layers. This leaves only 39 DSPs, which are used for memory address calculations.
Notably, compared to simply using the BIND_OP imp=fabric pragma in HLS to avoid instantiating DSPs, our design is much more efficient. As shown in Table 6, using the HLS fabric pragma to remove DSPs leads to 114.6% and 50.2% increases in FFs and LUTs, respectively, for element-wise layers with 8-bit QFX quantization. In contrast, the multiplier-free technique with 2-hot encoding requires far fewer LUTs and FFs.
5. Related Work
Integer quantization on matrix multiplication. Early work focused on 8-bit uniform quantization (Banner et al., 2018; Holt and Hwang, 1993; Migacz, 2017), as it was widely accessible on commercial hardware such as CPUs and GPUs. As domain-specific ML hardware emerged (Jouppi et al., 2021), researchers started to explore lower-precision quantization. Recently, binarized neural networks (BNNs) have demonstrated competitive performance (Qin et al., 2023; Yuan and Agaian, 2023) in terms of the accuracy-efficiency trade-off compared to floating-point models on both vision and language tasks (Zhang et al., 2023, 2022). Our work is motivated by our profiling of FracBNN.
Quantization-aware training. To further improve quantized model accuracy, QAT is necessary (Abdolrashidi et al., 2021; Bhalgat et al., 2020; Jung et al., 2019). The key is enabling gradients to propagate effectively "through" the quantization function, allowing the network to account for the quantization loss, even though the function's gradients are theoretically zero almost everywhere. To tackle this challenge, various gradient approximation techniques have been proposed, including the straight-through estimator (STE) (Bengio et al., 2013), differentiable soft tanh (Gong et al., 2019), and even Fourier transforms (Xu et al., 2021). In our QFX implementation, we achieve differentiability by leveraging the STE, thereby supporting QAT.
Fixed-point emulation. The existing framework QNNPACK (Dukhan et al., 2018) optimizes fixed-point quantization for mobile devices, but only with 8 bits. TensorQuant (Loroch et al., 2017) and TensorFlow (Abadi et al., 2016) emulate arbitrary-bitwidth fixed point but lack support for multiple rounding and overflow modes. QPyTorch (Zhang et al., 2019) has three rounding modes, but its casting is not differentiable. Brevitas (Pappalardo, 2023) supports most rounding modes but lacks fixed-point quantization-aware training for element-wise layers. In general, existing software quantization tools cover only a subset of quantization modes or lack the differentiability needed for QAT.
6. Conclusion
This work introduces QFX, a new approach to trainable fixed-point quantization. QFX is implemented as a PyTorch library that can emulate the different quantization modes provided by FPGA HLS tools and enables backpropagation through quantization parameters. QFX automates the determination of the binary point, enabling a systematic search for the optimal precision level of each layer. We further introduce K-hot quantization to transform fixed-point multiplications into a series of bit shifts and additions. This design, combined with the binary multiply-accumulate operations in BNNs, culminates in a “DSP-free” approach, which can be beneficial for deploying compressed deep learning models on embedded FPGAs.
References
- Abadi et al. (2016) Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: A system for large-scale machine learning. In OSDI.
- Abdolrashidi et al. (2021) AmirAli Abdolrashidi, Lisa Wang, Shivani Agrawal, Jonathan Malmaud, Oleg Rybakov, Chas Leichner, and Lukasz Lew. 2021. Pareto-optimal quantized resnet is mostly 4-bit. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 3091–3099.
- Banner et al. (2018) Ron Banner, Itay Hubara, Elad Hoffer, and Daniel Soudry. 2018. Scalable methods for 8-bit training of neural networks. Advances in neural information processing systems (2018).
- Bengio et al. (2013) Yoshua Bengio, Nicholas Léonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432 (2013).
- Bhalgat et al. (2020) Yash Bhalgat, Jinwon Lee, Markus Nagel, Tijmen Blankevoort, and Nojun Kwak. 2020. Lsq+: Improving low-bit quantization through learnable offsets and better initialization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 696–697.
- Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition.
- Dukhan et al. (2018) Marat Dukhan, Yiming Wu, and Hao Lu. 2018. QNNPACK: Open source library for optimized mobile deep learning.
- Gong et al. (2019) Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. 2019. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 4852–4861.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. 770–778.
- Holt and Hwang (1993) Jordan L Holt and J-N Hwang. 1993. Finite precision error analysis of neural network hardware implementations. IEEE Trans. Comput. (1993).
- Jacob et al. (2018) Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE conference on computer vision and pattern recognition. 2704–2713.
- Jouppi et al. (2021) Norman P Jouppi, Doe Hyun Yoon, Matthew Ashcraft, Mark Gottscho, Thomas B Jablin, George Kurian, James Laudon, Sheng Li, Peter Ma, Xiaoyu Ma, et al. 2021. Ten lessons from three generations shaped google’s tpuv4i: Industrial product. In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA).
- Jung et al. (2019) Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Jae-Joon Han, Youngjun Kwak, Sung Ju Hwang, and Changkyu Choi. 2019. Learning to quantize deep networks by optimizing quantization intervals with task loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4350–4359.
- Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
- Krizhevsky et al. (2009) Alex Krizhevsky, Geoffrey Hinton, et al. 2009. Learning multiple layers of features from tiny images. (2009).
- Liu et al. (2018) Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, and Kwang-Ting Cheng. 2018. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. In Proceedings of the European conference on computer vision (ECCV). 722–737.
- Loroch et al. (2017) Dominik Marek Loroch, Franz-Josef Pfreundt, Norbert Wehn, and Janis Keuper. 2017. Tensorquant: A simulation toolbox for deep neural network quantization. In Proceedings of the Machine Learning on HPC Environments.
- Migacz (2017) Szymon Migacz. 2017. NVIDIA 8-bit inference with TensorRT. In GPU Technology Conference.
- Mishra et al. (2017) Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, and Debbie Marr. 2017. WRPN: Wide reduced-precision networks. arXiv preprint arXiv:1709.01134 (2017).
- Pappalardo (2023) Alessandro Pappalardo. 2023. Xilinx/brevitas. https://doi.org/10.5281/zenodo.3333552
- Qin et al. (2023) Haotong Qin, Mingyuan Zhang, Yifu Ding, Aoyu Li, Zhongang Cai, Ziwei Liu, Fisher Yu, and Xianglong Liu. 2023. Bibench: Benchmarking and analyzing network binarization. arXiv preprint arXiv:2301.11233 (2023).
- Sandler et al. (2018) Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition. 4510–4520.
- Shawahna et al. (2022) Ahmad Shawahna, Sadiq M Sait, Aiman El-Maleh, and Irfan Ahmad. 2022. FxP-QNet: a post-training quantizer for the design of mixed low-precision DNNs with dynamic fixed-point representation. IEEE Access 10 (2022), 30202–30231.
- Tan and Le (2019) Mingxing Tan and Quoc Le. 2019. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning. PMLR, 6105–6114.
- XILINX (2022) AMD XILINX. 2022. Vitis High-Level Synthesis User Guide-UG1399 (v2022. 1).
- Xu et al. (2021) Yixing Xu, Kai Han, Chang Xu, Yehui Tang, Chunjing Xu, and Yunhe Wang. 2021. Learning frequency domain approximation for binary neural networks. Advances in Neural Information Processing Systems (2021).
- Yang et al. (2019) Yifan Yang, Qijing Huang, Bichen Wu, Tianjun Zhang, Liang Ma, Giulio Gambardella, Michaela Blott, Luciano Lavagno, Kees Vissers, John Wawrzynek, et al. 2019. Synetgy: Algorithm-hardware co-design for convnet accelerators on embedded fpgas. In Proceedings of the 2019 ACM/SIGDA international symposium on field-programmable gate arrays. 23–32.
- Yao et al. (2021) Zhewei Yao, Zhen Dong, Zhangcheng Zheng, Amir Gholami, Jiali Yu, Eric Tan, Leyuan Wang, Qijing Huang, Yida Wang, Michael Mahoney, et al. 2021. Hawq-v3: Dyadic neural network quantization. In International Conference on Machine Learning. PMLR, 11875–11886.
- Yuan and Agaian (2023) Chunyu Yuan and Sos S Agaian. 2023. A comprehensive review of binary neural network. Artificial Intelligence Review (2023), 1–65.
- Zhang et al. (2019) Tianyi Zhang, Zhiqiu Lin, Guandao Yang, and Christopher De Sa. 2019. QPyTorch: A low-precision arithmetic simulation framework. In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing-NeurIPS Edition (EMC2-NIPS). IEEE, 10–13.
- Zhang et al. (2023) Yichi Zhang, Ankush Garg, Yuan Cao, Łukasz Lew, Behrooz Ghorbani, Zhiru Zhang, and Orhan Firat. 2023. Binarized Neural Machine Translation. arXiv preprint arXiv:2302.04907 (2023).
- Zhang et al. (2021) Yichi Zhang, Junhao Pan, Xinheng Liu, Hongzheng Chen, Deming Chen, and Zhiru Zhang. 2021. FracBNN: Accurate and FPGA-efficient binary neural networks with fractional activations. In The 2021 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. 171–182.
- Zhang et al. (2022) Yichi Zhang, Zhiru Zhang, and Lukasz Lew. 2022. PokeBNN: A Binary Pursuit of Lightweight Accuracy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 12475–12485.