“…These instructions can noticeably speed up eight-bit QNN inference [16]. Fast implementations are also available for ternary [17][18][19] and binary networks [18,20]. However, binary and ternary networks still suffer from accuracy loss compared to full-precision or eight-bit quantized networks with a similar number of parameters and architecture, which limits their suitability for certain tasks.…”
Section: Related Work
“…We also implemented floating-point, eight-bit, and four-bit matrix multiplications as suggested in [18]. The eight-bit multiplication uses gemmlowp-like [12] microkernels.…”
Section: Hardware and Software
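For readers unfamiliar with the gemmlowp-style arithmetic referenced above, the sketch below shows the core idea in scalar C++: uint8 operands with zero points whose products are accumulated in int32. It only illustrates the general scheme and is not the microkernel actually used in the implementation.

#include <cstdint>

// Scalar sketch of a gemmlowp-style quantized inner product: uint8 operands
// with per-tensor zero points and int32 accumulation. Real microkernels
// vectorize this over register tiles; this loop only shows the arithmetic.
int32_t quantized_dot_u8(const uint8_t* a, const uint8_t* b, int depth,
                         int32_t a_zero, int32_t b_zero) {
    int32_t acc = 0;
    for (int i = 0; i < depth; ++i) {
        acc += (static_cast<int32_t>(a[i]) - a_zero) *
               (static_cast<int32_t>(b[i]) - b_zero);
    }
    return acc;  // the caller later rescales this sum back to eight bits
}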
“…In our first experiment, we compared the proposed 4.6-bit quantized matrix multiplication with floating-point, 8-bit, and 4-bit algorithms described above. The four-bit algorithm [18,25] is only available for ARM CPUs, so it is skipped in the x86 comparison. We compute matrix multiplication of H × D matrix by D × W matrix, thus obtaining the H × W result.…”
Section: Matrix Multiplication Time
“…We compute matrix multiplication of H × D matrix by D × W matrix, thus obtaining the H × W result. The parameters H, W and D are chosen as in [18]: H ∈ {72, 120, 240, 360}, W ∈ {24, 48, 72, 96}, and D ∈ {128, 256, 384, 512}. These parameters are multiples of microkernel sizes for each algorithm, ensuring optimal efficiency.…”
Section: Matrix Multiplication Time
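To make the measurement setup concrete, the sketch below enumerates the same (H, W, D) grid and times a naive single-threaded float reference multiplication. It stands in for the optimized kernels only to illustrate the shapes being benchmarked; its absolute numbers are not comparable to the paper's.

#include <chrono>
#include <cstdio>
#include <vector>

// Naive reference matmul: C (H x W) = A (H x D) * B (D x W), row-major.
static void matmul(const std::vector<float>& a, const std::vector<float>& b,
                   std::vector<float>& c, int h, int d, int w) {
    for (int i = 0; i < h; ++i)
        for (int j = 0; j < w; ++j) {
            float acc = 0.0f;
            for (int k = 0; k < d; ++k) acc += a[i * d + k] * b[k * w + j];
            c[i * w + j] = acc;
        }
}

int main() {
    const int hs[] = {72, 120, 240, 360};
    const int ws[] = {24, 48, 72, 96};
    const int ds[] = {128, 256, 384, 512};
    for (int h : hs)
        for (int w : ws)
            for (int d : ds) {
                std::vector<float> a(h * d, 1.0f), b(d * w, 1.0f), c(h * w);
                auto t0 = std::chrono::steady_clock::now();
                matmul(a, b, c, h, d, w);
                auto t1 = std::chrono::steady_clock::now();
                std::printf("H=%d W=%d D=%d: %.1f us\n", h, w, d,
                            std::chrono::duration<double, std::micro>(t1 - t0).count());
            }
    return 0;
}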
“…Those times are reported in Table 1. We also compute average acceleration for each pair of matrix multiplication algorithms as suggested in [18]:…”
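The acceleration formula itself is truncated in the excerpt above. As an illustration only, and not the exact definition from [18], one standard way to aggregate such measurements is the geometric mean of per-shape time ratios:

\[
\mathrm{acceleration}(A \rightarrow B) = \left( \prod_{(H, W, D)} \frac{t_A(H, W, D)}{t_B(H, W, D)} \right)^{1/N},
\]

where \(t_A\) and \(t_B\) are the measured times of the two algorithms on one matrix shape and \(N\) is the number of tested shapes.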
Quantization is a widespread method for reducing the inference time of neural networks on mobile Central Processing Units (CPUs). Eight-bit quantized networks demonstrate quality similar to that of full-precision models and fit the hardware architecture well, with one-byte coefficients and 32-bit dot-product accumulators. Lower-precision quantizations usually suffer from noticeable quality loss and require specific computational algorithms to outperform eight-bit quantization. In this paper, we propose a novel 4.6-bit quantization scheme that allows for more efficient use of CPU resources. This scheme has more quantization bins than four-bit quantization and is more accurate while preserving the computational efficiency of the latter (it runs only 4% slower). Our multiplication uses a combination of 16- and 32-bit accumulators and avoids the multiplication-depth limitation of the previous 4-bit multiplication algorithm. Experiments with different convolutional neural networks on the CIFAR-10 and ImageNet datasets show that 4.6-bit quantized networks are 1.5–1.6 times faster than eight-bit networks on ARMv8 CPUs. In terms of quality, the results of a 4.6-bit quantized network are close to the mean of the four-bit and eight-bit networks of the same architecture. Therefore, 4.6-bit quantization may serve as an intermediate solution between fast but inaccurate low-bit quantizations and accurate but relatively slow eight-bit ones.
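The combination of 16- and 32-bit accumulators mentioned in the abstract can be pictured with the scalar sketch below. The chunk length and the operand range are assumptions chosen so that the narrow accumulator cannot overflow; the actual kernel is a vectorized ARM implementation rather than this loop.

#include <cstdint>

// Illustrative mixed-width accumulation (assumed operand range within about
// 4 bits, so |a[i] * b[i]| <= 64): products are summed into int16 over a
// bounded chunk of the depth dimension and then flushed into int32, so the
// total depth is no longer limited by the narrow accumulator.
int32_t mixed_width_dot(const int8_t* a, const int8_t* b, int depth) {
    const int kChunk = 32;   // 32 * 64 = 2048, safely below the int16 limit
    int32_t acc32 = 0;
    for (int base = 0; base < depth; base += kChunk) {
        int16_t acc16 = 0;
        const int end = (base + kChunk < depth) ? base + kChunk : depth;
        for (int i = base; i < end; ++i) {
            acc16 += static_cast<int16_t>(a[i] * b[i]);
        }
        acc32 += acc16;      // the periodic spill keeps the wide sum exact
    }
    return acc32;
}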
Binary Neural Networks (BNNs) are showing tremendous success on realistic image classification tasks. Notably, their accuracy is similar to the state-of-the-art accuracy obtained by full-precision models tailored to edge devices. In this regard, BNNs are very amenable to edge devices since they employ a single bit to store each input and weight, and thus their storage requirements are low. Moreover, BNN computations are mainly performed with XNOR and pop-count operations, which are implemented very efficiently using simple hardware structures. Nonetheless, supporting BNNs efficiently on mobile CPUs is far from trivial, since their benefits are hindered by frequent memory accesses to load weights and inputs.

In BNNs, a weight or an input is stored using one bit, and to increase storage and computation efficiency, several of them are packed together as a sequence of bits. In this work, we observe that the number of unique sequences representing a set of weights or inputs is typically low (i.e., 512). We have also seen that, during the evaluation of a BNN layer, a small group of unique sequences is employed more frequently than others. Accordingly, we propose exploiting this observation by using Huffman encoding to encode the bit sequences and an indirection table to decode them during the BNN evaluation. We also propose a clustering-based scheme to identify the most common sequences of bits and replace the less common ones with similar common sequences. As a result, we decrease the storage requirements and memory accesses, since the most common sequences are encoded with fewer bits.

In this work, we extend a mobile CPU with a small hardware structure that can efficiently cache and decode the compressed sequences of bits. We evaluate our scheme using the ReActNet model with the ImageNet dataset on an ARM CPU. Our experimental results show that our technique can reduce memory requirements by 1.32x and improve performance by 1.35x.
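The XNOR/pop-count arithmetic mentioned in this abstract reduces a binary dot product to bitwise operations. A minimal sketch follows (plain C++ on one 64-bit word; real kernels use SIMD registers): with +1/-1 values packed one per bit, and a set bit encoding +1, the dot product over n positions equals n minus twice the number of mismatching bits.

#include <bit>       // std::popcount (C++20)
#include <cstdint>

// Binary dot product of two bit-packed vectors of length n (n <= 64 here).
// Matching bits contribute +1 and mismatching bits contribute -1, so the
// result is n - 2 * popcount(a XOR b).
int binary_dot(uint64_t a_bits, uint64_t b_bits, int n) {
    const uint64_t mask = (n < 64) ? ((uint64_t{1} << n) - 1) : ~uint64_t{0};
    const int mismatches = std::popcount((a_bits ^ b_bits) & mask);
    return n - 2 * mismatches;
}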