ProxyBNN: Learning Binarized Neural Networks via Proxy Matrices
2020 · DOI: 10.1007/978-3-030-58580-8_14

Cited by 17 publications (13 citation statements) · References 25 publications
“…By looking at the reported table, we can observe that our method outperforms all previous quantization methods in top-1 accuracy. Specifically, we achieve a significant performance gain over the recent state-of-the-art methods LSQ, QKD, and SAT on all compared architectures.…” [Table fragment spilled into the excerpt, top-1 accuracy: XNOR-Net++ [3] 57.1, IR-Net [41] 58.1, ProxyBNN [20] 58.7, RBNN [33] 59.9, BinaryDuo [28] 60.(truncated), plus an orphaned 56.4.]
Section: ImageNet Results · mentioning · confidence: 99%
“…Several studies increased representation capacity by using more weight and activation bases [34,56]. Most studies incorporate changes that improve training efficiency, such as dual skip connections [20,36,53]. In addition, more aggressive changes are sought via neural architecture search [27,39,53].…”
Section: Introduction · mentioning · confidence: 99%
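The "dual skip connections" this excerpt mentions are commonly read as adding a real-valued shortcut around every binary convolution (as in Bi-Real Net), rather than one shortcut per residual block. A minimal PyTorch sketch under that assumption follows; the class names are illustrative, and the convolution weights are left full precision for brevity (a real BNN would binarize them too):

```python
import torch
import torch.nn as nn

class BinaryActivation(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: pass gradient only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()

class DualSkipBlock(nn.Module):
    """One binary convolution wrapped by its own real-valued shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = BinaryActivation.apply(x)   # 1-bit activations
        out = self.bn(self.conv(out))     # weights kept float here for brevity
        return out + x                    # per-conv shortcut keeps real-valued information flowing

y = DualSkipBlock(8)(torch.randn(2, 8, 16, 16))
```

The design point is that sign() destroys magnitude information; the per-convolution identity path lets that information bypass each binarized operator, which is what makes training deeper BNNs tractable.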
“…The authors of all published works on BNN inference acceleration to date made use of high-precision floating-point data types during training (Courbariaux et al., 2015; Courbariaux & Bengio, 2016; Lin et al., 2017; Ghasemzadeh et al., 2018; Liu et al., 2018; Wang et al., 2019a; Umuroglu et al., 2020; He et al., 2020; Liu et al., 2020).…”
Section: Related Work · mentioning · confidence: 99%
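The standard recipe this excerpt refers to keeps latent full-precision weights during training: the optimizer updates the float copies, while the forward pass sees only their signs, with gradients passed straight through. A minimal PyTorch sketch of that recipe (illustrative, not code from any cited work):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryLinear(nn.Linear):
    def forward(self, x):
        # Forward uses sign(w); backward flows straight through to the
        # latent float weights: w_b = sign(w).detach() + (w - w.detach())
        # has value sign(w) and identity gradient with respect to w.
        w_b = torch.sign(self.weight).detach() - self.weight.detach() + self.weight
        return F.linear(x, w_b, self.bias)

layer = BinaryLinear(16, 4)
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)  # updates the float latents
loss = layer(torch.randn(8, 16)).pow(2).mean()
loss.backward()
opt.step()  # the latent weights move; the next forward re-binarizes them
```

Only inference is binary here; training still carries full-precision state and gradients, which is exactly the property the quoted related-work survey is pointing out.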
“…For such edge devices, we propose compact architectures called Binarized LResNet18A (BLResNet18A) using Binary Neural Networks (BNNs) [27], in which we binarize the filters, weights, biases, and activations. Different variants of BNNs [28]–[30] have been proposed for traditional ML applications, not only to avoid generalization error but also to achieve faster computation at deployment via one-bit xnor and bitcount operations [31]. However, if the underlying network lacks the complexity to extract the features, the plain BNN suffers from poor representational power and lower accuracy.…”
Section: Introduction · mentioning · confidence: 99%
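The "one-bit xnor and bitcount operations" the excerpt mentions replace the multiply-accumulate of a dot product: with weights and activations in {-1, +1} encoded as bits (1 → +1, 0 → -1), dot(a, w) = 2·popcount(xnor(a, w)) - n. A plain-Python sketch of this trick; the encoding and helper names are illustrative only:

```python
def pack(values):
    """Pack a list of {-1, +1} values into an integer bit mask (1 -> +1, 0 -> -1)."""
    word = 0
    for i, v in enumerate(values):
        if v == 1:
            word |= 1 << i
    return word

def xnor_dot(a_bits, w_bits, n):
    """Dot product of two n-element {-1, +1} vectors from their bit masks."""
    mask = (1 << n) - 1
    agree = (~(a_bits ^ w_bits)) & mask    # xnor: bit is 1 where signs agree
    return 2 * bin(agree).count("1") - n   # popcount of agreements -> dot product

a = [1, -1, 1, 1]
w = [1, 1, -1, 1]
assert xnor_dot(pack(a), pack(w), 4) == sum(x * y for x, y in zip(a, w))
```

On hardware, the packed words let 32 or 64 of these ±1 products execute in a single xnor plus a popcount instruction, which is the source of the deployment speedup the excerpt describes.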