Recent Advances in Efficient Computation of Deep Convolutional Neural Networks
Preprint, 2018. DOI: 10.48550/arxiv.1802.00939

Cited by 5 publications between 2018 and 2022 (7 citation statements, all of type "mentioning"); references 55 publications. Representative citation statements, ordered by relevance:
“…It can be seen that the gain ratio not only considers the information gain of the training samples on attribute B, but also the amount of information produced by the v branches generated by the v attribute values of attribute B [17]. So, in the example above, the extended parameter makes the gain ratio more reasonable.…”
Section: Decision Tree Algorithm Before and After Improvement Based O… (mentioning, confidence: 99%)
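The gain-ratio idea this statement refers to is easy to make concrete. Below is a minimal C4.5-style sketch, assuming a discrete attribute and categorical class labels; the data, attribute values, and function names are illustrative, not taken from the cited paper:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr_index):
    """C4.5 gain ratio: information gain normalized by split information.

    The split-information term measures the amount of information
    produced by the v branches for the v values of the attribute.
    """
    n = len(labels)
    # Partition the label list by the value of the chosen attribute.
    branches = {}
    for row, label in zip(rows, labels):
        branches.setdefault(row[attr_index], []).append(label)
    # Information gain: entropy before the split minus weighted entropy after.
    gain = entropy(labels) - sum(
        len(part) / n * entropy(part) for part in branches.values()
    )
    # Split information: entropy of the branch-size distribution itself.
    split_info = -sum(
        len(part) / n * math.log2(len(part) / n) for part in branches.values()
    )
    return gain / split_info if split_info > 0 else 0.0

# Illustrative data: attribute B (column 0) takes v = 3 values.
rows = [("b1",), ("b1",), ("b2",), ("b2",), ("b3",), ("b3",)]
labels = ["yes", "yes", "yes", "no", "no", "no"]
print(gain_ratio(rows, labels, attr_index=0))
```

Dividing the information gain by the split information is exactly the correction the statement describes: attributes that fan out into many branches get a larger denominator, so the ratio is not biased toward high-arity splits.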
“…Quantized neural networks have made significant progress with training (Courbariaux, Bengio, and David 2015; Han, Mao, and Dally 2015; Zhu et al. 2016; Rastegari et al. 2016; Mishra et al. 2017; Zmora et al. 2018; Cheng et al. 2018; Krishnamoorthi 2018; Li, Dong, and Wang 2019). Research on post-training quantization addresses scenarios where training is not available (Krishnamoorthi 2018; Meller et al. 2019; Banner, Nahshan, and Soudry 2019; Zhao et al. 2019).…”
Section: Related Work (mentioning, confidence: 99%)
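To make the distinction this statement draws concrete, here is a minimal sketch of post-training quantization in the spirit of these works: a generic affine min/max scheme, not the exact method of any single cited paper. The function names and parameters are illustrative:

```python
import numpy as np

def quantize_affine(w, num_bits=8):
    """Post-training affine quantization of a trained weight tensor.

    Scale and zero point are derived from the tensor's observed range,
    so no retraining or labeled data is required.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # guard constant tensors
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    """Recover an approximate float tensor from its quantized form."""
    return scale * (q.astype(np.float32) - zero_point)

# Illustrative usage on random "trained" weights.
w = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_affine(w)
w_hat = dequantize_affine(q, scale, zp)
print("max abs error:", np.abs(w - w_hat).max())
```

Because the scale and zero point come entirely from the already-trained weights, the procedure needs no gradient updates, which is what makes it usable when training is not available.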
“…Quantization is a promising method for creating more energy-efficient deep learning systems (Han, Mao, and Dally 2015; Hubara et al. 2017; Zmora et al. 2018; Cheng et al. 2018). By approximating real-valued weights and activations with low-bit numbers, quantized neural networks (QNNs) trained with state-of-the-art algorithms (e.g., Courbariaux, Bengio, and David 2015; Rastegari et al. 2016; Louizos et al. 2018; Li, Dong, and Wang 2019) can perform comparably to their full-precision counterparts (e.g., Jung et al. 2019; Li, Dong, and Wang 2019).…”
Section: Introduction (mentioning, confidence: 99%)
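To illustrate the "approximating real-valued weights and activations using low-bit numbers" step, here is a minimal sketch of the fake-quantization operation commonly used when training QNNs: a generic uniform quantizer, not any one cited algorithm. The function name and bit-width are illustrative:

```python
import numpy as np

def fake_quantize(x, num_bits=4):
    """Simulate low-bit weights/activations in the forward pass.

    Values are rounded to a uniform grid of 2**num_bits levels spanning
    the tensor's observed range. In QNN training, the backward pass
    typically routes gradients straight through this non-differentiable
    rounding (the straight-through estimator).
    """
    levels = 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / levels or 1.0  # guard constant tensors
    return x_min + np.round((x - x_min) / scale) * scale

# A 4-bit approximation of real-valued activations.
x = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
print(fake_quantize(x, num_bits=4))
```

Applying this rounding in the forward pass while letting gradients bypass it is the basic mechanism that lets low-bit networks approach the accuracy of their full-precision counterparts.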