2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2019.00078

HadaNets: Flexible Quantization Strategies for Neural Networks

Abstract: On-board processing elements on UAVs are currently inadequate for training and inference of Deep Neural Networks. This is largely due to the energy consumption of memory accesses in such a network. HadaNets introduce a flexible train-from-scratch tensor quantization scheme by pairing a full-precision tensor with a binary tensor in the form of a Hadamard product. Unlike wider reduced-precision neural network models, we preserve the train-time parameter count, thus outperforming XNOR-Nets without a train-time memo…
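
A minimal sketch of the idea described in the abstract, approximating a weight tensor as the Hadamard (element-wise) product of a real-valued tensor and a binary sign tensor, is given below in Python/NumPy. The block-wise sharing of the real-valued scales along the last axis is an assumption made here for illustration; the exact structure HadaNets use to trade precision against cost may differ.

import numpy as np

def hadanet_style_quantize(w, block=4):
    # Approximate w as A * B (Hadamard product), where B = sign(w) is binary
    # and A is a real-valued tensor whose entries are shared over small blocks.
    # The block-of-`block` sharing along the last axis is an illustrative
    # assumption, not necessarily the paper's exact scheme.
    b = np.where(w >= 0, 1.0, -1.0)                   # binary tensor in {-1, +1}
    pad = (-w.shape[-1]) % block
    w_abs = np.pad(np.abs(w), [(0, 0)] * (w.ndim - 1) + [(0, pad)])
    a = w_abs.reshape(*w.shape[:-1], -1, block).mean(-1)
    a = np.repeat(a, block, axis=-1)[..., : w.shape[-1]]
    return a * b                                      # element-wise product A with B

w = np.random.randn(8, 16).astype(np.float32)
w_q = hadanet_style_quantize(w)
print(float(np.mean((w - w_q) ** 2)))                 # quantization error of the approximation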

Cited by 5 publications (1 citation statement)
References 11 publications
“…Many of them aim to incur negligible compute and parameter overhead. For example, a channel-wise real-valued rescaling of the binarized tensors can effectively mitigate the quantization loss [2,7,42]. Connecting the unquantized input activations of a binarized convolutional layer to its output with a shortcut enhances the gradient flow and the model representation capacity [35].…”
Section: Related Work
confidence: 99%
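
The channel-wise real-valued rescaling of binarized tensors mentioned in the quoted passage (the approach of [2,7,42], popularized by XNOR-Net-style methods) can be sketched as follows in Python/NumPy; the weight layout (output channels first) and the use of the mean absolute value as the per-channel scale are assumptions for illustration.

import numpy as np

def binarize_with_channel_scale(w):
    # Binarize a convolution weight tensor and rescale each output channel
    # by a real-valued factor alpha, so that w is approximated by alpha * sign(w).
    # Assumed layout: (out_channels, in_channels, kH, kW).
    b = np.where(w >= 0, 1.0, -1.0)                        # binary tensor in {-1, +1}
    alpha = np.abs(w).mean(axis=(1, 2, 3), keepdims=True)  # one scale per output channel
    return alpha * b

w = np.random.randn(32, 16, 3, 3).astype(np.float32)
w_bin = binarize_with_channel_scale(w)
print(float(np.mean((w - w_bin) ** 2)))                    # smaller error than plain sign(w)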