2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00154
AdderNet: Do We Really Need Multiplications in Deep Learning?

Abstract: Compared with the cheap addition operation, multiplication is of much higher computational complexity. The widely used convolutions in deep neural networks are in fact cross-correlations that measure the similarity between input features and convolution filters, which involves massive multiplications between floating-point values. In this paper, we present adder networks (AdderNets) to trade these massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper addition…
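To make the contrast in the abstract concrete, here is a minimal NumPy sketch (not taken from the paper; the data is random toy data) comparing how a standard convolution and an AdderNet-style layer score the match between one input patch and one filter: the former multiplies and sums, the latter only subtracts, takes absolute values, and sums.

import numpy as np

# Toy data: one 3x3 input patch and one 3x3 filter (single channel).
rng = np.random.default_rng(0)
patch = rng.standard_normal((3, 3))
kernel = rng.standard_normal((3, 3))

# Standard convolution = cross-correlation: element-wise multiply, then sum.
conv_response = np.sum(patch * kernel)

# AdderNet-style similarity: negative L1 distance between patch and filter --
# only subtractions, absolute values, and additions, no multiplications.
adder_response = -np.sum(np.abs(patch - kernel))

print(conv_response, adder_response)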

Cited by 154 publications (112 citation statements) | References 28 publications
“…Over the years, these basic model families produced several versions [74] and they have been extensively used by other researchers to develop modified and hybrid models [76]. Recent studies attempted to improve the performance of the base models by proposing new layers and filters such as Sparse Shift Filter [77], Asymmetric Convolution Block [78], Adder Networks [79], Virtual Pooling [80], Discrete Wavelet Transform [81], and HetConv [82]. Some recent substantial models have been developed based on the base models, such as Res2Net [83] and Wide ResNet [84] using the ResNet model, while Log Dense Net [85] and Sparse Net [86] use the DenseNet model.…”
Section: Model Development
confidence: 99%
“…Since min/max operations are not differentiable, PConv, LMorph and SMorph layers with continuous structure were proposed [65], [66]. Chen et al. [67] modify only the convolutional layers to use the L1-norm instead of cross-correlation. This dramatically simplifies the convolutional layers; however, the other layers still use multiplication.…”
Section: B. Alternative Neurons/Layers
confidence: 99%
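The statement above notes that only the convolution itself is replaced. As a hedged sketch of what such a layer can look like (assuming PyTorch; adder_conv2d is a name chosen here for illustration, not an API from the cited work), the multiply-accumulate of a standard convolution is swapped for a negative L1 distance computed over unfolded patches.

import torch
import torch.nn.functional as F

def adder_conv2d(x, weight, stride=1, padding=0):
    # x: (N, C_in, H, W); weight: (C_out, C_in, kH, kW).
    n, _, h, w = x.shape
    c_out, _, kh, kw = weight.shape
    # Sliding patches: (N, C_in*kH*kW, L), where L = number of output positions.
    patches = F.unfold(x, kernel_size=(kh, kw), stride=stride, padding=padding)
    w_flat = weight.view(1, c_out, -1, 1)          # (1, C_out, C_in*kH*kW, 1)
    # Negative L1 distance between every patch and every filter: additions only.
    out = -(patches.unsqueeze(1) - w_flat).abs().sum(dim=2)   # (N, C_out, L)
    out_h = (h + 2 * padding - kh) // stride + 1
    out_w = (w + 2 * padding - kw) // stride + 1
    return out.view(n, c_out, out_h, out_w)

A batch-normalization layer applied to this output would still multiply by its scale parameter, which is the residual multiplication the statement refers to.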
“…audio features are improved continuously to achieve adequate representation and be suitable for feature extraction in deep learning. (2) A more suitable classifier is proposed for audio classification.…”
Section: Ⅵ. Conclusion
confidence: 99%
“…Previous research on audio recognition was based on template matching [2], feature parameter matching [3] and hidden Markov methods [4]. The above methods achieved good performance in early speech recognition systems.…”
Section: Introduction
confidence: 99%