2017 25th Signal Processing and Communications Applications Conference (SIU)
DOI: 10.1109/SIU.2017.7960263

An energy efficient additive neural network

Abstract: In recent years, machine learning techniques based on neural networks have become increasingly popular for mobile computing. Classical multi-layer neural networks require matrix multiplications at each stage. Multiplication is not an energy-efficient operation and consequently drains the battery of the mobile device. In this paper, we propose a new energy-efficient neural network with the universal approximation property over the space of Lebesgue integrable functions. This network, called the additive neural…
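The key ingredient of this line of work is a multiplication-free operator that replaces the multiply-accumulate of an ordinary dot product. A minimal sketch, assuming the operator a ⊕ b = sign(a·b)(|a| + |b|) used in the papers quoted below (`mf_op` and `mf_dot` are illustrative names, not from the paper):

```python
import numpy as np

def mf_op(a, b):
    """Multiplication-free operator: a (+) b = sign(a*b) * (|a| + |b|).

    Algebraically this equals sign(a)*b + sign(b)*a, so in hardware it
    reduces to two conditional negations and additions -- no multiply.
    """
    return np.sign(a) * b + np.sign(b) * a

def mf_dot(x, w):
    """MF 'dot product': elementwise mf_op followed by a sum, standing
    in for the multiply-accumulate of an ordinary inner product."""
    return np.sum(mf_op(x, w))

x = np.array([0.5, -1.0, 2.0])
w = np.array([1.5, 0.5, -0.25])
print(mf_dot(x, w))  # sign flips and additions only
```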

Citation types: 7 mentioning, 0 supporting, 0 contrasting
Citing publication years: 2019–2023
Cited by 8 publications (7 citation statements) | References 25 publications

Citation statements, ordered by relevance:
“…Unlike [26] where we define sign(0) = 1 or sign(0) = −1 to take advantage of bit-wise operations, we utilize the standard signum function for better precision here. First, we introduce our original MF dot product [24], [25]. It is defined as…”
Section: B. Multiplication-Free (MF) Dot Products
confidence: 99%
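The distinction drawn in this statement is between the standard signum, for which sign(0) = 0, and a two-valued sign that enables bit-wise implementations. A minimal illustration (the function name is mine, not from the cited papers):

```python
import numpy as np

def sign_pm(a):
    """Two-valued sign with sign(0) = +1: in two's-complement hardware
    this is just the sign bit, enabling bit-wise implementations."""
    return np.where(a >= 0, 1.0, -1.0)

# The standard signum, np.sign, returns 0 at 0, so exactly-zero inputs
# contribute nothing to the MF dot product -- the "better precision"
# the quoted statement refers to.
```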
“…We recently introduced a family of operators related with the ℓ1-norm to extract features from image regions and to design Additive neural Networks (AddNet) in a wide range of computer vision applications [24]–[27]. We call the new family of operators Energy-Efficient (EEF) operators because they do not require any multiplications, which consume more energy compared to additions and binary operations in most processors.…”
Section: Introduction
confidence: 99%
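The ℓ1 connection is easy to see: applying the MF operator to a vector with itself gives twice its ℓ1 norm, since sign(x·x)(|x| + |x|) = 2|x| elementwise. A quick check, reusing the form of the `mf_dot` sketch above:

```python
import numpy as np

x = np.array([0.5, -1.0, 2.0])
mf_xx = np.sum(np.sign(x) * x + np.sign(x) * x)  # x (+) x, summed
print(mf_xx, 2 * np.linalg.norm(x, 1))           # both 7.0
```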
“…al. introduced the multiplication-free (MF) kernel to replace regular convolution in CNNs [38,39,40,41,42]. The MF kernel requires no multiplications, only additions and sign operations.…”
Section: Multiplication-Free Depthwise Separable "Convolutions" (MF-D…)
confidence: 99%
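A sketch of what such a kernel looks like in one dimension (my own minimal illustration, not the cited implementation): each multiply-accumulate of an ordinary convolution is replaced by the MF operator followed by an accumulate.

```python
import numpy as np

def mf_conv1d(x, w):
    """'Valid' 1-D correlation with the MF kernel: sign flips and
    additions replace the usual multiply-accumulate."""
    n, k = len(x), len(w)
    out = np.empty(n - k + 1)
    for i in range(n - k + 1):
        seg = x[i:i + k]
        out[i] = np.sum(np.sign(seg) * w + np.sign(w) * seg)
    return out

print(mf_conv1d(np.array([1.0, -2.0, 3.0, 0.5]), np.array([0.5, -0.5])))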
“…In this paper, we introduce a binary layer based on the fast Walsh-Hadamard transform to slim and speed up deep neural networks with 1 × 1 convolutions. Moreover, the recent literature [38,39,40,41,42] developed an energy-efficient neuron called the multiplication-free (MF) kernel, which does not require any multiplications. We establish the relation between the MF operator and the 2-by-2 Hadamard transform, and we fuse this idea to propose the depthwise separable multiplication-free convolution layer.…”
Section: Introduction
confidence: 99%
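The fast Walsh-Hadamard transform is itself multiplication-free, which is why it pairs naturally with the MF operator. A standard in-place radix-2 sketch; the butterfly (a + b, a − b) is exactly the 2-by-2 Hadamard transform the statement refers to:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform, natural (Hadamard) order.
    Uses only additions and subtractions; len(x) must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

print(fwht([1.0, 0.0, 1.0, 0.0]))  # [2. 2. 0. 0.]
```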
“…In our system, we use an ℓ1 norm-based neural network, called additive neural network (AddNet), which replaces the regular multiplication operator with a new computationally efficient operator called the multiplication-free (mf)-operator. Afrasiyabi et al. show that mf-operator-based neural networks perform as well as regular neural networks on the MNIST and CIFAR-10 datasets [20]. Instead of multiplications, the mf-operator performs sign multiplications and addition operations in a typical neuron.…”
Section: Introduction
confidence: 99%
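Putting the pieces together, one AddNet-style neuron might look like the following sketch. The bias-then-ReLU structure is my assumption about the layer form; the papers above give the exact parameterization:

```python
import numpy as np

def addnet_neuron(x, w, b):
    """AddNet-style neuron (sketch): the MF 'dot product' stands in for
    w @ x, followed by a bias and a ReLU. Only sign flips and additions
    appear in the accumulation."""
    s = np.sum(np.sign(x) * w + np.sign(w) * x)
    return max(0.0, s + b)

print(addnet_neuron(np.array([0.5, -1.0]), np.array([1.0, 2.0]), 0.1))
```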