2018
DOI: 10.48550/arxiv.1812.03410
Preprint

Binary Input Layer: Training of CNN models with binary input data

Abstract: For the efficient execution of deep convolutional neural networks (CNNs) on edge devices, various approaches have been presented which reduce the bit width of the network parameters down to 1 bit. Binarization of the first layer has always been excluded, as it leads to a significant increase in error. Here, we present the novel concept of a binary input layer (BIL), which allows the use of binary input data by learning bit-specific binary weights. The concept is evaluated on three datasets (PAMAP2, SVHN, CIFAR-10). Our…

Cited by 1 publication (2 citation statements); references 3 publications.
“…Consequently, it increases the number of parameters and MACs in the input layer by nearly 16×. Dürichen et al [10] discuss two other options. The first one is using the 8-bit fixed-point representation of a pixel, named DBID.…”
Section: Related Work
confidence: 99%
“…The evaluation of binarizing the input layer is in Table 3. Prior works [10] attempted direct unpacking of the 8-bit fixed-point input data, dubbed as DBID, and adding an additional binary pointwise convolutional layer between the unpacked input data and the first layer to increase the number of channels, dubbed as BIL. We implement and compare our proposed method against these techniques on the ResNet-20 BNN introduced in Section 3.1.…”
confidence: 99%
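The "direct unpacking" described in the quoted statements can be sketched as a bit-plane decomposition: each 8-bit fixed-point pixel is expanded into 8 binary input channels. The snippet below is a minimal illustration of that DBID idea only, assuming NumPy's MSB-first bit order and an NHWC layout; it is not the authors' exact implementation.

```python
import numpy as np

def dbid_unpack(images):
    """Unpack 8-bit fixed-point pixels into binary bit-planes (DBID-style sketch).

    images: uint8 array of shape (N, H, W, C).
    Returns a {0, 1} uint8 array of shape (N, H, W, C * 8),
    with one binary channel per bit of each original channel (MSB first).
    """
    # np.unpackbits splits each uint8 into its 8 bits along a new trailing axis.
    bits = np.unpackbits(images[..., np.newaxis], axis=-1)  # (N, H, W, C, 8)
    n, h, w, c, b = bits.shape
    return bits.reshape(n, h, w, c * b)

# A 1x1 RGB "image" whose red channel holds 178 = 0b10110010.
x = np.zeros((1, 1, 1, 3), dtype=np.uint8)
x[0, 0, 0, 0] = 178
planes = dbid_unpack(x)
print(planes.shape)         # (1, 1, 1, 24)
print(planes[0, 0, 0, :8])  # bits of 178, MSB first: [1 0 1 1 0 0 1 0]
```

This also makes the channel blow-up concrete: a 3-channel input becomes 24 binary channels, so the first convolution's input width (and hence its parameter and MAC count) grows by the unpacking factor. The BIL variant quoted above would then insert a binary pointwise convolution between these unpacked planes and the first layer.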