2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/CVPRW.2017.48
Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration

Abstract: State-of-the-art convolutional neural networks are enormously costly in both compute and memory, demanding massively parallel GPUs for execution. Such networks strain the computational capabilities and energy available to embedded and mobile processing platforms, restricting their use in many important applications. In this paper, we push the boundaries of hardware-effective CNN design by proposing BCNN with Separable Filters (BCNNw/SF), which applies Singular Value Decomposition (SVD) on BCNN kernels to further…
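The idea sketched in the abstract can be illustrated briefly. Below is a minimal NumPy sketch of an SVD-based separable approximation: a 2D kernel is factored into the rank-1 outer product of a column filter and a row filter, and each factor is then re-binarized. The function names and the re-binarization step are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def separable_approx(kernel):
    """Best rank-1 (separable) approximation of a 2D kernel via SVD.

    A k x k convolution with a rank-1 kernel np.outer(u, v) factors into
    a k x 1 convolution followed by a 1 x k convolution, cutting the
    multiply-accumulates per output pixel from k*k down to 2*k.
    """
    U, S, Vt = np.linalg.svd(kernel)
    u = U[:, 0] * np.sqrt(S[0])   # column filter, shape (k,)
    v = Vt[0, :] * np.sqrt(S[0])  # row filter, shape (k,)
    return u, v

def binarize(x):
    """Sign binarization used in BCNNs: every weight becomes +1 or -1."""
    return np.where(x >= 0.0, 1.0, -1.0)

# Toy 3x3 binary kernel: factor it, then re-binarize each factor so the
# separable filters remain single-bit.
rng = np.random.default_rng(0)
K = binarize(rng.standard_normal((3, 3)))
u, v = separable_approx(K)
u_b, v_b = binarize(u), binarize(v)
print("mean |K - u_b v_b^T| =", np.abs(K - np.outer(u_b, v_b)).mean())
```

The hardware appeal of the factorization is that a k×k kernel becomes a k×1 pass followed by a 1×k pass, reducing per-pixel work from k² to 2k operations; the trade-off is that a rank-1 binary outer product can represent only a subset of all k×k binary kernels.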

Cited by 19 publications (8 citation statements: 0 supporting, 8 mentioning, 0 contrasting). References 23 publications. Citing publications span 2018–2024.

Citation statements (ordered by relevance):
“…For MNIST, the CNN(lite) model contains only one convolutional layer with 40 kernels. The CNN(lite) model for SVHN has 2 convolutional layers (8–17). The CNN(lite) model for CIFAR-10 also has 2 convolutional layers (8–9).…”
Section: Results (mentioning, confidence: 99%)
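For concreteness, here is a hedged PyTorch sketch of what a one-convolutional-layer CNN(lite) for MNIST with 40 kernels could look like. Only the layer count and kernel count come from the quoted statement; the kernel size, pooling, and classifier head are illustrative guesses.

```python
import torch.nn as nn

class CNNLiteMNIST(nn.Module):
    """Hypothetical CNN(lite) for MNIST: one conv layer with 40 kernels.

    Kernel size, pooling, and the linear head are assumptions; the
    quoted statement specifies only the single conv layer and 40 kernels.
    """
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 40, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)           # 28x28 -> 14x14
        self.fc = nn.Linear(40 * 14 * 14, 10)

    def forward(self, x):
        x = self.pool(self.conv(x).relu())
        return self.fc(x.flatten(1))
```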
“…In the BNN [10] paper, the classification on MNIST is done with a binarized multilayer perceptron (MLP). We adopt the binarized convolutional neural network (BCNN) of [10] for SVHN to perform the classification, and reproduce the same accuracy as shown in [17] on MNIST.…”
Section: Results (mentioning, confidence: 99%)
“…Hubara et al. [59] proposed a method to train Quantized Neural Networks (QNNs). Lin et al. [67] proposed a Binarized CNN (BCNN) with separable filters, a binary quantization network that applies Singular Value Decomposition (SVD) to BCNN kernels. Unlike simple matrix approximation, Hou et al. [68] proposed a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t.…”
Section: References (mentioning, confidence: 99%)
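The QNN/BCNN training this statement refers to typically binarizes weights in the forward pass while updating real-valued latent weights via a straight-through estimator (STE). The following NumPy sketch is schematic, with a made-up upstream gradient; it is not Hubara et al.'s code.

```python
import numpy as np

def binarize_forward(w):
    """Forward pass: sign(w), giving weights in {-1, +1}."""
    return np.where(w >= 0.0, 1.0, -1.0)

def ste_backward(grad_out, w, clip=1.0):
    """Straight-through estimator: pass the upstream gradient through
    sign() unchanged, but zero it where |w| exceeds the clip range."""
    return grad_out * (np.abs(w) <= clip)

# One schematic SGD step on the real-valued latent weights:
w = np.array([0.7, -0.2, 1.4, -0.05])
w_b = binarize_forward(w)                    # binary weights used forward
grad_wb = np.array([0.2, -0.1, 0.3, -0.4])   # made-up upstream gradient
w -= 0.1 * ste_backward(grad_wb, w)          # the |w|>1 entry gets no update
print(w_b, w)
```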
“…Thus, we would like to distinguish between explicitly Ternary Networks, which use a 2-bit representation, and Binary-Sparse Networks like ours, which leverage XNOR multiplication and size compression. As an example of the latter, Lin et al. [53] exploited Singular Value Decomposition to reduce kernel sizes in Binarized Neural Networks (BNNs). Certain software and hardware implementations rely on operand-gating XNOR multiplication [21,31]; however, they still require 2 bits of information per weight: value and mask.…”
Section: Sparse Binary Network (mentioning, confidence: 99%)
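The "value and mask" scheme described above can be made concrete with a toy XNOR-popcount dot product: one packed integer carries the sign bits of the weights, a second carries a per-weight mask that gates pruned positions. The packing convention (bit 1 encodes +1, bit 0 encodes -1) and the helper below are assumptions for illustration, not any cited implementation.

```python
def xnor_popcount_dot(a_bits, w_bits, mask, n_bits):
    """Binary dot product via XNOR + popcount, with operand gating.

    a_bits, w_bits: activations and weights packed one bit per element
    (bit = 1 encodes +1, bit = 0 encodes -1).
    mask: bit = 1 where the weight is active; masked-out (pruned)
    positions are skipped -- the 2-bit "value + mask" scheme.
    """
    width = (1 << n_bits) - 1
    agree = ~(a_bits ^ w_bits) & mask & width  # XNOR, gated by the mask
    n_active = bin(mask & width).count("1")
    n_agree = bin(agree).count("1")
    # Each agreeing position contributes +1, each disagreeing one -1:
    return 2 * n_agree - n_active

# 8-element example, MSB first: a = [+1,-1,+1,+1,-1,-1,+1,-1], etc.
a = 0b10110010
w = 0b10011010
m = 0b11110000  # only the first four elements are kept
print(xnor_popcount_dot(a, w, m, 8))  # prints 2: products +1,+1,-1,+1
```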