2019
DOI: 10.1109/tnnls.2018.2852335

NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps

Abstract: Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphical processing units are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the …
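As the (truncated) abstract indicates, NullHop's efficiency comes from operating on sparse representations of feature maps. As a rough illustration of that flavor of encoding, the Python sketch below stores a feature map as a binary sparsity map plus a flat list of its non-zero values; the function names and layout are illustrative assumptions, not the accelerator's actual on-chip format.

```python
import numpy as np

def encode_sparse(fm: np.ndarray):
    """Encode a feature map as a binary sparsity map plus a list of
    non-zero values (an illustrative sketch of a sparsity-map style
    encoding, not NullHop's exact on-chip format)."""
    mask = (fm != 0)    # one flag per entry: is this activation non-zero?
    values = fm[mask]   # only the non-zero activations are stored
    return mask, values

def decode_sparse(mask: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Reconstruct the dense feature map from the (mask, values) pair."""
    fm = np.zeros(mask.shape, dtype=values.dtype)
    fm[mask] = values
    return fm

# Example: a post-ReLU feature map is typically mostly zeros.
fm = np.maximum(np.random.randn(8, 8).astype(np.float32), 0.0)
mask, values = encode_sparse(fm)
assert np.array_equal(decode_sparse(mask, values), fm)
print(f"kept {values.size}/{fm.size} values "
      f"({fm.size - values.size} zeros skipped)")
```

With ReLU activations, roughly half the entries are zero in this toy example, so the value list shrinks accordingly; the same idea also reduces memory traffic, since only the mask and the non-zero values need to move.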


Citations: cited by 237 publications (197 citation statements)
References: 32 publications (46 reference statements)
“…Second, exploiting the sparsity of the event tensors (most values of which are zeros) could additionally improve the computational efficiency by a large margin. One promising direction in that regard would be to use sparse convolutions [62] or hardware accelerators designed to efficiently process sparse inputs [63]. Finally, we believe one of the most alluring characteristics of our method is its ability to summarize a large number of events into one high-quality image.…”
Section: Computational Efficiency
confidence: 99%
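To make the zero-skipping idea in the statement above concrete, here is a toy single-channel sparse convolution in Python that visits only the non-zero input entries, so the work scales with the number of events rather than the number of pixels. This is a hedged sketch of the general technique, not the implementation in [62] or [63].

```python
import numpy as np

def sparse_conv2d(events: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Scatter-style 2-D (full) convolution that only touches non-zero
    input entries: each event scatters a scaled copy of the kernel into
    the output. A toy single-channel sketch of sparse convolution."""
    kh, kw = kernel.shape
    h, w = events.shape
    out = np.zeros((h + kh - 1, w + kw - 1), dtype=np.float32)
    ys, xs = np.nonzero(events)      # work is proportional to the
    for y, x in zip(ys, xs):         # number of events, not pixels
        out[y:y + kh, x:x + kw] += events[y, x] * kernel
    return out

# Usage: an almost-empty event tensor needs only two kernel scatters.
events = np.zeros((16, 16), dtype=np.float32)
events[3, 4] = 1.0
events[10, 7] = 2.0
k = np.ones((3, 3), dtype=np.float32)
dense = sparse_conv2d(events, k)     # 2 of 256 positions visited
```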
“…There are several methods out there describing hardware accelerators which exploit feature map sparsity to reduce computation: Cnvlutin [8], SCNN [9], Cambricon-X [10], NullHop [11], Eyeriss [12], EIE [13]. Their focus is on power gating or skipping some of the operations and memory accesses.…”
Section: Related Work
confidence: 99%
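The operation-skipping these accelerators perform in hardware can be mimicked in software with a dot product that bypasses the multiply-accumulate, and the corresponding weight fetch, whenever the activation is zero. The sketch below is purely illustrative of that idea and does not reproduce any of the cited designs.

```python
import numpy as np

def mac_with_zero_skipping(activations: np.ndarray, weights: np.ndarray):
    """Dot product that skips the multiply-accumulate (and the weight
    read) for zero activations, mimicking in software the operation and
    memory-access skipping done in hardware (illustrative sketch only)."""
    acc, macs = 0.0, 0
    for a, w in zip(activations, weights):
        if a == 0.0:    # a zero activation contributes nothing:
            continue    # skip both the multiply and the weight fetch
        acc += a * w
        macs += 1
    return acc, macs    # macs counts only the operations performed

a = np.maximum(np.random.randn(64), 0.0)   # post-ReLU: mostly sparse
w = np.random.randn(64)
acc, macs = mac_with_zero_skipping(a, w)
print(f"{macs}/{a.size} MACs performed; result matches dense: "
      f"{np.isclose(acc, a @ w)}")
```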
“…This has led to the emergence of a plethora of hardware acceleration engines. The solutions range from mainstream devices adapted for neural network computations, such as GPUs, to custom processor hardware optimised for neural network acceleration [11,25,1,12,8,13]. Bringing the computation closer to the sensor offers distinct advantages in terms of data reduction and power efficiency.…”
Section: Introduction
confidence: 99%