2017 IEEE International Solid-State Circuits Conference (ISSCC)
DOI: 10.1109/isscc.2017.7870350
14.2 DNPU: An 8.1TOPS/W reconfigurable CNN-RNN processor for general-purpose deep neural networks

Cited by 277 publications (129 citation statements)
References 3 publications
“…The peak performance in terms of operations per second of one CHIPMUNK chip is 32.2 Gop/s (at 1.24 V), and the peak energy efficiency (3.08 Gop/s/mW) is reached at 0.75 V. Table 1 compares architectural parameters and synthesis results between CHIPMUNK and the existing VLSI- and FPGA-based implementations for which performance and energy numbers have been published. Our work reaches performance comparable to the DNPU proposed by Shin et al. [14]. Performance is clearly below that claimed by the Google TPU [10], but this is mostly due to the difference in size.…”
Section: Silicon Prototype and Comparison With State-of-the-art (supporting)
confidence: 46%
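To make the quoted efficiency figure directly comparable with the 8.1 TOPS/W in the DNPU title, here is a minimal Python sketch (mine, not from either paper; it assumes both values are peak figures): 1 Gop/s/mW is numerically identical to 1 Top/s/W, since the factors of 1000 cancel.

```python
# Unit sanity check (illustrative only, not from either paper):
# 1 Gop/s/mW = 1e9 op/s per 1e-3 W = 1e12 op/s per W = 1 Top/s/W,
# so Gop/s/mW and TOPS/W figures can be compared directly.
GOPS_PER_MW_TO_TOPS_PER_W = 1e-3 * 1e3  # Gop->Top (/1000) and mW->W (x1000) cancel

chipmunk_tops_per_w = 3.08 * GOPS_PER_MW_TO_TOPS_PER_W  # CHIPMUNK peak, at 0.75 V
dnpu_tops_per_w = 8.1                                   # from the DNPU paper title

print(f"CHIPMUNK peak efficiency: {chipmunk_tops_per_w:.2f} TOPS/W")
print(f"DNPU peak efficiency:     {dnpu_tops_per_w:.2f} TOPS/W")
print(f"DNPU / CHIPMUNK: ~{dnpu_tops_per_w / chipmunk_tops_per_w:.1f}x")
```

On these peak numbers DNPU comes out roughly 2.6x more energy-efficient, though the chips reach peak throughput and peak efficiency at different operating points (1.24 V vs 0.75 V for CHIPMUNK), so such single-number comparisons are rough.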
“…Both implementations in [22] and [20] have a higher power efficiency than NullHop, but provide consistently lower performance (<350 GOp/s) while using more MAC units. They also require a larger area (16 mm²), but this is justified by their support for recurrent neural networks and variable bit precision.…”
Section: Memory Power Consumption Estimation (mentioning)
confidence: 99%
“…The first type focuses on traditional ANNs. These are custom architectures [11,12,15,21,23,24,38,39,46,47,56,57,59,61,64,67,68] built to accelerate mature ANN models; we usually call this type NN accelerators.…”
Section: NN Chips (mentioning)
confidence: 99%
“…, (2^(N−1) − 1)/2^P} where P represents the point position. This method is used by DNPU [61], Strip [38], TianJi-ANN [60], etc. • Fraction encoding:…”
Section: Graph Tuning (mentioning)
confidence: 99%
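As a rough illustration of the point-position encoding in the quote above, the following Python sketch (a toy of mine, not code from DNPU or the citing paper) quantizes a real value to an N-bit fixed-point grid with point position P, so the representable values are k / 2^P for signed integers k in [−2^(N−1), 2^(N−1) − 1]:

```python
# Toy sketch of fixed-point "point position" encoding (illustrative only).
def quantize_fixed_point(x: float, n_bits: int, p: int) -> float:
    """Round x to the nearest representable value k / 2**p, where k is a
    signed n_bits-wide integer in [-2**(n_bits-1), 2**(n_bits-1) - 1]."""
    k = round(x * (1 << p))          # scale by 2**p and round to an integer code
    k_min = -(1 << (n_bits - 1))     # most negative code
    k_max = (1 << (n_bits - 1)) - 1  # most positive code
    k = max(k_min, min(k, k_max))    # saturate to the N-bit range
    return k / (1 << p)              # map the code back to a real value

# Example: 8-bit codes with the binary point 5 places in
# (step = 1/32, range [-4.0, 3.96875]).
for x in (0.1, 1.5, -4.2, 10.0):
    print(x, "->", quantize_fixed_point(x, n_bits=8, p=5))
```

Increasing P buys precision at the cost of range, which is why per-layer (dynamic) point positions are attractive for accelerators with narrow datapaths.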