ESSCIRC 2017 - 43rd IEEE European Solid State Circuits Conference 2017
DOI: 10.1109/esscirc.2017.8094575
OCEAN: An on-chip incremental-learning enhanced processor with gated recurrent neural network accelerators

Cited by 22 publications (9 citation statements)
References 7 publications
“…z_t can be computed in parallel, as it is independent of h_t and r_t. The same hardware can be shared for computing r_t and z_t to save hardware resources [97].…”
Section: Compute-specific, mentioning
confidence: 99%
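The gate independence the excerpt describes can be sketched in plain Python. This is a minimal single GRU time step, not the paper's hardware design; the weight names (Wz, Uz, etc.) are illustrative. Note that z_t and r_t each depend only on x_t and h_{t-1}, never on each other, which is why they can be computed in parallel or time-multiplexed onto one shared MAC unit:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step.

    z_t and r_t both read only x_t and h_{t-1}, so an accelerator can
    evaluate them concurrently, or reuse the same multiply-accumulate
    hardware for both to save area.
    """
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev)              # update gate
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(Wh @ x_t + Uh @ (r_t * h_prev))  # candidate state
    h_t = (1.0 - z_t) * h_prev + z_t * h_tilde         # interpolate
    return h_t
```

With all weights zero, both gates evaluate to 0.5 and the candidate state to 0, so the new state is simply half the previous state.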
“…Papers that do not discuss any flexibility aspects are omitted from Table 10. In A4 [97], the architecture should be able to support various models, but the numbers of cells and layers the architecture can support are not stated in the paper. Hence, we cannot deduce how the implementation could support variations in the RNN model.…”
Section: B. Flexibility, mentioning
confidence: 99%
“…One of the most popular approaches to obtaining more energy-efficient inference for neural networks is through custom hardware accelerators, targeting field-programmable gate arrays (FPGAs) [15,19,39] or application-specific integrated circuits (ASICs) [3,6,28,40]. These are custom-built architectures that optimize the most energy-intensive operations involved in the inference process (typically multiply-and-accumulate loops).…”
Section: Custom Hardware Designs, mentioning
confidence: 99%
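The "multiply-and-accumulate loop" these accelerators target is the inner kernel of every dense layer. A minimal software sketch (illustrative only; the function name is not from any cited paper) shows the loop body that FPGA/ASIC designs unroll and pipeline in hardware:

```python
def mac_dot(weights, activations):
    """Multiply-and-accumulate loop: the energy-dominant inner kernel
    of neural-network inference that custom accelerators optimize."""
    acc = 0.0
    for w, a in zip(weights, activations):
        acc += w * a  # one MAC operation per weight/activation pair
    return acc
```

In hardware, this sequential loop becomes an array of parallel multipliers feeding an adder tree, which is where most of the energy savings over a general-purpose processor come from.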
“…To lower this burden, hardware accelerators have been developed targeting keyword spotting (KWS). [2] presents an ASIC with 8 execution engines for accelerating RNNs and demonstrates real-time KWS consuming 6.6 mW at 20 MHz/0.8 V. A compact memory organization is presented in [3]: a programmable processor with 270 kB of on-chip weight storage that executes real-time KWS at 300 μW. In [4], a SIMD processor evaluates DNNs for automatic speech recognition tasks, including a small-vocabulary recognizer that achieves real-time operation at 172 μW.…”
Section: Introduction, mentioning
confidence: 99%