2019
DOI: 10.1109/access.2019.2924340
EERA-KWS: A 163 TOPS/W Always-on Keyword Spotting Accelerator in 28nm CMOS Using Binary Weight Network and Precision Self-Adaptive Approximate Computing

Abstract: This paper proposes an energy-efficient reconfigurable accelerator for keyword spotting (EERA-KWS) based on a binary weight network (BWN) and fabricated in 28-nm CMOS technology. The keyword spotting system consists of two parts: feature extraction based on mel-scale frequency cepstral coefficients (MFCC) and keyword classification based on a BWN model, which is trained on Google's Speech Commands database and deployed on our custom accelerator. To reduce the power consumption while maintaining the system …
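The abstract describes a two-stage pipeline: an MFCC front end followed by a binary weight network classifier. The sketch below illustrates that structure in NumPy; the frame parameters, layer sizes, and helper names are illustrative assumptions rather than the configuration of the fabricated chip, and librosa is used only as a convenient MFCC reference implementation.

```python
# Minimal sketch of the MFCC + binary weight network (BWN) pipeline named in
# the abstract. All dimensions and helper names are assumptions for illustration.
import numpy as np
import librosa  # reference MFCC implementation, not the on-chip feature extractor


def extract_mfcc(audio, sr=16000, n_mfcc=13):
    """Mel-scale frequency cepstral coefficients, one vector per audio frame."""
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)


def bwn_layer(x, w_real, bias):
    """Fully connected layer with weights binarized to {-1, +1}.

    A per-layer scale (the mean absolute weight) preserves dynamic range, so
    the multiply-accumulate reduces to additions and subtractions in hardware.
    """
    w_bin = np.sign(w_real)
    alpha = np.abs(w_real).mean()
    return np.maximum(alpha * (x @ w_bin) + bias, 0.0)  # ReLU


def classify_keyword(mfcc_window, layers):
    """Flatten a window of MFCC frames and run it through the BWN classifier."""
    x = mfcc_window.flatten()
    for w, b in layers[:-1]:
        x = bwn_layer(x, w, b)
    w, b = layers[-1]
    logits = np.abs(w).mean() * (x @ np.sign(w)) + b  # no ReLU on the output layer
    return int(np.argmax(logits))  # index of the predicted keyword
```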

Cited by 17 publications (12 citation statements)
References 19 publications (41 reference statements)
“…Changren Zhou et al. have presented similar results in the literature. These authors observed that the elongations at break of the NCPs were lower than that of the matrix.…”
Section: Results (supporting)
confidence: 63%
“…In general, the inclusion of CNCs caused considerable changes in the viscoelastic behavior of the NCs compared to the matrix. The dynamic mechanical properties of the NCPs are strongly dependent on filler orientation and the concentration of nano-reinforcements, as reported. The storage modulus of an NCP normally determines the effect of reinforcement in a polymer matrix.…”
Section: Results (mentioning)
confidence: 92%
“…Since the VAD is for power savings, it could be integrated into this work as well. In this comparison, this work consumes less power than other works with a similar model architecture [2], [5], [6], even though we process the entire raw data for the predicted results. [3], [4] use an RNN as the model architecture, but [3] does not include feature extraction on the chip.…”
Section: Results and Comparison (mentioning)
confidence: 99%
“…Guo et al. [4] propose hybrid digital circuits and sixteen 64×64 SRAM-based in-memory computing (IMC) macros with a 3-bit ADC for recurrent neural network-based KWS. Liu et al. [5] use precision self-adaptive computing for a binary weight network to reduce power, and [6] uses mixed-mode computing for low-power KWS. In summary, they reduce power consumption through recurrent models, quantized/binary neural network models, or voice activity detection.…”
Section: Introduction (mentioning)
confidence: 99%
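The quoted introduction summarizes this paper's technique as precision self-adaptive computing on a binary weight network. The sketch below only illustrates the general idea of adapting operand precision to the data's own statistics to cut switching power; it is an assumption-laden illustration, not the circuit-level scheme described in [5], and the thresholds and bit-widths are made up.

```python
# Conceptual illustration of precision self-adaptive approximate computing:
# pick a narrower quantization width when a frame's own statistics allow it,
# trading a small accuracy loss for lower arithmetic switching activity.
# Thresholds and bit-widths below are illustrative assumptions.
import numpy as np


def adaptive_quantize(frame, full_bits=8, reduced_bits=4, energy_threshold=0.1):
    """Quantize a frame of activations with a precision chosen from the data itself."""
    bits = reduced_bits if np.mean(np.abs(frame)) < energy_threshold else full_bits
    scale = (2 ** (bits - 1)) - 1
    peak = np.max(np.abs(frame))
    peak = peak if peak > 0 else 1.0
    q = np.round(frame / peak * scale)  # integer codes at the chosen width
    return q * peak / scale, bits       # approximate values and chosen width


# A quiet frame is processed at the reduced width, a loud one at full width.
quiet, loud = 0.01 * np.random.randn(64), 0.5 * np.random.randn(64)
print(adaptive_quantize(quiet)[1], adaptive_quantize(loud)[1])  # e.g. 4 8
```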