2021 IEEE International Solid-State Circuits Conference (ISSCC)
DOI: 10.1109/isscc42613.2021.9366062

9.8 A 25mm² SoC for IoT Devices with 18ms Noise-Robust Speech-to-Text Latency via Bayesian Speech Denoising and Attention-Based Sequence-to-Sequence DNN Speech Recognition in 16nm FinFET

Cited by 25 publications (9 citation statements)
References 4 publications
“…Some simulation statistics are not reported by the designers (N/A). keypair [58], gsm [59], HLSCNN [60], FlexNLP [61], Dataflow [62], and Opticalflow [63] do not feature any subaccelerators with a batch size greater than one. One OOB bug was found in gsm and one initialization bug was found in keypair.…”
Section: Appendix D Results (Extended)
Citation type: mentioning, confidence: 99%
“…Average runtimes result from dividing the time to detect all bugs by the number of bugs. † keypair [58], gsm [59], HLSCNN [60], FlexNLP [61], Dataflow [62], and Opticalflow [63] all time out for A-QED FC and do not contain any sub-accelerators with batch size greater than one. One OOB bug was detected in gsm and one initialization bug in keypair.…”
Citation type: mentioning, confidence: 99%
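As a trivial worked example of the averaging convention described in that excerpt (all numbers below are placeholders, not results from the cited work):

```python
# Hypothetical illustration of the convention: average runtime per bug =
# (time to detect all bugs) / (number of bugs). Values are made up.
time_to_detect_all_bugs_s = 120.0  # total wall-clock seconds (placeholder)
bugs_found = 4                     # e.g. OOB + initialization bugs (placeholder)

average_runtime_s = time_to_detect_all_bugs_s / bugs_found
print(f"average runtime per bug: {average_runtime_s:.1f} s")  # -> 30.0 s
```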
“…The current deep learning accelerators [3]-[7], [21] are usually designed to meet high-performance needs with a large number of processing elements and different dataflows. However, these state-of-the-art hardware accelerators have low hardware utilization while executing the time-domain separation model, whose four key operations are 1-D convolution, 1-D depthwise dilated convolution, 1-D 1×1 convolution, and 1-D transposed convolution.…”
Section: Deep Learning Accelerators
Citation type: mentioning, confidence: 99%
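For readers unfamiliar with the four operations named in that excerpt, the following is a minimal PyTorch sketch of each; the channel counts, kernel sizes, dilation, and stride are illustrative assumptions, not the configuration of the cited accelerators or of any particular separation model.

```python
import torch
import torch.nn as nn

# Illustrative input: batch of 1 signal, 64 feature channels, 1000 time steps.
x = torch.randn(1, 64, 1000)

# 1-D convolution (standard).
conv1d = nn.Conv1d(in_channels=64, out_channels=64, kernel_size=3, padding=1)

# 1-D depthwise dilated convolution: groups == channels makes it depthwise.
dw_dilated = nn.Conv1d(64, 64, kernel_size=3, dilation=2, padding=2, groups=64)

# 1-D 1x1 (pointwise) convolution: mixes channels, no temporal context.
pointwise = nn.Conv1d(64, 128, kernel_size=1)

# 1-D transposed convolution: upsamples the time axis (stride 2 here).
transposed = nn.ConvTranspose1d(128, 1, kernel_size=4, stride=2, padding=1)

y = transposed(pointwise(dw_dilated(conv1d(x))))
print(y.shape)  # time dimension roughly doubled: torch.Size([1, 1, 2000])
```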
“…To achieve highly robust KWS, one method is to include all possible SNR levels and noise types in the AI model training, which increases the model size and is challenging for ultra-low-power applications. To overcome the noise problem in KWS, Wang [2] employed a simpler voice-feature extraction method called divisive energy normalization (DN) and developed a normalized acoustic feature extractor chip (NAFE) for analog signal processing. The frontend of NAFE is composed of a low-noise amplifier (LNA), a bandpass filter (BPF), a half-wave rectifier (HWR), and an integrate-and-fire (IAF) encoder, and extracts the pre-normalized features (preNF).…”
Section: AI Chips for Voice Applications
Citation type: mentioning, confidence: 99%
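As a rough illustration of the divisive normalization idea mentioned in that excerpt, here is a minimal numpy sketch. The NAFE chip performs DN in analog circuitry, and its exact computation is not given in the excerpt, so the smoothing window and the constant sigma below are assumptions for illustration only.

```python
import numpy as np

def divisive_normalize(features, sigma=1.0, window=5):
    """Generic divisive normalization: each channel is divided by a local
    energy estimate plus a constant. Illustrative sketch only, not the
    analog DN circuit of the NAFE chip."""
    features = np.asarray(features, dtype=float)
    # Local energy: moving average of feature magnitudes across channels.
    kernel = np.ones(window) / window
    local_energy = np.convolve(np.abs(features), kernel, mode="same")
    return features / (sigma + local_energy)

# Example: a pre-normalized feature frame (preNF) with one dominant channel;
# normalization suppresses it relative to its neighbors.
pre_nf = np.array([0.1, 0.2, 5.0, 0.3, 0.1, 0.2])
print(divisive_normalize(pre_nf))
```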