2017 · Preprint
DOI: 10.48550/arxiv.1703.05390
Convolutional Recurrent Neural Networks for Small-Footprint Keyword Spotting

Cited by 12 publications (14 citation statements, published 2018–2024). References: 0 publications.
“…Since then, multiple off-the-shelf CNN backbones have been widely applied to KWS tasks, such as deep residual network (ResNet) [2], separable CNN [3,4,5,6], temporal CNN [7] and SincNet [8]. There are also other efforts to boost performance of CNN models for KWS by combining other deep learning models, such as recurrent neural network (RNN) [9], bidirectional long short-term memory (BiLSTM) [10] and streaming layers [11]. However, although the off-the-shelf CNN backbones that existing KWS studies usually relied on have been demonstrated to be effective in image classification tasks, they are not specifically designed for KWS tasks and might not be the perfect architecture for KWS tasks.…”
Section: Introduction (mentioning)
confidence: 99%
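The statement above cites work that combines convolutional front-ends with recurrent layers for keyword spotting. Below is a minimal, illustrative PyTorch sketch of such a convolutional recurrent model; the class name, layer sizes, and hyperparameters are assumptions chosen for clarity and do not reproduce the configuration of the cited preprint or of references [9]–[11].

```python
import torch
import torch.nn as nn

class CRNNKeywordSpotter(nn.Module):
    """Minimal convolutional recurrent model for keyword spotting.

    Layer sizes are illustrative only; they are not taken from the cited works.
    Input: log-mel spectrogram of shape (batch, 1, time, n_mels).
    """

    def __init__(self, n_mels=40, n_keywords=12, hidden_size=64):
        super().__init__()
        # Convolutional front-end extracts local time-frequency features.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),  # pool along frequency only
        )
        # Recurrent layer models the longer temporal context of the keyword.
        self.gru = nn.GRU(input_size=32 * (n_mels // 2),
                          hidden_size=hidden_size,
                          batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_keywords)

    def forward(self, x):
        # x: (batch, 1, time, n_mels)
        feats = self.conv(x)                                     # (batch, 32, time, n_mels // 2)
        b, c, t, f = feats.shape
        feats = feats.permute(0, 2, 1, 3).reshape(b, t, c * f)   # (batch, time, features)
        out, _ = self.gru(feats)
        return self.classifier(out[:, -1, :])                    # logits over keyword classes


if __name__ == "__main__":
    model = CRNNKeywordSpotter()
    dummy = torch.randn(2, 1, 101, 40)  # ~1 s of audio at a 10 ms hop, 40 mel bands
    print(model(dummy).shape)           # torch.Size([2, 12])
```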
“…The Adam optimization is an efficient stochastic optimization that has been suggested and it combines the advantages of two popular methods: AdaGrad, which works well with sparse gradients, and RMSProp, which has an excellent performance in on-line and non-stationary settings. Recent works by Zhang et al (2019), Peng et al (2018), Bansal et al (2016) and Arik et al (2017) have presented and proven that Adam optimizer provides better performance than others in terms of both theoretical and practical perspectives. Therefore in this paper, we use Adam as the optimizer in our neural network simulations.…”
Section: Feedforward Neural Network (mentioning)
confidence: 99%
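For reference, the combination the statement describes corresponds to Adam's standard update rule (Kingma & Ba, 2015), which pairs a momentum-style first-moment estimate with an RMSProp-style second-moment estimate. The notation below uses the usual hyperparameters α, β₁, β₂, ε and is reproduced here only to make the quoted description concrete.

```latex
% Standard Adam update for parameters \theta with gradient g_t at step t.
\begin{align*}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t
      && \text{(first-moment estimate, momentum-style accumulation)} \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2
      && \text{(second-moment estimate, as in RMSProp)} \\
\hat{m}_t &= \frac{m_t}{1 - \beta_1^{\,t}}, \qquad
\hat{v}_t = \frac{v_t}{1 - \beta_2^{\,t}}
      && \text{(bias correction)} \\
\theta_t &= \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
      && \text{(parameter update)}
\end{align*}
```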
“…In this paper, we take motivation from [1] to design CNNs for KWS use case. We propose a CNN based approach since CNNs have shown better performance than DNNs and also has a smaller model size.…”
Section: Introduction (mentioning)
confidence: 99%
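The model-size claim in this last statement can be made concrete with a rough parameter count: one fully connected (DNN) layer over an entire spectrogram versus one convolutional layer with shared weights. The input dimensions and layer widths below are illustrative assumptions, not values from the cited works.

```python
# Rough parameter-count comparison illustrating why a convolutional layer is
# much smaller than a fully connected (DNN) layer over the same spectrogram.
# All sizes are illustrative only.

time_frames, mel_bands = 101, 40     # ~1 s utterance, 40 log-mel features
hidden_units = 128                   # width of one DNN hidden layer
n_filters, kh, kw = 64, 20, 8        # one conv layer: 64 filters of size 20x8

# Fully connected layer: every input value connects to every hidden unit.
dnn_params = time_frames * mel_bands * hidden_units + hidden_units

# Convolutional layer: weights are shared across time and frequency positions.
cnn_params = n_filters * (kh * kw) + n_filters

print(f"DNN layer parameters:  {dnn_params:,}")   # 517,248
print(f"Conv layer parameters: {cnn_params:,}")   # 10,304
```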