2016
DOI: 10.1016/j.specom.2015.12.003

Maxout neurons for deep convolutional and LSTM neural networks in speech recognition

Cited by 83 publications (38 citation statements)
References 10 publications
“…These improvements over the traditional CNN and RNN can help to learn more precise time-domain or frequency-domain information. For example, in [21], the convolutional maxout neural networks (CMNN) and recurrent maxout neural networks (RMNN) use local spectral properties within frames and long-term dependencies among frames. In our experiments, we compare the results achieved by our models against these already strong baselines.…”
Section: Experimental Results For Different Models
Citation type: mentioning; confidence: 99%
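To make the excerpt concrete for readers who have not seen maxout neurons (the building block of the cited CMNN and RMNN), here is a minimal NumPy sketch of the maxout activation: each unit computes k affine pieces of its input and keeps the maximum. The shapes and the toy usage are illustrative assumptions, not the paper's layer configuration.

```python
import numpy as np

def maxout(x, W, b):
    """Maxout activation (Goodfellow et al.): each output unit keeps the
    maximum over k affine pieces. Shapes are an illustrative choice, not
    the cited paper's exact CMNN/RMNN configuration.
    x: (d_in,), W: (k, d_in, d_out), b: (k, d_out)."""
    z = np.einsum('d,kdo->ko', x, W) + b  # all k affine pieces, shape (k, d_out)
    return z.max(axis=0)                  # elementwise max over pieces, (d_out,)

# Toy usage: 4 inputs, 3 maxout units, k = 2 pieces per unit.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 4, 3))
b = rng.standard_normal((2, 3))
x = rng.standard_normal(4)
print(maxout(x, W, b))
```

Because the max over affine pieces is piecewise linear, a maxout unit learns its own (convex) activation shape rather than fixing one in advance, which is the property the cited paper exploits in its convolutional and recurrent layers.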
“…The solutions we propose include local-window BLSTM, gated recurrent units, and residual architecture-based models. This work expands on our previous work [18][19][20][21]. Experiments are carried out on the Babel benchmark datasets for low-resource keyword search evaluations.…”
Section: Introduction
Citation type: mentioning; confidence: 94%
“…The long short-term memory (LSTM) neural network has memory cells, which can extract deep features from a small number of samples. It is suitable for processing time series and has been applied effectively in many fields [18,19]. However, the parameters of an LSTM model are usually set by experience, so the choice is highly subjective and affects the model's fitting ability.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
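As a sketch of the memory-cell mechanism this excerpt credits for handling time series, the step below implements one standard LSTM update: three sigmoid gates around an additively updated cell state. The parameter dictionary p and its key names are hypothetical placeholders; a real system would use a framework's built-in LSTM layer.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h, c, p):
    """One standard LSTM time step. The memory cell c is updated
    additively under input/forget gates, which lets the network carry
    information across long sequences. The dict p of weights (Wi, Wf,
    Wo, Wg) and biases (bi, bf, bo, bg) is a hypothetical placeholder."""
    z = np.concatenate([x, h])          # stack input and previous hidden state
    i = sigmoid(p['Wi'] @ z + p['bi'])  # input gate
    f = sigmoid(p['Wf'] @ z + p['bf'])  # forget gate
    o = sigmoid(p['Wo'] @ z + p['bo'])  # output gate
    g = np.tanh(p['Wg'] @ z + p['bg'])  # candidate cell update
    c_new = f * c + i * g               # gated, additive memory-cell update
    h_new = o * np.tanh(c_new)          # hidden state exposed downstream
    return h_new, c_new

# Toy usage: input dim 3, hidden dim 2.
rng = np.random.default_rng(0)
p = {k: rng.standard_normal((2, 5)) for k in ('Wi', 'Wf', 'Wo', 'Wg')}
p.update({k: np.zeros(2) for k in ('bi', 'bf', 'bo', 'bg')})
h, c = lstm_step(rng.standard_normal(3), np.zeros(2), np.zeros(2), p)
```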
“…Taking the classical BP neural network as an example, obtaining high-precision features becomes difficult when there are few layers. If the number of layers is excessive, the gradient may vanish, and convergence to a local optimum is another defect that is difficult to overcome [9].…”
Section: Introduction
Citation type: mentioning; confidence: 99%
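A back-of-the-envelope illustration of the vanishing-gradient defect the excerpt attributes to deep BP networks, under the stated (illustrative) assumption of sigmoid activations and weight norms near 1:

```python
# The logistic sigmoid's derivative is bounded by 0.25, so with weight
# norms near 1 (an illustrative assumption) the backpropagated gradient
# through L sigmoid layers shrinks by roughly a factor of 0.25**L.
for L in (2, 5, 10, 20):
    print(f"{L:2d} layers: gradient scale <= {0.25 ** L:.2e}")
# At 20 layers the bound is ~9.1e-13: essentially no learning signal
# reaches the early layers, matching the defect described in [9].
```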