2014 IEEE Spoken Language Technology Workshop (SLT)
DOI: 10.1109/slt.2014.7078572
Spoken language understanding using long short-term memory neural networks

Cited by 298 publications (199 citation statements)
References 25 publications
“…RNN also produces a sequence of locally normalized output distributions, one for each word position. Thus, it can suffer from a label bias problem [6]. LSTM RNN has been shown to perform more effectively than the standard RNN at finding and exploiting long-range dependencies in the data [6].…”
Section: RNN and LSTM RNN Models (mentioning)
confidence: 99%
“…Thus, it can suffer from a label bias problem [6]. LSTM RNN has been shown to perform more effectively than the standard RNN at finding and exploiting long-range dependencies in the data [6]. One difference from the standard RNN is that the LSTM uses a memory cell with gate units, which are linear activation functions.…”
Section: RNN and LSTM RNN Models (mentioning)
confidence: 99%
“…Different researchers use slightly different LSTM variants (Graves, 2013; Yao et al., 2014; Jozefowicz et al., 2015). We implemented the version of LSTM described by the following set of equations:…”
Section: LSTM (mentioning)
confidence: 99%
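The equations this excerpt refers to are truncated ("…"). Purely for orientation, the widely used LSTM formulation from Graves (2013), shown here without peephole connections, is sketched below; the citing authors' exact variant may differ.

```latex
\begin{aligned}
i_t &= \sigma\!\left(W_{xi} x_t + W_{hi} h_{t-1} + b_i\right) \\
f_t &= \sigma\!\left(W_{xf} x_t + W_{hf} h_{t-1} + b_f\right) \\
\tilde{c}_t &= \tanh\!\left(W_{xc} x_t + W_{hc} h_{t-1} + b_c\right) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
o_t &= \sigma\!\left(W_{xo} x_t + W_{ho} h_{t-1} + b_o\right) \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```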