Interspeech 2013
DOI: 10.21437/interspeech.2013-596
Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding

Cited by 310 publications (156 citation statements)
References 16 publications
“…RNNs with different architectures have been explored in many studies, considering their promising performance in sequence modeling elsewhere. In 2013, [Mesnil et al 2013] compared recurrent neural networks, including Elman-type and Jordan-type networks and bi-directional Jordan-type RNNs. Two years later, [Mesnil et al 2015] implemented Elman-type and Jordan-type networks and also their variations.…”
Section: 2.1 (mentioning; confidence: 99%)
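The architectural contrast in the quoted passage is what gets fed back at each time step: an Elman-type RNN feeds back its previous hidden state, while a Jordan-type RNN feeds back its previous output. The NumPy sketch below illustrates only that difference; the layer sizes, initialization, and nonlinearities are illustrative assumptions, not details taken from Mesnil et al. (2013).

```python
# Minimal sketch (not the paper's implementation) contrasting Elman-type and
# Jordan-type recurrences. Shapes and scales are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 50, 32, 10, 6            # illustrative sizes
Wx = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
Wh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden (Elman feedback)
Wy = rng.normal(scale=0.1, size=(n_hid, n_out))  # output -> hidden (Jordan feedback)
Wo = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=(T, n_in))                   # one toy input sequence

# Elman-type: the previous hidden state is fed back into the next step.
h = np.zeros(n_hid)
for t in range(T):
    h = np.tanh(Wx @ x[t] + Wh @ h)
    y_elman = softmax(Wo @ h)                    # per-step output distribution

# Jordan-type: the previous *output* distribution is fed back instead.
y = np.zeros(n_out)
for t in range(T):
    h = np.tanh(Wx @ x[t] + Wy @ y)
    y = softmax(Wo @ h)

print(y_elman.shape, y.shape)                    # (10,) (10,)
```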
“…Deep belief network (DBN)
Paper | Problem addressed | Methods
[Mesnil et al 2013] | To explore RNN | RNN
[Yao et al 2013] | To explore RNN-LM | RNN-LM
[Yao et al 2014b] | Label dependencies, label bias problem | RNN, CRF
[Yao et al 2014a] | Gradient diminishing and exploding problem, label dependencies, label bias problem | LSTM, regression model, deep learning
[Liu and Lane 2015] | Label dependencies | RNN, sampling approach
[Mesnil et al 2015] | To explore RNN | RNN
| Vanishing and exploding gradient | RNN, external memory
[Kurata et al 2016] | Label dependencies | LSTM, encoder-labeler
| To explore past and future information | Bi-directional RNN, ranking loss function
[Vu 2016] | To explore CNN | CNN
[Zhu and Yu 2017] | To explore attention mechanism | Bi-directional LSTM, LSTM, encoder-decoder, focus mechanism
[Dai et al 2018] | Unseen slots | CRF
[Gong et al 2019] | To explore MTL | MTL, segment tagging, NER
[Louvan and Magnini 2018] | To explore MTL | MTL, NER, bi-LSTM, CRF
[Shin et al 2018] | To better label common words | Encoder-decoder attention, delexicalised sentence generation
[Wang et al 2018a] | Imbalanced data | DNN, reinforcement learning
[Zhao and Feng 2018] | OOV | GRU, attention, pointer network
[Gong et al 2019] | To explore MTL | MTL, segment tagging, NER
| To extend original SLU to H2H conversations | Bi-LSTM, different knowledge sources
[Shen et al 2019b…”
Section: Paper (mentioning; confidence: 99%)
“…The majority of works on task-oriented semantic parsing focused on non-compositional user queries (Mesnil et al, 2013; Liu and Lane, 2016; Goo et al, 2018; …), which turns the parsing task into a combination of intent detection and slot filling. Recently, Gupta et al (2018) …”
[Figure 3: The architecture of X2Parser.]
Section: Task-oriented Semantic Parsing (mentioning; confidence: 99%)
“…Because slot filling requires token-level annotations of the semantic frame, while these methods can only provide sentence-level labels. Spoken language understanding, including the slot filling and intent detection tasks, has drawn a lot of research attention recently (Yao et al. 2013, 2014; Mesnil et al. 2013, 2015; Chen et al. 2016a,b; Goo et al. 2018; Haihong et al. 2019; Liu et al. 2019). In this paper, we only focus on the slot filling task.…”
Section: Related Work (mentioning; confidence: 99%)
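To make the distinction drawn in this excerpt concrete, the snippet below shows an ATIS-style utterance of the kind used in the cited work: slot filling assigns a BIO tag to every token, whereas intent detection assigns a single label to the whole utterance. The example utterance, slot names, and intent label are made up here for illustration and are not drawn from the paper's data.

```python
# Illustrative ATIS-style example (invented for illustration, not from the paper)
# showing why slot filling needs token-level labels while intent detection is a
# single sentence-level label.
tokens = ["show", "flights", "from", "boston", "to", "new", "york", "today"]
slots  = ["O", "O", "O", "B-fromloc.city_name", "O",
          "B-toloc.city_name", "I-toloc.city_name", "B-depart_date.today_relative"]
intent = "atis_flight"   # one label for the whole utterance

for tok, tag in zip(tokens, slots):
    print(f"{tok:10s} {tag}")
print("intent:", intent)
```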