2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6853573
Contextual domain classification in spoken language understanding systems using recurrent neural network

Abstract: In a multi-domain, multi-turn spoken language understanding session, information from the history often greatly reduces the ambiguity of the current turn. In this paper, we apply the recurrent neural network (RNN) to exploit contextual information for query domain classification. The Jordan-type RNN directly sends the vector of output distribution to the next query turn as additional input features to the convolutional neural network (CNN). We evaluate our approach against SVM with and without contextual features…
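The Jordan-type recurrence described in the abstract can be sketched as follows: the previous turn's predicted domain distribution is appended to the current turn's feature vector before classification. This is a simplified illustration only (a plain softmax classifier with random weights stands in for the paper's trained CNN; the domain count, feature size, and all values are hypothetical), not the authors' implementation.

```python
# Jordan-style contextual domain classification sketch: feed the
# previous turn's output distribution back in as extra input features.
import numpy as np

rng = np.random.default_rng(0)

N_DOMAINS = 3      # e.g. weather, music, calendar (hypothetical)
N_FEATURES = 10    # stand-in for bag-of-words utterance features

# Weights over the concatenation [utterance features ; previous output].
W = rng.normal(size=(N_FEATURES + N_DOMAINS, N_DOMAINS))

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_turn(features, prev_dist):
    """Predict a domain distribution for one turn, conditioned on the
    previous turn's output distribution (Jordan-type recurrence)."""
    x = np.concatenate([features, prev_dist])
    return softmax(x @ W)

# Run a 3-turn session, starting from a uniform prior over domains.
prev = np.full(N_DOMAINS, 1.0 / N_DOMAINS)
for _ in range(3):
    feats = rng.normal(size=N_FEATURES)  # stand-in for real features
    prev = classify_turn(feats, prev)
```

In the paper the feedback vector augments the CNN's input for the next query; here it simply extends a linear classifier's feature vector, which is enough to show the data flow.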

Cited by 86 publications (58 citation statements)
References 17 publications (20 reference statements)
“…To overcome error propagation and further improve understanding performance, contextual information has been shown to be useful (Bhargava et al., 2013; Xu and Sarikaya, 2014; Sun et al., 2016). Prior work incorporated the dialogue history into recurrent neural networks (RNNs) to improve domain classification, intent prediction, and slot filling (Xu and Sarikaya, 2014; Shi et al., 2015; Chen et al., 2016c). Recently, Zhang et al. (2018) demonstrated that modeling speaker role information can capture the notable variance in speaking habits during conversations and thereby benefit understanding.…”
Section: Introduction
confidence: 99%
“…Based on the average F1 score distribution over different domains, the paired t-test shows that semi-supervised RTSVM achieves a significant improvement from the unlabeled data (p-value = 0.035). In a multi-domain language understanding system [37], a user query is first classified into a domain. A domain-dependent slot tagger is then applied to extract slot labels from the query.…”
Section: Results on ATIS
confidence: 99%
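The two-stage pipeline the excerpt above describes — classify the query into a domain first, then run a domain-dependent slot tagger — can be sketched minimally. The keyword rules, domains, and slot labels below are hypothetical toy stand-ins for trained models, shown only to make the control flow concrete.

```python
# Two-stage multi-domain SLU sketch: domain classification, then a
# per-domain slot tagger (toy keyword/lookup rules, not real models).
DOMAIN_KEYWORDS = {
    "weather": {"weather", "rain", "forecast"},
    "music": {"play", "song", "album"},
}

SLOT_TAGGERS = {
    # Per-domain token -> slot-label lookups standing in for trained
    # sequence taggers; unknown tokens get the "O" (outside) label.
    "weather": {"seattle": "B-city", "tomorrow": "B-date"},
    "music": {"adele": "B-artist"},
}

def classify_domain(query):
    """Stage 1: pick the domain with the most keyword overlaps."""
    tokens = set(query.lower().split())
    scores = {d: len(tokens & kw) for d, kw in DOMAIN_KEYWORDS.items()}
    return max(scores, key=scores.get)

def tag_slots(query):
    """Stage 2: apply the slot tagger for the predicted domain."""
    domain = classify_domain(query)
    tagger = SLOT_TAGGERS[domain]
    return domain, [(t, tagger.get(t.lower(), "O")) for t in query.split()]

domain, slots = tag_slots("play adele")
# domain == "music"; slots == [("play", "O"), ("adele", "B-artist")]
```

Because stage 2 is conditioned on stage 1's output, a domain misclassification propagates into the slot labels — exactly the error-propagation concern the contextual approaches cited in this report aim to reduce.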
“…An intent is defined as the type of content the user is seeking. This task is part of the spoken language understanding problem (Li et al., 2009; Tur and De Mori, 2011; Kim et al., 2015c; Mesnil et al., 2015; Kim et al., 2015a; Xu and Sarikaya, 2014; Kim et al., 2015b; Kim et al., 2015d). The amount of training data we used ranges from 12k to 120k queries across different domains; the test data ranged from 2k to 20k.…”
Section: Methods
confidence: 99%