Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue 2019
DOI: 10.18653/v1/w19-5932
Dialog State Tracking: A Neural Reading Comprehension Approach

Abstract: Dialog state tracking is used to estimate the current belief state of a dialog given all the preceding conversation. Machine reading comprehension, on the other hand, focuses on building systems that read passages of text and answer questions that require some understanding of those passages. We formulate dialog state tracking as a reading comprehension task: after reading the conversational context, the model answers the question "what is the state of the current dialog?" In contrast to traditional state tracking methods where th…
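The formulation described in the abstract can be made concrete with a small sketch: each slot becomes a natural-language question and an extractive question-answering model reads the dialogue history as the passage. The question template, slot list, and generic SQuAD-tuned checkpoint below are illustrative assumptions, not the paper's actual model or prompts.

```python
# Minimal sketch of dialog state tracking framed as reading comprehension.
# The checkpoint and question wording are placeholders, not the paper's setup.
from transformers import pipeline

# Any extractive QA checkpoint serves for illustration; the paper trains its own model.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

dialog_history = (
    "User: I need a cheap restaurant in the centre of town. "
    "System: Sure, what type of food would you like? "
    "User: Italian, please."
)

# One question per slot; the belief state is the collection of extracted answers.
slots = ["price range", "area", "food"]
belief_state = {}
for slot in slots:
    question = f"What {slot} is the user asking for?"
    result = qa(question=question, context=dialog_history)
    belief_state[slot] = result["answer"]

print(belief_state)  # e.g. {'price range': 'cheap', 'area': 'centre of town', 'food': 'Italian'}
```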

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
81
0

Year Published

2019
2019
2022
2022

Publication Types

Select...
5
2
1

Relationship

0
8

Authors

Journals

Cited by 133 publications (94 citation statements)
References 23 publications (39 reference statements)
“…However, this approach falls short when facing previously unseen values at run time. Besides, there are also some works formulating the DST task as a reading comprehension task [28,29].…”
Section: Dialog State Tracking
confidence: 99%
“…The former type does not assume a fixed vocabulary for the slot, which means the model cannot predict the values by classification. For free-form slot, one could generate the value directly [4,31] or predict the span of the value in the utterance [28,32]. In generative methods, they often use a decoder to generate the value of a slot word by word from a large vocabulary.…”
Section: Dialog State Tracking
confidence: 99%
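The statement above distinguishes span-based and generative treatments of free-form slots. Below is a rough sketch of the generative route, where a sequence-to-sequence decoder emits the value word by word; the prompt format and the untuned t5-small checkpoint are assumptions for illustration only, and a model actually fine-tuned for DST would be needed to produce sensible values.

```python
# Sketch of generative value decoding for a free-form slot (illustrative only;
# t5-small is not trained for DST, and the prompt format is an assumption).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

dialog_history = (
    "User: I need a cheap restaurant in the centre. "
    "System: What type of food? User: Italian, please."
)
slot = "food"

# The encoder reads the dialogue plus the slot name; the decoder generates the
# value token by token from the output vocabulary instead of pointing to a span.
inputs = tok(f"dialogue: {dialog_history} slot: {slot}", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=8)
print(tok.decode(output_ids[0], skip_special_tokens=True))
```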
“…NBT-CNN: O(mn); MD-DST (Rastogi et al., 2017): O(n); GLAD: O(mn); StateNet PSI (Ren et al., 2018): O(n); TRADE (Wu et al., 2019): O(n); HyST (Goel et al., 2019): O(n); DSTRead (Gao et al., 2019): O(n). On the multi-domain dialogue state tracking dataset MultiWoZ, a representation of dialogue state consisting of a hierarchical structure of domain, slot, and value is proposed. This is a more practical scenario since dialogues often include multiple domains simultaneously.…”
Section: DST Models ITC
confidence: 99%
“…However, TRADE does not report its performance on the WoZ2.0 dataset, which does not have the name slot. DSTRead (Gao et al., 2019) formulates the dialogue state tracking task as a reading comprehension problem by asking slot-specific questions to the BERT model and finding the answer span in the dialogue history for each of the pre-defined combined slots. Thus its inference time complexity is still O(n).…”
Section: Related Work
confidence: 99%
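To make the per-slot cost described in the quote above explicit, here is a rough sketch in the same spirit: a slot-specific question is answered by a BERT question-answering head over the dialogue history, and the highest-scoring start/end positions give the value span. The checkpoint and question wording are assumptions, not DSTRead's actual configuration; the O(n) complexity comes from running one such pass per slot.

```python
# Per-slot span extraction with a BERT QA head (sketch; checkpoint and question
# wording are assumptions, not DSTRead's trained model).
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

history = "User: Book me a taxi to the train station, leaving at 17:15."
question = "What time does the user want the taxi to leave?"  # one question per slot

enc = tok(question, history, return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

# The highest-scoring start/end token positions delimit the predicted value span.
start = int(out.start_logits.argmax())
end = int(out.end_logits.argmax())
value = tok.decode(enc["input_ids"][0][start : end + 1])
print(value)  # expected "17:15"; each slot costs one forward pass, hence O(n) in the number of slots
```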
“…The models used for comparison include NBT-DNN, NBT-CNN (Mrksic et al., 2017), Scalable (Rastogi et al., 2017), MemN2N (Liu and Perez, 2017), PtrNet (Xu and Hu, 2018), LargeScale (Ramadan et al., 2018), GLAD (Ramadan et al., 2018), GCE (Nouri and Hosseini-Asl, 2018), StateNetPSI (Ren et al., 2018), SUMBT, HyST (Goel et al., 2019), DSTRead+JST (Gao et al., 2019), TRADE (Wu et al., 2019), COMER (Ren et al., 2019), DSTQA (Zhou and Small, 2019), MERET (Huang et al., 2020) and SST (Chen et al., 2020).…”
Section: Evaluation Metrics and Compared Models
confidence: 99%