2019
DOI: 10.48550/arxiv.1905.08743
Preprint

Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems

Cited by 31 publications (25 citation statements)
References 31 publications
“…As noted by Razumovskaia et al (2021), there are two main designs for ToD systems: modular ToD systems and end-to-end ToD systems. In modular ToD systems, dialogue state tracking is an important component that parses the user's goal from the dialogue utterances (Wu et al, 2019b; Heck et al, 2020; Hosseini-Asl et al, 2020; Lin et al, 2020b). Among these popular models, Transformer-DST (Zeng and Nie, 2020) is one of the state-of-the-art models on both MultiWoZ 2.0 and MultiWoZ 2.1.…”
Section: Multilingual ToD System
confidence: 99%
“…The performance of NBT is much better than previous DST methods. Inspired by this seminal work, many neural DST approaches based on long short-term memory (LSTM) networks [34, 40–42, 59] and bidirectional gated recurrent unit (BiGRU) networks [22, 31, 35, 39, 55, 57] have been proposed for further improvements. These methods define DST as either a classification problem or a generation problem.…”
Section: Related Work
confidence: 99%
“…According to [11], about 32% of the state annotations have been corrected in MultiWOZ 2.1. Since hospital and police are not included in the validation set and test set, following previous works [25, 27, 43, 55, 60], we use only the remaining 5 domains in the experiments. The resulting datasets contain 17 distinct slots and 30 domain-slot pairs.…”
Section: Slot Value Matching
confidence: 99%
“…Traditional DST methods can be divided into two major types: open-vocabulary (Le, Socher, and Hoi 2020; Goel, Paul, and Hakkani-Tür 2019; Wu et al 2019) and predefined-ontology (Lee, Lee, and Kim 2019; Shan et al 2020). The former generates slot values at each turn with a generative model, such as the decoder stack of an RNN or Transformer, while the latter predefines the dialogue ontology and reduces DST to a classification problem.…”
Section: Related Work
confidence: 99%
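The classification-versus-generation split described in the excerpt above can be sketched in a few lines. Everything here is hypothetical: the slot names, ontology entries, scores, and helper functions are illustrative, not taken from any cited system.

```python
# Hypothetical sketch contrasting the two DST formulations:
# predefined-ontology (classification) vs open-vocabulary (generation).
from typing import Callable, Dict, List

# Predefined-ontology DST: each slot chooses a value from a fixed list.
ONTOLOGY: Dict[str, List[str]] = {
    "hotel-area": ["north", "south", "east", "west", "centre"],
    "hotel-price": ["cheap", "moderate", "expensive"],
}

def classify_slot(slot: str, scores: List[float]) -> str:
    """Classification-style DST: argmax over the slot's candidate values."""
    candidates = ONTOLOGY[slot]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best]

def generate_slot(decode_step: Callable[[List[str]], str]) -> str:
    """Open-vocabulary DST: build the value token by token until <eos>."""
    tokens: List[str] = []
    while True:
        tok = decode_step(tokens)
        if tok == "<eos>":
            break
        tokens.append(tok)
    return " ".join(tokens)

# Classification picks from the ontology:
print(classify_slot("hotel-area", [0.1, 0.2, 0.05, 0.05, 0.6]))  # → "centre"

# Generation can produce values outside any ontology; here a stub
# decoder emits a fixed token sequence:
script = iter(["el", "greco", "<eos>"])
print(generate_slot(lambda prev: next(script)))  # → "el greco"
```

The trade-off the excerpt alludes to: classification is simple but cannot handle values missing from the ontology, while generation covers unseen values at the cost of a harder decoding problem.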
“…NADST: Generates the dialogue state at each turn with a non-autoregressive decoder (Le, Socher, and Hoi 2020). TRADE: Uses an encoder-decoder model to generate slot-value labels (Wu et al 2019).…”
Section: Experiments: Baseline Models
confidence: 99%
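TRADE-style state generators combine a vocabulary distribution with a soft copy over dialogue-history tokens when emitting each value word. A minimal, hypothetical sketch of that mixing step follows; the tokens, probabilities, and function name are illustrative only, not the cited implementation.

```python
# Sketch of copy-augmented generation: the final word distribution is a
# mixture of the decoder's vocabulary distribution and the attention
# weights over dialogue-history tokens, gated by p_gen.
from collections import defaultdict
from typing import Dict, List

def mix_distributions(p_vocab: Dict[str, float],
                      history_tokens: List[str],
                      attn: List[float],
                      p_gen: float) -> Dict[str, float]:
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on w."""
    p_final: Dict[str, float] = defaultdict(float)
    for w, p in p_vocab.items():
        p_final[w] += p_gen * p
    for tok, a in zip(history_tokens, attn):
        p_final[tok] += (1.0 - p_gen) * a
    return dict(p_final)

p = mix_distributions(
    {"cheap": 0.5, "north": 0.5},      # decoder vocabulary distribution
    ["i", "want", "cheap", "food"],    # dialogue-history tokens
    [0.1, 0.1, 0.7, 0.1],              # attention weights over history
    p_gen=0.6,
)
# "cheap" receives mass from both paths: 0.6*0.5 + 0.4*0.7 = 0.58
```

Because history tokens contribute directly to the output distribution, such a generator can emit values it never saw in training, which is why these models are grouped under the open-vocabulary type in the excerpts above.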