2021
DOI: 10.1162/tacl_a_00352

Conversation Graph: Data Augmentation, Training, and Evaluation for Non-Deterministic Dialogue Management

Abstract: Task-oriented dialogue systems typically rely on large amounts of high-quality training data or require complex handcrafted rules. However, existing datasets are often limited in size considering the complexity of the dialogues. Additionally, conventional training signal inference is not suitable for non-deterministic agent behavior, namely, considering multiple actions as valid in identical dialogue states. We propose the Conversation Graph (ConvGraph), a graph-based representation of dialogues that can b…
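The abstract's key idea — several actions can be valid in one and the same dialogue state — can be illustrated with a minimal sketch. All class and state/action names below are hypothetical, not the paper's actual data structures:

```python
from collections import defaultdict

class ConvGraph:
    """Toy conversation graph: nodes are dialogue states, edges are actions.

    Multiple outgoing edges from a single state capture non-deterministic
    agent behavior: more than one action is treated as valid there.
    """

    def __init__(self):
        # state -> set of (action, next_state) pairs observed in the data
        self.edges = defaultdict(set)

    def add_turn(self, state, action, next_state):
        self.edges[state].add((action, next_state))

    def valid_actions(self, state):
        return sorted({action for action, _ in self.edges[state]})

graph = ConvGraph()
# Two observed dialogues share the same state but take different actions.
graph.add_turn("ask_cuisine", "inform_italian", "offer_restaurant")
graph.add_turn("ask_cuisine", "request_area", "ask_area")

print(graph.valid_actions("ask_cuisine"))
# → ['inform_italian', 'request_area']
```

Under a conventional single-label training signal, only one of these actions would count as correct; the graph view keeps both as valid targets.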

Cited by 12 publications (14 citation statements)
References 24 publications
“…The extra training data is coherent logically but creates more variety in surface formats, thus provides a significant performance boost for end-to-end response generation. The proposed Multi-Response Data Augmentation (MRDA) beats recent work (Gritta et al, 2021) using Most Frequent Sampling in a single-turn setting without annotated states.…”
Section: Dialogue
confidence: 88%
“…Then we enable this dictionary to create additional data during training, which allows a language model to learn a balanced distribution. In the following sections, we will briefly introduce the task of single-turn dialogue response generation, the baseline augmentation approach called Most Frequent Sampling (Gritta et al, 2021), and the proposed Multi-Response Data Augmentation.…”
Section: Data Augmentation
confidence: 99%
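The dictionary-based augmentation this statement describes — mapping each context to every observed response and sampling among them during training — can be sketched as follows. The corpus and function names are hypothetical, assumed for illustration only:

```python
import random
from collections import defaultdict

# Hypothetical single-turn corpus of (context, response) pairs in which
# the same context appears with several valid responses.
corpus = [
    ("book a table", "For how many people?"),
    ("book a table", "Which day would you like?"),
    ("book a table", "Sure, what time?"),
    ("find a hotel", "Which area of town?"),
]

# Dictionary mapping each context to all of its observed responses.
responses = defaultdict(list)
for context, response in corpus:
    responses[context].append(response)

def augmented_example(context, rng=random):
    """Sample one valid response, so each epoch can pair the same
    context with a different target and balance the distribution."""
    return context, rng.choice(responses[context])

print(augmented_example("book a table"))
```

Sampling at training time, rather than duplicating pairs in the dataset, keeps the corpus size fixed while exposing the model to every valid response over multiple epochs.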
“…CoCo (Li et al, 2021) trains a conditional user-utterance generation model, then generates synthetic turns by modifying belief states using a rule-based system and conditioning the model on the modified belief state. Gritta et al (2021) create a working graph of TOD datasets where each edge is a dialogue act and create synthetic dialogues by traversing alternative paths; however, their framework requires user acts to work with. Critically, none of the above techniques exploit the belief state annotations of TODs within an n-shot scenario.…”
Section: Introduction
confidence: 99%
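The traversal idea attributed to Gritta et al. (2021) above — edges are dialogue acts, and alternative paths through the graph yield synthetic dialogues — can be sketched with a toy act graph. The graph contents and function below are illustrative assumptions, not the authors' actual implementation:

```python
# Toy dialogue-act graph: each edge label is a dialogue act, and every
# path from START to END is one dialogue. Traversing alternative paths
# produces act sequences never seen verbatim in the original data.
graph = {
    "START": ["greet"],
    "greet": ["request_cuisine", "request_area"],
    "request_cuisine": ["inform_cuisine"],
    "request_area": ["inform_area"],
    "inform_cuisine": ["offer"],
    "inform_area": ["offer"],
    "offer": ["END"],
}

def all_dialogues(node="START", path=()):
    """Depth-first enumeration of every act sequence from START to END."""
    if node == "END":
        yield path
        return
    for act in graph[node]:
        suffix = () if act == "END" else (act,)
        yield from all_dialogues(act, path + suffix)

for dialogue in all_dialogues():
    print(" -> ".join(dialogue))
```

As the quoted statement notes, this style of augmentation presupposes that user acts are annotated, since the graph's edges are dialogue acts rather than raw utterances.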