Findings of the Association for Computational Linguistics: ACL 2022
DOI: 10.18653/v1/2022.findings-acl.302
Dialogue Summaries as Dialogue States (DS2), Template-Guided Summarization for Few-shot Dialogue State Tracking

Abstract: Annotating task-oriented dialogues is notorious for the expensive and difficult data collection process. Few-shot dialogue state tracking (DST) is a realistic solution to this problem. In this paper, we hypothesize that dialogue summaries are essentially unstructured dialogue states; hence, we propose to reformulate dialogue state tracking as a dialogue summarization problem. To elaborate, we train a text-to-text language model with synthetic template-based dialogue summaries, generated by a set of rules. Then, …
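The rule-based template step described in the abstract can be sketched roughly as follows. This is a minimal illustration only: the slot names, template wording, and example state are assumptions for the sketch, not the paper's actual MultiWOZ templates or rules.

# Minimal sketch of rule-based template filling for dialogue states.
# Slot schema, templates, and the example state are illustrative assumptions.

def state_to_summary(domain: str, state: dict) -> str:
    """Convert a flat slot-value dialogue state into a one-sentence summary."""
    templates = {
        "restaurant": ("The user is looking for a {food} restaurant "
                       "in the {area} priced {pricerange}."),
        "hotel": "The user wants a {stars}-star hotel in the {area}.",
    }
    # Keep only slots that actually carry a value.
    filled = {k: v for k, v in state.items() if v not in (None, "", "none")}
    try:
        return templates[domain].format(**filled)
    except KeyError:
        # Fall back to a generic enumeration when the domain or a slot is missing.
        slots = ", ".join(f"{k} is {v}" for k, v in filled.items())
        return f"The user is looking for a {domain} where {slots}."

print(state_to_summary("restaurant",
                       {"food": "italian", "area": "centre", "pricerange": "cheap"}))

A text-to-text model trained on such (dialogue, summary) pairs can then be decoded at test time and its summary parsed back into slot-value pairs, which is what makes the summary act as an unstructured dialogue state.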

Cited by 10 publications (8 citation statements)
References 32 publications (56 reference statements)
“…To address zero-shot DST in unseen domains, previous cross-domain transfer strategies based on small models typically leverage extra dialogue corpora in similar domains (Lin et al., 2021b; Su et al., 2021) or redefine DST in terms of other types of tasks, such as question answering (Lin et al., 2021c) or summarization (Shin et al., 2022), to find appropriate additional training data. Despite these efforts, their overall zero-shot performance remains relatively low.…”
Section: Dialogue State Tracking (mentioning)
Confidence: 99%
“…Another set of approaches aims to improve zero-shot performance by exploiting external knowledge and datasets from other natural language tasks before fine-tuning a model for DST. For instance, Gao et al. (2020), Li et al. (2021), and Lin et al. (2021a) pre-train models on reading comprehension data, Shin et al. (2022) reformulate DST as a dialogue summarization task with external annotated data, and Hudeček et al. (2021) use semantic analysis and named entity recognition to identify slots. In contrast, our approach does not require any extra datasets or training efforts.…”
Section: Related Work (mentioning)
Confidence: 99%
“…DS2 (Shin et al., 2022) treats DST as a dialogue summarization task and fine-tunes T5-large and BART models with synthetic summary templates. Hu et al. (2022) reformulate DST as a text-to-SQL task, transforming relevant in-context examples into SQL queries and prompting a Codex model without any fine-tuning.…”
Section: Comparison Baselines (mentioning)
Confidence: 99%
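The fine-tuning setup this citation describes (a seq2seq model trained to map dialogue history to a template summary) can be sketched roughly as below, assuming Hugging Face Transformers. The model size, hyperparameters, and the example pair are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of seq2seq fine-tuning on a (dialogue -> template summary) pair.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # the paper uses T5-large/BART; a small model keeps the sketch light
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = ("summarize: [user] I need a cheap italian restaurant in the centre. "
            "[system] Sure, which part of town?")
summary = "The user is looking for an italian restaurant in the centre priced cheap."

inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
labels = tokenizer(summary, return_tensors="pt", truncation=True).input_ids

# One gradient step on the pair; a real run would loop over the few-shot training set.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

# At inference time, the generated summary is parsed back into slot-value pairs.
pred = model.generate(**inputs, max_length=64)
print(tokenizer.decode(pred[0], skip_special_tokens=True))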