2021
DOI: 10.48550/arxiv.2109.07506
Preprint

Dialogue State Tracking with a Language Model using Schema-Driven Prompting

Abstract: Task-oriented conversational systems often use dialogue state tracking to represent the user's intentions, which involves filling in values of pre-defined slots. Many approaches have been proposed, often using task-specific architectures with special-purpose classifiers. Recently, good results have been obtained using more general architectures based on pretrained language models. Here, we introduce a new variation of the language modeling approach that uses schema-driven prompting to provide task-aware history encoding […]
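A minimal sketch of the schema-driven prompting idea described in the abstract, assuming a simple serialization: the dialogue history is concatenated with natural-language slot descriptions from the schema, and a pretrained seq2seq language model generates the slot values from that single string. The separator tokens, slot names, and descriptions below are hypothetical illustrations, not the paper's exact template.

    def build_prompt(history, domain, slots):
        """Serialize dialogue history plus schema descriptions into one LM input.

        history: list of (speaker, utterance) pairs
        slots:   mapping slot name -> natural-language description (hypothetical schema)
        """
        turns = " ".join(f"[{speaker}] {utt}" for speaker, utt in history)
        schema = " ".join(f"[slot] {name}: {desc}" for name, desc in slots.items())
        return f"{turns} [domain] {domain} {schema}"

    history = [
        ("user", "I need a cheap hotel in the north."),
        ("system", "Sure, for how many nights?"),
        ("user", "Three nights, starting Friday."),
    ]
    slots = {
        "hotel-pricerange": "price budget of the hotel",
        "hotel-area": "area or part of town where the hotel is located",
        "hotel-stay": "length of the hotel stay",
    }

    print(build_prompt(history, "hotel", slots))

A seq2seq model such as T5 would take this string as input and decode a state string (e.g. "hotel-pricerange=cheap; hotel-area=north; hotel-stay=3"). Because the schema travels in the prompt, new slots can be handled by the same generative model rather than by task-specific classifiers.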

Cited by 5 publications (9 citation statements)
References 25 publications
“…We also evaluate the performance of LDST using the complete training data and compare it with the following strong baselines: SGD baseline, TRADE, DS-DST (Zhang et al, 2019), TripPy (Heck et al, 2020), Seq2Seq-DU (Feng et al, 2020), MetaASSIST (Ye et al, 2022b), SDP-DST (Lee et al, 2021), TOATOD (Bang et al, 2023b), DiCoS-DST (Guo et al, 2022b), D3ST (Zhao et al, 2022), and paDST (Ma et al, 2019). The results are shown in Table 5.…”
Section: Results of Fine-tuning with Full Training Data
Citation type: mentioning (confidence: 99%)
“…Recent advancements in parameter-efficient fine-tuning (PEFT) techniques, such as LoRA (Hu et al, 2021) and Prefix Tuning (Liu et al, 2021), have effectively alleviated this problem. For instance, both Lee et al (2021) and […] proposed prompt-tuning methods that leverage domain-specific prompts and context information to improve performance on the DST task. Meanwhile, Ma et al (2023) and […] introduced prefix-tuning approaches, which modify the input prompt by adding specific tokens at the beginning of the dialogue, aiming to enhance the efficiency of model fine-tuning.…”
Section: LLMs for Dialogue State Tracking
Citation type: mentioning (confidence: 99%)
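The prefix tuning mentioned in the statement above (Liu et al, 2021) can be sketched in a few lines of PyTorch, under the assumption of a frozen base model that accepts precomputed input embeddings: only a small matrix of prefix vectors, prepended to every input, is trained. This is an illustrative toy, not the implementation of any system cited here.

    import torch
    import torch.nn as nn

    class PrefixEmbedder(nn.Module):
        """Toy prefix tuning: trainable prefix vectors prepended to frozen token embeddings."""

        def __init__(self, frozen_embed: nn.Embedding, prefix_len: int = 10):
            super().__init__()
            self.embed = frozen_embed
            for p in self.embed.parameters():
                p.requires_grad = False  # base embeddings stay frozen
            hidden = frozen_embed.embedding_dim
            self.prefix = nn.Parameter(torch.randn(prefix_len, hidden) * 0.02)

        def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
            tok = self.embed(input_ids)                          # (batch, seq, hidden)
            pre = self.prefix.unsqueeze(0).expand(tok.size(0), -1, -1)
            return torch.cat([pre, tok], dim=1)                  # (batch, prefix+seq, hidden)

    # Toy usage: vocabulary of 100, hidden size 32, prefix of 4 vectors.
    embedder = PrefixEmbedder(nn.Embedding(100, 32), prefix_len=4)
    out = embedder(torch.randint(0, 100, (2, 7)))
    print(out.shape)  # torch.Size([2, 11, 32])

In full prefix tuning the prefixed embeddings feed into the frozen transformer and an optimizer updates only the prefix parameters, a tiny fraction of the model, which is what makes these PEFT methods cheap to adapt per task.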
“…However, evidence collection systems proposed by prior studies [Minhas et al (2022), Ku et al (2008)] either rely on rule-based methods or require a large amount of manually labeled training data. Future studies can adopt methods from recent NLP work in the domain of dialogue state tracking […, Gao et al (2019), Lee et al (2021)] and event argument extraction [Du and Cardie (2020), Lyu et al (2021)]. We also suggest future system design practitioners consider using pre-trained language models for more effective and robust event information extraction, by increasing the model's ability in domain adaptation and commonsense reasoning [Yang et al (2019)].…”
Section: Designing Conversational Agents for High-Quality Reporting
Citation type: mentioning (confidence: 99%)
“…Dialogue State Tracking (DST) is crucial in Task-Oriented Dialogue (TOD) systems to understand and manage user intentions (Wu et al, 2019; Hosseini-Asl et al, 2020; Heck et al, 2020; Lee et al, 2021; Zhao et al, 2022). Collecting and annotating dialogue states at the turn level is challenging and expensive (Budzianowski et al, 2018), and commercial applications often need to expand the schema and include new domains.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)