Proceedings of the 1st Workshop on Document-Grounded Dialogue and Conversational Question Answering (DialDoc 2021), 2021
DOI: 10.18653/v1/2021.dialdoc-1.8
Cascaded Span Extraction and Response Generation for Document-Grounded Dialog

Abstract: This paper summarizes our entries to both subtasks of the first DialDoc shared task, which focuses on agent response prediction in goal-oriented document-grounded dialogs. The task is split into two subtasks: predicting a span in a document that grounds an agent turn, and generating an agent response based on a dialog and grounding document. In the first subtask, we restrict the set of valid spans to the ones defined in the dataset, use a biaffine classifier to model spans, and finally use an ensemble o…

Cited by 4 publications (5 citation statements)
References 12 publications
“…Baselines For knowledge identification, we compare UniGDD with several strong baselines, including BERTQA (Devlin et al., 2019), BERT-PR (Daheim et al., 2021), RoBERTa-PR (Daheim et al., 2021), Multi-Sentence (Wu et al., 2021), and DIALKI (Wu et al., 2021). These models formulate knowledge identification as a machine reading comprehension task and extract the grounding span from the document.…”
Section: Methods (citation type: mentioning)
confidence: 99%
“…These models formulate knowledge identification as a machine reading comprehension task and extract the grounding span from the document. For response generation, we compare UniGDD with several pipeline methods, including DIALKI+BART (Wu et al., 2021), which uses DIALKI for knowledge identification followed by BART (Lewis et al., 2020) for response generation, and RoBERTa-PR+BART (Daheim et al., 2021). We also build a strong baseline model, RoBERTa+T5, which uses the same pretrained generative model as ours.…”
Section: Methods (citation type: mentioning)
confidence: 99%
“…For the generation, we use the noisy channel model instead of the RAG model. For a more detailed analysis of the effect of the RAG and noisy channel models, we refer readers to Thulke et al. [13] and Daheim et al. [21].…”
Section: A. DSTC9 (citation type: mentioning)
confidence: 99%
“…In Doc2dial, for the task of knowledge identification, we compare CausalDD with several strong baselines, including UniGDD, BERTQA (Kenton and Toutanova, 2019), BERT-PR (Daheim et al., 2021), RoBERTa-PR (Daheim et al., 2021), Multi-Sentence (Wu et al., 2021), and DIALKI (Wu et al., 2021). The other models formulate knowledge identification as a machine reading comprehension task and extract the grounding span from the document.…”
Section: B.2 Baselines (citation type: mentioning)
confidence: 99%