Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.140
DIALKI: Knowledge Identification in Conversational Systems through Dialogue-Document Contextualization

Abstract: Identifying relevant knowledge to be used in conversational systems that are grounded in long documents is critical to effective response generation. We introduce a knowledge identification model that leverages the document structure to provide dialogue-contextualized passage encodings and better locate knowledge relevant to the conversation. An auxiliary loss captures the history of dialogue-document connections. We demonstrate the effectiveness of our model on two document-grounded conversational datasets an…


Cited by 14 publications (19 citation statements)
References 27 publications
“…Baselines For knowledge identification, we compare UniGDD with several strong baselines, including BERTQA (Devlin et al., 2019), BERT-PR (Daheim et al., 2021), RoBERTa-PR (Daheim et al., 2021), Multi-Sentence (Wu et al., 2021), and DIALKI (Wu et al., 2021). These models formulate knowledge identification as a machine reading comprehension task and extract the grounding span from the document.…”
Section: Methods
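The span-extraction formulation shared by these baselines can be sketched as follows. This is a minimal illustration, not any cited model's implementation: the score arrays stand in for start/end logits that a trained reader would produce, and the extraction step simply picks the highest-scoring valid span.

```python
# Minimal sketch of extractive knowledge identification framed as machine
# reading comprehension: each document token gets a start score and an end
# score, and we return the (start, end) pair maximizing their sum.
# The scores below are illustrative stand-ins for model logits.

def extract_span(start_scores, end_scores, max_len=30):
    """Return (start, end) maximizing start_scores[i] + end_scores[j], i <= j."""
    best, best_span = float("-inf"), (0, 0)
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            if s + end_scores[j] > best:
                best, best_span = s + end_scores[j], (i, j)
    return best_span

tokens = ["the", "refund", "is", "issued", "within", "5", "days", "."]
start_scores = [0.1, 2.0, 0.0, 0.3, 0.1, 0.2, 0.1, 0.0]
end_scores = [0.0, 0.1, 0.2, 0.1, 0.3, 0.4, 2.5, 0.1]
s, e = extract_span(start_scores, end_scores)
print(" ".join(tokens[s:e + 1]))  # -> refund is issued within 5 days
```

A real reader scores spans with a trained encoder and typically constrains `max_len` to keep the search tractable; the brute-force loop above is only for clarity.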
“…Supporting Document As shown in Figure 1, the goal-oriented document-grounded dialogue problem is commonly formulated as a sequential process comprising two sub-tasks: knowledge identification (KI) and response generation (RG) (Feng, 2021). Given the dialogue context and supporting document, knowledge identification aims to identify a text span in the document as the grounding knowledge for the next agent response; this is often formulated as a conversational reading comprehension task (Feng, 2021; Wu et al., 2021). Response generation then aims to generate a proper agent response according to the dialogue context and the selected knowledge.…”
Section: Dialogue Context
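The two-step KI→RG pipeline described in this quote can be sketched with stub functions. Both functions below are hypothetical placeholders standing in for learned models, included only to show how the sub-tasks compose:

```python
# Illustrative sketch of the goal-oriented document-grounded dialogue
# pipeline: knowledge identification (KI) selects a grounding span from the
# document, then response generation (RG) conditions on the dialogue context
# plus that span. Both steps are simple stubs, not trained models.

def identify_knowledge(dialogue_context, document):
    # Stub KI: pick the document sentence sharing the most words with the
    # last user turn (a real system would use a trained reader instead).
    last_turn = set(dialogue_context[-1].lower().split())
    sentences = document.split(". ")
    return max(sentences, key=lambda s: len(last_turn & set(s.lower().split())))

def generate_response(dialogue_context, knowledge):
    # Stub RG: template the selected knowledge into a reply.
    return f"According to the document: {knowledge.strip('.')}."

context = ["Hi, I need help.", "How long does a refund take?"]
doc = ("Orders ship in two days. A refund takes five business days. "
       "Contact support for more.")
span = identify_knowledge(context, doc)
print(generate_response(context, span))
```

The point of the sketch is the interface: RG receives the KI output rather than the raw document, which is what makes the process sequential.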
“…The k highest- and lowest-quality samples are used as positive and negative samples for retriever training. The most relevant retrieval work on dialogue focuses on tasks like knowledge identification (Wu et al., 2021) and response selection (Yuan et al., 2019; Han et al., 2021). However, their tasks and settings differ from ours.…”
Section: Related Work
“…It is not practical to represent the full dialogue context for multiple exemplars in the prompt. A simple solution is to include just the N most recent turns of the dialogue history (Lei et al., 2018; Budzianowski and Vulić, 2019; Wu et al., 2021). We adopt a new approach that takes advantage of the fact that the dialogue state is a summary of the dialogue history.…”
Section: DST Task Framing
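The N-recent-turns truncation mentioned in this quote is a one-liner in practice. A minimal sketch, with an illustrative history (the turns and names are invented, not from any cited dataset):

```python
# Sketch of the simple history-truncation strategy: when building a prompt,
# keep only the n most recent turns of the dialogue history.

def recent_turns(history, n=3):
    """Return the last n turns of a dialogue history (oldest first)."""
    return history[-n:]

history = [
    "User: Hi",
    "Agent: Hello, how can I help?",
    "User: I lost my card.",
    "Agent: I can block it for you.",
    "User: Yes please.",
]
print("\n".join(recent_turns(history, n=2)))
```

The quoted paper's alternative is to replace the truncated history with the dialogue state, on the grounds that the state already summarizes everything the dropped turns contained.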