Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1138

Knowledge Diffusion for Neural Dialogue Generation

Abstract: End-to-end neural dialogue generation has shown promising results recently, but it does not employ knowledge to guide the generation and hence tends to generate short, general, and meaningless responses. In this paper, we propose a neural knowledge diffusion (NKD) model to introduce knowledge into dialogue generation. This method can not only match the relevant facts for the input utterance but diffuse them to similar entities. With the help of facts matching and entity diffusion, the neural dialogue generation…
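As a rough illustration of the two mechanisms the abstract names, the sketch below first retrieves facts whose subject entity appears in the input utterance (facts matching) and then adds facts about embedding-similar entities (entity diffusion). The toy triples, embeddings, similarity threshold, and function names are assumptions made for the example; they are not the NKD model's actual components.

```python
# Illustrative sketch of "facts matching" followed by "entity diffusion".
# The triple format, embeddings, and threshold are invented for the example.
import numpy as np

facts = [  # (subject, relation, object) triples from a toy KB
    ("Titanic", "directed_by", "James Cameron"),
    ("Avatar", "directed_by", "James Cameron"),
    ("Titanic", "release_year", "1997"),
]
entity_vecs = {  # toy entity embeddings; a real system would learn these
    "Titanic": np.array([1.0, 0.1]),
    "Avatar": np.array([0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_and_diffuse(utterance, sim_threshold=0.9):
    # Facts matching: keep facts whose subject entity is mentioned in the utterance.
    mentioned = {e for e in entity_vecs if e.lower() in utterance.lower()}
    matched = [f for f in facts if f[0] in mentioned]
    # Entity diffusion: also pull in facts about entities similar to the mentioned ones.
    similar = {
        e for e in entity_vecs for m in mentioned
        if e != m and cosine(entity_vecs[e], entity_vecs[m]) >= sim_threshold
    }
    diffused = [f for f in facts if f[0] in similar]
    return matched + diffused  # candidate knowledge passed to the response decoder

print(match_and_diffuse("Who directed Titanic?"))
```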

Cited by 166 publications (124 citation statements)
References: 19 publications
“…This is opposite to the conclusion reached by Liu et al. [23]. One possible reason is that our task differs from theirs: Liu et al. [23] aim to produce text from code, whereas we focus on generating text for dialogues, and modeling code is different from modeling dialogue text [53], [54]. The higher BLEU-4 score of the proposed RRGen model compared with the NMT model indicates that the responses generated by RRGen are more similar to developers' responses than those generated by the NMT model.…”
Section: Evaluation Using An Automatic Metric (contrasting)
confidence: 68%
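Since the contrast above turns on BLEU-4, here is a minimal sketch of how a sentence-level BLEU-4 score is typically computed with NLTK; the reference and candidate strings are invented placeholders, not actual RRGen or NMT outputs.

```python
# Sentence-level BLEU-4 between a generated response and a developer-written
# reference; both strings below are made up for illustration.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "thanks for the feedback we will fix the crash in the next release".split()
candidate = "thank you we will fix this crash in the next release".split()

bleu4 = sentence_bleu(
    [reference],                       # list of tokenized reference sentences
    candidate,                         # tokenized hypothesis
    weights=(0.25, 0.25, 0.25, 0.25),  # equal weight on 1- to 4-grams
    smoothing_function=SmoothingFunction().method1,  # avoid zero scores on short texts
)
print(f"BLEU-4: {bleu4:.3f}")
```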
“…This is part of the semantics challenge we discussed in Section 3. Since natural language understanding in open domains is extremely challenging, knowledge grounding provides, to some degree, the ability to understand language in dialog context, as shown in several preliminary studies [47,130,135].…”
Section: Discussion and Future Trends (mentioning)
confidence: 99%
“…Recent works on KB-based end-to-end QA systems, such as (Yin et al., 2015; He et al., 2017a; Liu et al., 2018a), generate full-length answers with neural pointer networks (Gülçehre et al., 2016; Vinyals et al., 2015; He et al., 2017b) after retrieving facts from a knowledge base (KB). Dialogue systems such as (Liu et al., 2018b; Lian et al., 2019) extract information from knowledge bases to formulate a response. Systems such as (Fu and Feng, 2018) use a KB-based key-value memory after extracting information from documents or external KBs.…”
Section: Related Work (mentioning)
confidence: 99%
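To make the "KB-based key-value memory" idea mentioned above concrete, the sketch below performs a single-hop key-value memory read over encoded KB facts; the tensor shapes, single-hop design, and random encodings are illustrative assumptions, not the cited systems' exact architectures.

```python
# Minimal single-hop key-value memory read over KB facts.
# Shapes and encodings are illustrative, not from the cited papers.
import torch
import torch.nn.functional as F

def key_value_memory_read(query, keys, values):
    """query:  (d,)   encoding of the dialogue context
    keys:   (n, d) encodings of (subject, relation) for n KB facts
    values: (n, d) encodings of the corresponding objects"""
    scores = keys @ query            # (n,) relevance of each fact to the context
    attn = F.softmax(scores, dim=0)  # normalize into an attention distribution
    return attn @ values             # (d,) knowledge vector fed to the decoder

# Toy usage with random encodings for 5 facts of dimension 64.
d, n = 64, 5
query = torch.randn(d)
keys, values = torch.randn(n, d), torch.randn(n, d)
knowledge = key_value_memory_read(query, keys, values)
print(knowledge.shape)  # torch.Size([64])
```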