2021
DOI: 10.1609/aaai.v35i16.17658
A Controllable Model of Grounded Response Generation

Abstract: Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process, often resulting in uninteresting responses. Attempts to boost informativeness alone come at the expense of factual accuracy, as attested by pretrained language models' propensity to "hallucinate" facts. While this may be mitigated by access to background knowledge, there is scant guarantee of relevance and informativeness in generated responses. We propose a framework tha…

Cited by 32 publications (7 citation statements)
References 33 publications
“…The hallucinations in the model can also be mitigated by applying inductive attention (e.g., [106]) or retrieval-based methods (e.g., [107]). Ji et al. [10] provide a detailed survey on hallucinations in LLMs with further suggestions.…”
Section: Hallucinations and Obsolete Information
confidence: 99%
“…The first type is post-processing based methods, which introduce a corrector to boost the factuality of output text (Dong et al., 2020; Cao et al., 2020; Song et al., 2020) or utilize an additional scoring module to rerank the candidate outputs obtained via beam search (Zhao et al., 2020; Harkous et al., 2020; Chen et al., 2021). The second type aims to utilize external models to obtain relation triplets (Cao et al., 2018), key information (Saito et al., 2020; Wu et al., 2021) or graph structures (Zhu et al., 2021) from the source text, and then use them to guide model generation. The third type mainly resorts to various learning methods, such as using unlikelihood training (Li et al., 2020) in dialogue generation, reinforcement learning (Rebuffel et al., 2020) in table-to-text generation and contrastive learning (Cao & Wang, 2021) in text summarization.…”
Section: Related Work
confidence: 99%
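The rerank-based approach quoted above can be illustrated with a minimal sketch. The cited works train dedicated scoring modules; here the scorer is a hypothetical stand-in (simple word overlap with the source text), used only to show the rerank step itself.

```python
def rerank_candidates(candidates, score_factuality):
    """Rerank beam-search candidates by a factuality score (highest first).

    `score_factuality` is assumed to map a candidate string to a number;
    in the cited papers this would be a learned scoring module.
    """
    return sorted(candidates, key=score_factuality, reverse=True)


# Toy usage with a stand-in scorer: word overlap with the source text.
source = "the cat sat on the mat"
source_words = set(source.split())

def overlap_score(candidate):
    # Hypothetical proxy for factuality: count of words shared with the source.
    return len(set(candidate.split()) & source_words)

candidates = ["a dog ran far away", "the cat sat on a mat"]
best = rerank_candidates(candidates, overlap_score)[0]
print(best)  # → "the cat sat on a mat"
```

This only demonstrates the post-hoc selection step; the quality of the result depends entirely on the scoring module, which is the substantive contribution in the works cited.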
“…Previous work has either used structured external knowledge sources (Liu et al., 2018; Young et al., 2018; Su et al., 2020a) or unstructured data. introduced a document grounded dataset for text conversations, and proposed to extract lexical control phrases to do controllable grounded response generation, while Zhang et al. (2021) jointly trained a retriever and generator so that annotated text-reference parallel data are not needed.…”
Section: Related Work
confidence: 99%