2020
DOI: 10.48550/arxiv.2005.00613
Preprint

A Controllable Model of Grounded Response Generation

Abstract: Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process. This control is essential to ensure that users' semantic intents are satisfied and to impose a degree of specificity on generated outputs. Attempts to boost informativeness alone come at the expense of factual accuracy, as attested by GPT-2's propensity to "hallucinate" facts. While this may be mitigated by access to background knowledge, there is scant guarantee of relev…
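The abstract describes conditioning response generation on both grounding knowledge and user-supplied control phrases. A minimal sketch of that idea, assuming an off-the-shelf GPT-2 from the transformers library; the flat prompt layout and separator characters are illustrative assumptions, not the paper's actual input encoding:

```python
# Sketch of controllable grounded generation in the spirit of CGRG:
# condition the decoder on grounding text and lexical control phrases
# by prepending them to the dialogue context. The segment layout below
# ("grounding | control phrases | context") is an assumption for
# illustration only.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

grounding = "The Eiffel Tower is 330 metres tall and was completed in 1889."
control_phrases = ["330 metres"]  # lexical constraints the response should realize
context = "How tall is the Eiffel Tower?"

prompt = f"{grounding} | {' ; '.join(control_phrases)} | {context}\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,  # greedy decoding keeps the sketch deterministic
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

A base GPT-2 will not reliably honor the constraints without fine-tuning on such inputs; the point of the sketch is only the conditioning layout, not the trained behavior.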

Cited by 13 publications (13 citation statements)
References 18 publications

Citation statements (ordered by relevance):
“…The credibility of a text contains multiple aspects such as coherency, clarity, veracity, etc. For non-pretraining methods, previous works explored many approaches to promote several aspects of credible generation: Plan-and-write [35] uses a hierarchical method that first plans a storyline and then generates a story based on it to improve the text coherency; CGRG [36] uses grounding knowledge to generate veritable answers. With the development of pre-training technology, large-scale language models provide a simple yet powerful solution for credible text generation.…”
Section: Credible Text Generation
confidence: 99%
“…The skeleton could also be multiple keyphrases. The keyphrases are extracted based on word frequency (Ippolito et al., 2019; Tan et al., 2020; Wu et al., 2020), an off-the-shelf keyword extraction method (Peng et al., 2018; Goldfarb-Tarrant et al., 2019; Yao et al., 2019; Rashkin et al., 2020; Zhang et al., 2020), a sentence compression dataset and reinforcement learning (Xu et al., 2018), or image caption datasets and ConceptNet (Lin et al., 2020). Most of the studies focus on modeling the long-term dependency among the keyphrases and/or forcing the generation to contain the keyphrases.…”
Section: Related Work
confidence: 99%
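The word-frequency keyphrase extraction mentioned in the statement above can be illustrated with a short sketch; the stopword list and scoring are simplified assumptions, and the cited systems add refinements such as stemming, phrase chunking, and learned extractors:

```python
# Minimal sketch of frequency-based keyphrase extraction: after stopword
# removal, score candidate words by raw count and keep the top-k as the
# generation "skeleton".
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "is", "was", "and", "of", "to", "in", "it", "at", "while"}

def extract_keyphrases(text: str, k: int = 5) -> list[str]:
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return [word for word, _ in counts.most_common(k)]

story = ("The dragon guarded the mountain pass. The dragon slept while "
         "travellers crossed the mountain at night.")
print(extract_keyphrases(story, k=3))  # e.g. ['dragon', 'mountain', 'guarded']
```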
“…Researchers have also explored the possibility of using other latent semantic forms such as topic (Wang et al. 2017), sentence function (Ke et al. 2018; Bi et al. 2019), frame semantics (Gupta et al. 2020), lexical phrases (Wu et al. 2020), or goals in conversation (Tang et al. 2019). Previous works have mainly focused on fusing the information of much simpler semantic forms into generation, thus altering only a few words in the returned response, which might not be as effective.…”
Section: Utilizing Latent Semantic Forms
confidence: 99%