Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)
DOI: 10.18653/v1/2020.emnlp-main.54
Language Generation with Multi-Hop Reasoning on Commonsense Knowledge Graph

Abstract: Despite the success of generative pre-trained language models on a series of text generation tasks, they still suffer in cases where reasoning over underlying commonsense knowledge is required during generation. Existing approaches that integrate commonsense knowledge into generative pre-trained language models simply transfer relational knowledge by post-training on individual knowledge triples while ignoring rich connections within the knowledge graph. We argue that exploiting both the structural and semantic …
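The abstract contrasts post-training on isolated knowledge triples with exploiting connected paths in the knowledge graph. As a rough illustration of that distinction (not the paper's actual architecture), the Python sketch below serializes one-hop triples versus multi-hop paths into a textual context that a generative LM could condition on; the function names and toy triples are assumptions made for this sketch.

```python
# Rough illustration only: isolated triples vs. multi-hop paths from a
# commonsense graph, serialized as textual context for a generative LM.
# The paper integrates graph structure inside the model; this sketch merely
# contrasts the two kinds of knowledge context at the input level.
from collections import defaultdict

TRIPLES = [("bone", "AtLocation", "dog"), ("dog", "CapableOf", "bark"),
           ("bark", "HasSubevent", "make noise")]

def single_triple_context(concept):
    """Knowledge as isolated triples touching one concept (post-training style)."""
    return " . ".join(f"{h} {r} {t}" for h, r, t in TRIPLES if concept in (h, t))

def multi_hop_context(concept, hops=2):
    """Knowledge as connected paths radiating out from the concept (graph style)."""
    graph = defaultdict(list)
    for h, r, t in TRIPLES:
        graph[h].append((r, t))
    paths, frontier = [], [(concept, [concept])]
    for _ in range(hops):
        next_frontier = []
        for node, path in frontier:
            for r, t in graph[node]:
                next_frontier.append((t, path + [r, t]))
        paths.extend(" ".join(p) for _, p in next_frontier)
        frontier = next_frontier
    return " . ".join(paths)

print(single_triple_context("bone"))   # bone AtLocation dog
print(multi_hop_context("bone", 2))    # also reaches dog CapableOf bark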

Cited by 68 publications (74 citation statements)
References: 29 publications
“…Note that GLUCOSE-GPT2 and COINS are using the same knowledge resource, hence the clear performance increase of COINS (+4.92 BLEU score) indicates that jointly learning to generate contextualized inference rules and missing sentences in a recursive manner can enhance generation quality. 10 (ii) Similar to Ji et al. (2020). (iii) For COINS, general rules (GR) boost performance more than specific rules, indicating that the sentence generation model generalizes well.…”
Section: Results (mentioning)
confidence: 94%
“…Automatic Metrics. For Story Ending Generation (SEG) we follow the metrics used in Guan et al. (2019); Ji et al. (2020): they use BLEU-1/2 to measure n-gram overlap between generated and human-written story endings, and Distinct-n (Li et al., 2016) to measure generation diversity.…”
Section: A3 Story Ending Generation Task (mentioning)
confidence: 99%
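The statement above evaluates story ending generation with BLEU-1/2 and Distinct-n. Below is a minimal sketch of how these metrics could be computed, assuming NLTK's corpus_bleu for BLEU and a simple unique-n-gram ratio for Distinct-n; the cited papers may use their own evaluation scripts, so this only illustrates the metrics named in the quote.

```python
# Hedged sketch: BLEU-1/2 via NLTK and a simple Distinct-n ratio.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_1_2(references, hypotheses):
    """references: list of lists of reference token lists; hypotheses: list of token lists."""
    smooth = SmoothingFunction().method1
    b1 = corpus_bleu(references, hypotheses, weights=(1.0, 0, 0, 0), smoothing_function=smooth)
    b2 = corpus_bleu(references, hypotheses, weights=(0.5, 0.5, 0, 0), smoothing_function=smooth)
    return b1, b2

def distinct_n(hypotheses, n=2):
    """Ratio of unique n-grams to total n-grams across all generated endings."""
    ngrams = [tuple(h[i:i + n]) for h in hypotheses for i in range(len(h) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

hyps = [["the", "dog", "barked"], ["she", "smiled", "happily"]]
refs = [[["the", "dog", "barked", "loudly"]], [["she", "smiled"]]]
print(bleu_1_2(refs, hyps), distinct_n(hyps, n=2))
```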
“…We acquire structured knowledge and rationale definitions from ConceptNet 5 and a dictionary source 6 separately. For ConceptNet, we extract knowledge with the Breadth-First-Search (BFS) algorithm as described in Ji et al. (2020). For the dictionary, we extract the definitions of rationales following Chen et al. (2020a).…”
Section: Contrastive Explanation Generation (mentioning)
confidence: 99%
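The statement above extracts a ConceptNet subgraph with breadth-first search, following Ji et al. (2020). A minimal hop-limited BFS over an in-memory triple list is sketched below; the triple format and hop limit are assumptions for illustration, not the cited papers' exact extraction settings.

```python
# Hedged sketch of hop-limited BFS over ConceptNet-style triples.
# `triples` is assumed to be a list of (head, relation, tail) tuples already
# loaded from a ConceptNet dump; filtering and hop limits are illustrative.
from collections import defaultdict, deque

def bfs_subgraph(triples, seeds, max_hops=2):
    graph = defaultdict(list)
    for h, r, t in triples:
        graph[h].append((r, t))

    subgraph, visited = [], set(seeds)
    queue = deque((s, 0) for s in seeds)
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue  # do not expand beyond the hop limit
        for r, t in graph[node]:
            subgraph.append((node, r, t))
            if t not in visited:
                visited.add(t)
                queue.append((t, depth + 1))
    return subgraph

triples = [("accuse", "RelatedTo", "blame"), ("blame", "Causes", "guilt")]
print(bfs_subgraph(triples, seeds={"accuse"}, max_hops=2))
```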
“…Pre-trained LMs enhanced with commonsense knowledge have also been the models of choice for other text generation tasks, e.g. dialogue generation (Zhou et al., 2018), story ending generation (Guan et al., 2020), or abductive NLI (Ji et al., 2020b). While these models aim at generating explanations for a single statement, or completing a given sequence of sentences, we investigate how to make use of LMs to generate a sentence that fills in implicit knowledge between two sentences.…”
Section: Related Work (mentioning)
confidence: 99%