Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
DOI: 10.18653/v1/2021.findings-acl.269
Retrieval Enhanced Model for Commonsense Generation

Abstract: Commonsense generation is a challenging task of generating a plausible sentence describing an everyday scenario using provided concepts. Its requirement of reasoning over commonsense knowledge and compositional generalization ability even puzzles strong pre-trained language generation models. We propose a novel framework using retrieval methods to enhance both the pre-training and fine-tuning for commonsense generation. We retrieve prototype sentence candidates by concept matching and use them as auxiliary inp…
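The concept-matching retrieval step described in the abstract can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: candidate sentences are scored by how many of the input concepts they cover, and the top-k sentences are kept as prototype candidates.

```python
# Illustrative sketch of concept-matching retrieval (not the authors' code):
# score each candidate sentence by overlap with the concept set,
# then keep the top-k sentences as prototype candidates.

def retrieve_prototypes(concepts, corpus, k=2):
    """Rank corpus sentences by the number of input concepts they contain."""
    concept_set = set(concepts)

    def coverage(sentence):
        tokens = set(sentence.lower().split())
        return len(concept_set & tokens)

    # Python's sort is stable, so ties keep their corpus order.
    ranked = sorted(corpus, key=coverage, reverse=True)
    return ranked[:k]

corpus = [
    "a dog catches a frisbee in the park",
    "the stock market fell sharply today",
    "a man throws a frisbee to his dog",
]
prototypes = retrieve_prototypes(["dog", "frisbee", "throw"], corpus, k=2)
```

In the paper's framework these retrieved sentences would then be concatenated to the concept input as auxiliary context; a real retriever would also handle inflection (e.g. "throw" vs. "throws"), which this exact-match sketch deliberately omits.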

Cited by 15 publications (20 citation statements)
References 19 publications (23 reference statements)
“…To investigate the effectiveness of our framework, we conduct extensive experiments on the CommonGen benchmark, where experimental results and in-depth analysis demonstrate the superiority of our framework. Specifically, KGR4 significantly surpasses the previous best result (Wang et al. 2021)…”
Section: Rethink
confidence: 88%
“…Therefore, the performance of this line of work would be limited. More recently, Fan et al. (2020) and Wang et al. (2021) introduce retrievers to search auxiliary information from external plain sentences, which contain enormous daily scenarios. Intuitively, the implicit commonsense knowledge within plain sentences is more abundant than that in human-annotated knowledge bases.…”
Section: Rethink
confidence: 99%
“…There are two major approaches to enhance the vanilla PTM's ability of commonsense reasoning on generation. The first approach is to introduce explicit knowledge from external sources such as ConceptNet and retrieved prototypes (Fan et al., 2020; Wang et al., 2021), which can facilitate GSR by either building connections between related concepts or providing adjunct words for the input. The second approach is to explicitly teach models to reason over the concepts via new pre-training objectives (Zhou et al., 2021).…”
Section: Generative Commonsense Reasoning
confidence: 99%