Findings of the Association for Computational Linguistics: EMNLP 2021
DOI: 10.18653/v1/2021.findings-emnlp.249

KFCNet: Knowledge Filtering and Contrastive Learning for Generative Commonsense Reasoning

Abstract: Pre-trained language models have led to substantial gains over a broad range of natural language processing (NLP) tasks, but have been shown to have limitations for natural language generation tasks with high-quality requirements on the output, such as commonsense generation and ad keyword generation. In this work, we present a novel Knowledge Filtering and Contrastive learning Network (KFCNet) which references external knowledge and achieves better generation performance. Specifically, we propose a BERT-based …
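The abstract's description of the BERT-based filter is truncated above. As a rough illustration only, not the authors' implementation, the sketch below scores retrieved prototype sentences against the input concepts with a BERT sequence classifier and keeps those above a threshold; the checkpoint name, threshold, and helper function are assumptions for the example.

    # Minimal sketch, not the authors' code: a BERT-based binary classifier
    # scores retrieved prototype sentences against the input concepts so that
    # low-quality ones can be filtered out before generation.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    filter_model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )  # assumed fine-tuned elsewhere to label prototypes keep (1) / discard (0)

    def filter_prototypes(concepts, prototypes, threshold=0.5):
        # Score each (concepts, prototype) pair and keep prototypes above the threshold.
        enc = tokenizer(
            [" ".join(concepts)] * len(prototypes), prototypes,
            padding=True, truncation=True, return_tensors="pt",
        )
        with torch.no_grad():
            keep_prob = filter_model(**enc).logits.softmax(dim=-1)[:, 1]
        return [p for p, s in zip(prototypes, keep_prob.tolist()) if s >= threshold]

    kept = filter_prototypes(
        ["dog", "frisbee", "catch"],
        ["A dog jumps to catch a frisbee.", "Frisbee is a brand of flying disc."],
    )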

Cited by 4 publications (2 citation statements); references 23 publications.
“…RE-T5 (Wang et al, 2021) reinforces the input by using a retriever to import sentences related to the concepts from external knowledge. KFCNet (Li et al, 2021) achieves state-of-the-art performance on CommonGen by removing low-quality sentences from external knowledge and applying contrastive learning. However, no Korean dataset for generative commonsense reasoning exists, nor has advanced research on it been conducted.…”
Section: Commonsense (mentioning, confidence: 99%)
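As an illustration of the contrastive-learning component this citation statement mentions, the sketch below computes a generic InfoNCE-style loss over sentence embeddings, pulling high-quality prototypes toward an anchor and pushing low-quality ones away; the exact objective used by KFCNet may differ.

    # Illustrative sketch only: a generic InfoNCE-style contrastive loss over
    # sentence embeddings; not necessarily the exact KFCNet objective.
    import torch
    import torch.nn.functional as F

    def contrastive_loss(anchor, positives, negatives, temperature=0.07):
        # anchor: (d,); positives: (P, d); negatives: (N, d)
        anchor = F.normalize(anchor, dim=-1)
        candidates = F.normalize(torch.cat([positives, negatives], dim=0), dim=-1)
        logits = candidates @ anchor / temperature     # similarity of each candidate to the anchor
        log_probs = F.log_softmax(logits, dim=-1)
        return -log_probs[: positives.size(0)].mean()  # put probability mass on the positives

    loss = contrastive_loss(torch.randn(768), torch.randn(2, 768), torch.randn(6, 768))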
“…Existing methods employ Pre-trained Language Models (PLMs) such as BART (Lewis et al, 2020) and GPT-2 (Radford et al, 2019) as the backbone to solve this problem. They (Fan et al, 2020; Wang et al, 2021; Li et al, 2021) usually take the concatenated concept words as the inputs. However, such processing of inputs …”
Section: Introduction (mentioning, confidence: 99%)
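The quoted passage notes that these systems feed the concatenated concept words directly to a pretrained seq2seq model; a minimal sketch of that input convention with BART follows, with the checkpoint and decoding settings as illustrative assumptions rather than the cited papers' exact setups.

    # Minimal sketch of the input convention described above: concept words are
    # concatenated into one string and fed to a pretrained seq2seq model (BART).
    from transformers import BartForConditionalGeneration, BartTokenizer

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    concepts = ["dog", "frisbee", "catch", "throw"]
    inputs = tokenizer(" ".join(concepts), return_tensors="pt")  # concatenated concept words
    output_ids = model.generate(**inputs, num_beams=5, max_length=32)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))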