2019
DOI: 10.48550/arxiv.1909.09743
Preprint

Teaching Pretrained Models with Commonsense Reasoning: A Preliminary KB-Based Approach

Abstract: Recently, pretrained language models (e.g., BERT) have achieved great success on many downstream natural language understanding tasks and exhibit a certain level of commonsense reasoning ability. However, their performance on commonsense tasks is still far from that of humans. As a preliminary attempt, we propose a simple yet effective method to teach pretrained models with commonsense reasoning by leveraging the structured knowledge in ConceptNet, the largest commonsense knowledge base (KB). Specifically, the…

Cited by 12 publications (5 citation statements)
References 19 publications (27 reference statements)
“…Lv et al. (2019) and Lin et al. (2019) extract knowledge from ConceptNet and Wikipedia to construct graphs, then use a Graph Convolutional Network (Kipf and Welling, 2016) for modeling and inference. Other methods (Zhong et al., 2018; Ma et al., 2019; Ye et al., 2019; Li et al., 2019c) use knowledge bases as another corpus for pre-training, and then refine the models on task-specific contents.…”
Section: Commonsense Reasoning Methods
Mentioning, confidence: 99%
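As a concrete illustration of the graph-based line of work this citation describes, here is a minimal sketch of a single graph-convolution layer in the spirit of Kipf and Welling (2016). The toy adjacency matrix, feature dimensions, and ReLU nonlinearity are illustrative assumptions standing in for a subgraph extracted from ConceptNet, not the cited papers' actual pipelines.

```python
# Minimal sketch of one GCN layer (Kipf & Welling, 2016):
#   H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)
# The tiny 4-node graph below is an illustrative stand-in for a
# ConceptNet subgraph; it is an assumption, not extracted data.
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer with symmetric normalization."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(d ** -0.5)         # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)  # ReLU activation

# Toy adjacency for 4 concept nodes (symmetric, unweighted).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))   # 8-dim node features
W = rng.normal(size=(8, 4))   # project to 4 dims

print(gcn_layer(A, H, W).shape)  # -> (4, 4)
```

In practice, two or three such layers are stacked and the node states are pooled before a task-specific classification head.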
“…Therefore, it is difficult to capture such knowledge solely from raw text. Some other works propose leveraging knowledge bases to extract related commonsense knowledge (Lin et al., 2019; Lv et al., 2019; Kipf and Welling, 2016; Ye et al., 2019; Li et al., 2019c; Ma et al., 2019). However, constructing a knowledge base is expensive, and the knowledge it contains is too limited to fulfill the requirement.…”
Section: Introduction
Mentioning, confidence: 99%
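One common way to "extract related commonsense knowledge" from ConceptNet, as these citations describe, is its public REST API at api.conceptnet.io. The sketch below fetches edges for a single concept; the tuple format returned and the relevance of the kept fields are minimal assumptions about what a retrieval step might keep.

```python
# Hedged sketch: retrieve ConceptNet edges for a concept via the
# public REST API (http://api.conceptnet.io). Field names follow
# the API's JSON-LD response; the selection of fields kept here
# is an illustrative choice.
import requests

def conceptnet_edges(concept, lang="en", limit=10):
    """Return (start, relation, end, weight) tuples for a concept."""
    url = f"http://api.conceptnet.io/c/{lang}/{concept}"
    data = requests.get(url, params={"limit": limit}, timeout=10).json()
    triples = []
    for edge in data.get("edges", []):
        triples.append((edge["start"]["label"],
                        edge["rel"]["label"],
                        edge["end"]["label"],
                        edge.get("weight", 1.0)))
    return triples

for s, r, e, w in conceptnet_edges("bird"):
    print(f"{s} --{r}--> {e}  (weight {w:.2f})")
```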
“…Secondly, since the development of pre-training, where pretrained language models boost a broad range of language tasks, a second line of work studies improving the implicit reasoning capability of PLMs. Li et al. [87] train PLMs with an intermediate task of multiple-choice QA crafted from ConceptNet to improve their commonsense reasoning capabilities. EIGEN [88] generates event influence graphs as augmented data and feeds them into the PLM encoders to force them to learn the structured information.…”
Section: Reasoning QA
Mentioning, confidence: 99%
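The intermediate multiple-choice QA task that Li et al. craft from ConceptNet can be pictured as masking the tail entity of a triple and sampling distractor answers. The sketch below shows one such construction; the question templates and uniform distractor sampling are illustrative guesses, not the paper's exact recipe.

```python
# Hedged sketch: turn a ConceptNet triple into a multiple-choice
# question by masking the tail entity and sampling distractors.
# Template wording and the distractor pool are illustrative
# assumptions, not the procedure from Li et al. (2019).
import random

TEMPLATES = {
    "UsedFor":    "What is a {head} used for?",
    "CapableOf":  "What is a {head} capable of?",
    "AtLocation": "Where would you find a {head}?",
}

def triple_to_mcq(triple, candidate_tails, num_distractors=3, seed=0):
    head, rel, tail = triple
    rng = random.Random(seed)
    distractors = rng.sample(
        [t for t in candidate_tails if t != tail], num_distractors)
    options = distractors + [tail]
    rng.shuffle(options)
    return {
        "question": TEMPLATES[rel].format(head=head),
        "options": options,
        "answer": options.index(tail),   # index of the true tail
    }

pool = ["cutting", "sleeping", "flying", "reading", "cooking"]
print(triple_to_mcq(("knife", "UsedFor", "cutting"), pool))
```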
“…(3) Automatic reasoning, a systematic process of deriving previously unknown conclusions from given formal representations of knowledge (Lenat et al., 1990; Newell and Simon, 1956), has been a longstanding goal of AI research. In the NLP community, a modern view of this problem, where the formal representations of knowledge are substituted by natural language statements, has recently received increasing attention, yielding multiple exploratory research directions: mathematical reasoning (Rabe et al., 2021), symbolic reasoning (Yang and Deng, 2021), and commonsense reasoning (Li et al., 2019). Impressive signs of progress have been reported in teaching PLMs to gain reasoning ability rather than just memorising knowledge facts (Talmor et al., 2020), suggesting that PLMs could serve as effective reasoners for identifying analogies and inferring facts not explicitly or directly seen in the data (Ushio et al., 2021).…”
Section: Introduction
Mentioning, confidence: 99%
“…In particular, deductive reasoning is one of the most promising directions (Sanyal et al., 2022; Talmor et al., 2020; Li et al., 2019). By definition, deduction yields valid conclusions, which must be true given that their premises are true (Johnson-Laird, 1999).…”
Section: Introduction
Mentioning, confidence: 99%
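To make the quoted definition concrete: a deductive rule is valid exactly when no truth assignment makes its premises true and its conclusion false. The brute-force check below verifies this for modus ponens; it is a didactic sketch, not any cited system's implementation.

```python
# Didactic sketch: verify that modus ponens (from p and p -> q,
# infer q) is deductively valid by checking every truth
# assignment. The rule is valid iff no assignment makes both
# premises true while the conclusion is false.
from itertools import product

def implies(p, q):
    return (not p) or q

valid = all(
    q                                 # conclusion holds ...
    for p, q in product([False, True], repeat=2)
    if p and implies(p, q)            # ... whenever both premises hold
)
print("Modus ponens is valid:", valid)  # -> True
```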