Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning 2020
DOI: 10.18653/v1/2020.deelio-1.9

Incorporating Commonsense Knowledge Graph in Pretrained Models for Social Commonsense Tasks

Abstract: Pretrained language models have excelled at many NLP tasks recently; however, their social intelligence is still unsatisfactory. To acquire such intelligence, machines need a more general understanding of our complicated world and the ability to perform commonsense reasoning beyond fitting specific downstream tasks. External commonsense knowledge graphs (KGs), such as ConceptNet, provide rich information about words and their relationships. Thus, towards general commonsense learning, we propose two appr…
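For concreteness, below is a minimal sketch of the general idea the abstract gestures at: linearizing ConceptNet-style (head, relation, tail) triples into text that a pretrained model could consume alongside the task input. The triples, relation templates, and helper names are illustrative assumptions, not the paper's actual approaches (which are truncated above).

```python
# Minimal sketch (not the paper's method): turn ConceptNet-style triples
# into a sentence-like string that can be prepended to an LM's input.
from typing import List, Tuple

# A ConceptNet edge is essentially (head concept, relation, tail concept).
Triple = Tuple[str, str, str]

# Human-readable templates for a few common ConceptNet relations.
RELATION_TEMPLATES = {
    "UsedFor": "{h} is used for {t}",
    "Causes": "{h} causes {t}",
    "MotivatedByGoal": "{h} is motivated by {t}",
}

def linearize(triples: List[Triple]) -> str:
    """Render KG triples as text to concatenate with the task input."""
    parts = []
    for h, r, t in triples:
        template = RELATION_TEMPLATES.get(r, "{h} " + r + " {t}")
        parts.append(template.format(h=h, t=t))
    return ". ".join(parts) + "."

if __name__ == "__main__":
    triples = [("study", "MotivatedByGoal", "pass the exam"),
               ("coffee", "UsedFor", "staying awake")]
    # The linearized knowledge could be concatenated with a question and
    # answer choice before scoring with a pretrained model such as RoBERTa.
    print(linearize(triples))
```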

Cited by 21 publications (14 citation statements, 2021–2024) · References 22 publications
“…The last row in Table 3 shows the results when ensembling 5 models with majority vote, including See all Choices, Cross-Segment Attention, RoBERTa + GPT2, Multiple-Choice Datasets, and one system using an external concept knowledge graph that has a classification accuracy of 79.2% [29]. This ensemble outperforms all the individual models.…”
Section: Results (mentioning)
“…There are plenty of common sense resources with different focuses [1,2,3,5,26,27,28], which may be very helpful for our model to learn broader and more general concepts of our world. In [29], we investigated how to incorporate a knowledge graph into the pretrained models.…”
Section: Using External Resources (mentioning)
“…Wang et al. (2020b), for example, retrieve multi-hop knowledge paths from ConceptNet for fine-tuning LMs for multiple-choice question answering. Chang et al. (2020) and Bosselut et al. (2021) incorporate knowledge paths from ConceptNet into pre-trained LMs for solving the SocialIQA task. However, all these approaches evaluate the effectiveness of integrating commonsense knowledge indirectly on downstream tasks, and do not explicitly evaluate the impact and relevance of knowledge for a specific system prediction.…”
Section: Related Work (mentioning)
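As a rough illustration of the multi-hop path retrieval described in the statement above, the sketch below runs a breadth-first search over a toy ConceptNet-like graph and prepends the recovered path to a multiple-choice input. The graph, concepts, and input formatting are hypothetical stand-ins, not the cited papers' actual pipelines.

```python
# Toy sketch of "retrieve multi-hop knowledge paths": BFS over a tiny
# ConceptNet-like graph, then prepend the found path to a QA input.
from collections import deque

# Adjacency list of (relation, neighbor) pairs; a stand-in for ConceptNet.
GRAPH = {
    "party": [("Causes", "fun"), ("HasSubevent", "dance")],
    "dance": [("Causes", "tired")],
    "fun": [("CausesDesire", "celebrate")],
}

def find_path(source: str, target: str, max_hops: int = 3):
    """BFS for a relation path of at most max_hops from source to target."""
    queue = deque([(source, [])])
    visited = {source}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        if len(path) >= max_hops:
            continue
        for rel, nxt in GRAPH.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None  # no path within the hop budget

if __name__ == "__main__":
    path = find_path("party", "tired")
    assert path is not None
    knowledge = " ".join(f"{h} {r} {t}." for h, r, t in path)
    question = "Why is Alex exhausted after the party?"
    choice = "Alex danced all night."
    # One common pattern: prepend retrieved knowledge to each
    # (question, choice) pair before scoring it with a pretrained LM.
    print(f"{knowledge} [SEP] {question} [SEP] {choice}")
```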
“…SocialIQA Task: Most previous work on the SocialIQA task involves either large pre-trained models and datasets (Khashabi et al., 2020; Lourie et al., 2021) or complicated models that rely heavily on external knowledge bases (Shen et al., 2020; Shwartz et al., 2020; Mitra et al., 2019; Ji et al., 2020a,b; Chang et al., 2020). Among them, UnifiedQA (Khashabi et al., 2020) achieved impressive performance by fine-tuning an 11B T5 model (Raffel et al., 2019) with 17 existing QA datasets.…”
Section: Related Work (mentioning)