2020
DOI: 10.48550/arxiv.2005.05240
Preprint

Commonsense Evidence Generation and Injection in Reading Comprehension

Abstract: Humans tackle reading comprehension not only based on the given context itself but often rely on commonsense beyond it. To empower the machine with commonsense reasoning, in this paper we propose a Commonsense Evidence Generation and Injection framework for reading comprehension, named CEGI. The framework injects two kinds of auxiliary commonsense evidence into reading comprehension to equip the machine with the ability of rational thinking. Specifically, we build two evidence generators: the first generator a…
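
The truncated abstract leaves the two generators unspecified, but the injection step it describes can be illustrated. Below is a minimal, hypothetical sketch in Python: it assumes the generated evidence is free text that is prepended to the passage before a standard reader model consumes it. The function names (generate_evidence, inject_evidence) and the example strings are invented for illustration and are not the authors' API.

```python
# Hypothetical sketch of commonsense evidence injection, loosely following
# the abstract: two generators produce auxiliary evidence, which is joined
# with the original context before it reaches a reader model.

def generate_evidence(context: str, question: str) -> list[str]:
    # Placeholder for the two evidence generators described in the abstract;
    # a real implementation would use trained generation models.
    return [
        "Evidence A: a plausible commonsense statement about the context.",
        "Evidence B: a plausible inferential statement about the question.",
    ]

def inject_evidence(context: str, question: str) -> str:
    # Prepend the generated evidence so a reader model can attend to it
    # alongside the original passage (one simple way to "inject" evidence).
    evidence = generate_evidence(context, question)
    return " ".join(evidence) + " " + context

if __name__ == "__main__":
    ctx = "Tom left his umbrella at home. He got soaked on the way to work."
    q = "Why did Tom get soaked?"
    print(inject_evidence(ctx, q))
```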

Cited by 3 publications (1 citation statement)
References 18 publications (33 reference statements)
“…Unlike individual PLMs and graph encoders, KG-augmented models take both text and graph inputs. The KG-augmented model's graph encoder usually computes graph embeddings via attention pooling of nodes/paths, and the attention weights can be used to explain which nodes/paths in the input KG are salient (Lin et al., 2019; Feng et al., 2020; Liu et al., 2020; Yan et al., 2020). These KG explanations can be interpreted as identifying knowledge in the KG that is complementary to the knowledge encoded in the PLM.…”
Section: Creating Model Explanations
confidence: 99%
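
As a rough illustration of the attention-pooling mechanism this statement describes: node embeddings are scored against a text representation, softmax-normalized into attention weights that pool the nodes into a graph embedding, and those same weights can be read off as node-level saliency. The PyTorch sketch below is an assumption-laden toy, with illustrative shapes and names that are not tied to any cited paper's actual code.

```python
# Minimal sketch of attention pooling over KG node embeddings, where the
# attention weights double as an explanation of which nodes are salient.
import torch
import torch.nn.functional as F

def attention_pool(node_emb: torch.Tensor, text_emb: torch.Tensor):
    """node_emb: (num_nodes, d) from a graph encoder; text_emb: (d,) from a PLM.
    Returns the pooled graph embedding and the per-node attention weights."""
    scores = node_emb @ text_emb        # (num_nodes,) relevance of each node
    weights = F.softmax(scores, dim=0)  # normalize into an attention distribution
    graph_emb = weights @ node_emb      # (d,) attention-weighted sum of nodes
    return graph_emb, weights

# The weights themselves serve as the explanation: high-weight nodes are the
# KG knowledge the model treated as salient for this text input.
nodes = torch.randn(5, 16)              # 5 toy KG node embeddings
text = torch.randn(16)                  # toy text representation
pooled, attn = attention_pool(nodes, text)
print(attn)                             # inspect node-level saliency
```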