Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.369

Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering

Abstract: Commonsense question answering (QA) requires background knowledge which is not explicitly stated in a given context. Prior works use commonsense knowledge graphs (KGs) to obtain this knowledge for reasoning. However, relying entirely on these KGs may not suffice, considering their limited coverage and the contextual dependence of their knowledge. In this paper, we augment a general commonsense QA framework with a knowledgeable path generator. By extrapolating over existing paths in a KG with a state-of-the-art…
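The abstract's core idea is a language model fine-tuned on symbolic relational paths sampled from a KG such as ConceptNet, so that it can later generate plausible multi-hop paths between a question concept and an answer concept even when no such path exists in the KG. Below is a minimal sketch of that idea, assuming a GPT-2 backbone via Hugging Face `transformers`; the path serialization, prompt format, and helper names (`path_to_text`, `generate_path`) are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch: fine-tune GPT-2 on serialized ConceptNet-style random walks,
# then prompt it with a concept pair to decode a connecting relational path.
# The serialization ("head rel1 ent1 rel2 ent2 ...") is an assumption here,
# not the paper's exact input format.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def path_to_text(path):
    """Serialize a sampled KG random walk, e.g.
    [("bird", "CapableOf", "fly"), ("fly", "HasSubevent", "flap wings")],
    into a flat token sequence the LM can be fine-tuned on."""
    pieces = [path[0][0]]  # head entity of the walk
    for _, relation, tail in path:
        pieces += [relation, tail]
    return " ".join(pieces)

def generate_path(question_concept, answer_concept, max_new_tokens=32):
    """Prompt the (fine-tuned) LM with the two endpoint concepts and decode
    a relational path between them. Note: the off-the-shelf GPT-2 loaded
    above has NOT been fine-tuned on paths; in practice the model must
    first be trained on many path_to_text(...) sequences."""
    prompt = f"{answer_concept} {tokenizer.eos_token} {question_concept}"
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

In the paper's full pipeline, generated paths are additionally encoded and fused with the question-answer context representation to score each answer choice; the sketch above covers only the generation step.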

Cited by 54 publications (48 citation statements) | References 26 publications
Citation statements: 2 supporting, 46 mentioning, 0 contrasting | Citing publications: 2020–2024
“…Recently, pre-trained LMs have been augmented with external knowledge from commonsense knowledge bases such as ConceptNet, which provides more explicit knowledge grounding and improves their performance on downstream tasks that require reasoning abilities. Wang et al. (2020b), for example, retrieve multi-hop knowledge paths from ConceptNet for fine-tuning LMs for multiple-choice question answering. Chang et al. (2020) and Bosselut et al. (2021) incorporate knowledge paths from ConceptNet into pre-trained LMs for solving the SocialIQA task.…”
Section: Related Work (mentioning)
Confidence: 99%
“…ALBERT + Path Generator (Wang et al., 2020) proposes a multi-hop knowledge path generator that generates structured evidence dynamically according to the question. It uses a pre-trained language model as the backbone, leveraging the large amount of unstructured knowledge stored in the language model to compensate for the incompleteness of the knowledge base.…”
Section: Baselines (mentioning)
Confidence: 99%
“…(2) The knowledge base is incomplete (Wang et al., 2020), which inevitably means that descriptive knowledge of some events cannot be obtained from the KB. Thus, the model should be able to obtain and encode such knowledge even when it does not exist in the KB.…”
Section: Introduction (mentioning)
Confidence: 99%