Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.232

Improving Commonsense Question Answering by Graph-based Iterative Retrieval over Multiple Knowledge Sources

Abstract: In order to facilitate natural language understanding, the key is to engage commonsense or background knowledge. However, how to engage commonsense effectively in question answering systems is still under exploration in both academia and industry. In this paper, we propose a novel question-answering method by integrating multiple knowledge sources, i.e., ConceptNet, Wikipedia, and the Cambridge Dictionary, to boost the performance. More concretely, we first introduce a novel graph-based iterative knowledge retrieval…
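The abstract is cut off before the retrieval details. As a rough illustration only, the sketch below shows what iterative, graph-based retrieval over a ConceptNet-style triple store can look like: seed concepts are grounded in the question and answer choices, then the graph is expanded hop by hop, keeping the triples that best match the context. The toy graph, the substring-based concept grounding, and the word-overlap relevance score are all assumptions made for this sketch, not the method from the paper.

# Minimal sketch of graph-based iterative knowledge retrieval over a
# ConceptNet-style triple store. Illustration only: the toy graph, seed
# extraction, and overlap-based scoring are assumptions, not the paper's
# actual algorithm.

# Toy knowledge graph: head concept -> [(relation, tail concept), ...]
GRAPH = {
    "bird": [("CapableOf", "fly"), ("HasA", "wing"), ("IsA", "animal")],
    "wing": [("UsedFor", "fly")],
    "fly": [("RelatedTo", "air")],
    "animal": [("AtLocation", "zoo")],
}

def seed_concepts(question: str, choices: list[str]) -> set[str]:
    """Naive concept grounding: graph nodes that appear in the text."""
    text = (question + " " + " ".join(choices)).lower()
    return {node for node in GRAPH if node in text}

def relevance(triple: tuple[str, str, str], context: str) -> int:
    """Toy relevance score: word overlap between the triple and the context."""
    words = set(" ".join(triple).lower().split())
    return len(words & set(context.lower().split()))

def iterative_retrieve(question: str, choices: list[str],
                       hops: int = 2, beam: int = 4):
    """Iteratively expand from seed concepts, keeping the top-`beam`
    triples per hop as evidence and using their tails as new seeds."""
    context = question + " " + " ".join(choices)
    frontier = seed_concepts(question, choices)
    visited, evidence = set(frontier), []
    for _ in range(hops):
        candidates = [(head, rel, tail)
                      for head in frontier
                      for rel, tail in GRAPH.get(head, [])]
        candidates.sort(key=lambda t: relevance(t, context), reverse=True)
        kept = candidates[:beam]
        evidence.extend(kept)
        frontier = {tail for _, _, tail in kept if tail not in visited}
        visited |= frontier
    return evidence

print(iterative_retrieve("Can a bird fly?", ["yes", "no"]))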


Cited by 18 publications (7 citation statements). References 32 publications.
“…Various studies have assessed the efficacy of external knowledge in natural language processing tasks, such as commonsense question answering (Chen et al. 2020) and machine reading comprehension (Pan et al. 2019; Qiu et al. 2019). Researchers have also introduced external knowledge in other tasks such as language generation (Ji et al. 2020).…”
Section: Knowledge-enhanced Reasoning
confidence: 99%
“…However, many tasks require multiple steps of reasoning to reach the correct answer (e.g., Mihaylov et al., 2018; Yang et al., 2018; Khot et al., 2020). A common approach is to retrieve relevant commonsense knowledge from knowledge bases (KBs) such as ConceptNet (Speer et al., 2017) and ATOMIC (Sap et al., 2019a; Hwang et al., 2021), in order to enhance the neural model and explicate the reasoning steps (e.g., Bauer et al., 2018; Xia et al., 2019; Lin et al., 2019; Guan et al., 2019; Chen et al., 2020). More recent work used the COMET model […].…”
Section: Knowledge-enhanced Models
confidence: 99%
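The "retrieve then enhance" pattern this statement describes is typically implemented by verbalizing the retrieved triples and feeding them to the neural reader alongside the question. Below is a minimal, generic sketch of that pattern; the relation templates and the score_with_lm placeholder are assumptions standing in for whatever encoder and scorer a real system (e.g., a fine-tuned transformer) would use, not the design of any specific cited work.

# Generic sketch of knowledge-enhanced answer scoring: retrieved KB
# triples are verbalized into text and prepended to the question so a
# neural reader can attend to them. `score_with_lm` is a hypothetical
# placeholder, not a real library call.
RELATION_TEMPLATES = {
    "CapableOf": "{0} is capable of {1}.",
    "AtLocation": "{0} is found at {1}.",
    "IsA": "{0} is a {1}.",
}

def verbalize(triple: tuple[str, str, str]) -> str:
    """Turn a (head, relation, tail) triple into a natural-language sentence."""
    head, rel, tail = triple
    template = RELATION_TEMPLATES.get(rel, "{0} is related to {1}.")
    return template.format(head, tail)

def score_with_lm(text: str) -> float:
    """Placeholder: a real system would encode `text` with a language
    model and return an answer-choice score."""
    return 0.0

def answer(question: str, choices: list[str],
           triples: list[tuple[str, str, str]]) -> str:
    """Prepend verbalized knowledge to each question+choice pair and
    return the highest-scoring choice."""
    knowledge = " ".join(verbalize(t) for t in triples)
    scores = [score_with_lm(f"{knowledge} {question} {choice}")
              for choice in choices]
    return choices[max(range(len(choices)), key=scores.__getitem__)]

# With the placeholder scorer all scores tie and the first choice wins;
# a trained scorer would break the tie on actual evidence.
print(answer("Where would you see a lion?", ["zoo", "kitchen"],
             [("lion", "AtLocation", "zoo")]))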
“…Recent work has focused on different aspects of multi-hop reasoning for question answering and related natural language understanding tasks. One line of work has incorporated highly structured knowledge graphs into language understanding by combining graphical methods with language models (Lin et al. 2019; Ji et al. 2020; Yasunaga et al. 2021), augmenting language model inputs with relational knowledge (Zhang et al. 2019; Chen et al. 2020; Xu et al. 2021), and applying language models to relational knowledge to infer multi-hop reasoning paths through knowledge graphs (Wang et al. 2020). Others have further explored training language models with semi-structured relational knowledge (Sap et al. 2019; Bosselut et al. 2019; Mostafazadeh et al. 2020; Hwang et al. 2021), i.e., where nodes are natural language sentences rather than canonicalized concepts, to later use for generating multi-hop explanations in natural language (Shwartz et al. 2020; Bosselut et al. 2021).…”
Section: Related Work
confidence: 99%
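One ingredient shared by the path-inference approaches mentioned above is enumerating candidate multi-hop paths between a question concept and an answer concept, which a language model can then score or verbalize. The breadth-first sketch below illustrates just that enumeration step over a toy graph; the graph and the endpoints are invented for illustration, and no scoring model is included.

# Illustrative sketch of multi-hop path enumeration between a question
# concept and an answer concept. Toy data and assumptions throughout;
# not the algorithm of any specific cited system.
from collections import deque

# Toy graph: head concept -> [(relation, tail concept), ...]
EDGES = {
    "bird": [("HasA", "wing"), ("IsA", "animal")],
    "wing": [("UsedFor", "fly")],
    "animal": [("AtLocation", "zoo")],
}

def paths(source: str, target: str, max_hops: int = 3):
    """Breadth-first enumeration of relation paths of up to `max_hops`
    edges from `source` to `target`."""
    queue = deque([(source, [])])
    found = []
    while queue:
        node, path = queue.popleft()
        if node == target and path:
            found.append(path)
            continue
        if len(path) == max_hops:
            continue
        for rel, nxt in EDGES.get(node, []):
            queue.append((nxt, path + [(node, rel, nxt)]))
    return found

# e.g. one 2-hop path: bird --HasA--> wing --UsedFor--> fly
print(paths("bird", "fly"))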