Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing 2019
DOI: 10.18653/v1/d19-6003

Towards Generalizable Neuro-Symbolic Systems for Commonsense Question Answering

Abstract: Non-extractive commonsense QA remains a challenging AI task, as it requires systems to reason about, synthesize, and gather disparate pieces of information, in order to generate responses to queries. Recent approaches on such tasks show increased performance, only when models are either pre-trained with additional information or when domain-specific heuristics are used, without any special consideration regarding the knowledge resource type. In this paper, we perform a survey of recent commonsense QA methods a…

Cited by 57 publications (64 citation statements)
References 34 publications
“…In all our implemented models, we use pre-trained LMs as text encoders for s for fair comparison. We do compare our models with those (Ma et al, 2019;Lv et al, 2019;Khashabi et al, 2020) augmented by other text-form external knowledge (e.g., Wikipedia), although we stick to our focus of encoding structured KG. Specifically, we fine-tune BERT-BASE, BERT-LARGE (Devlin et al, 2019), and ROBERTA (Liu et al, 2019b) for multiple-choice questions.…”
Section: Compared Methods (mentioning)
confidence: 99%
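
The excerpt above describes fine-tuning pre-trained LMs (BERT, RoBERTa) as multiple-choice scorers. As a rough illustration only, and not the cited authors' exact setup, a minimal sketch using the Hugging Face transformers multiple-choice head might look like the following; the checkpoint name and the example question and choices are placeholders.

```python
# Minimal sketch: scoring multiple-choice answers with a RoBERTa encoder.
# Assumes the Hugging Face `transformers` library; the checkpoint and the
# example question/choices are illustrative, not taken from the paper.
import torch
from transformers import AutoTokenizer, RobertaForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = RobertaForMultipleChoice.from_pretrained("roberta-base")
model.eval()

question = "Where would you buy a ticket to see a concert?"
choices = ["pharmacy", "box office", "library"]

# Pair the question with every candidate answer and batch the pairs together.
enc = tokenizer([question] * len(choices), choices,
                return_tensors="pt", padding=True)
# The multiple-choice head expects tensors shaped (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, num_choices)
print(choices[logits.argmax(dim=-1).item()])
```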
“…Knowledge Enhanced Methods KagNet (Lin et al, 2019) represents external knowledge as a graph, and then uses graph convolution and LSTM for inference. Ma et al (2019) adopt the BERT-based option comparison network (OCN) for answer prediction, and propose an attention mechanism to perform knowledge integration using relevant triples. Lv et al (2020) propose a GNN-based inference model over ConceptNet relations and heterogeneous graphs of Wikipedia sentences.…”
Section: Related Work (mentioning)
confidence: 99%
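
Several of the cited systems attach an attention layer that lets the question-answer representation weight retrieved knowledge triples before prediction. The following is a hedged sketch of that general pattern, not a reimplementation of OCN or KagNet; the module name, dimensions, and fusion choice are assumptions.

```python
# Sketch of attention-based knowledge integration (illustrative only).
import torch
import torch.nn as nn

class TripleAttentionFusion(nn.Module):
    """The question-answer vector attends over embeddings of retrieved
    triples, and the weighted knowledge summary is fused back in."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query_proj = nn.Linear(hidden_dim, hidden_dim)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, qa_repr: torch.Tensor, triple_embs: torch.Tensor):
        # qa_repr: (batch, hidden); triple_embs: (batch, n_triples, hidden)
        query = self.query_proj(qa_repr).unsqueeze(1)        # (batch, 1, hidden)
        scores = query @ triple_embs.transpose(1, 2)         # (batch, 1, n_triples)
        weights = torch.softmax(scores, dim=-1)
        knowledge = (weights @ triple_embs).squeeze(1)       # (batch, hidden)
        return self.fuse(torch.cat([qa_repr, knowledge], dim=-1))

# Usage with random tensors standing in for encoder outputs:
fusion = TripleAttentionFusion(hidden_dim=768)
out = fusion(torch.randn(4, 768), torch.randn(4, 10, 768))   # (4, 768)
```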
“…In this section, we describe the method for extracting knowledge facts from the knowledge graph in detail. Once the knowledge is determined, we can choose an appropriate integration mechanism for further knowledge injection, such as an attention mechanism (Sun et al, 2018; Ma et al, 2019), pre-training tasks (He et al, 2019), and multi-task training. Given a question Q and a candidate answer O, we first identify the entities and their types in the text by entity linking.…”
Section: Knowledge Acquisition (mentioning)
confidence: 99%
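
As a concrete but purely illustrative companion to this excerpt, one common way to gather candidate facts after entity linking is to query the public ConceptNet API for edges connecting a question concept to a candidate-answer concept. The /query route below is ConceptNet's documented HTTP endpoint; the concept pair is a made-up example, and real systems typically add lemmatization and filtering first.

```python
# Illustrative sketch: retrieving ConceptNet edges between a question concept
# and a candidate-answer concept via the public HTTP API.
import requests

def conceptnet_edges(question_concept: str, answer_concept: str, limit: int = 10):
    resp = requests.get(
        "https://api.conceptnet.io/query",
        params={
            "node": f"/c/en/{question_concept}",
            "other": f"/c/en/{answer_concept}",
            "limit": limit,
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Each edge carries a start node, a relation, an end node, and a weight.
    return [
        (e["start"]["label"], e["rel"]["label"], e["end"]["label"], e["weight"])
        for e in resp.json().get("edges", [])
    ]

print(conceptnet_edges("concert", "venue"))
```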
“…Our proposed model solves these problems. Group 2 comprises models that use additional external knowledge while using RoBERTa as the encoding layer, similar to the proposed model. KagNet [22] and HyKAS 2.0 [23] use ConceptNet as the external knowledge source, similar to the proposed model. RoBERTa + KE uses Wikipedia documents as the external knowledge source to search for sentences, while RoBERTa + IR uses Open Mind Common Sense (OMCS) [24] and searches for sentences using a search engine.…”
Section: Model Performance for CommonsenseQA (mentioning)
confidence: 99%