Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.99

Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering

Abstract: Existing work that augments question answering (QA) models with external knowledge (e.g., knowledge graphs) either struggles to model multi-hop relations efficiently or lacks transparency into the model's prediction rationale. In this paper, we propose a novel knowledge-aware approach that equips pretrained language models (PTLMs) with a multi-hop relational reasoning module, named multi-hop graph relation network (MHGRN). It performs multi-hop, multi-relational reasoning over subgraphs extracted from external knowledge graphs. …
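To make the abstract's description concrete, below is a minimal sketch of multi-hop, multi-relational message passing in the spirit of MHGRN. It is not the authors' implementation: the class name, the relation-wise linear transforms shared across hops, and the mean pooling over hop states are assumptions (MHGRN itself uses learned attention over relation paths and hop counts).

```python
import torch
import torch.nn as nn


class MultiHopMessagePassing(nn.Module):
    """Sketch of K-hop, multi-relational message passing over a KG subgraph.

    Hypothetical simplification of MHGRN (Feng et al., 2020); MHGRN replaces
    the mean pooling below with structured attention over relation paths.
    """

    def __init__(self, dim: int, n_relations: int, k_hops: int):
        super().__init__()
        # One linear transform per relation type, shared across hops.
        self.rel_transforms = nn.ModuleList(
            nn.Linear(dim, dim, bias=False) for _ in range(n_relations)
        )
        self.k_hops = k_hops
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (n_nodes, dim)  node features (e.g., concept embeddings)
        # adj: (n_relations, n_nodes, n_nodes)  one adjacency per relation
        hop_states = [x]
        h = x
        for _ in range(self.k_hops):
            # Extend every path by one hop: transform per relation, then
            # propagate along that relation's edges and sum over relations.
            h = torch.stack(
                [adj[r] @ t(h) for r, t in enumerate(self.rel_transforms)]
            ).sum(dim=0)
            hop_states.append(h)
        # Aggregate the 0..K-hop node states.
        return self.out(torch.stack(hop_states).mean(dim=0))


# Usage with made-up sizes: 20 concepts, 17 relation types, up to 3 hops.
x = torch.randn(20, 128)
adj = (torch.rand(17, 20, 20) > 0.9).float()
model = MultiHopMessagePassing(dim=128, n_relations=17, k_hops=3)
out = model(x, adj)  # (20, 128) multi-hop node representations
```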

Cited by 133 publications (136 citation statements)
References 28 publications
“…Results (as mean and standard deviation) are computed over 4 experimental runs with different random seeds (top score in boldface, second score underlined). Parts of the results for baselines are reported from another work of ours (Feng et al., 2020)…”
Section: Baselines
confidence: 99%
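The protocol quoted above, reporting the mean and standard deviation over 4 runs with different random seeds, amounts to a few lines of Python; the accuracy values below are made up for illustration.

```python
import statistics

# Hypothetical accuracies from 4 runs with different random seeds.
run_accuracies = [76.2, 75.8, 76.5, 75.9]

mean = statistics.mean(run_accuracies)
stdev = statistics.stdev(run_accuracies)  # sample standard deviation
print(f"{mean:.1f} ± {stdev:.1f}")        # -> "76.1 ± 0.3"
```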
“…For CommonsenseQA (Table 2), our HGN ranks first among comparable approaches and shows remarkable improvement over PathGenerator and the LM Finetuning approach (ALBERT (Lan et al., 2020)). Higher-ranking […] and ALBERT+KD additionally use concept definitions from dictionaries. ALBERT+DESC-KCR and ALBERT+KCR leverage "question concept" annotations, which are used during the construction of the CommonsenseQA dataset and allow the model to learn shortcuts that don't generalize to other datasets.…”
[Table rows from the citing paper were spliced into this quote during extraction; the recoverable entries are: RGCN (Schlichtkrull et al., 2018b) 65.56 / 82.42; GAT (Veličković et al., 2018) 65.88 / 82.78; GN (Battaglia et al., 2018) 65.52 / 82.06; GconAttn (Wang et al., 2019a) 65.17 / 82.35; MHGRN (Feng et al., 2020) 65.92 / 83.07; PathGenerator 64…]
Section: Results
confidence: 99%
“…Most common approaches include adapting entity embeddings learned by models such as BERT by providing additional knowledge from different ontologies that define relations between entities. This can be done either by using templates to convert the relations to text before finetuning embeddings (Weissenborn et al., 2017; Lauscher et al., 2020), by combining relational information from knowledge graphs with text embeddings (Mihaylov and Frank, 2018; Chen et al., 2018; Zhang et al., 2019; Yang et al., 2019a), or by jointly learning knowledge graph and textual embeddings (Peters et al., 2019; Feng et al., 2020). These ontologies are either generic, like WordNet (Miller, 1995), ConceptNet (Liu and Singh, 2004), and Wikidata (Vrandečić and Krötzsch, 2014), or more specific to a particular domain, like the UMLS (Bodenreider, 2004).…”
Section: Related Work
confidence: 99%
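The first strategy in the quote above, using templates to convert KG relations to text before finetuning, can be illustrated with a small sketch. The relation names and template wording below are assumptions for illustration, not taken from any of the cited papers.

```python
# Hypothetical templates mapping ConceptNet-style relations to text.
TEMPLATES = {
    "IsA":        "{head} is a {tail}.",
    "UsedFor":    "{head} is used for {tail}.",
    "AtLocation": "you are likely to find {head} in {tail}.",
}


def verbalize(head: str, relation: str, tail: str) -> str:
    """Turn a (head, relation, tail) KG triple into a natural-language
    sentence that can be appended to the model input before finetuning."""
    return TEMPLATES[relation].format(head=head, tail=tail)


print(verbalize("guitar", "UsedFor", "playing music"))
# -> "guitar is used for playing music."
```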