Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.372

JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering

Abstract: Existing KG-augmented models for commonsense question answering primarily focus on designing elaborate Graph Neural Networks (GNNs) to model knowledge graphs (KGs). However, they ignore (i) effectively fusing and reasoning over the question context representations and the KG representations, and (ii) automatically selecting relevant nodes from the noisy KGs during reasoning. In this paper, we propose a novel model, JointLK, which addresses these limitations through joint reasoning of the LM and the GNN and a dynamic KG pruning mechanism.
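The two components the abstract names, dense bidirectional attention between LM tokens and KG nodes, and dynamic pruning of low-relevance nodes, can be illustrated with a short sketch. Everything below (the function name, the shared projection matrices, the keep_ratio parameter) is an illustrative assumption, not the authors' released code.

```python
# Minimal sketch of one joint LM-KG reasoning step with attention-based
# pruning, assuming single-head attention and shared value projections.
import torch
import torch.nn.functional as F

def joint_fusion_step(text_states, node_states, w_q, w_k, w_v, keep_ratio=0.7):
    """text_states: (n_tokens, d) LM token representations
    node_states: (n_nodes, d) GNN node representations
    Returns knowledge-augmented tokens, the surviving nodes, and their indices."""
    d = text_states.size(-1)
    q = text_states @ w_q                        # (n_tokens, d)
    k = node_states @ w_k                        # (n_nodes, d)
    v = node_states @ w_v                        # (n_nodes, d)
    scores = q @ k.t() / d ** 0.5                # (n_tokens, n_nodes)

    # Bidirectional attention: tokens attend to nodes and vice versa.
    attn_t2n = F.softmax(scores, dim=-1)
    attn_n2t = F.softmax(scores.t(), dim=-1)
    fused_text = text_states + attn_t2n @ v
    fused_nodes = node_states + attn_n2t @ (text_states @ w_v)

    # Dynamic pruning: drop the nodes that receive the least total
    # attention from the question context before the next layer.
    relevance = attn_t2n.sum(dim=0)              # (n_nodes,)
    n_keep = max(1, int(keep_ratio * node_states.size(0)))
    keep_idx = relevance.topk(n_keep).indices
    return fused_text, fused_nodes[keep_idx], keep_idx

# Toy usage with random inputs.
text = torch.randn(20, 64)    # 20 subword tokens
nodes = torch.randn(50, 64)   # 50 retrieved KG nodes
w_q, w_k, w_v = (torch.randn(64, 64) for _ in range(3))
fused_text, kept_nodes, kept = joint_fusion_step(text, nodes, w_q, w_k, w_v)
```

Stacking several such steps lets each layer reason over a progressively smaller, more question-relevant subgraph, which is the intuition behind the pruning mechanism the abstract describes.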

Cited by 18 publications (4 citation statements)
References 19 publications
“…Furthermore, several research efforts aim to strengthen representation and inference capabilities by linking LLMs and KGs. Sun et al. [29] used an attention mechanism to let all tokens of the input text interact with KG entities, augmenting the language model's output with domain-specific knowledge from the KG.…”
Section: KG-powered News Recommendation Model
Confidence: 99%
“…A practical KGQA model should also generalize robustly to language variations and different reformulations of the same logical form. Some works [65][66][67] use generative models to address the coverage issue and have achieved strong performance on GrailQA.…”
Section: Robustness
Confidence: 99%
“…The two modalities are then fused in the final step to produce a QA prediction. But this shallow fusion does not allow the two modalities to interact, and several methods (Sun et al. 2022; Zhang et al. 2022) that fuse the LM and the KG in earlier layers have recently been introduced as well.…”
Section: Related Work
Confidence: 99%
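For contrast with the early-fusion methods this excerpt cites, the "shallow" baseline it criticizes can be sketched as a late-fusion QA head: each modality is encoded independently and the pooled vectors meet only at scoring time. The class name, hidden size, and pooling choices below are assumptions for illustration.

```python
# Minimal sketch of shallow (late) LM-KG fusion for answer scoring.
import torch
import torch.nn as nn

class LateFusionScorer(nn.Module):
    def __init__(self, d_lm: int, d_kg: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_lm + d_kg, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, lm_pooled: torch.Tensor, kg_pooled: torch.Tensor):
        # lm_pooled: (batch, d_lm) pooled LM output, e.g. a [CLS]-style vector
        # kg_pooled: (batch, d_kg) pooled GNN output over the subgraph
        # The two streams interact only here: no token-node exchange
        # happened inside the encoders, which is the limitation the
        # cited early-fusion methods address.
        return self.mlp(torch.cat([lm_pooled, kg_pooled], dim=-1)).squeeze(-1)
```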
“…They aimed to find more appropriate ways to exchange information between the two modalities, via special tokens and nodes (Zhang et al. 2022) or cross-attention (Sun et al. 2022). But these approaches keep a modality-specific encoder and GNN, and information is exchanged only at the fusion layers, resulting in limited interaction.…”
Section: Introduction
Confidence: 99%
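The special-token-and-node exchange this excerpt attributes to Zhang et al. 2022 routes all cross-modal information through one designated vector per modality at each layer. A minimal sketch of such an interface follows; the bottleneck MLP and its sizes are assumed stand-ins, not the cited method's actual design.

```python
# Minimal sketch of per-layer information exchange via one special
# LM token and one special KG node.
import torch
import torch.nn as nn

class ModalityInterface(nn.Module):
    """Mixes one special LM token with one special KG node; the rest of
    each modality is untouched at this layer."""
    def __init__(self, d: int):
        super().__init__()
        self.mix = nn.Sequential(nn.Linear(2 * d, 2 * d), nn.GELU(),
                                 nn.Linear(2 * d, 2 * d))

    def forward(self, int_token: torch.Tensor, int_node: torch.Tensor):
        # int_token: (batch, d) special interaction token from the LM stream
        # int_node:  (batch, d) special interaction node from the GNN stream
        joint = self.mix(torch.cat([int_token, int_node], dim=-1))
        new_token, new_node = joint.chunk(2, dim=-1)
        return new_token, new_node  # fed back into the respective streams
```

Because every bit of cross-modal signal must pass through this single pair of vectors, the exchange is much narrower than the dense token-to-node cross-attention sketched under the abstract, which is the limited interaction the excerpt points to.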