Multi-hop path reasoning over a knowledge base aims to find answer entities for an input question by walking along a path of triples in graph-structured data, and is a crucial branch of the knowledge base question answering (KBQA) research field. Previous studies rely on deep neural networks to simulate the way humans solve multi-hop questions, but they neither consider the latent relation information carried by connected edges nor measure the correlation between specific relations and the input question. To address these challenges, we propose an edge-aware graph neural network for the multi-hop path reasoning task. First, a query node is added directly to the candidate subgraph retrieved from the knowledge base, constructing what we term a query graph. This construction strategy enhances the information flow between the question and the nodes during the subsequent message-passing steps. Second, question-related information contained in the relations is injected into the entity node representations during graph updating, while the relation representations are themselves updated. Finally, an attention mechanism weights the contributions of neighbor nodes so that only information from neighbors relevant to the query is injected into the new node representations. Experimental results on the MetaQA and PathQuestion-Large (PQL) benchmarks demonstrate that the proposed model achieves higher Hit@1 and F1 scores than the baseline methods by a large margin. Moreover, ablation studies show that both the graph construction and the graph update algorithm contribute to the performance improvement.
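The update step described above — fusing relation information into node messages and weighting neighbors by their relevance to the question — can be sketched as a single message-passing round. This is a minimal illustrative sketch, not the paper's exact equations: the function name, the additive fusion of neighbor and relation embeddings, and the dot-product attention score are all simplifying assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def edge_aware_update(h, r, edges, q):
    """One edge-aware message-passing step over a query graph.

    h: (N, d) entity node embeddings
    r: (E, d) relation (edge) embeddings
    edges: list of (src, dst, edge_idx) triples
    q: (d,) question embedding
    Each node aggregates messages that fuse the neighbor state with the
    relation embedding; attention scores the messages against q so that
    only query-relevant neighbors contribute strongly.
    """
    N, d = h.shape
    new_h = h.copy()
    for v in range(N):
        incoming = [(s, e) for (s, t, e) in edges if t == v]
        if not incoming:
            continue  # isolated node: representation unchanged
        # message = neighbor state fused with the connecting relation
        msgs = np.stack([h[s] + r[e] for (s, e) in incoming])
        # query-aware attention over incoming messages
        att = softmax(msgs @ q / np.sqrt(d))
        new_h[v] = h[v] + att @ msgs
    return new_h
```

In a full model this step would be stacked once per hop, and the relation embeddings `r` would be updated alongside the node states; here they are held fixed for brevity.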
Zero-shot multilingual fact-checking, which aims to discover and infer subtle clues from retrieved relevant evidence to verify a given claim in cross-language and cross-domain scenarios, is crucial for maintaining a free, trusted, and wholesome global network environment. Previous works have made enlightening and practical explorations in claim verification, but the zero-shot multilingual task faces new challenges: neglecting authenticity-dependent learning across multilingual claims, lacking heuristic checking, and suffering a bottleneck of insufficient evidence. To alleviate these gaps, we propose a novel Joint Prompt and Evidence Inference Network (PEINet) that verifies multilingual claims according to the human fact-checking cognitive paradigm. In detail, we first leverage a language-family encoding mechanism to strengthen knowledge transfer among multilingual claims. Then, a prompt-tuning module is designed to infer the falsity of the fact; further, sufficient fine-grained evidence is extracted and aggregated by a recursive graph attention network to verify the claim again. Finally, we build a unified inference framework via multi-task learning for the final fact verification. New state-of-the-art performance on a released challenging benchmark that includes both an out-of-domain test and a zero-shot test proves the effectiveness of our framework, and further analysis demonstrates the superiority of PEINet in multilingual claim verification and inference, especially in the zero-shot scenario.
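The recursive, claim-guided evidence aggregation in the second stage can be sketched as repeated attention rounds in which the running summary re-scores the evidence. This is a loose illustrative sketch under stated assumptions — the function name, the number of rounds, and the scaled dot-product scoring are hypothetical simplifications of the paper's recursive graph attention network.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_evidence(claim, evidence, n_rounds=2):
    """Recursively aggregate evidence embeddings guided by the claim.

    claim: (d,) claim embedding
    evidence: (K, d) evidence sentence embeddings
    Each round attends over the evidence with the current summary as the
    query, so clues consistent with earlier findings are weighted up —
    a crude stand-in for recursive graph attention.
    """
    d = claim.shape[0]
    summary = claim.copy()
    for _ in range(n_rounds):
        att = softmax(evidence @ summary / np.sqrt(d))
        summary = claim + att @ evidence
    return summary
```

In the full framework, the resulting summary would feed one head of the multi-task objective, alongside the prompt-based falsity prediction.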