Findings of the Association for Computational Linguistics: EACL 2023
DOI: 10.18653/v1/2023.findings-eacl.87

Analyzing the Effectiveness of the Underlying Reasoning Tasks in Multi-hop Question Answering

Xanh Ho,
Anh-Khoa Duong Nguyen,
Saku Sugawara
et al.

Abstract: To explain the predicted answers and evaluate the reasoning abilities of models, several studies have utilized underlying reasoning (UR) tasks in multi-hop question answering (QA) datasets. However, it remains an open question as to how effective UR tasks are for the QA task when training models on both tasks in an end-to-end manner. In this study, we address this question by analyzing the effectiveness of UR tasks (including both sentence-level and entity-level tasks) in three aspects: (1) QA performance, (2) r…


Cited by 3 publications (1 citation statement)
References 24 publications
“…Traditionally, researchers (Qiu et al., 2019; Tu et al., 2019; Fang et al., 2020) have applied graph neural networks (GNNs) to this task. In recent years, with the growing capabilities of large language models (LLMs), several works propose using prompting to address this task in a few- or zero-shot way (Wei et al., 2022; Ho et al., 2023).…”
Section: Introduction
confidence: 99%