2022
DOI: 10.48550/arxiv.2203.13947
Preprint
Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension

Cited by 3 publications (7 citation statements) | References 0 publications
“…Despite this major difference in learning paradigm, most GPT-3-based models proposed here outperform previous results by significant margins on the SQuAD dataset - even the least performant samples M_s (lower bound) achieve competitive results. For FairytaleQA, however, only the best samples M_s (upper bound) outperform previous results (Xu et al., 2022), indicating margins for improvement on question selection strategies for future work.…”
Section: Results
confidence: 77%
“…Datasets and model: We adopt two question generation datasets with distinctive characteristics, namely SQuAD (Rajpurkar et al., 2016) and FairytaleQA (Xu et al., 2022). SQuAD was originally proposed as an extractive QA dataset.…”
Section: Problem Setting
confidence: 99%
“…There are many forms of reading comprehension tasks such as cloze tests (Bajgar et al., 2016; Ma et al., 2018), question answering (Richardson et al., 2013; Kočiský et al., 2018; Yang and Choi, 2019; Lal et al., 2021; Xu et al., 2022), and text summarization (Ladhak et al., 2020; Kryściński et al., 2021; Chen et al., 2021). Most of these tasks are built on very short stories or can be solved in segments of a story, thus presenting limited challenges to understanding the elements, especially the characters, of the story.…”
Section: Assessment of Narrative Comprehension
confidence: 99%
“…The most popular form of narrative comprehension evaluation is through question answering, starting from the early work of MCTest (Richardson et al., 2013), to the more recent crowd-sourced tasks like NarrativeQA (Kočiský et al., 2018), FriendsQA (Yang and Choi, 2019), TellMeWhy (Lal et al., 2021), and FairytaleQA (Xu et al., 2022). Among them, MCTest and TellMeWhy conduct multiple-choice question answering on short stories.…”
Section: Question Answering
confidence: 99%