Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1241
QuAC: Question Answering in Context

Abstract: We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable,…

Cited by 526 publications (602 citation statements)
References 21 publications (29 reference statements)
“…Our evaluation metric for answer-sentence selection is sentence-level F1, implemented similarly to (Choi et al., 2018; Rajpurkar et al., 2016). Precision and recall are implemented by measuring the overlap between predicted sentences and sets of gold-reference sentences.…”
Section: Evaluation Metric (mentioning, confidence: 99%)
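The overlap-based sentence-level F1 quoted above can be sketched as follows. This is a minimal illustration, not the cited implementation; in particular, taking the maximum over multiple gold-reference sets is an assumption about how "sets of gold-reference sentences" are aggregated.

```python
def sentence_f1(predicted, gold_sets):
    """Sentence-level F1: overlap between predicted sentences and
    gold-reference sentence sets, taking the best score over all
    gold sets (an assumed aggregation, for illustration)."""
    best = 0.0
    for gold in gold_sets:
        if not predicted or not gold:
            continue
        overlap = len(set(predicted) & set(gold))
        if overlap == 0:
            continue
        precision = overlap / len(predicted)
        recall = overlap / len(gold)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best
```

For example, predicting two sentences when one gold set contains exactly one of them yields precision 0.5, recall 1.0, and F1 of 2/3.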
“…In two recent ConvQA datasets, QuAC [2] and CoQA [27], ConvQA is formalized as an answer span prediction problem similar to that in SQuAD [25, 26]. Specifically, given a question, a passage, and the conversation history preceding the question, the task is to predict a span in the passage that answers the question.…”
Section: Introduction (mentioning, confidence: 99%)
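The span-prediction formulation described above can be illustrated concretely. The field layout and the hard-coded indices below are purely illustrative (not QuAC's actual schema or a model's real output); a trained model would predict the start/end indices itself.

```python
# Hypothetical ConvQA instance; field contents are illustrative only.
passage = "QuAC contains 14K dialogs. Each dialog has a student and a teacher."
history = [("What is QuAC?", "a dataset for Question Answering in Context")]
question = "How many dialogs does it contain?"

# A span-prediction model maps (question, passage, history) to start/end
# character (or token) indices; the answer is the passage excerpt between them.
start, end = 14, 25  # indices hard-coded here for illustration
answer = passage[start:end]
```

The key design point in the quoted excerpt is that the answer is constrained to be a contiguous excerpt of the passage, so prediction reduces to choosing two indices rather than generating free-form text.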
“…On the aspect of history selection, existing models [2, 27] select conversation history with a simple heuristic that assumes the immediately preceding turns are more helpful than others. This assumption, however, is not necessarily true.…”
Section: Introduction (mentioning, confidence: 99%)
“…Our work is closely related to two lines of work: context-dependent sentence analysis and reinforcement learning. From the perspective of context-dependent sentence analysis, our work is related to research on reading comprehension in dialogue (Reddy et al., 2019; Choi et al., 2018), dialogue state tracking (Williams et al., 2013), conversational question answering over knowledge bases (Saha et al., 2018; Guo et al., 2018), context-dependent logical forms (Long et al., 2016), and non-sentential utterance resolution in open-domain question answering (Raghu et al., 2015; Kumar and Joshi, 2017). The main difference is that we focus on context-dependent queries in NLIDB, which involve complex scenarios.…”
Section: Related Work (mentioning, confidence: 99%)