2021
DOI: 10.1007/978-3-030-72610-2_4
DaNetQA: A Yes/No Question Answering Dataset for the Russian Language

Abstract: DaNetQA, a new question-answering corpus, follows the BoolQ [2] design: it comprises natural yes/no questions. Each question is paired with a paragraph from Wikipedia and an answer derived from that paragraph. The task is to take both the question and the paragraph as input and produce a yes/no answer, i.e. a binary output. In this paper, we present a reproducible approach to DaNetQA creation and investigate transfer learning methods for task and language transfer. For task transfer we levera…
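
As a rough illustration of the input/output format the abstract describes, the sketch below feeds one (question, paragraph) pair to a pretrained multilingual encoder with a binary classification head. The checkpoint name, the Hugging Face transformers API, the toy example, and the label mapping are assumptions for illustration, not the paper's exact setup; the classification head here is untrained, so the prediction is only meaningful after fine-tuning on DaNetQA.

```python
# Minimal sketch of the DaNetQA task format: (question, Wikipedia paragraph) -> yes/no.
# Checkpoint and label mapping are assumptions, not the authors' exact configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"  # assumed multilingual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

question = "Москва является столицей России?"   # hypothetical example question
passage = "Москва — столица России, город федерального значения."  # paired paragraph

# Encode the question and paragraph as one sequence, as in the BoolQ setup.
inputs = tokenizer(question, passage, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Assumed convention: index 1 = "yes", index 0 = "no".
answer = "yes" if logits.argmax(dim=-1).item() == 1 else "no"
print(answer)
```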

Cited by 4 publications (2 citation statements)
References 24 publications

“…Originally, DaNetQA had a limited number of examples: 392, 295, 295 (train/val/test). We extended the dataset following the methodology described in [9], and converted a subset of MuSeRC into the yes/no QA setting, labeled by crowd-workers afterward. The new task contains 1750, 821, and 805 examples (train/val/test).…”
Section: DaNetQA
confidence: 99%
“…We address the yes/no QA task, that is, questions that can be answered with either a yes or a no. Transformer-based models and transfer learning are the cutting-edge technologies in yes/no QA (Yin et al. 2020; Ignatov 2021). The main existing research insights are that: (i) adapting pretrained language models to other tasks improves the accuracy in yes/no QA and (ii) the higher the similarity of these tasks to yes/no QA, the better the results.…”
Section: Introduction
confidence: 99%
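
To make the transfer-learning recipe mentioned in that statement concrete, here is a minimal sketch of sequential fine-tuning: the same encoder is first fine-tuned on a related yes/no task (e.g. English BoolQ, for task and language transfer) and then on DaNetQA, so weights learned on the intermediate task carry over. The `load_pairs` helper is hypothetical (it stands in for whatever loads (question, passage, label) examples as a Hugging Face `Dataset`), and the checkpoint and hyperparameters are assumptions, not the cited works' exact configuration.

```python
# Sketch of sequential (intermediate-task) fine-tuning for yes/no QA.
# `load_pairs` is a hypothetical loader returning a datasets.Dataset with
# "question", "passage", and "label" columns; checkpoint/epochs are assumptions.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "bert-base-multilingual-cased"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def encode(batch):
    # Encode each (question, paragraph) pair and attach the binary label.
    enc = tokenizer(batch["question"], batch["passage"],
                    truncation=True, padding="max_length", max_length=256)
    enc["labels"] = batch["label"]
    return enc

# Stage 1: related yes/no task (e.g. BoolQ); Stage 2: the target DaNetQA data.
for stage, dataset in [("boolq", load_pairs("boolq")),
                       ("danetqa", load_pairs("danetqa"))]:
    tokenized = dataset.map(encode, batched=True)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=f"ckpt-{stage}", num_train_epochs=3),
        train_dataset=tokenized,
    )
    trainer.train()  # weights from the previous stage carry over to the next
```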