2020
DOI: 10.48550/arxiv.2010.02605
Preprint

DaNetQA: a yes/no Question Answering Dataset for the Russian Language

Cited by 2 publications (2 citation statements)
References 0 publications

“…Originally, DaNetQA had a limited number of examples: 392, 295, 295 (train/val/test). We extended the dataset following the methodology described in [9], and converted a subset of MuSeRC into the yes/no QA setting, labeled by crowd-workers afterward. The new task contains 1750, 821, and 805 examples (train/val/test).…”
Section: DaNetQA
mentioning confidence: 99%
“…In the third place we have Russian, which has a version of SQuAD [75], a dataset for open-domain QA over Wikidata [134], a boolean QA dataset [91], and datasets for cloze-style commonsense reasoning and multi-choice, multi-hop RC [81].…”
Section: Monolingual Resources
mentioning confidence: 99%