2021
DOI: 10.1007/978-3-030-77385-4_32

RuBQ 2.0: An Innovated Russian Question Answering Dataset

Cited by 12 publications (7 citation statements) | References 20 publications
“…Other KGQA datasets are Free917 (Cai and Yates, 2013), WebQuestions (Berant et al., 2013), ComplexQuestions (Bao et al., 2016), SimpleQuestions (Bordes et al., 2015), GraphQuestions (Su et al., 2016), WebQuestionsSP (Yih et al., 2016), 30MFactoidQA (Serban et al., 2016), ComplexWebQuestions (Talmor and Berant, 2018), PathQuestion (Zhou et al., 2018), MetaQA (Zhang et al., 2018), TempQuestions (Jia et al., 2018), TimeQuestions (Jia et al., 2021), CronQuestions (Saxena et al., 2021), FreebaseQA (Jiang et al., 2019), Compositional Freebase Questions (CFQ) (Keysers et al., 2019), Compositional Wikidata Questions (CWQ) (Cui et al., 2021), RuBQ (Korablinov and Braslavski, 2020; Rybin et al., 2021), GrailQA (Gu et al., 2021), EventQA (Souza Costa et al., 2020), SimpleDBpediaQA (Azmy et al., 2018), CLC-QuAD (Zou et al., 2021), KQA Pro (Shi et al., 2020), SimpleQuestionsWikidata (Diefenbach et al., 2017), DBNQA (Yin et al., 2019), etc. These datasets do not fulfill our current criteria and thus are not part of the initial version of the KGQA leaderboard.…”
Section: KGQA Datasets (mentioning)
confidence: 99%
“…However, languages besides English and Chinese are not covered, and the work does not provide a deeper analysis of the issues faced in the SPARQL query generation process when working with Wikidata. The RuBQ benchmark series [17,26], which was initially based on questions from Russian quizzes (totaling 2,910 questions), has also been translated to English via machine translation. The SPARQL queries over Wikidata were generated automatically and manually validated by the authors.…”
Section: Multilinguality in KGQA (mentioning)
confidence: 99%
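The statement above notes that RuBQ pairs natural-language questions with automatically generated, manually validated SPARQL queries over Wikidata. As a rough illustration of what such a question-query pair looks like, here is a minimal Python sketch that runs one RuBQ-style query against the public Wikidata endpoint. The example question, the `entry` structure, and the query itself are illustrative assumptions, not records taken from the dataset.

```python
# Minimal sketch (not from the RuBQ paper) of a RuBQ-style record pairing
# a question (Russian + English) with a Wikidata SPARQL query, plus a
# helper that validates the query against the public Wikidata endpoint.
import requests

WIKIDATA_ENDPOINT = "https://query.wikidata.org/sparql"

# Hypothetical RuBQ-style entry, invented for illustration.
# wd:Q161531 = War and Peace, wdt:P50 = author
entry = {
    "question_ru": "Кто написал роман «Война и мир»?",
    "question_en": "Who wrote the novel War and Peace?",
    "sparql": "SELECT ?answer WHERE { wd:Q161531 wdt:P50 ?answer }",
}

def run_query(sparql: str) -> list:
    """Execute a SPARQL query against Wikidata and return the bindings."""
    resp = requests.get(
        WIKIDATA_ENDPOINT,
        params={"query": sparql, "format": "json"},
        headers={"User-Agent": "rubq-example/0.1"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]["bindings"]

if __name__ == "__main__":
    for binding in run_query(entry["sparql"]):
        # Expected answer: the Wikidata URI for Leo Tolstoy (wd:Q7243)
        print(binding["answer"]["value"])
```

Running the validated query and checking the returned entity against a gold answer is one plausible way the authors' manual validation step could be reproduced.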
“…Today, research in the field of KGQA strongly depends on data, and it suffers from a lack of multilingual benchmarks [6], [7]. To the best of our knowledge, only three KGQA benchmarks exist that tackle multiple languages: QALD [3], RuBQ [4], and CWQ [5].…”
Section: Multilingual KGQA Benchmarks (mentioning)
confidence: 99%
“…One of them is the lack of available benchmarks. To the best of our knowledge, there are only three mKGQA datasets in the research community (QALD-9 [3], RuBQ 2.0 [4], and CWQ [5]), and none of them fully meets the practical needs of researchers and developers (see Section II). Hence, even if one develops an mKGQA system, there are few opportunities for full-fledged evaluation.…”
Section: Introduction (mentioning)
confidence: 99%