2021
DOI: 10.48550/arxiv.2108.03509
Preprint
Compositional Generalization in Multilingual Semantic Parsing over Wikidata

Cited by 4 publications (5 citation statements) | References: 0 publications
“…Other KGQA datasets are Free917 (Cai and Yates, 2013), WebQuestions (Berant et al., 2013), ComplexQuestions (Bao et al., 2016), SimpleQuestions (Bordes et al., 2015), GraphQuestions (Su et al., 2016), WebQuestionsSP (Yih et al., 2016), 30MFactoidQA (Serban et al., 2016), ComplexWebQuestions (Talmor and Berant, 2018), PathQuestion (Zhou et al., 2018), MetaQA (Zhang et al., 2018), TempQuestions (Jia et al., 2018), TimeQuestions (Jia et al., 2021), CronQuestions (Saxena et al., 2021), FreebaseQA (Jiang et al., 2019), Compositional Freebase Questions (CFQ) (Keysers et al., 2019), Compositional Wikidata Questions (CWQ) (Cui et al., 2021), RuBQ (Korablinov and Braslavski, 2020; Rybin et al., 2021), GrailQA (Gu et al., 2021), EventQA (Souza Costa et al., 2020), SimpleDBpediaQA (Azmy et al., 2018), CLC-QuAD (Zou et al., 2021), KQA Pro (Shi et al., 2020), SimpleQuestionsWikidata (Diefenbach et al., 2017), DBNQA (Yin et al., 2019), etc. These datasets do not fulfill our current criteria and are therefore not part of the initial version of the KGQA leaderboard.…”
Section: KGQA Datasets
Confidence: 99%
“…Today, research in the field of KGQA is strongly data-dependent and suffers from a lack of multilingual benchmarks [6], [7]. To the best of our knowledge, only three KGQA benchmarks exist that cover multiple languages: QALD [3], RuBQ [4], and CWQ [5].…”
Section: Multilingual KGQA Benchmarks
Confidence: 99%
“…One of them is the lack of available benchmarks. To the best of our knowledge, there are only three mKGQA datasets in the research community (QALD-9 [3], RuBQ 2.0 [4], and CWQ [5]), and none of them fully meets the practical needs of researchers and developers (see Section II). Hence, even if one develops an mKGQA system, there are few opportunities for a full-fledged evaluation.…”
Section: Introduction
Confidence: 99%
“…MCWQ (Cui et al., 2022), a multilingual variant of CFQ, is the first adaptation of a compositional generalisation benchmark into multiple languages. It was created with the use of neural machine translation.…”
Section: Introduction
Confidence: 99%