Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019) 2019
DOI: 10.18653/v1/d19-6130

X-WikiRE: A Large, Multilingual Resource for Relation Extraction as Machine Comprehension

Abstract: Although the vast majority of knowledge bases (KBs) are heavily biased towards English, Wikipedias do cover very different topics in different languages. Exploiting this, we introduce a new multilingual dataset (X-WikiRE), framing relation extraction as a multilingual machine reading problem. We show that by leveraging this resource it is possible to robustly transfer models cross-lingually and that multilingual support significantly improves (zero-shot) relation extraction, enabling the population of low-reso…

Cited by 7 publications (17 citation statements) · References 27 publications
“…The improvement is +1.04%, +0.89% and +1.47% in average F1 score compared to XLM-15, XLM-R-base and XLM-R-large, respectively. We also evaluate on the less widely used cross-lingual QA dataset X-WikiRE (Abdou et al., 2019), for which we observe similar result trends and a 0.55% improvement in average F1 score on zero-shot QA. More details can be found in the Appendix (Section A.1).…”
Section: Zero- and Few-shot Cross-lingual NLI
confidence: 58%
“…Few-shot learning methods were initially introduced in the area of image classification (Vinyals et al., 2016; Ravi and Larochelle, 2017; Finn et al., 2017), but have recently also been applied to NLP tasks such as relation extraction (Han et al., 2018), text classification (Rethmeier and Augenstein, 2020) and machine translation (Gu et al., 2018). Specifically, in NLP, these few-shot learning approaches include: (i) the transformation of the problem into a different task (e.g., relation extraction is transformed into question answering (Levy et al., 2017; Abdou et al., 2019)); or (ii) meta-learning (Andrychowicz et al., 2016; Finn et al., 2017).…”
Section: Related Work
confidence: 99%
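The recasting mentioned in the statement above (relation extraction transformed into question answering, in the spirit of Levy et al., 2017 and Abdou et al., 2019) can be sketched as a simple data transformation. The templates, field names, and helper below are illustrative assumptions, not the actual schema of X-WikiRE.

```python
# Sketch: recasting relation extraction as machine reading.
# Each relation type is "querified" into a question template; the template
# wording and the dict keys here are hypothetical, for illustration only.
TEMPLATES = {
    "place_of_birth": "Where was {entity} born?",
    "occupation": "What is the occupation of {entity}?",
}

def to_qa_instance(context, entity, relation, answer=None):
    """Turn a (context, entity, relation[, answer]) tuple into a QA example.

    If `answer` is None, the instance is a negative example whose gold label
    is "no answer" — this is what lets a reading-comprehension model handle
    relations (or languages) unseen at training time, i.e. zero-shot RE.
    """
    question = TEMPLATES[relation].format(entity=entity)
    return {
        "context": context,
        "question": question,
        "answers": [answer] if answer is not None else [],
    }

example = to_qa_instance(
    context="Marie Curie was born in Warsaw.",
    entity="Marie Curie",
    relation="place_of_birth",
    answer="Warsaw",
)
print(example["question"])  # -> Where was Marie Curie born?
```

Any extractive QA model can then be trained on such instances; answering with a span extracts the relation's object, while abstaining marks the relation as absent.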
“…For the pretraining process, they collect pairs of English sentences based on the shared entities, annotated by an entity linking system. On the other hand, we propose a multilingual approach that utilizes Wikipedia and Wikidata, which are already available for many languages and have been successfully used for tasks such as multilingual question answering (Abdou et al., 2019) and named entity recognition (Nothman et al., 2013).…”
Section: Related Work
confidence: 99%