Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1056

Translate and Label! An Encoder-Decoder Approach for Cross-lingual Semantic Role Labeling

Abstract: We propose a Cross-lingual Encoder-Decoder model that simultaneously translates and generates sentences with Semantic Role Labeling annotations in a resource-poor target language. Unlike annotation projection techniques, our model does not need parallel data at inference time. Our approach can be applied in monolingual, multilingual and cross-lingual settings and is able to produce dependency-based and span-based SRL annotations. We benchmark the labeling performance of our model in different monolingual and…
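As a rough illustration of the idea in the abstract (a sketch under stated assumptions, not the authors' implementation), the snippet below shows a minimal PyTorch sequence-to-sequence model whose output vocabulary mixes target-language words with SRL label tokens, so decoding yields a translated, labeled sentence in a single pass. All class names, sizes, and the bracket-label scheme are illustrative assumptions.

import torch
import torch.nn as nn

# Minimal sketch (assumed, not the paper's code): a seq2seq model whose
# output vocabulary contains both target-language tokens and SRL label
# tokens such as "(A0" / "A0)", so the decoder emits a translated,
# span-labeled sentence in one pass.
class TranslateAndLabel(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, d_model=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        self.encoder = nn.LSTM(d_model, d_model, batch_first=True)
        self.decoder = nn.LSTM(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab)  # scores words and labels alike

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))        # encode source sentence
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)                              # (batch, tgt_len, tgt_vocab)

# Hypothetical gold output for "the woman saw the man" translated into German:
#   "(A0 die Frau A0) (V sah V) (A1 den Mann A1)"
model = TranslateAndLabel(src_vocab=8000, tgt_vocab=8000)
logits = model(torch.randint(0, 8000, (2, 12)), torch.randint(0, 8000, (2, 15)))

In this framing, labels are ordinary vocabulary items, which is what would let one architecture cover monolingual, multilingual, and cross-lingual settings: only the training sequences change.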

Cited by 15 publications (18 citation statements). References: 48 publications.
“…He et al (2019) propose a biaffine scorer with syntax rules to prune candidates, achieving SOTA independently in all languages of CoNLL-09. Mulcaire et al (2018) and Daza and Frank (2019) train a single model using input data from different languages and obtain modest improvements, especially for languages where less monolingual training data is available. In this sense, our X-SRL corpus contributes more compatible training data across languages, and aims to improve the performance of jointly trained multilingual models.…”
Section: Related Work
confidence: 99%
“…Table 7 summarizes the results. The model of Daza and Frank (2019) is an Encoder-Decoder model that was designed for multilingual SRL. It performs poorly when trained on monolingual data but improves significantly when trained with more data (multilingual setting).…”
Section: Training SRL Systems on X-SRL
confidence: 99%
“…Translation-based approaches have been gaining popularity in cross-lingual dependency parsing (Rasooli and Collins, 2015; Tiedemann, 2015) and have recently been applied to SRL (Fei et al, 2020). Daza and Frank (2019b) propose a cross-lingual encoder-decoder model that simultaneously translates and generates sentences with semantic role annotations in a resource-poor target language. Rather than creating annotations or models for a target language, other work aims to exploit the similarities between languages.…”
Section: Related Work
confidence: 99%
“…Translation-based approaches (Täckström et al, 2012; Fei et al, 2020; Rasooli and Collins, 2015) aim to alleviate the noise introduced by the source-side labeler by directly translating the gold-standard data into the target language. A third alternative is model transfer, where a source-language model is modified so that it can be directly applied to a new language, e.g., by employing cross-lingual word representations (Täckström et al, 2012; Swayamdipta et al, 2016; Daza and Frank, 2019a) and universal POS tags (McDonald et al, 2013).…”
Section: Introduction
confidence: 99%
“…Both the input and output sequences are variable-length and can be effectively represented in this model. The encoder-decoder model can be applied to NLP tasks and achieve promising performance [30]–[32].…”
Section: B. Encoder-Decoder Model
confidence: 99%
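To make the variable-length point in the statement above concrete, here is a minimal, self-contained sketch using standard PyTorch utilities (the sizes and the two example sentences are illustrative assumptions) of how an encoder batches sentences of different lengths by padding and packing them:

import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

# Two "sentences" of different lengths, already embedded (illustrative sizes).
seqs = [torch.randn(7, 256), torch.randn(4, 256)]
lengths = torch.tensor([7, 4])
batch = pad_sequence(seqs, batch_first=True)   # (2, 7, 256); the shorter row is padded
packed = pack_padded_sequence(batch, lengths, batch_first=True, enforce_sorted=False)
encoder = torch.nn.LSTM(256, 256, batch_first=True)
_, (h, c) = encoder(packed)   # final state summarizes each sentence; padding is ignored

The decoder side handles variable-length output the same way during training, and at inference it simply generates until an end-of-sequence token, which is why encoder-decoder models fit tasks where input and output lengths differ.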