Proceedings of the Workshop on Multilingual and Cross-Lingual Methods in NLP 2016
DOI: 10.18653/v1/w16-1205
Cross-lingual alignment transfer: a chicken-and-egg story?

Abstract: In this paper, we challenge a basic assumption of many cross-lingual transfer techniques: the availability of word aligned parallel corpora, and consider ways to accommodate situations in which such resources do not exist. We show experimentally that, here again, weakly supervised cross-lingual learning techniques can prove useful, once adapted to transfer knowledge across pairs of languages.

Cited by 1 publication (2 citation statements)
References 19 publications
“…More recent works focusing on reordering relied on statistics of various linguistic properties such as POS-tags (Wang and Eisner, 2016, 2018; Liu et al., 2020a) and syntactic relations (Rasooli and Collins, 2019). Such statistics can be taken from typological datasets such as WALS (Meng et al., 2019) or extracted from large corpora (Aufrant et al., 2016).…”
Section: Related Work
confidence: 99%
“…Our work is in line with the proposed solutions to source-sentence reordering, namely treebank reordering, which aim to rearrange the word order of source sentences by linearly permuting the nodes of their dependency-parse trees. Aufrant et al. (2016) and Wang and Eisner (2018) suggested permuting existing dependency treebanks to make their surface POS-sequence statistics close to those of the target language, in order to improve the performance of delexicalized dependency parsers in the zero-shot scenario. While some improvements were reported, these approaches rely on short POS n-grams and do not capture many important patterns.…”
Section: Related Work
confidence: 99%
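The treebank-reordering idea described above — permuting a head's dependents so that the projected surface POS sequence matches target-language statistics — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the bigram scores, function names, and the exhaustive per-head permutation search are all assumptions made for clarity (real systems estimate the statistics from target-language corpora or typological resources such as WALS, and use richer n-gram models).

```python
from itertools import permutations

# Hypothetical target-language POS-bigram scores (higher = more typical order).
# In practice these would be estimated from target-language data.
TARGET_BIGRAMS = {("DET", "ADJ"): 1.0, ("DET", "NOUN"): 1.5,
                  ("ADJ", "NOUN"): 2.0, ("NOUN", "ADJ"): 0.1}

def score(pos_seq):
    """Sum of target-language bigram scores over a surface POS sequence."""
    return sum(TARGET_BIGRAMS.get(b, 0.0) for b in zip(pos_seq, pos_seq[1:]))

def linearize(tree):
    """tree = (head_pos, [subtrees]).  Recursively reorder each subtree,
    then permute the head and its dependents (each kept contiguous, so the
    result stays projective) to maximize the target-language bigram score."""
    head, deps = tree
    units = [[head]] + [linearize(d) for d in deps]
    best = max(permutations(units),
               key=lambda p: score([t for u in p for t in u]))
    return [t for u in best for t in u]

# A French-like noun phrase (noun-adjective order) with a determiner:
tree = ("NOUN", [("DET", []), ("ADJ", [])])
print(linearize(tree))  # → ['DET', 'ADJ', 'NOUN'] (English-like order)
```

Because dependents are permuted only within their own head's span, the permutation is linear over the tree's nodes, matching the "linearly permuting the nodes of their dependency-parse trees" formulation in the citation above; real systems replace the exhaustive `permutations` search with greedy or sampled reordering for heads with many dependents.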