Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1270

Cross-Lingual Induction and Transfer of Verb Classes Based on Word Vector Space Specialisation

Abstract: Existing approaches to automatic VerbNet-style verb classification are heavily dependent on feature engineering and therefore limited to languages with mature NLP pipelines. In this work, we propose a novel cross-lingual transfer method for inducing VerbNets for multiple languages. To the best of our knowledge, this is the first study which demonstrates how the architectures for learning word embeddings can be applied to this challenging syntactic-semantic task. Our method uses cross-lingual translation pairs t…
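The abstract is truncated above, but the general recipe it points to can be illustrated with a small, hedged sketch: treat cross-lingual translation pairs as ATTRACT constraints that pull a verb and its translation together in a shared vector space, then transfer VerbNet class labels to target-language verbs via nearest neighbours. This is a minimal illustration under those assumptions, not the authors' implementation; all function names are invented for the example.

```python
# Minimal sketch (not the authors' implementation) of the idea in the abstract:
# cross-lingual translation pairs act as ATTRACT constraints that specialise a
# shared vector space; VerbNet classes are then transferred by nearest neighbour.
# All names here (attract_specialise, transfer_classes) are illustrative.
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def unit(a):
    return a / (np.linalg.norm(a) + 1e-9)

def attract_specialise(vectors, translation_pairs, lr=0.1, margin=0.6, epochs=5):
    """Pull each (en_verb, target_verb) pair closer than a random negative by `margin`."""
    vocab = list(vectors)
    for _ in range(epochs):
        for en_verb, tgt_verb in translation_pairs:
            u, v = vectors[en_verb], vectors[tgt_verb]
            neg = vectors[np.random.choice(vocab)]           # sampled negative example
            if cos(u, v) - cos(u, neg) < margin:             # hinge-style constraint violated
                vectors[en_verb] = unit(u + lr * (v - neg))  # move u toward v, away from neg
                vectors[tgt_verb] = unit(v + lr * (u - neg))
    return vectors

def transfer_classes(vectors, en_verb_to_class, target_verbs):
    """Give each target-language verb the VerbNet class of its nearest English verb."""
    labels = {}
    for t in target_verbs:
        nearest = max(en_verb_to_class, key=lambda en: cos(vectors[t], vectors[en]))
        labels[t] = en_verb_to_class[nearest]
    return labels
```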

Cited by 20 publications (20 citation statements) | References 60 publications (71 reference statements)
“…Also, given the cross-lingual potential of VerbNet classification, the next natural step would be to use this type of classification to support multilingual NLP. Recent work on cross-lingual word embeddings has demonstrated that they can support cross-lingual projection methods (e.g., Guo et al 2015; Ammar et al 2016; Upadhyay et al 2016; Vulić et al 2017). An avenue worth pursuing would be to investigate how verb classes obtained via our linguistically informed translation method compare to more pragmatically driven Brown-style clusters, as in the work of Täckström et al (2012) and Ammar et al (2016); such a comparative study could shed more light on the usefulness of linguistically motivated approaches to the transfer of linguistic structure across languages.…”
Section: Discussion
confidence: 99%
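For readers unfamiliar with the cross-lingual word embeddings referenced in the statement above, one common construction (stated here as an assumption, not as a claim about the cited systems) maps one monolingual space onto another with an orthogonal transform fitted on a seed translation dictionary, the Procrustes solution. The sketch below shows only that single step.

```python
# Illustrative sketch (an assumption, not the cited systems' code) of one common
# way to obtain cross-lingual word embeddings: fit an orthogonal map W from the
# source space to the target space over a seed translation dictionary
# (the Procrustes solution), then project every source vector with W.
import numpy as np

def procrustes_map(X_src, Y_tgt):
    """X_src, Y_tgt: (n, d) embeddings of n seed translation pairs.
    Returns the orthogonal W minimising ||X_src @ W - Y_tgt||_F."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

# Usage (shapes are assumptions):
# W = procrustes_map(X_seed, Y_seed)
# projected_src_space = all_src_vectors @ W
```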
“…[Table spilled into the extracted text: model configurations UNSUPERVISED, ORTHG-SUPER, ORTHG+SL+SYM, FULL-SUPER, FULL+SL, FULL+SL+NOD and FULL+SL+SYM, each listed with its seed dictionary, its self-learning variant (Artetxe et al., 2018b, with or without dropout; symmetric mutual nearest neighbours) and its pre-/post-processing steps, either S1-S4 (FULL) or length normalization only (partial S1).] …Vulić et al, 2017). PanLex currently spans around 1,300 language varieties with over 12M expressions: it offers some support and supervision also for low-resource language pairs (Adams et al, 2017).…”
Section: C1
confidence: 99%
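The "+SYM" configurations in the table residue above refer to symmetric dictionary induction. As a hedged illustration of that single step only (the full self-learning loop of Artetxe et al., 2018b also re-fits the mapping between inductions), the sketch below keeps a translation pair only when the two words are each other's nearest neighbours.

```python
# Sketch of the "symmetric: mutual nearest neighbours" filtering step only
# (the full self-learning loop also re-fits the mapping); rows of S and T are
# assumed to be length-normalised so dot products are cosine similarities.
import numpy as np

def mutual_nn_pairs(S, T):
    """S: (n, d) mapped source embeddings, T: (m, d) target embeddings.
    Returns (i, j) pairs where i and j are each other's nearest neighbours."""
    sims = S @ T.T
    s2t = sims.argmax(axis=1)   # nearest target index for every source word
    t2s = sims.argmax(axis=0)   # nearest source index for every target word
    return [(i, int(s2t[i])) for i in range(S.shape[0]) if t2s[s2t[i]] == i]
```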
“…AttentiveConvNet over (x, y) resembles conventional hypernymy classifiers, which take two representation vectors (one for x, the other for y) as input and output a label. Note that AttentiveConvNet puts filter weights over (x, y) to learn more abstract representations; this is in line with work such as (Fu et al., 2014; Vulić and Mrkšić, 2017; Glavaš and Ponzetto, 2017), which uses learned weights to project generic word representations into representations specialised for hypernymy annotations.…”
Section: Four-way AttentiveConvNet
confidence: 74%
“…Fu et al (2014) first use the skip-gram model to learn generic term embeddings from a large Chinese encyclopedia corpus, then learn a projection function from the generic space to the hypernymy space using annotated hypernym pairs. Other work that specialises generic word embeddings for the hypernymy detection task includes (Vulić and Mrkšić, 2017; Glavaš and Ponzetto, 2017).…”
Section: Mining Distributional Context From Text
confidence: 99%
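Both of the last two statements describe the same idea: learning a projection from a generic embedding space to a hypernymy-specialised space from annotated pairs. A minimal sketch of that idea follows, assuming a simple ridge-regularised least-squares projection; the function names and threshold are illustrative and not taken from the cited papers.

```python
# Minimal sketch of the projection idea described in the two statements above
# (in the spirit of Fu et al., 2014): learn a matrix Phi that maps a hyponym's
# generic embedding towards its hypernym's embedding from annotated pairs,
# via ridge-regularised least squares. Names and the threshold are illustrative.
import numpy as np

def learn_projection(hypo_vecs, hyper_vecs, reg=1e-3):
    """hypo_vecs, hyper_vecs: (n, d) embeddings of annotated (hyponym, hypernym) pairs.
    Solves Phi = argmin ||hypo_vecs @ Phi - hyper_vecs||_F^2 + reg * ||Phi||_F^2."""
    d = hypo_vecs.shape[1]
    A = hypo_vecs.T @ hypo_vecs + reg * np.eye(d)
    return np.linalg.solve(A, hypo_vecs.T @ hyper_vecs)   # (d, d) projection matrix

def looks_like_hypernym(phi, x_vec, y_vec, threshold=0.7):
    """Predict y as a hypernym of x if the projected x lands close to y."""
    px = x_vec @ phi
    sim = float(px @ y_vec / (np.linalg.norm(px) * np.linalg.norm(y_vec) + 1e-9))
    return sim > threshold
```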