Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1245

A Structured Variational Autoencoder for Contextual Morphological Inflection

Abstract: Statistical morphological inflectors are typically trained on fully supervised, type-level data. One remaining open research question is the following: How can we effectively exploit raw, token-level data to improve their performance? To this end, we introduce a novel generative latent-variable model for the semi-supervised learning of inflection generation. To enable posterior inference over the latent variables, we derive an efficient variational inference procedure based on the wake-sleep algorithm. We expe…
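To make the wake-sleep procedure mentioned in the abstract concrete, the toy sketch below alternates a wake step (updating a generative model p(z)p(x|z) on observed forms, with latent analyses proposed by a recognition network) and a sleep step (updating the recognition network q(z|x) on samples "dreamed" from the generative model). It illustrates the general algorithm only, not the paper's structured VAE: the categorical tag-to-form toy model, the class names, and the sizes are all assumptions made for this example.

# Minimal, self-contained wake-sleep sketch on a toy discrete latent-variable
# model. NOT the authors' implementation; all names and the toy model are
# hypothetical stand-ins for the paper's structured components.
import torch
import torch.nn as nn

N_TAGS, N_FORMS = 4, 10  # toy sizes: latent "tags" and observed "forms"

class Generative(nn.Module):
    """p(z) p(x | z): prior over tags plus a tag-conditioned emission."""
    def __init__(self):
        super().__init__()
        self.prior = nn.Parameter(torch.zeros(N_TAGS))
        self.emit = nn.Parameter(torch.zeros(N_TAGS, N_FORMS))
    def log_prob(self, x, z):
        lp_z = torch.log_softmax(self.prior, -1)[z]
        lp_x = torch.log_softmax(self.emit, -1)[z, x]
        return lp_z + lp_x
    def sample(self, n):
        z = torch.distributions.Categorical(logits=self.prior).sample((n,))
        x = torch.distributions.Categorical(logits=self.emit[z]).sample()
        return z, x

class Recognition(nn.Module):
    """q(z | x): approximate posterior over tags given an observed form."""
    def __init__(self):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(N_FORMS, N_TAGS))
    def log_prob(self, z, x):
        lp = torch.log_softmax(self.scores[x], -1)
        return lp.gather(-1, z.unsqueeze(-1)).squeeze(-1)
    def sample(self, x):
        return torch.distributions.Categorical(logits=self.scores[x]).sample()

gen, rec = Generative(), Recognition()
gen_opt = torch.optim.Adam(gen.parameters(), lr=0.1)
rec_opt = torch.optim.Adam(rec.parameters(), lr=0.1)
observed = torch.randint(0, N_FORMS, (64,))  # unlabeled, token-level data

for step in range(100):
    # Wake phase: fit the generative model to observed forms, using latent
    # tags proposed by the recognition network.
    with torch.no_grad():
        z_hat = rec.sample(observed)
    wake_loss = -gen.log_prob(observed, z_hat).mean()
    gen_opt.zero_grad(); wake_loss.backward(); gen_opt.step()

    # Sleep phase: fit the recognition network on "dreamed" (z, x) pairs
    # sampled from the generative model.
    with torch.no_grad():
        z_dream, x_dream = gen.sample(64)
    sleep_loss = -rec.log_prob(z_dream, x_dream).mean()
    rec_opt.zero_grad(); sleep_loss.backward(); rec_opt.step()

A property worth noting: neither phase requires marginalizing over the latent variable, which is what makes wake-sleep-style updates attractive when the latents are structured and exact posterior inference is intractable.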

Cited by 5 publications (8 citation statements)
References: 29 publications
“…This neural model did not surpass the CRF model with their designed features. The results of and Moeller and Hulden (2018) indicate that for conditions where we have extremely limited amount of labeled data, nonneural models with linguistic feature engineering still have an advantage, a finding also supported by Wolf-Sonkin et al (2018) for morphological inflection within context. Cotterell and Schütze (2018) incorporate semantics into segmentation.…”
Section: Surface Morphological Segmentation
confidence: 79%
“…For our purposes the existing approaches can be characterised by their application scenarios and assumptions about available datasets. Interesting work has been done within the neural, supervised and semi-supervised frameworks, e.g., (Ahlberg et al, 2015), (Ahlberg et al, 2014), (Koskenniemi et al, 2018), (Silfverberg et al, 2018), (Wolf-Sonkin et al, 2018), (Kirov and Cotterell, 2018), (Faruqui et al, 2016), (Faruqui et al, 2015), (Aharoni and Goldberg, 2016), (Cotterell et al, 2017). Much of this work assumes availability of partially labelled data, such as word paradigms and/or clean datasets, such as lists of 'headwords' (lemmas) from which paradigms are generated.…”
Section: Previous Work
confidence: 99%
“…Inflection generation in context is a novel SIGMORPHON challenge and, generally, a less studied problem. Recently, Wolf-Sonkin et al (2018) propose a context-aware deep generative graphical model that generates sequences of inflected words.…”
Section: State of the Art
confidence: 99%
“…The CoNLL-SIGMORPHON 2018 Shared Task on Universal Morphological Reinflection (Cotterell et al, 2018) focuses on inflection generation at the type level (Task I) and in context (Task II). Both tasks feature three settings depending on the maximum number of training examples: low, medium, and high.…”
Section: Introduction
confidence: 99%