Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-1004
A Deep Generative Model of Vowel Formant Typology

Abstract: What makes some types of languages more probable than others? For instance, we know that almost all spoken languages contain the vowel phoneme /i/; why should that be? The field of linguistic typology seeks to answer these questions and, thereby, divine the mechanisms that underlie human language. In our work, we tackle the problem of vowel system typology, i.e., we propose a generative probability model of which vowels a language contains. In contrast to previous work, we work directly with the acoustic infor…

Cited by 4 publications (2 citation statements). References 18 publications.
“…Generative models have been in the vanguard of unsupervised learning. Techniques such as GMMs [62], Boltzmann machines [63], variational autoencoders [64], and generative adversarial networks (GANs) [65] have been successfully applied in numerous computer vision [66], speech recognition and generation [67], and natural language [68, 69] tasks. Those algorithms try to capture inner data probabilistic distribution to generate new similar data [70].…”
Section: Related Work
confidence: 99%
“…Feature prediction is a commonly used task in evaluating how well a given model is able to explain the typological features of languages (Daumé III and Campbell, 2007; Malaviya et al., 2017; Cotterell and Eisner, 2018; Ponti et al., 2018; Bjerva et al., 2019a). This is an important task which can highlight the extent to which a model has learned interdependencies between languages and features.…”
Section: Predicting Typological Features
confidence: 99%