2019
DOI: 10.31234/osf.io/fc4wh
Preprint

Early phonetic learning without phonetic categories: Insights from large-scale simulations on realistic input

Abstract: Before they even speak, infants become attuned to the sounds of the language(s) they hear, processing native phonetic contrasts more easily than non-native ones. For example, between 6-8 months and 10-12 months, infants learning American English get better at distinguishing English [ɹ] and [l], as in ‘rock’ vs ‘lock’, relative to infants learning Japanese. Influential accounts of this early phonetic learning phenomenon initially proposed that infants group sounds into native vowel- and consonant-like phonetic …

Cited by 38 publications (10 citation statements)
References 17 publications (41 reference statements)
“…Prominent among the few models that operate with raw phonetic data are Gaussian mixture models for category learning or phoneme extraction (Lee and Glass, 2012; Schatz et al., 2019). Schatz et al. (2019) propose a Dirichlet process Gaussian mixture model that learns categories from raw acoustic input in an unsupervised learning task. The model is trained on English and Japanese data, and the authors show that the asymmetry in perceptual [l]~[r] distinction between English and Japanese falls out automatically from their model.…”
Section: Previous Work
confidence: 99%
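The citation statement above describes a Dirichlet process Gaussian mixture model discovering sound categories from raw acoustic input without supervision. A minimal sketch of that general technique, not the authors' implementation, can be written with scikit-learn's truncated variational DP-GMM; the synthetic 2-D "frames" below stand in for real acoustic features and are purely illustrative.

```python
# Sketch of unsupervised category discovery with a truncated Dirichlet
# process Gaussian mixture (scikit-learn's variational DP-GMM).
# The data here are synthetic stand-ins for acoustic feature frames.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Three well-separated Gaussian clusters, mimicking sound categories.
frames = np.vstack([
    rng.normal([0, 0], 0.5, size=(200, 2)),
    rng.normal([4, 0], 0.5, size=(200, 2)),
    rng.normal([2, 3], 0.5, size=(200, 2)),
])

# Capacity for up to 10 components; the Dirichlet process prior drives
# the weights of unneeded components toward zero, so the number of
# categories is effectively inferred rather than fixed in advance.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    max_iter=500,
    random_state=0,
).fit(frames)

labels = dpgmm.predict(frames)
n_used = len(np.unique(labels))
print(f"components effectively used: {n_used}")
```

In the paper's setting the input would be speech features from English or Japanese corpora rather than toy Gaussians, and the learned components need not correspond to phonetic categories, which is precisely the question the preprint investigates.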
“…Shariatmadari, 2015); see also Dick Van Dyke’s Cockney chimney sweep in Mary Poppins). These types of sociolinguistic effects are uncontroversial, and have been well known since at least Labov’s (1963) famous Martha’s Vineyard study (see Hay, 2018, for a recent review), and are perhaps why exemplar accounts have had more influence in phonetics and phonology than in other areas of language acquisition (Pierrehumbert, 2001, 2002; see also Port & Leary, 2005; Ramscar & Port, 2016 on the impossibility of a discrete inventory of phonemes, and Schatz, Feldman, Goldwater, Cao, & Dupoux [submitted] for a computational model that simulates findings from infant phonetics research without making use of phonetic categories). Yet these effects are neglected entirely by mainstream accounts of word learning, inflectional morphology and syntax (see earlier sections) – and, indeed, by most of the computational exemplar models reviewed so far – which start from the assumption that learners represent an idealized word form (e.g.…”
Section: Phonetics and Phonology
confidence: 99%
“…For example, tools can allow for representations to be inferred (e.g. from behavioural data, or the performance of DNNs) which can subsequently be correlated against further test sets of brain and behavioural data (Battleday et al., 2019; Houlsby et al., 2013; Hsu et al., 2019; Ma & Peters, 2020; Sanders & Nosofsky, 2020; Schatz et al., 2019; Yamins et al., 2014; Zheng et al., 2019). From the perspective described here, this appears to be a promising direction of research, since it offers the possibility of ultimately empirically constraining the search space for representations, and might even lead to the development of tools for objectively testing some representational choices, in some domains at least.…”
Section: Model Comparison
confidence: 99%
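The statement above describes correlating inferred model representations against further brain or behavioural data. One common concrete form of this comparison is representational similarity analysis: compute pairwise-dissimilarity matrices from two sources and rank-correlate them. The sketch below uses synthetic feature matrices as stand-ins; in practice they would come from, say, a DNN layer and a neural recording.

```python
# Sketch of representational similarity analysis (RSA): correlate the
# representational geometry of two feature sets. Feature matrices here
# are synthetic; real analyses would use model and brain/behaviour data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_items = 20
model_feats = rng.normal(size=(n_items, 50))              # e.g. DNN activations
data_feats = model_feats + rng.normal(scale=0.3, size=(n_items, 50))  # e.g. noisy measurements

# Condensed pairwise-dissimilarity vectors (upper triangles of the RDMs).
rdm_model = pdist(model_feats, metric="correlation")
rdm_data = pdist(data_feats, metric="correlation")

# Rank correlation between the two RDMs measures how similar the two
# representational geometries are, independent of feature scaling.
rho, _ = spearmanr(rdm_model, rdm_data)
print(f"RSA Spearman rho: {rho:.2f}")
```

Spearman rank correlation is the conventional choice here because it does not assume a linear relationship between the two dissimilarity measures.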