2010
DOI: 10.1111/j.1551-6709.2009.01077.x

Cue Integration With Categories: Weighting Acoustic Cues in Speech Using Unsupervised Learning and Distributional Statistics

Abstract: During speech perception, listeners make judgments about the phonological category of sounds by taking advantage of multiple acoustic cues for each phonological contrast. Perceptual experiments have shown that listeners weight these cues differently. How do listeners weight and combine acoustic cues to arrive at an overall estimate of the category for a speech sound? Here, we present several simulations using mixture-of-Gaussians models that learn cue weights and combine cues on the basis of their distributi…
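The abstract describes the core mechanism: an unsupervised mixture-of-Gaussians model fit to each cue's distribution, with cue weights derived from how well the learned distributions separate the categories. The sketch below is a minimal illustration of that idea, not the authors' implementation; the EM routine, the d'-style weighting rule, and the synthetic VOT/F0 values are all assumptions made here for demonstration.

```python
import numpy as np

def fit_two_gaussians(x, n_iter=100):
    """Unsupervised EM for a one-dimensional, two-component Gaussian mixture."""
    mu = np.percentile(x, [25, 75]).astype(float)  # rough, data-driven start
    var = np.full(2, x.var())
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: each component's responsibility for each data point
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances
        n_k = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
        pi = n_k / len(x)
    return mu, var, pi

def cue_weight(mu, var):
    """One plausible reliability index: mean separation in pooled-SD units (d')."""
    return abs(mu[0] - mu[1]) / np.sqrt(var.mean())

rng = np.random.default_rng(0)
# Synthetic /b/-/p/ tokens: VOT (ms) separates the categories well; F0 (Hz) only weakly.
vot = np.concatenate([rng.normal(5, 8, 500), rng.normal(50, 8, 500)])
f0 = np.concatenate([rng.normal(100, 20, 500), rng.normal(110, 20, 500)])

for name, cue in [("VOT", vot), ("F0", f0)]:
    mu, var, _ = fit_two_gaussians(cue)
    print(f"{name}: category means = {np.sort(mu).round(1)}, weight = {cue_weight(mu, var):.2f}")
```

Run on these synthetic distributions, the model assigns VOT a much larger weight than F0, mirroring the paper's claim that cue weights can fall out of distributional statistics without supervision.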

Cited by 181 publications (222 citation statements)
References 76 publications
“…The trading of these cues in this study is consistent with the results of Alexander and Kluender (2009), which showed similar patterns for people with hearing impairment. It is reasonable to speculate that the limitations of sound frequency coding by CI processors result in gross reductions in the reliability of formant peaks, leading listeners to shift weight to more reliable cues, as suggested by Toscano and McMurray (2010).…”
Section: Discussion
confidence: 99%
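The weight-shifting account in this statement can be made concrete with a toy calculation. The sketch below assumes a simple rule, not taken from any of the cited papers, in which each cue's relative weight is proportional to its category separability: smearing the formant cue's distributions (a stand-in for coarse CI frequency coding) shifts weight toward the secondary cue.

```python
import numpy as np

def dprime(mu, sigma):
    """Separation between two category means in within-category-SD units."""
    return abs(mu[0] - mu[1]) / sigma

def relative_weights(cues):
    # Normalize separability scores so the weights sum to 1.
    d = np.array([dprime(mu, sigma) for mu, sigma in cues])
    return d / d.sum()

# Each cue: ((mean for category A, mean for category B), within-category SD).
# All values are illustrative, not measurements from the cited studies.
formant_clear    = ((2000.0, 2600.0), 120.0)  # formant peak, intact spectral coding
formant_degraded = ((2000.0, 2600.0), 400.0)  # formant peak, smeared by CI-style coding
duration         = ((80.0, 140.0), 25.0)      # a secondary temporal cue

print("clear:   ", relative_weights([formant_clear, duration]).round(2))
print("degraded:", relative_weights([formant_degraded, duration]).round(2))
```

With these numbers, degrading only the formant cue's reliability flips the weighting from formant-dominant to duration-dominant, the qualitative pattern the quoted authors speculate about.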
“…This raises the question of what mechanism underlies this flexibility. We test the hypothesis that implicit learning, which plays an important role in human skill acquisition (e.g., Plunkett & Juola, 1999; Toscano & McMurray, 2010 on language acquisition; Botvinick & Plaut, 2004, on sequential motor skill acquisition), also operates during language processing in adults (Chang, Dell, & Bock, 2006; Chang, Dell, Bock, & Griffin, 2000; Jaeger & Snider, in press; Kaschak, Kutta, & Coyle, 2012; Reitter, Keller, & Moore, 2011).…”
confidence: 99%
“…The variance of a particular cue for a particular category is closely related to how reliable that cue is at distinguishing one category from another (Allen & J. L. Miller, 2004; Clayards et al., 2008; Newman et al., 2001; Toscano & McMurray, 2010): for two categories with fixed means, increasing the variance of both categories means that their distributions will overlap more and, on average, observing that particular cue will be less informative about the intended category. Thus, for a cue which varies in reliability from one situation to the next (with relatively stable category means), the ideal adapter should in general be more likely to adjust category variance than means.…”
Section: Recalibration By Category Shift or Expansion?
confidence: 99%
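The quoted reasoning is easy to check numerically. The sketch below is an illustration rather than anything from the cited work: it holds two category means fixed, inflates both categories' variance, and measures how much an observed cue value says about the intended category, operationalized here as the average posterior probability assigned to the true category under equal priors.

```python
import numpy as np

def mean_posterior_correct(mu0, mu1, sigma, n=200_000, seed=0):
    """Average posterior probability assigned to the true category."""
    rng = np.random.default_rng(seed)
    # Sample half the tokens from each category (equal priors).
    x0 = rng.normal(mu0, sigma, n // 2)
    x1 = rng.normal(mu1, sigma, n // 2)

    def post0(x):
        # P(category 0 | x) for equal priors and equal variances.
        l0 = np.exp(-(x - mu0) ** 2 / (2 * sigma ** 2))
        l1 = np.exp(-(x - mu1) ** 2 / (2 * sigma ** 2))
        return l0 / (l0 + l1)

    return (post0(x0).mean() + (1 - post0(x1)).mean()) / 2

# Fixed means, growing variance: the cue becomes steadily less informative.
for sigma in [5, 10, 20, 40]:
    print(f"sigma={sigma:>2}: mean P(correct category) = "
          f"{mean_posterior_correct(0.0, 30.0, sigma):.3f}")
```

As sigma grows with the means held at 0 and 30, the average posterior for the true category falls toward the chance level of 0.5, which is exactly the overlap argument in the excerpt.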
“…Distributional learning has also been proposed as a mechanism by which linguistic categories are learned during language acquisition in infants (e.g., Aslin, Saffran, & Newport, 1998; Gómez & Gerken, 2000; Wonnacott, Newport, & Tanenhaus, 2008) and adults (e.g., Pajak & Levy, 2011). This includes the acquisition of phonetic categories (e.g., McMurray, Aslin, & Toscano, 2009; Toscano & McMurray, 2010; Vallabha et al., 2007). For example, infants have demonstrated sensitivity to the distribution of acoustic cues to phonetic categories as early as 6 months of age (Maye, Werker, & Gerken, 2002).…”
Section: Parallels Between Acquisition and Adaptation: Both Can Be Un…
confidence: 99%