2005
DOI: 10.1016/j.cogpsych.2005.05.001

Perceptual learning for speech: Is there a return to normal?

Abstract: Recent work on perceptual learning shows that listeners' phonemic representations dynamically adjust to reflect the speech they hear (Norris, McQueen, & Cutler, 2003). We investigate how the perceptual system makes such adjustments, and what (if anything) causes the representations to return to their pre-perceptual learning settings. Listeners are exposed to a speaker whose pronunciation of a particular sound (either /s/ or /ʃ/) is ambiguous (e.g., halfway between /s/ and /ʃ/). After exposure, participants are…


Cited by 282 publications (484 citation statements); references 49 publications.
“…In fact, just such a role for lexical context was explicitly suggested by McClelland and Elman [2]. And indeed, in accordance with this, several recent experiments [30][31][32][33][34][35] have demonstrated that lexical influences can also guide tuning of speech perception. When listeners heard a perceptually ambiguous /s/-/f/ sound at the end of an utterance that would be a word if completed with /s/, they identified the sound as /s/.…”
Section: Tuning of Speech Perception (supporting)
confidence: 50%
“…Here the same lexical feedback that influences identification of ambiguous speech sounds provides the guidance for tuning the mapping between acoustic and speech sound representations. The TRACE model also accounts for the pattern of generalization seen in several other studies [33], based on the idea that generalization of the tuning effect will be determined by the acoustic similarity between the learned sounds and novel sounds [37].…”
Section: Tuning of Speech Perception (mentioning)
confidence: 99%
“…The apparent contradiction between the lexically-guided retuning results and the results of exp. 2 may be due to the specificity of the adaptation; Kraljic and Samuel (2005) found that lexically guided perceptual retuning for fricatives along an [s]-[ʃ] continuum transferred from a female training voice to a male test voice, but not in the opposite direction. The authors attribute this asymmetry to the fact that the female training stimuli were close to the frequency of the male test items, while the male training stimuli were far from the female test stimuli, suggesting that transfer may depend on acoustic similarity.…”
Section: Results (mentioning)
confidence: 99%
“…It is interesting that the remapping during production generalized to the CFC processes during perception, given that perceptual retuning is notoriously specific (Kraljic and Samuel, 2005; Reinisch et al., 2014). This is somewhat true as well for adaptation to altered feedback.…”
Citation type: mentioning
confidence: 99%
“…Moreover, this perceptual learning effect proved to be talker-specific and stable over time (Eisner & McQueen, 2005). Investigating whether there is "a return to normal", Kraljic and Samuel (2005) found that only canonical pronunciation of both phonemes (/s/ and /ʃ/ in this case) appeared to be able to reset the phonemic categories to pre-learning parameters. Importantly, these unambiguous instances had to be uttered by the same speaker listeners had been trained on.…”
Section: Speaker-specific Learning (mentioning)
confidence: 99%