2006
DOI: 10.1016/j.neuroimage.2005.10.002
Perceiving identical sounds as speech or non-speech modulates activity in the left posterior superior temporal sulcus

Cited by 124 publications (112 citation statements)
References 45 publications
“…Unfortunately, the results of these studies are not entirely consistent. When intelligible stimuli were contrasted with unintelligible stimuli, the studies led by Möttönen et al (2006) and by Meyer et al (2005) revealed activation in the left posterior STS/MTG, whereas Liebenthal et al (2003) did not report any areas of increased activation for conditions of increased intelligibility for the sine-wave stimuli, but instead showed deactivation peaks in HG bilaterally and in the posterior STG 23 mm away from our speech peak voxel (−51, −30, +19). The differences in results between the studies may be caused either by the fact that only a minority of listeners in the Liebenthal et al study perceived the stimuli as speech, even after training, or by the fact that the experimental task did not require participants to make use of their linguistic knowledge to perform well.…”
Section: Perceptual Learning Studies
confidence: 56%
“…A number of studies have exploited learning in order to contrast acoustically identical stimuli that are perceived as nonspeech by naïve listeners, but perceived as intelligible speech by trained listeners (Möttönen et al, 2006; Meyer et al, 2005; Liebenthal, Binder, Piorkowski, & Remez, 2003). These studies used sine-wave speech and scanned listeners first before training, when stimuli were unintelligible, and then again after a period of training.…”
Section: Perceptual Learning Studies
confidence: 99%
“…Several recent studies comparing phonetic sounds to acoustically matched nonphonetic sounds (Dehaene-Lambertz et al, 2005; Liebenthal et al, 2005; Möttönen et al, 2006) or to noise (Binder et al, 2000; Rimol et al, 2005) have shown activation specifically in this brain region. Two factors might explain these discordant findings.…”
Section: Processing Of Speech Compared To Unfamiliar Rotated Speech Sounds
confidence: 99%
“…Typically, though, once subjects are told that these sounds are actually derived from speech, they cannot switch back to a non-speech mode again and continue to hear the sounds as speech. Functional brain imaging studies have provided converging evidence that for listeners in speech mode, there is stronger activity in the left superior temporal sulcus than for listeners in non-speech mode (Möttönen et al, 2006). Moreover, if SWS sounds are combined with lipread speech, naïve subjects in non-speech mode show no or only negligible intersensory integration (lipread information biasing speech sound identification), while subjects who learned to perceive the same auditory stimuli as speech do integrate the auditory and visual stimuli in a similar manner as natural speech (Tuomainen et al, 2005).…”
Section: Introduction
confidence: 99%