2000
DOI: 10.3758/bf03205546
Speech perception without hearing

Abstract: In this study of visual phonetic speech perception without accompanying auditory speech stimuli, adults with normal hearing (NH; n = 96) and with severely to profoundly impaired hearing (IH; n = 72) identified consonant-vowel (CV) nonsense syllables and words in isolation and in sentences. The measures of phonetic perception were the proportion of phonemes correct and the proportion of transmitted feature information for CVs, the proportion of phonemes correct for words, and the proportion of phonemes correct an…


Cited by 201 publications (209 citation statements)
References 64 publications
“…It has been hypothesized that visual cues enhance speech understanding because they provide segmental and suprasegmental information that is complementary (i.e., place of articulation) to the acoustic speech cues, and because they reduce the attentional demands placed on the auditory signal (8). Audiovisual spoken word recognition therefore appears to be more than the simple addition of auditory and visual information (9). A well-known example of the robustness of audiovisual spoken word recognition is the "McGurk" effect (10).…”
Section: Auditory-Visual Speech Integration
Citation type: mentioning (confidence: 99%)
“…On the reading comprehension subtest, students read short passages and then answered multiple-choice questions. Previous research at GU in Bernstein, Demorest, and Tucker (2000) showed a significant positive correlation (r values between .26 and .40) between reading (as measured with the GU EPT) and lipreading performance in deaf adults with English as a first language. The vocabulary and reading comprehension subtests of the SAT-8 (Psychological Corporation, 1989) were substituted for the EPT measures at the CA sites and were administered to all participants.…”
Section: Materials and Procedures
Citation type: mentioning (confidence: 99%)
“…Although a great deal of information can be gleaned from static faces, the motion of dynamic faces contains information about identity and emotion not present in static faces (Ambadar, Schooler & Cohn, 2005;Hill & Johnson, 2001;Knappmeyer, Thornton & Bülthoff, 2003;O'Toole, Roark, & Abdi, 2002;Lander & Bruce, 2000). Facial motion also contains linguistic information, as evidenced by the fact that silent speechreading is possible (Bernstein, Demorest & Tucker, 2000). Rarely though, is this visual speech information present in the complete absence of auditory speech information and audiovisual speech perception is the natural manner of communication.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)