Trimodal Speech Perception (2015)
DOI: 10.1097/aud.0000000000000131

Abstract: These findings highlight the need for a comprehensive prediction of trimodal (acoustic, electric, and visual) postimplant speech-reception performance to inform implantation decisions. The increased influence of residual acoustic hearing under auditory-visual conditions should be taken into account when considering surgical procedures or devices that are intended to preserve acoustic hearing in the implanted ear. This is particularly relevant when evaluating the candidacy of a current bimodal CI user for a sec…

Cited by 11 publications (5 citation statements) · References 35 publications

Citation statements:
“…There was less benefit from the combination of video and audio (representing a redundancy effect) for 7 consonants in the EHA group and 11 consonants in the ENH group in the silent condition. This finding is in line with the notion that the benefits of audiovisual presentation over auditory presentation are greatest under degraded listening conditions, such as noise (see Moradi et al., 2013) or hearing loss (see Sheffield, Schuchman, & Bernstein, 2015), when access to critical auditory cues for the identification of consonants is impoverished by background noise or by a reduction in auditory acuity due to hearing loss. The addition of visual cues to a degraded auditory signal is a major source of disambiguation, as it provides complementary cues about the place of articulation (Summerfield, 1987) and indicates where and when to expect the onset and offset of a given consonant (see Best, Ozmeral, & Shinn-Cunningham, 2007).…”
Section: Discussion (supporting)
confidence: 86%
“…Regarding spatial segregation of target speech from competing sounds, BiCI users function well if they can use monaural head shadow, that is, if they can hear the target well with a good signal-to-noise ratio. However, children with BiCIs have not shown source segregation benefits that depend on binaural integration cues (such as binaural squelch) when presented with stimuli in the sound field through their clinical processors (Misurelli & Litovsky, 2015; Sheffield, Schuchman, & Bernstein, 2015; Van Deun, van Wieringen, & Wouters, 2010). Thus, the SRM measured to date has most likely resulted from monaural head shadow.…”
Section: Introduction (mentioning)
confidence: 99%
“…Previous studies have provided evidence that top-down processing (e.g., context effects, phonemic restoration, cue integration) plays a role in speech recognition when listeners receive spectrally degraded speech signals (e.g., Başkent, 2012; Kong & Braida, 2011; Kong et al., 2015; Loebach, Pisoni, & Svirsky, 2010; Peng, Chatterjee, & Lu, 2012; Sheffield, Schuchman, & Bernstein, 2015; Yang & Zeng, 2013). The current study extends that work by demonstrating a shift in cue-weighting strategy in a listening situation where one of the cues is perceptually degraded.…”
Section: Discussion (mentioning)
confidence: 99%