2008
DOI: 10.1121/1.2839013

A glimpsing account for the benefit of simulated combined acoustic and electric hearing

Abstract: The benefits of combined electric and acoustic stimulation (EAS) in terms of speech recognition in noise are well established; however, the underlying factors responsible for this benefit are not clear. The present study tests the hypothesis that having access to acoustic information in the low frequencies makes it easier for listeners to glimpse the target. Normal-hearing listeners were presented with vocoded speech alone (V), low-pass (LP) filtered speech alone, combined vocoded and LP speech (LP+V) and with…
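For readers unfamiliar with the paradigm, the sketch below illustrates how the three listening conditions named in the abstract could be simulated: a noise-excited channel vocoder stands in for electric (CI) hearing, and a low-pass filtered copy of the same speech stands in for residual acoustic hearing. It is a minimal illustration only; the channel count, analysis band edges, low-pass cutoff, and filter choices are assumptions for the example, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def _bandpass(lo, hi, fs, order=4):
    return butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")

def noise_vocode(x, fs, n_channels=8, f_lo=300.0, f_hi=5500.0):
    """Noise-excited envelope vocoder: a stand-in for electric-only hearing (V).
    Channel count and analysis range are illustrative assumptions."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)     # log-spaced band edges
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = _bandpass(lo, hi, fs)
        band = sosfilt(sos, x)
        env = np.abs(hilbert(band))                       # temporal envelope of the band
        carrier = sosfilt(sos, np.random.randn(len(x)))   # band-limited noise carrier
        out += env * carrier
    return out

def simulate_eas(x, fs, lp_cutoff=600.0):
    """Return the three conditions: vocoded (V), low-pass (LP), combined (LP+V)."""
    v = noise_vocode(x, fs)
    sos_lp = butter(6, lp_cutoff, btype="low", fs=fs, output="sos")
    lp = sosfilt(sos_lp, x)                               # simulated residual acoustic hearing
    return v, lp, v + lp
```

Intelligibility of the LP+V mixture in noise can then be compared against V alone to quantify the simulated EAS benefit.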

Cited by 60 publications (74 citation statements)
References 39 publications
“…Second, harmonicity cues contained in the low-frequency acoustic signal may improve listeners' ability to segment syllable, word, and phrase boundaries, thereby helping them to accurately decode spectrally degraded signals from the CI ear (Spitzer et al, 2009; Zhang et al, 2010; Kong et al, 2015). As discussed by Li and Loizou (2008) and Dorman and Gifford (2008), low-frequency fine-structure cues improve listeners' access to robust acoustic landmarks (Stevens, 2002), such as the onset of voicing, that mark syllable structure and word boundaries. Third, a process known as "glimpsing" may contribute to EAS benefit when speech occurs in competing backgrounds (Cooke, 2006; Kong and Carlyon, 2007; Li and Loizou, 2008; Brown and Bacon, 2009a,b).…”
mentioning, confidence: 99%
“…As discussed by Li and Loizou (2008) and Dorman and Gifford (2008), low-frequency fine-structure cues improve listeners' access to robust acoustic landmarks (Stevens, 2002), such as the onset of voicing, that mark syllable structure and word boundaries. Third, a process known as "glimpsing" may contribute to EAS benefit when speech occurs in competing backgrounds (Cooke, 2006; Kong and Carlyon, 2007; Li and Loizou, 2008; Brown and Bacon, 2009a,b). When a competing signal is present, portions of the target speech are masked by the interfering sound, causing temporal and spectral interruptions in the audible speech stream.…”
mentioning, confidence: 99%
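The glimpsing mechanism described in these statements can be made concrete with a simple measure in the spirit of Cooke (2006): the proportion of spectro-temporal cells in which the target locally dominates the masker. The sketch below is only an illustration; the window length, hop, and the 3 dB local-SNR criterion are assumptions, not parameters taken from the cited studies.

```python
import numpy as np
from scipy.signal import stft

def glimpse_proportion(target, masker, fs, snr_criterion_db=3.0):
    """Fraction of time-frequency cells where target power exceeds masker power
    by at least snr_criterion_db. Assumes target and masker are time-aligned
    signals of equal length."""
    _, _, T = stft(target, fs=fs, nperseg=512, noverlap=384)
    _, _, M = stft(masker, fs=fs, nperseg=512, noverlap=384)
    eps = 1e-12                                   # avoid division by / log of zero
    local_snr_db = 10.0 * np.log10((np.abs(T) ** 2 + eps) /
                                   (np.abs(M) ** 2 + eps))
    return float(np.mean(local_snr_db > snr_criterion_db))
```

On this view, an audible low-frequency acoustic signal increases the number of usable low-frequency cells, which is the intuition behind the glimpsing account of the EAS benefit tested in the paper.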
“…Our results also showed that in quiet environments, for both CI and bimodal listening, adding low-frequency information to telephone speech makes no difference in terms of speech recognition; in contrast, extending high-frequency information to telephone speech significantly improved speech recognition. In noisy conditions, it is expected that bandwidth extension toward lower frequencies will benefit bimodal users due to the improved glimpsing and F0 representation facilitated by the use of an HA (Li & Loizou, 2008; Zhang et al, 2010a). The results of this study provide support for the design of algorithms that would extend higher frequency information, at least in quiet environments.…”
Section: Discussion; mentioning, confidence: 56%
“…Although the score for the EAS setting in our experiment is close to scores reported by the other authors, the benefit observed is smaller. This is probably due to the larger number of channels in our CI simulation (which gives a better baseline performance for the CI only condition) and the use of babble noise instead of a single competing talker (which facilitates glimpsing the target, see Li and Loizou, 2008) of the other sex (which results in a very different F0 and an easier setting).…”
Section: A Benefit of the Fundamental Frequency Cue in Simulated EAS; mentioning, confidence: 99%