2016
DOI: 10.1177/2331216516638516
Informational Masking in Normal-Hearing and Hearing-Impaired Listeners Measured in a Nonspeech Pattern Identification Task

Abstract: Individuals with sensorineural hearing loss (SNHL) often experience more difficulty with listening in multisource environments than do normal-hearing (NH) listeners. While the peripheral effects of sensorineural hearing loss certainly contribute to this difficulty, differences in central processing of auditory information may also contribute. To explore this issue, it is important to account for peripheral differences between NH and these hearing-impaired (HI) listeners so that central effects in multisource l…

Cited by 5 publications (12 citation statements)
References 68 publications (111 reference statements)
“…Still, it is clear that the acoustic features that support object formation rely on fine spectral and temporal features of sound, such as harmonic structure, interaural differences, timbre, and other features (Bregman, 1990; Carlyon, 2004; Darwin, 1997). It thus makes sense that listeners with elevated hearing thresholds, who have broader-than-normal cochlear tuning, poor temporal resolution, and reduced dynamic range, will have difficulty communicating in cocktail party settings (e.g., see Best, Mason, & Kidd, 2011; Best, Mason, Kidd, Iyer, & Brungart, 2015; Gallun, Diedesch, Kampel, & Jakien, 2013; Jakien, Kampel, Gordon, & Gallun, 2017; Roverud, Best, Mason, Swaminathan, & Kidd, 2016; Srinivasan, Jakien, & Gallun, 2016; see also the discussion in ). However, even listeners with NHTs may differ in the fidelity with which their ears encode acoustic inputs, which may in turn affect their ability to extract auditory objects from a complex acoustic mixture.…”
Section: Individuals Differ In Their Ability To Encode Fine Temporal mentioning
confidence: 99%
“…Roverud et al. (2016) did not directly test the hypothesis that the asymmetries in hearing thresholds contributed to (or were correlated with) the asymmetries in attention across frequency in the HI group. Roverud et al. (2016) proposed that the similar performance in selective and divided conditions may have been due to a failure to perceive the content at the two CFs as separate streams during the stimulus presentation (see Shinn-Cunningham & Best, 2008, for a related discussion). As a result, listeners may have always been performing the task by dividing attention—holding both patterns in memory and attempting to perform the selection of the desired pattern after each presentation.…”
mentioning
confidence: 95%
“…Counter to expectations, there was no significant difference in performance overall between the selective and divided attention tasks. Roverud et al. (2016) did not directly test the hypothesis that the asymmetries in hearing thresholds contributed to (or were correlated with) the asymmetries in attention across frequency in the HI group.…”
mentioning
confidence: 95%