Results from both dual tasks support the hypothesis that noise reduction (NR) reduces listening effort and frees up cognitive resources for other tasks. Future hearing aid research should incorporate objective measurements of cognitive benefits.
The results suggest that listening effort decreases with increased spectral resolution. Moreover, these improvements are best reflected in objective measures of listening effort, such as response times (RTs) on a secondary task, rather than in intelligibility scores or subjective effort measures.
This study compares two response-time measures of listening effort that can be combined with a clinical speech test for a more comprehensive evaluation of the total listening experience: verbal response times to auditory stimuli (RTaud) and response times to a visual task (RTvis) in a dual-task paradigm. The listening task was presented in five masker conditions: no noise, and two types of noise at two fixed intelligibility levels. Both RTaud and RTvis showed effects of noise; however, only RTaud showed an effect of intelligibility. Because of its simplicity of implementation, RTaud may be a useful effort measure for clinical applications.
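As an illustrative sketch only (not the study's analysis code, and with hypothetical condition labels), per-condition response times from such a paradigm might be summarized as follows:

```python
from statistics import median

def summarize_rts(trials):
    """Group (condition, rt_ms) trials and return the median RT per condition.

    Medians are often preferred over means for RT data because
    response-time distributions are typically right-skewed.
    """
    by_condition = {}
    for condition, rt in trials:
        by_condition.setdefault(condition, []).append(rt)
    return {c: median(rts) for c, rts in by_condition.items()}

# Invented example trials: condition label and response time in ms.
trials = [("no_noise", 410), ("no_noise", 430),
          ("noise_50pct", 520), ("noise_50pct", 560),
          ("noise_79pct", 470), ("noise_79pct", 490)]
print(summarize_rts(trials))
```

The same summary applies to either measure; only the source of the timestamps (verbal onset vs. visual-task keypress) differs.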
Auditory stream segregation was measured in cochlear implant (CI) listeners using a subjective yes-no task in which listeners indicated whether a sequence of stimuli was perceived as two separate streams or not. Stimuli were brief, 50-ms pulse trains A and B, presented in an A_B_A_A_B_A... sequence with 50 ms between consecutive stimuli. All stimuli were carefully loudness-balanced prior to the experiments. The cochlear electrode location of A was fixed, while the location of B was varied systematically. Measures of electrode discrimination and subjective perceptual difference were included for comparison. There was substantial intersubject variation in the pattern of results. One participant completed a second series of experiments, the results of which indicated that he was able to perceptually segregate stimuli that differed in cochlear electrode location, as well as stimuli that differed in temporal envelope. Although preliminary, these results suggest that some cochlear implant listeners can perceptually segregate stimuli based on differences in cochlear location as well as temporal envelope.
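The timing of the sequence described above (50-ms pulse trains separated by 50-ms gaps) can be sketched as follows; this is a hypothetical helper for illustration, not the authors' stimulus code:

```python
def sequence_onsets(pattern="ABAABA", burst_ms=50, gap_ms=50):
    """Return (label, onset_ms) pairs for a streaming sequence.

    Each burst lasts `burst_ms` and is followed by a silent gap of
    `gap_ms`, so consecutive onsets fall burst_ms + gap_ms apart.
    """
    period = burst_ms + gap_ms
    return [(label, i * period) for i, label in enumerate(pattern)]

# One cycle of the A_B_A_A_B_A pattern: onsets every 100 ms.
print(sequence_onsets())
```

With these parameters the B bursts fall exactly midway between surrounding A onsets, which is what makes the "one stream or two" judgment nontrivial.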
External degradations in incoming speech reduce understanding, and hearing impairment further compounds the problem. While cognitive mechanisms alleviate some of the difficulties, their effectiveness may change with age. In our research, reviewed here, we investigated cognitive compensation with hearing impairment, cochlear implants, and aging, via (a) phonemic restoration as a measure of top-down filling of missing speech, (b) listening effort and response times as a measure of increased cognitive processing, and (c) visual world paradigm and eye gazing as a measure of the use of context and its time course. Our results indicate that between speech degradations and their cognitive compensation, there is a fine balance that seems to vary greatly across individuals. Hearing impairment or inadequate hearing device settings may limit compensation benefits. Cochlear implants seem to allow the effective use of sentential context, but likely at the cost of delayed processing. Linguistic and lexical knowledge, which play an important role in compensation, may be successfully employed in advanced age, as some compensatory mechanisms seem to be preserved. These findings indicate that cognitive compensation in hearing impairment can be highly complicated—not always absent, but also not easily predicted by speech intelligibility tests only.
TABLE SDC1. Details of records included in the meta-analysis, broken down by relevant component studies. [Table not reproduced here. Columns: author(s) and year; study; n; group; age (years); onset of deafness; age at CI activation (years); duration of CI use (years); prosody; language; stimuli; cues (f0, intensity, duration); AFC; measure; comment. First entry: Agrawal et al. (2012).]
Speech perception is formed based on both the acoustic signal and listeners' knowledge of the world and semantic context. Access to semantic information can facilitate interpretation of degraded speech, such as speech in background noise or the speech signal transmitted via cochlear implants (CIs). This paper focuses on the latter, and investigates the time course of understanding words, and how sentential context reduces listeners' dependency on the acoustic signal for natural and degraded speech via an acoustic CI simulation. In an eye-tracking experiment, we combined recordings of listeners' gaze fixations with pupillometry to capture effects of semantic information on both the time course and effort of speech processing. Normal-hearing listeners were presented with sentences with or without a semantically constraining verb (e.g., crawl) preceding the target (baby), and their ocular responses were recorded to four pictures: the target, a phonological competitor (bay), a semantic competitor (worm), and an unrelated distractor. The results show that in natural speech, listeners' gazes reflect their uptake of acoustic information and integration of preceding semantic context. Degradation of the signal leads to a later disambiguation of phonologically similar words, and to a delay in integration of semantic information. Complementary to this, the pupil dilation data show that early semantic integration reduces the effort in disambiguating phonologically similar words. Processing degraded speech comes with increased effort due to the impoverished nature of the signal. Delayed integration of semantic information further constrains listeners' ability to compensate for inaudible signals.
In favorable listening conditions, cochlear-implant (CI) users can reach high speech recognition scores with as few as seven active electrodes. Here, we hypothesized that even when speech recognition is high, additional spectral channels may still benefit other aspects of speech perception, such as comprehension and listening effort. Twenty-five adult, postlingually deafened CI users, selected from two Dutch implant centers for high clinical word identification scores, participated in two experiments. Experimental conditions were created by varying the number of active electrodes of the CIs between 7 and 15. In Experiment 1, response times (RTs) on the secondary task in a dual-task paradigm were used as an indirect measure of listening effort, and in Experiment 2, sentence verification task (SVT) accuracy and RTs were used to measure speech comprehension and listening effort, respectively. Speech recognition was near ceiling for all conditions tested, as intended by the design. However, the dual-task paradigm failed to show the hypothesized decrease in RTs with an increasing number of spectral channels. The SVT did show a systematic improvement in both speech comprehension and response speed across all conditions. In conclusion, the SVT revealed additional benefits in both speech comprehension and listening effort for conditions in which high speech recognition was already achieved. Hence, adding spectral channels may provide benefits for CI listeners that may not be reflected by traditional speech tests. The SVT is a relatively simple task that is easy to implement and may therefore be a good candidate for identifying such additional benefits in research or clinical settings.
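The SVT yields two measures per condition: accuracy (comprehension) and RT on correct trials (effort). A hedged sketch of how such trials might be summarized, with invented data and a hypothetical trial format, not the study's code:

```python
def svt_summary(trials):
    """Summarize sentence verification trials per electrode condition.

    trials: list of (n_electrodes, correct, rt_ms) tuples.
    Returns {n_electrodes: (accuracy, mean RT of correct trials)},
    estimating comprehension and listening effort respectively.
    RTs from incorrect trials are excluded, since they may not
    reflect completed comprehension.
    """
    grouped = {}
    for n_elec, correct, rt in trials:
        grouped.setdefault(n_elec, []).append((correct, rt))
    summary = {}
    for n_elec, results in grouped.items():
        correct_rts = [rt for correct, rt in results if correct]
        accuracy = len(correct_rts) / len(results)
        mean_rt = sum(correct_rts) / len(correct_rts) if correct_rts else None
        summary[n_elec] = (accuracy, mean_rt)
    return summary

# Invented trials: 7- vs. 15-electrode conditions.
trials = [(7, True, 900), (7, False, 1100),
          (15, True, 700), (15, True, 750)]
print(svt_summary(trials))
```

Under the abstract's finding, accuracy would rise and mean RT would fall as the number of active electrodes increases.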