This study aimed to clarify the basic auditory and cognitive processes that affect listeners' performance on two spatial listening tasks: sound localization and speech recognition in spatially complex, multi-talker situations. Twenty-three elderly listeners with mild-to-moderate sensorineural hearing impairments were tested on the two spatial listening tasks, a measure of monaural spectral ripple discrimination, a measure of binaural temporal fine structure (TFS) sensitivity, and two (visual) cognitive measures indexing working memory and attention. All auditory test stimuli were spectrally shaped to restore (partial) audibility for each listener on each listening task. Eight younger normal-hearing listeners served as a control group. Data analyses revealed that the chosen auditory and cognitive measures could predict neither sound localization accuracy nor speech recognition when the target and maskers were separated along the front-back dimension. When the competing talkers were separated along the left-right dimension, however, speech recognition performance was significantly correlated with the attentional measure. Furthermore, supplementary analyses indicated additional effects of binaural TFS sensitivity and average low-frequency hearing thresholds. Altogether, these results are in support of the notion that both bottom-up and top-down deficits are responsible for the impaired functioning of elderly hearing-impaired listeners in cocktail party-like situations.
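The abstract notes that all stimuli were "spectrally shaped to restore (partial) audibility". As a hedged illustration only (the function name, band frequencies, and gain values below are invented and are not the study's actual fitting rule), frequency-dependent gain can be applied to a signal with a simple FFT filter:

```python
import numpy as np

def spectral_shape(signal, fs, band_freqs_hz, gains_db):
    """Apply frequency-dependent gain (dB values linearly interpolated
    across the spectrum) to a signal via FFT filtering."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    gain_db = np.interp(freqs, band_freqs_hz, gains_db)
    return np.fft.irfft(spec * 10 ** (gain_db / 20), n=len(signal))

# Demo: boost a 1-kHz tone by 6 dB while leaving other bands flat
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
shaped = spectral_shape(tone, fs, [250, 1000, 4000], [0.0, 6.0, 0.0])
print(shaped.std() / tone.std())  # ≈ 1.995, i.e. +6 dB
```

In practice such gains would be derived from each listener's audiogram; here they are hard-coded purely to make the mechanics visible.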
The relationships between spatial speech recognition (SSR; the ability to understand speech in complex spatial environments), binaural temporal fine structure (TFS) sensitivity, and three cognitive tasks were assessed for 17 hearing-impaired listeners. Correlations were observed between SSR, TFS sensitivity, and two of the three cognitive tasks, which became non-significant when age effects were controlled for, suggesting that reduced TFS sensitivity and certain cognitive deficits may share a common age-related cause. The third cognitive measure was also significantly correlated with SSR, but not with TFS sensitivity or age, suggesting an independent non-age-related cause.
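Controlling a correlation for age, as described above, amounts to computing a partial correlation: correlate the residuals of both measures after regressing out age. A minimal sketch on synthetic data (the variable names and simulated effect sizes are assumptions for illustration, not the study's data):

```python
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation between x and y after the linear effect of z
    has been removed from both (OLS residualization)."""
    x, y, z = map(np.asarray, (x, y, z))
    Z = np.column_stack([np.ones_like(z, dtype=float), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Two measures driven by a shared "age" factor, as hypothesized above
rng = np.random.default_rng(0)
age = rng.uniform(60, 80, 17)
tfs = 0.5 * age + rng.normal(0, 1, 17)   # TFS threshold worsens with age
ssr = 0.5 * age + rng.normal(0, 1, 17)   # SSR score also worsens with age
print(np.corrcoef(tfs, ssr)[0, 1])       # raw correlation: substantial
print(partial_corr(tfs, ssr, age))       # shrinks once age is partialled out
```

The shrinkage of the partial correlation toward zero is exactly the pattern the abstract reports for two of the three cognitive tasks.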
Knowledge of how executive functions relate to preferred hearing aid (HA) processing is sparse and seemingly inconsistent with related knowledge for speech recognition outcomes. This study thus aimed to find out if (1) performance on a measure of reading span (RS) is related to preferred binaural noise reduction (NR) strength, (2) similar relations exist for two different, non-verbal measures of executive function, (3) pure-tone average hearing loss (PTA), signal-to-noise ratio (SNR), and microphone directionality (DIR) also influence preferred NR strength, and (4) preference and speech recognition outcomes are similar. Sixty elderly HA users took part. Six HA conditions consisting of omnidirectional or cardioid microphones followed by inactive, moderate, or strong binaural NR as well as linear amplification were tested. Outcome was assessed at fixed SNRs using headphone simulations of a frontal target talker in a busy cafeteria. Analyses showed positive effects of active NR and DIR on preference, and negative and positive effects of, respectively, strong NR and DIR on speech recognition. Also, while moderate NR was the most preferred NR setting overall, preference for strong NR increased with SNR. No relation between RS and preference was found. However, larger PTA was related to weaker preference for inactive NR and stronger preference for strong NR for both microphone modes. Equivalent (but weaker) relations between worse performance on one non-verbal measure of executive function and the HA conditions without DIR were found. For speech recognition, there were relations between HA condition, PTA, and RS, but their pattern differed from that for preference. Altogether, these results indicate that, while moderate NR works well in general, a notable proportion of HA users prefer stronger NR. Furthermore, PTA and executive functions can account for some of the variability in preference for, and speech recognition with, different binaural NR and DIR settings.
Objective: It has been suggested that the next major advancement in hearing aid (HA) technology needs to include cognitive feedback from the user to control HA functionality. In order to enable automatic brainwave-steered HA adjustments, attentional processes underlying speech-in-noise perception in aided hearing-impaired individuals need to be better understood. Here, we addressed the influence of two important factors for the listening performance of HA users, hearing aid processing and motivation, by analysing ongoing neural responses during long-term listening to continuous noisy speech. Methods: Sixteen normal-hearing (NH) and 15 linearly aided hearing-impaired (aHI) participants listened to an audiobook recording embedded in realistic speech babble noise at individually adjusted signal-to-noise ratios (SNRs). A HA simulator was used for simulating a directional microphone setting as well as for providing individual amplification. To assess listening performance behaviourally, participants answered questions about the contents of the audiobook. We manipulated (1) the participants' motivation by offering a monetary reward for good listening performance in one half of the measurements and (2) the SNR by engaging/disengaging the directional microphone setting. During the speech-in-noise task, electroencephalography (EEG) signals were recorded using wireless, mobile hardware. EEG correlates of listening performance were investigated using EEG impulse responses, as estimated using the cross-correlation between the recorded EEG signal and the temporal envelope of the audiobook at the output of the HA simulator. Results: At the behavioural level, we observed better performance for the NH listeners than for the aHI listeners. Furthermore, the directional microphone setting led to better performance for both participant groups, and when the directional microphone setting was disengaged, motivation also improved the performance of the aHI participants.
Analysis of the EEG impulse responses showed faster N1-P2 responses for both groups and larger N2 peak amplitudes for the aHI group when the directional microphone setting was activated, but no physiological correlates of motivation. Significance: The results of this study indicate that motivation plays an important role for speech understanding in noise. In terms of neuro-steered HAs, our results suggest that the latency of attentional processes is influenced by HA-induced stimulus changes, which can potentially be used for inferring benefit from noise suppression processing automatically. Further research is necessary to identify the neural correlates of motivation as an exclusive top-down process and to combine such features with HA-driven ones for online HA adjustments.
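The abstract's EEG impulse responses were "estimated using the cross-correlation between the recorded EEG signal and the temporal envelope" of the stimulus. A minimal sketch of that estimator on synthetic data (the sampling rate, lag range, and 100-ms delay are invented for illustration; the study's actual pipeline is not reproduced here):

```python
import numpy as np

def eeg_impulse_response(envelope, eeg, fs, max_lag_s=0.5):
    """Estimate an 'EEG impulse response' as the cross-correlation
    between the (z-scored) stimulus envelope and the EEG, for lags
    from 0 up to max_lag_s (EEG lagging behind the stimulus)."""
    env = (envelope - envelope.mean()) / envelope.std()
    sig = (eeg - eeg.mean()) / eeg.std()
    n = len(env)
    n_lags = int(max_lag_s * fs)
    return np.array([env[: n - lag] @ sig[lag:] / (n - lag)
                     for lag in range(n_lags)])

# Synthetic check: 'EEG' is the envelope delayed by 100 ms plus noise
fs = 100                                  # Hz, envelope sampling rate
rng = np.random.default_rng(1)
env = rng.normal(size=fs * 60)            # 60 s of simulated envelope
delay = int(0.1 * fs)                     # 100-ms neural delay
eeg = np.roll(env, delay) + rng.normal(scale=2.0, size=env.size)
ir = eeg_impulse_response(env, eeg, fs)
print(ir.argmax() / fs)                   # recovers the 0.1-s delay
```

Peaks in such a lag function play the role of the N1-P2 and N2 components discussed above: their latencies and amplitudes can be compared across HA settings.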
To study the spatial hearing abilities of bilateral hearing-aid users in multi-talker situations, 20 subjects received fittings configured to preserve acoustic cues salient for spatial hearing. Following acclimatization, speech reception thresholds (SRTs) were measured for three competing talkers that were either co-located or spatially separated along the front-back or left-right dimension. In addition, the subjects' working memory and attentional abilities were measured. Left-right SRTs varied across listeners by more than 14 dB, and front-back SRTs by more than 8 dB. Furthermore, significant correlations were observed between left-right SRTs, age, and low-frequency hearing loss, and also between front-back SRTs, age, and high-frequency aided thresholds. Concerning cognitive effects, left-right performance was most strongly related to attentional abilities, while front-back performance showed a relation to working memory abilities. Altogether, these results suggest that, due to raised hearing thresholds and aging, hearing-aid users have reduced access to interaural and monaural spatial cues as well as a diminished ability to 'enhance' a target signal by means of top-down processing. These deficits, in turn, lead to impaired functioning in complex listening environments.