The Fifth Eriksholm Workshop on "Hearing Impairment and Cognitive Energy" was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. The concept goes back to Titchener (1908), who described the effects of attention on perception; he used the term psychic energy for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman's seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in the challenging situations of everyday life.
Background: The effects of noise and other competing backgrounds on speech recognition performance are well documented. There is less information, however, on the listening effort and stress experienced by listeners during a speech recognition task that requires inhibition of competing sounds.
Cognitive and emotional challenges may elicit a physiological stress response that can include arousal of the sympathetic nervous system (the fight-or-flight response) and withdrawal of the parasympathetic nervous system (responsible for rest and recovery). This article reviews studies that have used measures of electrodermal activity (skin conductance) and heart rate variability (HRV) to index sympathetic and parasympathetic activity during auditory tasks. In addition, the authors present results from a new study with normal-hearing listeners examining the effects of speaking rate on changes in skin conductance and high-frequency HRV (HF-HRV). Sentence repetition accuracy for normal and fast speaking rates was measured in noise using signal-to-noise ratios adjusted to approximate 80% accuracy (+3 dB fast rate; 0 dB normal rate) while monitoring skin conductance and HF-HRV activity. A significant increase in skin conductance level (reflecting sympathetic nervous system arousal) and a decrease in HF-HRV (reflecting parasympathetic nervous system withdrawal) were observed with an increase in speaking rate, indicating that both measures are sensitive to increased task demand. Changes in psychophysiological reactivity with increased auditory task demand may reflect differences in listening effort, but other person-related factors such as motivation and stress may also play a role. Further research is needed to understand how psychophysiological activity during listening tasks is influenced by the acoustic characteristics of stimuli, by task demands, and by the characteristics and emotional responses of the individual.
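To make the HF-HRV measure concrete, the sketch below shows one common way to estimate high-frequency HRV power from a series of inter-beat (RR) intervals: resample the irregular RR series onto a uniform time grid, estimate its power spectral density, and integrate over the conventional HF band (0.15–0.40 Hz, the respiratory range). This is a minimal illustration of the general technique, not the specific analysis pipeline used in the studies discussed here; the function name and parameters are assumptions.

```python
import numpy as np
from scipy.signal import welch

def hf_hrv_power(rr_intervals_ms, fs_resample=4.0):
    """Estimate high-frequency HRV power (0.15-0.40 Hz) in ms^2.

    rr_intervals_ms: successive inter-beat (RR) intervals in milliseconds.
    fs_resample: rate (Hz) for the uniform resampling of the RR series.
    """
    rr = np.asarray(rr_intervals_ms, dtype=float)
    # Beat times are the cumulative sum of RR intervals, in seconds.
    t = np.cumsum(rr) / 1000.0
    # The RR series is irregularly sampled (one value per heartbeat), so
    # interpolate it onto a uniform grid before spectral estimation.
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_uniform = np.interp(t_uniform, t, rr)
    rr_uniform -= rr_uniform.mean()  # remove the DC component
    # Welch periodogram of the resampled RR series.
    f, psd = welch(rr_uniform, fs=fs_resample,
                   nperseg=min(256, len(rr_uniform)))
    hf_band = (f >= 0.15) & (f <= 0.40)
    # Integrate the PSD over the HF band to get band power (ms^2).
    return np.trapz(psd[hf_band], f[hf_band])
```

A series whose RR intervals oscillate at a respiratory rate (e.g., 0.25 Hz) yields substantial HF power, while a constant-RR series yields essentially none; parasympathetic withdrawal under task demand shows up as a drop in this band power.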
SHORT SUMMARY (précis) Sentence recognition by participants with and without hearing loss was measured in quiet and in babble noise while monitoring two autonomic nervous system measures: heart-rate variability and skin conductance. Heart-rate variability decreased under difficult listening conditions for participants with hearing loss, but not for participants with normal hearing. Skin conductance reactivity to noise was greater for those with hearing loss than for those with normal hearing, but did not vary with the signal-to-noise ratio. Subjective ratings of workload/stress obtained after each listening condition were similar for the two participant groups.
The purpose of this study was to determine the role of frequency selectivity and sequential stream segregation in the perception of simultaneous sentences by listeners with sensorineural hearing loss. Simultaneous sentence perception was tested in listeners with normal hearing and with sensorineural hearing loss using sentence pairs consisting of one sentence spoken by a male talker and one sentence spoken by a female talker. Listeners were asked to repeat both sentences and were scored on the number of words repeated correctly in each sentence. Separate scores were obtained for the first and second sentences repeated. Frequency selectivity was assessed using a notched-noise method in which thresholds for a 1,000 Hz pure-tone signal were measured in noise with spectral notch bandwidths of 0, 300, and 600 Hz. Sequential stream segregation was measured using tone sequences consisting of a fixed-frequency tone (A) and a variable-frequency tone (B). Tone sequences were presented in an ABA_ABA_... pattern, with the B tone starting at a frequency either below or above that of the fixed 1,000 Hz A tone. Initially, the frequency difference was large and was gradually decreased until listeners indicated that they could no longer perceptually separate the two tones (the fusion threshold). Scores for the first sentence repeated decreased significantly with increasing age. There was a strong relationship between fusion threshold and simultaneous sentence perception, which remained even after partialling out the effects of age. Smaller frequency differences at fusion threshold were associated with higher sentence scores. There was no relationship between frequency selectivity and simultaneous sentence perception. Results suggest that the abilities to perceptually separate pitch patterns and to separate sentences spoken simultaneously by different talkers are mediated by the same underlying perceptual and/or cognitive factors.
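The descending-track procedure for the fusion threshold can be sketched as follows: the A–B frequency difference starts large and shrinks each trial until the listener reports that the two tones fuse into a single stream. The code below is a minimal simulation of that logic; the function name, the multiplicative step size, and the `reports_streaming` callback interface are assumptions for illustration, not the study's actual protocol.

```python
def fusion_threshold(reports_streaming, start_df=400.0, step=0.9):
    """Descending track for a stream-segregation fusion threshold.

    The B tone starts far from the fixed 1,000 Hz A tone and the A-B
    frequency difference shrinks by a fixed ratio each trial until the
    listener no longer hears two separate streams.

    reports_streaming(df): callable returning True while the listener
    still perceives two streams at a frequency difference of df Hz
    (hypothetical listener-response interface).
    Returns the frequency difference (Hz) at which fusion is reported.
    """
    df = start_df
    while reports_streaming(df):
        df *= step  # shrink the A-B frequency difference each trial
    return df       # first difference at which the tones fuse
```

For example, a simulated listener whose streams fuse below a 60 Hz difference (`lambda df: df > 60`) would yield a tracked threshold just under 60 Hz; a smaller returned value corresponds to better segregation ability, which the study links to better simultaneous-sentence scores.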