Listening effort may be reduced when hearing aids improve access to the acoustic
signal. However, this possibility is difficult to evaluate because many
neuroimaging methods used to measure listening effort are incompatible with
hearing aid use. Functional near-infrared spectroscopy (fNIRS), which can be used to measure the concentration of oxygenated hemoglobin in the prefrontal cortex (PFC), appears well suited to this application. The first aim of this study was
to establish whether fNIRS could measure cognitive effort during listening in
older adults who use hearing aids. The second aim was to use fNIRS to determine whether listening effort, a form of cognitive effort, differed depending on whether hearing aids were used when listening to sound presented at 35 dB SL
(flat gain). Sixteen older adults who were experienced hearing aid users
completed an auditory n-back task and a visual n-back task; both tasks were
completed with and without hearing aids. We found that PFC oxygenation increased
with n-back working memory demand in both modalities, supporting the use of
fNIRS to measure cognitive effort during listening in this population. PFC
oxygenation was only weakly and nonsignificantly correlated with both self-reported listening effort and reaction time, suggesting that PFC
oxygenation assesses a dimension of listening effort that differs from these
other measures. Furthermore, the extent to which hearing aids reduced PFC
oxygenation in the left lateral PFC was positively correlated with age and
pure-tone average thresholds. The implications of these findings as well as
future directions are discussed.
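For readers unfamiliar with the paradigm, an n-back task presents a stream of stimuli and asks whether the current item matches the one presented n positions earlier, so working-memory demand grows with n. The sketch below illustrates that target rule on a toy letter stream; it is a minimal illustration with hypothetical names, not the task software used in the study.

```python
# Minimal sketch of n-back target detection (illustrative only; not the
# authors' implementation). A trial is a "target" when the current
# stimulus matches the stimulus presented n positions earlier.

def nback_targets(stimuli, n):
    """Return a boolean per trial: True when the item matches the one n back."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

stream = ["A", "B", "C", "B", "C", "C"]  # hypothetical 2-back letter stream
print(nback_targets(stream, 2))  # [False, False, False, True, True, False]
```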
Prior research has revealed a native-accent advantage, whereby nonnative-accented speech is more difficult to process than native-accented speech. Nonnative-accented speakers also experience more negative social judgments. In the current study, we asked three questions. First, does exposure to nonnative-accented speech increase speech intelligibility or decrease listening effort, thereby narrowing the native-accent advantage? Second, does lower intelligibility or higher listening effort contribute to listeners’ negative social judgments of speakers? Third and finally, does increased intelligibility or decreased listening effort with exposure to speech bring about more positive social judgments of speakers? To address these questions, normal-hearing adults listened to a block of English sentences with a native accent and a block with a nonnative accent. We found that once participants were accustomed to the task, intelligibility was greater for nonnative-accented speech and increased similarly with exposure for both accents. However, listening effort decreased only for nonnative-accented speech, soon reaching the level of native-accented speech. In addition, lower intelligibility and higher listening effort were associated with lower ratings of speaker warmth, speaker competence, and willingness to interact with the speaker. Finally, competence ratings increased over time to a similar extent for both accents, with this relationship fully mediated by intelligibility and listening effort. These results offer insight into how listeners process and judge unfamiliar speakers.
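The final result refers to statistical mediation: the exposure-to-competence relationship is carried by the intermediate variables. As a rough illustration of the product-of-coefficients logic behind such a claim, the sketch below runs a single-mediator analysis on simulated data; the variable names and effect sizes are assumptions, not the authors' analysis.

```python
# Illustrative single-mediator analysis (product-of-coefficients logic) on
# simulated data; hypothetical variable names, not the authors' analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 200
exposure = rng.normal(size=n)                              # e.g., trial number
intelligibility = 0.6 * exposure + rng.normal(size=n)      # mediator
competence = 0.5 * intelligibility + rng.normal(size=n)    # outcome

def slopes(y, predictors):
    """OLS coefficients of y on the predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = slopes(intelligibility, [exposure])[0]        # path a: exposure -> mediator
b, c_prime = slopes(competence, [intelligibility, exposure])  # paths b and c'
print(f"indirect effect a*b = {a * b:.2f}; direct effect c' = {c_prime:.2f}")
# Full mediation corresponds to a*b carrying the effect while c' is near zero.
```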
Sensorimotor brain areas have been implicated in the recognition of emotion expressed on the face and through non-verbal vocalizations. However, no previous study has assessed whether sensorimotor cortices are recruited during the perception of emotion in speech, a signal that includes both audio (speech sounds) and visual (facial speech movements) components. To address this gap in the literature, we recruited 24 participants who were presented with speech clips expressed in a happy, sad, or neutral manner. These stimuli were presented in one of three modalities: audio-only (hearing the voice but not seeing the face), video-only (seeing the face but not hearing the voice), or audiovisual. Brain activity was recorded using electroencephalography, subjected to independent component analysis, and source-localized. We found that the left pre-supplementary motor area (pre-SMA) was more active in response to happy and sad stimuli than to neutral stimuli, as indexed by greater mu event-related desynchronization. This effect did not differ by the sensory modality of the stimuli. Activity levels in other sensorimotor brain areas did not differ by emotion, although they were greatest in response to video-only and audiovisual stimuli. One possible explanation for the pre-SMA result is that this brain area may actively support speech emotion recognition by drawing on our extensive experience expressing emotion to generate sensory predictions that in turn guide our perception.
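Mu event-related desynchronization (ERD) indexes a stimulus-related drop in mu-band (roughly 8 to 13 Hz) power over sensorimotor cortex relative to a pre-stimulus baseline, with more negative values indicating stronger engagement. The snippet below sketches the conventional ERD percentage computation on made-up power values; it is illustrative only and does not reproduce the authors' EEG pipeline.

```python
# Conventional ERD% computation: percentage change in band power relative
# to a pre-stimulus baseline. Made-up values; not the authors' pipeline.
import numpy as np

def erd_percent(event_power, baseline_power):
    """ERD% = (event - baseline) / baseline * 100; negative = desynchronization."""
    return (event_power - baseline_power) / baseline_power * 100.0

baseline_mu = np.mean([4.1, 3.9, 4.0])  # mu power in a pre-stimulus window
event_mu = np.mean([2.9, 3.1, 3.0])     # mu power after stimulus onset
print(f"mu ERD = {erd_percent(event_mu, baseline_mu):.1f}%")  # -25.0%
```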
Objectives: Understanding speech in noise can be highly effortful. Decreasing the signal-to-noise ratio (SNR) of speech increases listening effort, but it is relatively unclear whether decreasing the level of semantic context does as well. The current study used functional near-infrared spectroscopy (fNIRS) to evaluate two primary hypotheses: (1) listening effort (operationalized as oxygenation of the left lateral prefrontal cortex, PFC) increases as the SNR decreases, and (2) listening effort increases as context decreases.
Design: Twenty-eight younger adults with normal hearing completed the Revised Speech Perception in Noise (R-SPIN) Test, in which they listened to sentences and reported the final word. These sentences either had an easy SNR (+4 dB) or a hard SNR (-2 dB), and were either low in semantic context (e.g., "Tom could have thought about the sport") or high in context (e.g., "She had to vacuum the rug"). PFC oxygenation was measured throughout using fNIRS.
Results: Accuracy on the R-SPIN Test was worse when the SNR was hard than when it was easy, and worse for sentences low in semantic context than for those high in context. Similarly, oxygenation across the entire PFC (including the left lateral PFC) was greater when the SNR was hard, and left lateral PFC oxygenation was greater when context was low.
Conclusions: These results suggest that activation of the left lateral PFC (interpreted here as reflecting listening effort) increases to compensate for acoustic and linguistic challenges. This may reflect the increased engagement of domain-general and domain-specific processes subserved by the dorsolateral prefrontal cortex (DLPFC; e.g., cognitive control) and the inferior frontal gyrus (IFG; e.g., predicting the sensory consequences of articulatory gestures), respectively.
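The SNR conditions are defined on the decibel scale, where SNR_dB = 10 log10(P_speech / P_noise). The short sketch below works out what the +4 dB and -2 dB conditions imply about relative noise power, assuming unit speech power; it is a back-of-the-envelope illustration, not part of the study's methods.

```python
# Decibel arithmetic behind the SNR conditions (illustrative only).
# SNR_dB = 10 * log10(P_speech / P_noise), so the noise power needed for a
# target SNR is P_noise = P_speech / 10**(SNR_dB / 10).

def noise_power_for_snr(speech_power, snr_db):
    """Noise power yielding the requested SNR for a given speech power."""
    return speech_power / 10 ** (snr_db / 10)

for snr_db in (4, -2):  # the easy and hard SNR conditions
    ratio = noise_power_for_snr(1.0, snr_db)
    print(f"SNR {snr_db:+d} dB -> noise power = {ratio:.2f} x speech power")
# +4 dB -> about 0.40 x speech power; -2 dB -> about 1.58 x speech power.
```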