Highlights
- Narrative stimuli can synchronize fluctuations of heart rate between individuals
- This interpersonal synchronization is modulated by attention and predicts memory
- These effects on heart rate cannot be explained by modulation of respiratory patterns
- Synchrony is lower in patients with disorders of consciousness
Objective: Patients with traumatic brain injury who fail to obey commands after sedation washout pose one of the most significant challenges for neurological prognostication. Reducing prognostic uncertainty will lead to more appropriate care decisions and ensure provision of limited rehabilitation resources to those most likely to benefit. Bedside markers of covert residual cognition, including speech comprehension, may reduce this uncertainty.

Methods: We recruited 28 patients with acute traumatic brain injury who were 2 to 7 days sedation-free and failed to obey commands. Patients heard streams of isochronous monosyllabic words that built meaningful phrases and sentences while their brain activity was recorded via electroencephalography (EEG). In healthy individuals, EEG activity only synchronizes with the rhythm of phrases and sentences when listeners consciously comprehend the speech. This approach therefore provides a measure of residual speech comprehension in unresponsive patients.

Results: Seventeen and 16 patients were available for assessment with the Glasgow Outcome Scale Extended (GOSE) at 3 months and 6 months, respectively. Outcome significantly correlated with the strength of patients' acute cortical tracking of phrases and sentences (r > 0.6, p < 0.007), quantified by inter-trial phase coherence. Linear regressions revealed that the strength of this comprehension response (beta = 0.603, p = 0.006) significantly improved the accuracy of prognoses relative to clinical characteristics alone (e.g., Glasgow Coma Scale [GCS], computed tomography [CT] grade).

Interpretation: A simple, passive, auditory EEG protocol improves prognostic accuracy in a critical period of clinical decision making. Unlike other approaches to probing covert cognition for prognostication, this approach is entirely passive and therefore less susceptible to cognitive deficits, increasing the number of patients who may benefit. ANN NEUROL 2021;89:646–656
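The abstract quantifies cortical tracking of phrases and sentences with inter-trial phase coherence (ITPC): the length of the mean unit phase vector across trials, which is near 1 when phase is consistent over trials and near 0 when phase is random. A minimal sketch of that measure, not the authors' pipeline, assuming trials are already narrow-band filtered at the rate of interest; all names and the synthetic data are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

def itpc(trials):
    """Inter-trial phase coherence per sample.

    trials: array (n_trials, n_samples) of narrow-band signals,
    e.g. EEG filtered at the phrase or sentence rate.
    Returns values in [0, 1]: 1 = identical phase across trials,
    near 0 = random phase across trials.
    """
    phase = np.angle(hilbert(trials, axis=1))        # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Perfectly phase-locked trials -> ITPC near 1 at every sample
t = np.linspace(0, 1, 500, endpoint=False)
locked = np.tile(np.sin(2 * np.pi * 2 * t), (20, 1))
print(itpc(locked).mean())  # ≈ 1.0
```

Because ITPC is a normalized vector length, it is insensitive to amplitude differences between trials, which is one reason it is a common choice for quantifying stimulus-locked rhythmic responses.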
Heart rate has natural fluctuations that are typically ascribed to autonomic function. Recent evidence suggests that conscious processing can affect the timing of the heartbeat. We hypothesized that heart rate is modulated by conscious processing and is therefore dependent on attentional focus. To test this, we leverage the observation that neural processes can be synchronized between subjects by presenting an identical narrative stimulus. As predicted, we find significant inter-subject correlation of heart rate (ISC-HR) when subjects are presented with an auditory or audiovisual narrative. Consistent with the conscious processing hypothesis, we find that ISC-HR is reduced when subjects are distracted from the narrative, and that higher heart rate synchronization predicts better recall of the narrative. Finally, patients with disorders of consciousness who are listening to a story have lower ISC-HR than healthy individuals, and individual ISC-HR may predict a patient's prognosis. We conclude that heart rate fluctuations are partially driven by conscious processing, depend on attentional state, and may represent a simple metric to assess conscious state in unresponsive patients.
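ISC-HR as described is an inter-subject correlation of heart-rate time series recorded while everyone hears the same narrative. A minimal sketch of one common way to compute such a measure (the paper's exact estimator may differ): correlate each subject's heart-rate trace with every other subject's and average. The function name and the synthetic data are illustrative:

```python
import numpy as np

def isc_hr(heart_rates):
    """Per-subject inter-subject correlation of heart rate.

    heart_rates: array (n_subjects, n_samples) of instantaneous
    heart rate, resampled to a common time base during an
    identical stimulus. Returns each subject's mean Pearson
    correlation with all other subjects.
    """
    z = heart_rates - heart_rates.mean(axis=1, keepdims=True)
    z /= heart_rates.std(axis=1, keepdims=True)
    r = (z @ z.T) / heart_rates.shape[1]   # subject-by-subject correlation matrix
    np.fill_diagonal(r, np.nan)            # exclude self-correlation
    return np.nanmean(r, axis=1)

# Subjects sharing a common stimulus-driven fluctuation plus noise
rng = np.random.default_rng(0)
common = np.sin(np.linspace(0, 8 * np.pi, 600))
rates = 70 + 5 * common + rng.normal(0, 2, (10, 600))
print(isc_hr(rates).mean())  # clearly positive when a shared signal is present
```

With no shared stimulus-driven component, the pairwise correlations scatter around zero, which is what makes ISC-HR usable as a marker of stimulus engagement.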
Several theories propose that emotions and self-awareness arise from the integration of internal and external signals and their respective precision-weighted expectations. Supporting these mechanisms, research indicates that the brain uses temporal cues from cardiac signals to predict auditory stimuli, and that these predictions and their prediction errors can be observed in the scalp heartbeat-evoked potential (HEP). We investigated the effect of precision modulations on these cross-modal predictive mechanisms, via attention and interoceptive ability. We presented auditory sequences at short (perceived synchronous) or long (perceived asynchronous) cardio-audio delays, with half of the trials including an omission. Participants attended to the cardio-audio synchronicity of the tones (internal attention) or the auditory stimuli alone (external attention). Comparing HEPs during omissions allowed for the observation of pure predictive signals, without contaminating auditory input. We observed an early effect of cardio-audio delay, reflecting a difference in heartbeat-driven expectations. We also observed a larger positivity to omissions of sounds perceived as synchronous than to omissions of sounds perceived as asynchronous when attending internally only, consistent with the role of attentional precision for enhancing predictions. These results provide support for attentionally-modulated cross-modal predictive coding, and suggest a potential tool for investigating its role in emotion and self-awareness.
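The HEP analysis above rests on averaging EEG epochs time-locked to ECG R-peaks. A minimal sketch of that averaging step, assuming a single EEG channel and precomputed R-peak sample indices; the function name and parameters are hypothetical, and the omission contrast itself is not shown:

```python
import numpy as np

def heartbeat_evoked_potential(eeg, r_peaks, fs, tmin=-0.1, tmax=0.5):
    """Average EEG epochs time-locked to ECG R-peaks (the HEP).

    eeg: 1-D EEG trace (one channel); r_peaks: R-peak sample
    indices; fs: sampling rate in Hz. Epochs running past the
    recording edges are skipped.
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = [eeg[p - pre:p + post] for p in r_peaks
              if p - pre >= 0 and p + post <= len(eeg)]
    return np.mean(epochs, axis=0)

fs = 250
eeg = np.zeros(fs * 10)
peaks = np.arange(fs, fs * 9, fs)          # one synthetic beat per second
for p in peaks:
    eeg[p + int(0.3 * fs)] += 1.0          # fixed cardiac-locked deflection at +300 ms
hep = heartbeat_evoked_potential(eeg, peaks, fs)
print(hep.max())  # 1.0, at +300 ms after the R-peak
```

In practice the same epoching is applied to omission trials (no auditory input), so any difference between conditions reflects heartbeat-driven predictive activity rather than a response to the sound itself.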