To what extent do all brains work alike during natural conditions? We explored this question by letting five subjects freely view half an hour of a popular movie while undergoing functional brain imaging. Applying an unbiased analysis in which spatiotemporal activity patterns in one brain were used to "model" activity in another brain, we found a striking level of voxel-by-voxel synchronization between individuals, not only in primary and secondary visual and auditory areas but also in association cortices. The results reveal a surprising tendency of individual brains to "tick collectively" during natural vision. The intersubject synchronization consisted of a widespread cortical activation pattern correlated with emotionally arousing scenes and regionally selective components. The characteristics of these activations were revealed with the use of an open-ended "reverse-correlation" approach, which inverts the conventional analysis by letting the brain signals themselves "pick up" the optimal stimuli for each specialized cortical area.
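The voxel-by-voxel synchronization described above is typically quantified as inter-subject correlation (ISC). Below is a minimal Python sketch of one common leave-one-out variant, assuming a NumPy array of preprocessed, anatomically aligned time courses; the array shapes, variable names, and synthetic data are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of voxel-wise inter-subject correlation (ISC).
# Assumes `data` has shape (n_subjects, n_voxels, n_timepoints):
# preprocessed fMRI time courses aligned to a common anatomical space.
import numpy as np

def leave_one_out_isc(data):
    """Correlate each subject's voxel time course with the mean of the others."""
    n_subjects, n_voxels, _ = data.shape
    isc = np.zeros((n_subjects, n_voxels))
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)  # (n_voxels, n_timepoints)
        for v in range(n_voxels):
            isc[s, v] = np.corrcoef(data[s, v], others[v])[0, 1]
    return isc.mean(axis=0)  # average ISC map across left-out subjects

# Synthetic example: 5 subjects, 100 voxels, 900 time points
rng = np.random.default_rng(0)
shared = rng.standard_normal((100, 900))            # stimulus-driven component
data = shared + rng.standard_normal((5, 100, 900))  # plus subject-specific noise
print(leave_one_out_isc(data).mean())               # high values = synchronized voxels
```

High values in the resulting map mark voxels whose responses are driven by the shared stimulus rather than by idiosyncratic, subject-specific processes.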
Real-life activities, such as watching a movie or engaging in conversation, unfold over many minutes. In the course of such activities, the brain has to integrate information over multiple time scales. We recently proposed that the brain uses similar strategies for integrating information across space and over time. Drawing a parallel with spatial receptive fields, we defined the temporal receptive window (TRW) of a cortical microcircuit as the length of time before a response during which sensory information may affect that response. Our previous findings in the visual system are consistent with the hypothesis that TRWs become larger when moving from low-level sensory to high-level perceptual and cognitive areas. In this study, we mapped TRWs in auditory and language areas by measuring fMRI activity in subjects listening to a real-life story scrambled at the time scales of words, sentences, and paragraphs. Our results revealed a hierarchical topography of TRWs. In early auditory cortices (A1+), brain responses were driven mainly by the momentary incoming input and were similarly reliable across all scrambling conditions. In areas with an intermediate TRW, coherent information at the sentence time scale or longer was necessary to evoke reliable responses. At the apex of the TRW hierarchy, we found parietal and frontal areas that responded reliably only when intact paragraphs were heard in a meaningful sequence. These results suggest that the time scale of processing is a functional property that may provide a general organizing principle for the human cerebral cortex.
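One way to picture the scrambling logic: a region's response reliability can be summarized as the mean pairwise inter-subject correlation of its time course within each scrambling condition. The sketch below uses assumed shapes and synthetic condition data to illustrate how a long-TRW region would stay reliable only as temporal coherence increases; it is not the paper's exact analysis.

```python
# Hypothetical sketch: reliability per scrambling condition as the mean
# pairwise inter-subject correlation of a region's time course.
import numpy as np
from itertools import combinations

def regional_reliability(ts):
    """ts: (n_subjects, n_timepoints) time courses for one region/condition."""
    pairs = combinations(range(ts.shape[0]), 2)
    return np.mean([np.corrcoef(ts[i], ts[j])[0, 1] for i, j in pairs])

# Synthetic demo of a long-TRW region: the stimulus-locked signal weakens
# as the story is scrambled at finer time scales (strengths are made up).
rng = np.random.default_rng(1)
for cond, strength in [("intact", 1.0), ("paragraphs", 0.6),
                       ("sentences", 0.3), ("words", 0.0)]:
    shared = strength * rng.standard_normal(600)   # stimulus-locked component
    ts = shared + rng.standard_normal((10, 600))   # 10 subjects + noise
    print(cond, round(regional_reliability(ts), 2))
```

Early auditory cortex would instead show roughly equal reliability across all four conditions, because its responses track the momentary input.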
Real-world events unfold at different time scales and, therefore, cognitive and neuronal processes must likewise occur at different time scales. We present a novel procedure that identifies brain regions responsive to sensory information accumulated over different time scales. We measured functional magnetic resonance imaging activity while observers viewed silent films presented forward, backward, or piecewise-scrambled in time. Early visual areas (e.g., primary visual cortex and the motion-sensitive area MT+) exhibited high response reliability regardless of disruptions in temporal structure. In contrast, the reliability of responses in several higher brain areas, including the superior temporal sulcus (STS), precuneus, posterior lateral sulcus (LS), temporal parietal junction (TPJ), and frontal eye field (FEF), was affected by information accumulated over longer time scales. These regions showed highly reproducible responses for repeated forward, but not for backward or piecewise-scrambled presentations. Moreover, these regions exhibited marked differences in temporal characteristics, with LS, TPJ, and FEF responses depending on information accumulated over longer durations (~36 s) than STS and precuneus (~12 s). We conclude that, similar to the known cortical hierarchy of spatial receptive fields, there is a hierarchy of progressively longer temporal receptive windows in the human brain.
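For the film experiments, reliability is assessed by correlating responses across repeated presentations; for backward playback, one can time-reverse the response before comparing it with the forward response, so that regions with short temporal windows can still show agreement. A hypothetical sketch with synthetic time courses (not the paper's exact procedure):

```python
# Sketch of the repeat-reliability comparison, under assumed 1-D regional
# time courses for two forward presentations and one backward presentation.
import numpy as np

def repeat_reliability(r1, r2):
    """Correlation between responses to two presentations of the same film."""
    return np.corrcoef(r1, r2)[0, 1]

rng = np.random.default_rng(2)
forward_1 = rng.standard_normal(540)
forward_2 = forward_1 + 0.5 * rng.standard_normal(540)  # reproducible response
backward = rng.standard_normal(540)                      # unrelated in this toy case

print(repeat_reliability(forward_1, forward_2))          # high: ~0.9
# Time-reverse the backward response before comparing with forward viewing;
# in real data, short-TRW regions can still show agreement here, while
# long-TRW regions do not.
print(repeat_reliability(forward_1, backward[::-1]))     # near 0
```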
During realistic, continuous perception, humans automatically segment experiences into discrete events. Using a novel model of cortical event dynamics, we investigate how cortical structures generate event representations during narrative perception, and how these events are stored to and retrieved from memory. Our data-driven approach allows us to detect event boundaries as shifts between stable patterns of brain activity without relying on stimulus annotations, and reveals a nested hierarchy from short events in sensory regions to long events in high-order areas (including angular gyrus and posterior medial cortex), which represent abstract, multimodal situation models. High-order event boundaries are coupled to increases in hippocampal activity, which predict pattern reinstatement during later free recall. These areas also show evidence of anticipatory reinstatement as subjects listen to a familiar narrative. Based on these results, we propose that brain activity is naturally structured into nested events, which form the basis of long-term memory representations.
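The paper fits a hidden Markov model over voxel patterns to locate boundaries; as a deliberately simpler stand-in, the sketch below flags boundaries wherever the spatial pattern at adjacent time points decorrelates sharply. The shapes, threshold, and synthetic data are illustrative assumptions, not the paper's method.

```python
# Simplified stand-in for data-driven event segmentation: find time points
# where the multi-voxel pattern shifts abruptly between stable states.
import numpy as np

def pattern_shift_boundaries(data, threshold=0.5):
    """data: (n_timepoints, n_voxels). Return indices of sharp pattern shifts."""
    n_t = data.shape[0]
    adjacent_r = np.array([
        np.corrcoef(data[t], data[t + 1])[0, 1] for t in range(n_t - 1)
    ])
    return np.where(adjacent_r < threshold)[0] + 1  # boundary = start of new event

# Synthetic data: three "events", each a stable 50-voxel pattern plus noise
rng = np.random.default_rng(3)
events = [rng.standard_normal(50) for _ in range(3)]
data = np.vstack([np.tile(p, (40, 1)) + 0.3 * rng.standard_normal((40, 50))
                  for p in events])
print(pattern_shift_boundaries(data))  # expect boundaries near 40 and 80
```

Unlike this heuristic, the HMM approach jointly estimates the number and placement of events and yields per-event pattern estimates, but the intuition of "stable pattern, then shift" is the same.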
Functional magnetic resonance imaging (fMRI) is an important tool for investigating human brain function, but the relationship between the hemodynamically based fMRI signals in the human brain and the underlying neuronal activity is unclear. We recorded single unit activity and local field potentials in auditory cortex of two neurosurgical patients and compared them with the fMRI signals of 11 healthy subjects during presentation of an identical movie segment. The predicted fMRI signals derived from single units and the measured fMRI signals from auditory cortex showed a highly significant correlation (r = 0.75, P < 10⁻⁴⁷). Thus, fMRI signals can provide a reliable measure of the firing rate of human cortical neurons.
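Conceptually, the predicted fMRI signal is obtained by convolving the binned firing rate with a hemodynamic response function (HRF) and correlating the result with the measured BOLD time course. A minimal sketch, assuming a standard double-gamma HRF and illustrative parameters (not necessarily those used in the study):

```python
# Predict a BOLD time course from spiking activity: convolve the binned
# firing rate with a canonical double-gamma HRF, then correlate with the
# measured signal. TR, durations, and data are illustrative assumptions.
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(tr=1.5, duration=30.0):
    t = np.arange(0, duration, tr)
    peak = gamma.pdf(t, 6)           # positive response peaking around 5 s
    undershoot = gamma.pdf(t, 16)    # late undershoot
    hrf = peak - undershoot / 6.0
    return hrf / hrf.sum()

def predict_bold(firing_rate, tr=1.5):
    """firing_rate: spike counts binned at the fMRI repetition time (TR)."""
    pred = np.convolve(firing_rate, double_gamma_hrf(tr))[: len(firing_rate)]
    return (pred - pred.mean()) / pred.std()

rng = np.random.default_rng(4)
rate = rng.poisson(5, size=400).astype(float)                # toy firing rates
bold = predict_bold(rate) + 0.5 * rng.standard_normal(400)   # toy "measured" BOLD
print(np.corrcoef(predict_bold(rate), bold)[0, 1])
```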
Our lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? Participants viewed a fifty-minute movie, then verbally described the events during functional MRI, producing unguided detailed descriptions lasting up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated in default-network, medial-temporal, and high-level visual areas. Individual event patterns were both highly discriminable from one another and similar between people, suggesting consistent spatial organization. In many high-order areas, patterns were more similar between people recalling the same event than between recall and perception, indicating systematic reshaping of percept into memory. These results reveal a common spatial organization for memories in high-level cortical areas, where encoded information is largely abstracted beyond sensory constraints, and show that neural patterns during perception are systematically transformed across people into shared memory representations of real-life events.
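Event-specific reinstatement of this kind is commonly tested by correlating event-averaged spatial patterns from perception with those from recall, within or across subjects. A sketch under assumed inputs (one average pattern per event per condition); matched-event correlations on the diagonal should exceed mismatched ones:

```python
# Sketch of event-level pattern reinstatement. Assumed inputs: average spatial
# patterns per event during movie viewing and during spoken recall, each of
# shape (n_events, n_voxels), for one region and one subject (or subject pair).
import numpy as np

def reinstatement_matrix(movie_patterns, recall_patterns):
    """Correlate every movie event pattern with every recall event pattern."""
    n_events = movie_patterns.shape[0]
    r = np.zeros((n_events, n_events))
    for i in range(n_events):
        for j in range(n_events):
            r[i, j] = np.corrcoef(movie_patterns[i], recall_patterns[j])[0, 1]
    return r

rng = np.random.default_rng(5)
movie = rng.standard_normal((20, 500))
recall = movie + rng.standard_normal((20, 500))   # noisy reinstated patterns
r = reinstatement_matrix(movie, recall)
on_diag = np.diag(r).mean()                        # same-event similarity
off_diag = (r.sum() - np.trace(r)) / (r.size - len(r))
print(on_diag, off_diag)                           # expect on-diagonal >> off-diagonal
```

Running the same comparison with one person's recall patterns against another person's yields the between-subject result described in the abstract.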
Does the default mode network (DMN) reconfigure to encode information about the changing environment? This question has proven difficult to answer because patterns of functional connectivity reflect a mixture of stimulus-induced neural processes, intrinsic neural processes and non-neuronal noise. Here we introduce inter-subject functional correlation (ISFC), which isolates stimulus-dependent inter-regional correlations between brains exposed to the same stimulus. During fMRI, we had subjects listen to a real-life auditory narrative and to temporally scrambled versions of the narrative. We used ISFC to isolate correlation patterns within the DMN that were locked to the processing of each narrative segment and specific to its meaning within the narrative context. The momentary configurations of DMN ISFC were highly replicable across groups. Moreover, DMN coupling strength predicted memory of narrative segments. Thus, ISFC opens new avenues for linking brain network dynamics to stimulus features and behaviour.
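ISFC replaces within-brain functional connectivity by correlating each region in one subject with every region averaged across the remaining subjects; intrinsic fluctuations and non-neuronal noise, being uncorrelated across brains, drop out. A minimal sketch with an assumed input shape (not the paper's full pipeline, which includes scrambled-condition baselines and significance testing):

```python
# Minimal sketch of inter-subject functional correlation (ISFC).
# Assumes `data` has shape (n_subjects, n_regions, n_timepoints).
import numpy as np

def isfc(data):
    n_subjects, n_regions, _ = data.shape
    mats = []
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)  # (n_regions, n_t)
        # np.corrcoef stacks rows: entry [i, n_regions + j] is subject s's
        # region i correlated with the other subjects' region j.
        full = np.corrcoef(data[s], others)
        mats.append(full[:n_regions, n_regions:])
    return np.mean(mats, axis=0)

rng = np.random.default_rng(6)
data = rng.standard_normal((8, 30, 400))
data += rng.standard_normal((1, 30, 400))  # shared stimulus-driven component
print(isfc(data).shape)                    # (30, 30) ISFC matrix
```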
Verbal communication is a joint activity; however, speech production and comprehension have primarily been analyzed as independent processes within the boundaries of individual brains. Here, we applied fMRI to record brain activity from both speakers and listeners during natural verbal communication. We used the speaker's spatiotemporal brain activity to model listeners' brain activity and found that the speaker's activity is spatially and temporally coupled with the listener's activity. This coupling vanishes when participants fail to communicate. Moreover, though on average the listener's brain activity mirrors the speaker's activity with a delay, we also find areas that exhibit predictive anticipatory responses. We connected the extent of neural coupling to a quantitative measure of story comprehension and found that the greater the anticipatory speaker-listener coupling, the greater the understanding. We argue that the observed alignment of production- and comprehension-based processes serves as a mechanism by which brains convey information.

functional MRI | intersubject correlation | language production | language comprehension

Verbal communication is a joint activity by which interlocutors share information (1). However, little is known about the neural mechanisms underlying the transfer of linguistic information across brains. Communication between brains may be facilitated by a shared neural system dedicated to both the production and the perception/comprehension of speech (1-7). Existing neurolinguistic studies are mostly concerned with either speech production or speech comprehension, and focus on cognitive processes within the boundaries of individual brains (1). The ongoing interaction between the two systems during everyday communication thus remains largely unknown. In this study we directly examine the spatial and temporal coupling between production and comprehension across brains during natural verbal communication.

Using fMRI, we recorded the brain activity of a speaker telling an unrehearsed real-life story and the brain activity of a listener listening to a recording of the story. In the past, recording speech during an fMRI scan has been problematic due to the high levels of acoustic noise produced by the MR scanner and the distortion of the signal by traditional microphones. Thus, we used a customized MR-compatible dual-channel optic microphone that cancels the acoustic noise in real time and achieves high levels of noise reduction with negligible loss of audibility (see SI Methods and Fig. 1A). To make the study as ecologically valid as possible, we instructed the speaker to speak as if telling the story to a friend (see SI Methods for a transcript of the story and Movie S1 for an actual sample of the recording). To minimize motion artifacts induced by vocalization during an fMRI scan, we trained the speaker to produce as little head movement as possible. Next, we measured the brain activity (n = 11) of a listener listening to the recorded audio of the spoken story, thereby capturing the time...
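The delayed mirroring and anticipatory responses described above can be pictured as a lagged correlation between speaker and listener time courses: shift one signal against the other and find the lag of peak coupling. A hypothetical sketch with synthetic data; the study itself used a more elaborate model-based analysis, and all names and parameters here are illustrative.

```python
# Sketch of speaker-listener neural coupling as a lagged correlation between
# two regional time courses. Positive lags: the listener's response trails
# the speaker's; negative lags would indicate anticipation by the listener.
import numpy as np

def lagged_coupling(speaker, listener, max_lag=10):
    """Return (lags, correlations) for two 1-D time courses in TR units."""
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for lag in lags:
        if lag >= 0:   # pair speaker[t] with listener[t + lag]
            a, b = speaker[: len(speaker) - lag], listener[lag:]
        else:          # pair speaker[t] with listener[t + lag], lag < 0
            a, b = speaker[-lag:], listener[:lag]
        r.append(np.corrcoef(a, b)[0, 1])
    return lags, np.array(r)

rng = np.random.default_rng(7)
speaker = rng.standard_normal(300)
listener = np.roll(speaker, 3) + 0.8 * rng.standard_normal(300)  # ~3-TR delay
lags, r = lagged_coupling(speaker, listener)
print(lags[np.argmax(r)])  # expect a peak near lag = 3
```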