Does the default mode network (DMN) reconfigure to encode information about the changing environment? This question has proven difficult, because patterns of functional connectivity reflect a mixture of stimulus-induced neural processes, intrinsic neural processes and non-neuronal noise. Here we introduce inter-subject functional correlation (ISFC), which isolates stimulus-dependent inter-regional correlations between brains exposed to the same stimulus. During fMRI, we had subjects listen to a real-life auditory narrative and to temporally scrambled versions of the narrative. We used ISFC to isolate correlation patterns within the DMN that were locked to the processing of each narrative segment and specific to its meaning within the narrative context. The momentary configurations of DMN ISFC were highly replicable across groups. Moreover, DMN coupling strength predicted memory of narrative segments. Thus, ISFC opens new avenues for linking brain network dynamics to stimulus features and behaviour.
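The core idea of ISFC — correlating each region's time course in one subject against region time courses averaged across the *other* subjects, so that intrinsic fluctuations and non-neuronal noise (uncorrelated across brains) cancel out — can be captured in a minimal sketch. The function name, array shapes, and z-scoring details below are illustrative assumptions, not the authors' published pipeline:

```python
import numpy as np

def isfc(data):
    """Minimal inter-subject functional correlation (ISFC) sketch.

    data: array of shape (n_subjects, n_regions, n_timepoints),
    region time courses recorded while all subjects receive the
    same stimulus.

    For each subject, every region's time course is correlated with
    every region's time course averaged across the remaining
    subjects; the per-subject matrices are then averaged. Only
    stimulus-locked covariance survives this across-brain step.
    """
    n_subj, n_reg, n_tp = data.shape
    mats = []
    for s in range(n_subj):
        a = data[s]                                  # this subject
        b = data[np.arange(n_subj) != s].mean(axis=0)  # leave-one-out average
        # z-score each region along time, then correlate all region pairs
        az = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
        bz = (b - b.mean(1, keepdims=True)) / b.std(1, keepdims=True)
        mats.append(az @ bz.T / n_tp)
    return np.mean(mats, axis=0)
```

Unlike ordinary within-brain functional connectivity, the diagonal of this matrix is not trivially 1: it reflects each region's inter-subject reliability under the shared stimulus.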
Neuroimaging studies of language have typically focused on either production or comprehension of single speech utterances such as syllables, words, or sentences. In this study we used a new approach to functional MRI acquisition and analysis to characterize the neural responses during production and comprehension of complex real-life speech. First, using a time-warp based intrasubject correlation method, we identified all areas that are reliably activated in the brains of speakers telling a 15-min-long narrative. Next, we identified areas that are reliably activated in the brains of listeners as they comprehended that same narrative. This allowed us to identify networks of brain regions specific to production and comprehension, as well as those that are shared between the two processes. The results indicate that production of a real-life narrative is not localized to the left hemisphere but recruits an extensive bilateral network, which overlaps extensively with the comprehension system. Moreover, by directly comparing the neural activity time courses during production and comprehension of the same narrative, we were able to identify not only the spatial overlap of activity but also areas in which the neural activity is coupled across the speaker's and listener's brains. We demonstrate widespread bilateral coupling between production- and comprehension-related processing within both linguistic and nonlinguistic areas, exposing the surprising extent of shared processes across the two systems.

Keywords: speech production | speech comprehension | intersubject correlation | brain-to-brain coupling

Successful verbal communication requires the finely orchestrated interaction between production-based processes in the speaker's brain and comprehension-based processes in the listener's brain.
The extent of brain areas involved in the production of real-world speech in a speaker's brain during naturalistic communication is largely unknown. As a result, the degree of overlap between the production and comprehension systems, and the ways in which they interact, remain controversial. This study pursues three aims: (i) to map all areas (including but not limited to sensory, motoric, linguistic, and extralinguistic) that are reliably activated during the production of a complex, real-world narrative; (ii) to map the overlap between areas that respond reliably during the production and the comprehension of a real-world narrative; and (iii) to assess the coupling between activity in the speaker's brain during naturalistic production and activity in the listener's brain during comprehension of the same narrative. We discuss each aim in turn. The functional-anatomic architecture underlying the production of speech in an ecological context is incompletely characterized. Studies investigating production-based brain activity have been mainly restricted to the production of single phonemes (1-5), words (6-8), or short phrases in decontextualized, isolated environments (9-13) (see refs. 14...
It is well known that formation of new episodic memories depends on hippocampus, but in real-life settings (e.g., conversation), hippocampal amnesics can utilize information from several minutes earlier. What neural systems outside hippocampus might support this minutes-long retention? In this study, subjects viewed an audiovisual movie continuously for 25 min; another group viewed the movie in 2 parts separated by a 1-day delay. Understanding Part 2 depended on retrieving information from Part 1, and thus hippocampus was required in the day-delay condition. But is hippocampus equally recruited to access the same information from minutes earlier? We show that accessing memories from a few minutes prior elicited less interaction between hippocampus and default mode network (DMN) cortical regions than accessing day-old memories of identical events, suggesting that recent information was available with less reliance on hippocampal retrieval. Moreover, the 2 groups evinced reliable but distinct DMN activity timecourses, reflecting differences in information carried in these regions when Part 1 was recent versus distant. The timecourses converged after 4 min, suggesting a time frame over which the continuous-viewing group may have relied less on hippocampal retrieval. We propose that cortical default mode regions can intrinsically retain real-life episodic information for several minutes.
Linguistic content can be conveyed both in speech and in writing. But how similar is the neural processing when the same real-life information is presented in spoken and written form? Using functional magnetic resonance imaging, we recorded neural responses from human subjects who either listened to a 7 min spoken narrative or read a time-locked presentation of its transcript. Next, within each brain area, we directly compared the response time courses elicited by the written and spoken narrative. Early visual areas responded selectively to the written version, and early auditory areas to the spoken version of the narrative. In addition, many higher-order parietal and frontal areas demonstrated strong selectivity, responding far more reliably to either the spoken or written form of the narrative. By contrast, the response time courses along the superior temporal gyrus and inferior frontal gyrus were remarkably similar for spoken and written narratives, indicating strong modality-invariance of linguistic processing in these circuits. These results suggest that our ability to extract the same information from spoken and written forms arises from a mixture of selective neural processes in early (perceptual) and high-order (control) areas, and modality-invariant responses in linguistic and extra-linguistic areas.
Differences in our prior beliefs can substantially impact our interpretation of a series of events. In this functional magnetic resonance imaging (fMRI) study, we manipulated subjects’ prior beliefs, leading two groups of subjects to interpret the same narrative in two different ways. We found that responses in high-order areas, including the default mode network, language areas and subsets of the mirror neuron system, tend to be similar among people who share the same interpretation, but different from people with an opposing interpretation. Furthermore, the difference in neural responses between the two groups at each moment was correlated with the magnitude of the difference in the interpretation of the narrative. This study demonstrates that brain responses to the same event tend to cluster together among people who share the same views.
The vibrissal system of the rat is an example of active tactile sensing and has recently been used as a prototype in the construction of touch-oriented robots. Active vibrissal exploration and touch are enabled and controlled by the musculature of the mystacial pad. So far, knowledge about motor control of the rat vibrissal system has been extrapolated from what is known about the vibrissal systems of other species, mainly mice and hamsters, since a detailed description of the musculature of the rat mystacial pad was lacking. In the present work, the musculature of the rat mystacial pad was revealed by slicing the mystacial pad in four different planes, staining the slices for cytochrome oxidase, and tracking the spatial organization of the muscles across consecutive slices. We found that the rat mystacial pad contains four superficial extrinsic muscles and five parts of the M. nasolabialis profundus. The connection scheme of three of these parts is described here for the first time. These muscles insert into the plate of the mystacial pad, and thus their contraction causes whisker retraction. All the muscles of the rat mystacial pad contained three types of skeletal striated fibers (red, white, and intermediate). Although the entire rat mystacial pad usually functions as a unit, our data revealed its structural segmentation into nasal and maxillary subdivisions. The mechanisms of whisking in the rat, and hypotheses concerning biomechanical interactions during whisking, are discussed with respect to the muscle architecture of the rat mystacial pad. Anat Rec, 293:1192–1206.
In the vibrissal system, touch information is conveyed by a receptorless whisker hair to follicle mechanoreceptors, which then provide input to the brain. We examined whether any processing, that is, meaningful transformation, occurs in the whisker itself. Using high-speed videography and tracking the movements of whiskers in anesthetized and behaving rats, we found that whisker-related morphological phase planes, based on angular and curvature variables, can represent the coordinates of object position after contact in a reliable manner, consistent with theoretical predictions. By tracking exposed follicles, we found that the follicle-whisker junction is rigid, which enables direct readout of whisker morphological coding by mechanoreceptors. Finally, we found that our behaving rats pushed their whiskers against objects during localization in a way that induced meaningful morphological coding and, in parallel, improved their localization performance, which suggests a role for pre-neuronal morphological computation in active vibrissal touch.
The present study investigates brain-to-brain coupling, defined as inter-subject correlations in the hemodynamic response, during natural verbal communication. We used functional near-infrared spectroscopy (fNIRS) to record brain activity of 3 speakers telling stories and 15 listeners comprehending audio recordings of these stories. Listeners’ brain activity was significantly correlated with speakers’ with a delay. This between-brain correlation disappeared when verbal communication failed. We further compared the fNIRS and functional Magnetic Resonance Imaging (fMRI) recordings of listeners comprehending the same story and found a significant relationship between the fNIRS oxygenated-hemoglobin concentration changes and the fMRI BOLD in brain areas associated with speech comprehension. This correlation between fNIRS and fMRI was only present when data from the same story were compared between the two modalities and vanished when data from different stories were compared; this cross-modality consistency further highlights the reliability of the spatiotemporal brain activation pattern as a measure of story comprehension. Our findings suggest that fNIRS can be used for investigating brain-to-brain coupling during verbal communication in natural settings.
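The delayed speaker-listener correlation described above can be illustrated with a simple lagged Pearson correlation between two channel time courses, where a positive lag means the listener's signal trails the speaker's. This is a hedged sketch of the general technique only; the function name and exhaustive lag search are assumptions, not the study's actual fNIRS pipeline:

```python
import numpy as np

def lagged_corr(speaker, listener, max_lag):
    """Pearson correlation between a speaker and a listener time
    course at each non-negative lag (in samples). Positive lag
    shifts the listener back in time, so a peak at lag > 0 means
    listener activity follows speaker activity with a delay."""
    out = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            a, b = speaker, listener
        else:
            a, b = speaker[:-lag], listener[lag:]
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out
```

Applied to paired recordings, the lag that maximizes the correlation estimates the speaker-to-listener delay; communication failure would show up as a flat, near-zero profile across all lags.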