According to recent functional magnetic resonance imaging (fMRI) studies, spectators of a movie may share similar spatiotemporal patterns of brain activity. We aimed to extend these findings of intersubject correlation to temporally accurate single-trial magnetoencephalography (MEG). A silent 15-min black-and-white movie was shown to eight subjects twice. We adopted a spatial filtering model and estimated its parameter values by using multi-set canonical correlation analysis (M-CCA) so that the intersubject correlation was maximized. The procedure resulted in multiple (mutually uncorrelated) time-courses with statistically significant intersubject correlations at frequencies below 10 Hz; the maximum correlation was 0.28 ± 0.075 in the ≤1 Hz band. Moreover, the 24-Hz frame rate elicited steady-state responses with statistically significant intersubject correlations up to 0.29 ± 0.12. To assess the brain origin of the across-subjects correlated signals, the time-courses were correlated with minimum-norm source current estimates (MNEs) projected to the cortex. The time series implied across-subjects synchronous activity in the early visual, posterior and inferior parietal, lateral temporo-occipital, and motor cortices, and in the superior temporal sulcus (STS) bilaterally. These findings demonstrate the capability of the proposed methodology to uncover cortical MEG signatures from single-trial signals that are consistent across spectators of a movie.
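The multi-set canonical correlation analysis (M-CCA) step described above can be sketched in a few lines of NumPy/SciPy. This is a minimal MAXVAR-style illustration under stated assumptions, not the study's actual pipeline: the function name `mcca`, the regularization constant, and the synthetic-data usage below are all hypothetical. The idea is that stacking the per-subject covariances into a generalized eigenproblem yields one spatial filter per subject such that the filtered time-courses are maximally correlated across subjects.

```python
import numpy as np
from scipy.linalg import eigh, block_diag

def mcca(datasets, reg=1e-6):
    """Multi-set CCA (MAXVAR-style sketch): one spatial filter per dataset
    so that the filtered time-courses are maximally inter-correlated.

    datasets: list of (n_channels, n_times) arrays with equal n_times.
    Returns per-dataset filter matrices (n_channels, n_components) and
    the generalized eigenvalues, sorted in decreasing order.
    """
    # Center each dataset along the time axis
    X = [d - d.mean(axis=1, keepdims=True) for d in datasets]
    # Covariance of the stacked data vs. its block-diagonal within-set part
    Z = np.vstack(X)                                   # (sum n_channels, n_times)
    R = Z @ Z.T / Z.shape[1]
    D = block_diag(*[x @ x.T / x.shape[1] for x in X])
    D += reg * np.eye(D.shape[0])                      # regularize for stability
    # Generalized eigenproblem R w = lambda D w; the largest eigenvalues
    # correspond to the most strongly inter-correlated canonical variates
    evals, evecs = eigh(R, D)
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    # Split the stacked eigenvectors back into per-dataset spatial filters
    filters, start = [], 0
    for x in X:
        filters.append(evecs[start:start + x.shape[0]])
        start += x.shape[0]
    return filters, evals
```

With synthetic "subjects" that share a common latent time-course mixed into noisy channels, the first canonical variates recovered by the filters correlate strongly across subjects, mirroring the intersubject-correlation maximization described in the abstract.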
Movie viewing allows human perception and cognition to be studied in complex, real-life-like situations in a brain-imaging laboratory. Previous studies with functional magnetic resonance imaging (fMRI) and with magneto- and electroencephalography (MEG and EEG) have demonstrated consistent temporal dynamics of brain activity across movie viewers. However, little is known about the similarities and differences of fMRI and MEG or EEG dynamics during such naturalistic situations. We thus compared MEG and fMRI responses to the same 15-min black-and-white movie in the same eight subjects, who watched the movie twice during both MEG and fMRI recordings. We analyzed intra- and intersubject voxel-wise correlations within each imaging modality as well as the correlation of the MEG envelopes and fMRI signals. The fMRI signals showed voxel-wise within- and between-subjects correlations up to r = 0.66 and r = 0.37, respectively, whereas these correlations were clearly weaker for the envelopes of band-pass-filtered (7 frequency bands below 100 Hz) MEG signals (within-subjects correlation r < 0.14 and between-subjects r < 0.05). Direct MEG-fMRI voxel-wise correlations were unreliable. Notably, applying a spatial-filtering approach to the MEG data uncovered consistent canonical variates that showed considerably stronger (up to r = 0.25) between-subjects correlations than the univariate voxel-wise analysis. Furthermore, the envelopes of the time courses of these variates up to about 10 Hz showed association with fMRI signals in a general linear model.
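The envelope-based MEG-fMRI comparison above can be illustrated with a short sketch: band-pass filter the MEG time-course, take its Hilbert amplitude envelope, and compute pairwise intersubject correlations. This is a minimal illustration, not the study's analysis code; the function names `bandpass_envelope` and `intersubject_correlation` are assumptions for this example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass_envelope(x, fs, lo, hi, order=4):
    """Band-pass filter a 1-D signal and return its Hilbert amplitude
    envelope, the slow power fluctuation that can be compared with fMRI."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def intersubject_correlation(signals):
    """Mean pairwise Pearson correlation across subjects' time-courses."""
    n = len(signals)
    rs = [np.corrcoef(signals[i], signals[j])[0, 1]
          for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(rs))
```

As a usage illustration, two signals carrying the same slow amplitude modulation on phase-shifted 10-Hz carriers correlate only weakly as raw traces, yet their band-limited envelopes correlate strongly; this is the sense in which envelope correlation can reveal shared slow dynamics that raw-signal correlation misses.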
Similarities between envelopes of MEG canonical variates and fMRI voxel time-courses were seen mostly in occipital, but also in temporal and frontal brain regions, whereas intra- and intersubject correlations for MEG and fMRI separately were strongest only in the occipital areas. In contrast to the conventional univariate analysis, the spatial-filtering approach was able to uncover associations between the MEG envelopes and fMRI time courses, shedding light on the similarities of hemodynamic and electromagnetic brain activities during movie viewing.

Introduction

A practical and ecologically valid approach to probe the neural underpinnings of perception and social cognition is to use movies as stimuli in neuroimaging experiments. Mimicking everyday situations around us, movies can provoke a wide spectrum of sensory, social, and emotional percepts that may be difficult to elicit using the highly controlled repetitive stimuli typically employed in conventional brain-imaging experiments. Despite the apparent complexity and unrestrained nature of movies, consistent and synchronized brain activity patterns across movie viewers have been demonstrated with functional magnetic resonance imaging (fMRI; e.g. Hasson et al.,
Observation of another person's actions and feelings activates brain areas that support similar functions in the observer, thereby facilitating inferences about the other's mental and bodily states. In real life, events eliciting this kind of vicarious brain activation are intermingled with other complex, ever-changing stimuli in the environment. One practical approach to study the neural underpinnings of real-life vicarious perception is to image brain activity during movie viewing. Here the goal was to find out how observed haptic events in a silent movie would affect the spectator's sensorimotor cortex. The functional state of the sensorimotor cortex was monitored by analyzing, in 16 healthy subjects, magnetoencephalographic (MEG) responses to tactile finger stimuli that were presented once per second throughout the session. Using canonical correlation analysis and spatial filtering, consistent single-trial responses across subjects were uncovered, and their waveform changes throughout the movie were quantified. The long-latency (85–175 ms) parts of the responses were modulated in concordance with the participants' average moment-by-moment ratings of own engagement in the haptic content of the movie (correlation r = 0.49; ratings collected after the MEG session). The results, obtained by using novel signal-analysis approaches, demonstrate that the functional state of the human sensorimotor cortex fluctuates in a fine-grained manner even during passive observation of temporally varying haptic events. Hum Brain Mapp 37:4061–4068, 2016. © 2016 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.
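The quantification step in the abstract above, relating the 85-175 ms long-latency response amplitude to the engagement ratings, can be sketched as follows. This is a hedged illustration, not the authors' code; the window bounds come from the abstract, while the function names `window_amplitude` and `amplitude_rating_correlation` and the synthetic data in the usage note are assumptions.

```python
import numpy as np

def window_amplitude(trials, times, t_min=0.085, t_max=0.175):
    """Mean absolute amplitude of each single-trial response within a
    latency window (here the 85-175 ms long-latency part).

    trials: (n_trials, n_times) single-trial time-courses.
    times:  (n_times,) latencies in seconds relative to stimulus onset.
    """
    mask = (times >= t_min) & (times <= t_max)
    return np.abs(trials[:, mask]).mean(axis=1)

def amplitude_rating_correlation(trials, times, ratings):
    """Pearson correlation between trial-by-trial response amplitudes
    in the latency window and the corresponding engagement ratings."""
    amps = window_amplitude(trials, times)
    return np.corrcoef(amps, ratings)[0, 1]
```

On synthetic trials whose response amplitude is scaled by a rating sequence, the correlation approaches 1, confirming that the window-amplitude measure tracks the modulation it is meant to capture.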
Invasive neurophysiological studies in nonhuman primates have shown different laminar activation profiles to auditory vs. visual stimuli in auditory cortices and adjacent polymodal areas. Means to examine the underlying feedforward vs. feedback type influences noninvasively have been limited in humans. Here, using 1‐mm isotropic resolution 3D echo‐planar imaging at 7 T, we studied the intracortical depth profiles of functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) signals to brief auditory (noise bursts) and visual (checkerboard) stimuli. BOLD percent‐signal‐changes were estimated at 11 equally spaced intracortical depths, within regions‐of‐interest encompassing auditory (Heschl's gyrus, Heschl's sulcus, planum temporale, and posterior superior temporal gyrus) and polymodal (middle and posterior superior temporal sulcus) areas. Effects of differing BOLD signal strengths for auditory and visual stimuli were controlled via normalization and statistical modeling. The BOLD depth profile shapes, modeled with quadratic regression, were significantly different for auditory vs. visual stimuli in auditory cortices, but not in polymodal areas. The different depth profiles could reflect sensory‐specific feedforward versus cross‐sensory feedback influences, previously shown in laminar recordings in nonhuman primates. The results suggest that intracortical BOLD profiles can help distinguish between feedforward and feedback type influences in the human brain. Further experimental studies are still needed to clarify how underlying signal strength influences BOLD depth profiles under different stimulus conditions.
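The quadratic-regression modeling of depth profiles mentioned above can be sketched briefly: fit a second-degree polynomial to the BOLD percent-signal changes sampled at the 11 cortical depths and compare the curvature term between conditions. This is a minimal illustration of the statistical idea, not the study's analysis; the function name `depth_profile_fit` and the example profiles are assumptions.

```python
import numpy as np

def depth_profile_fit(depths, signal):
    """Fit a quadratic to a BOLD depth profile.

    depths: (n_depths,) normalized cortical depths (e.g. 11 equal steps
            from the white-matter boundary to the pial surface).
    signal: (n_depths,) BOLD percent-signal changes at those depths.
    Returns (quadratic, linear, constant) coefficients; the sign and
    magnitude of the quadratic term summarize the profile's curvature.
    """
    return np.polyfit(depths, signal, deg=2)
```

A mid-depth-peaked profile yields a clearly negative quadratic coefficient, whereas a monotonic profile yields a quadratic coefficient near zero, so comparing this term across stimulus conditions captures the kind of shape difference the abstract reports between auditory and visual responses.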
Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical recordings in non-human primates (NHP) have suggested a bottom-up feedforward (FF) type laminar profile for auditory evoked but top-down feedback (FB) type for cross-sensory visual evoked activity in the auditory cortex. To test whether this principle applies also to humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for auditory cortex region of interest, auditory evoked responses showed peaks at 37 and 90 ms and cross-sensory visual responses at 125 ms. The inputs to the auditory cortex were then modeled through FF and FB type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which consists of a neocortical circuit model linking the cellular- and circuit-level mechanisms to MEG. The HNN models suggested that the measured auditory response could be explained by an FF input followed by an FB input, and the cross-sensory visual response by an FB input. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input in the auditory cortex is of FB type. The results also illustrate how the dynamic patterns of the estimated MEG/EEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.