Here we present an update of the studyforrest (http://studyforrest.org) dataset that complements the previously released functional magnetic resonance imaging (fMRI) data for natural language processing with a new two-hour 3 Tesla fMRI acquisition in which 15 of the original participants were shown an audio-visual version of the stimulus motion picture. With two validation analyses we demonstrate that these new data support modeling of specific properties of the complex natural stimulus, and that they show substantial within-subject BOLD response congruency with the existing fMRI data for audio-only stimulation in brain areas related to the processing of auditory input, speech, and narrative. In addition, we provide participants' eye gaze locations recorded simultaneously with fMRI, as well as eye gaze trajectories for the entire movie from an additional sample of 15 control participants recorded in a lab setting, to enable studies of attentional processes and comparative investigations of the potential impact of the stimulation setting on these processes.
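The within-subject congruency measure described above can be sketched as a voxelwise Pearson correlation between time series from the two stimulation conditions. The following is a minimal toy illustration with simulated data, not the dataset's actual validation pipeline; all variable names and parameters are hypothetical:

```python
import numpy as np

def voxelwise_congruency(ts_a, ts_b):
    """Pearson correlation per voxel between two (time x voxel) runs.

    A minimal congruency sketch; the released dataset's own
    validation analyses may differ in preprocessing and statistics.
    """
    a = ts_a - ts_a.mean(axis=0)
    b = ts_b - ts_b.mean(axis=0)
    num = (a * b).sum(axis=0)
    den = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
    return num / den

# Toy example: two "voxels", one stimulus-driven, one pure noise
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
sig = np.sin(t)  # shared stimulus-driven component
run_audio = np.column_stack([sig + 0.1 * rng.standard_normal(200),
                             rng.standard_normal(200)])
run_av = np.column_stack([sig + 0.1 * rng.standard_normal(200),
                          rng.standard_normal(200)])
r = voxelwise_congruency(run_audio, run_av)
# r[0] is high (shared signal), r[1] is near zero (independent noise)
```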
The studyforrest (http://studyforrest.org) dataset is likely the largest publicly available neuroimaging dataset on natural language and story processing. In this article, together with a companion publication, we present an update of this dataset that extends its scope to vision and multi-sensory research. Fifteen participants of the original cohort volunteered for a series of additional studies: a clinical examination of visual function, a standard retinotopic mapping procedure, and a localization of higher visual areas such as the fusiform face area. This update, the previous data releases for the dataset, and the companion publication, which includes neuroimaging and eye tracking data from natural stimulation with a motion picture, together form a versatile and comprehensive resource for brain imaging research, with almost six hours of functional neuroimaging data across five different stimulation paradigms for each participant. Furthermore, we describe the employed paradigms and present results that document the quality of the data for the purpose of characterising major properties of participants' visual processing stream.
A decade after it was shown that the orientation of visual grating stimuli can be decoded from human visual cortex activity by means of multivariate pattern classification of BOLD fMRI data, numerous studies have investigated which aspects of neuronal activity are reflected in BOLD response patterns and are accessible for decoding. However, the effect of acquisition resolution on BOLD fMRI decoding analyses remains inconclusive. The present study is the first to provide empirical ultra high-field fMRI data recorded at four spatial resolutions (0.8 mm, 1.4 mm, 2 mm, and 3 mm isotropic voxel size) on this topic, in order to test hypotheses on the strength and spatial scale of orientation-discriminating signals. We present a detailed analysis, in line with predictions from previous simulation studies, of how orientation decoding performance varies with acquisition resolution. Moreover, we examine different spatial filtering procedures and their effects on orientation decoding. We show that higher-resolution scans with subsequent down-sampling or low-pass filtering yield no benefit in decoding accuracy over scans natively recorded at the corresponding lower resolution. The orientation-related signal in the BOLD fMRI data is spatially broadband in nature, including both high spatial frequency components and the large-scale biases previously proposed in the literature. Moreover, we found an above-chance contribution from large draining veins to orientation decoding. The acquired raw data were publicly released to facilitate further investigation.
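As a toy illustration of the decoding approach (not the study's actual analysis), the following sketch simulates a multivoxel orientation-specific pattern, decodes it with a leave-one-trial-out nearest-class-mean classifier, and repeats the analysis after naive down-sampling by averaging neighbouring voxels. All data, names, and parameters are hypothetical:

```python
import numpy as np

def decode_accuracy(X, y):
    """Leave-one-trial-out nearest-class-mean decoding accuracy."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i  # hold out trial i
        means = [X[mask & (y == c)].mean(axis=0) for c in (0, 1)]
        dists = [np.linalg.norm(X[i] - m) for m in means]
        correct += int(np.argmin(dists) == y[i])
    return correct / len(y)

rng = np.random.default_rng(7)
n_trials, n_voxels = 60, 64
y = np.repeat([0, 1], n_trials // 2)

# Simulated orientation-specific multivoxel pattern plus noise
signal = rng.standard_normal(n_voxels)
X = np.where(y[:, None] == 0, signal, -signal) \
    + 2.0 * rng.standard_normal((n_trials, n_voxels))

acc_native = decode_accuracy(X, y)

# Naive "down-sampling": average neighbouring voxel pairs
X_ds = X.reshape(n_trials, n_voxels // 2, 2).mean(axis=2)
acc_ds = decode_accuracy(X_ds, y)
```

In this toy setting the signal survives averaging because neighbouring voxels are pooled; how decoding accuracy actually depends on acquisition resolution and filtering is exactly what the study's empirical data address.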
With the advent of ultra-high field (7T) MRI, high spatial resolution functional MRI (fMRI) has allowed the differentiation of the cortical representations of each of the digits at an individual-subject level in human primary somatosensory cortex (S1). Here we generate a probabilistic atlas of the contralateral S1 representations of the digits of both the left and right hand in a group of 22 right-handed individuals. The atlas is generated in both volume and surface standardised spaces from somatotopic maps obtained by delivering vibrotactile stimulation to each distal phalangeal digit using a travelling wave paradigm. Metrics quantify the likelihood of a given position being assigned to a digit (full probability map) and the most probable digit for a given spatial location (maximum probability map). The atlas is validated using a leave-one-out cross-validation procedure. Anatomical variance across the somatotopic map is also assessed to investigate whether the functional variability across subjects is coupled to structural differences. This probabilistic atlas quantifies the variability in digit representations in healthy subjects, finding some quantifiable separability between digits 2, 3 and 4, a complex overlapping relationship between digits 1 and 2, and little agreement of digit 5 across subjects. The atlas and constituent subject maps are available online for use as a reference in future neuroimaging studies.
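The two atlas metrics can be sketched directly: given a binary digit mask per subject, the full probability map is the across-subject mean of each mask, and the maximum probability map takes the most probable digit at each location. A minimal sketch with simulated masks (all data hypothetical, not the published atlas):

```python
import numpy as np

# Hypothetical binary digit masks: (subjects, digits, voxels)
rng = np.random.default_rng(3)
n_subj, n_digits, n_vox = 22, 5, 100
masks = rng.random((n_subj, n_digits, n_vox)) < 0.2

# Full probability map: fraction of subjects assigning each
# voxel to each digit, shape (n_digits, n_vox)
fpm = masks.mean(axis=0)

# Maximum probability map: most probable digit per voxel,
# coded 1..5, with 0 where no subject assigned any digit
mpm = fpm.argmax(axis=0) + 1
mpm = np.where(fpm.max(axis=0) > 0, mpm, 0)
```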
The sensation of touch in the glabrous skin of the human hand is conveyed by thousands of fast-conducting mechanoreceptive afferents, which can be categorised into four distinct types. The spiking properties of these afferents in the periphery in response to varied tactile stimuli are well-characterised, but relatively little is known about the spatiotemporal properties of the cortical representations of these different receptor types in humans. Here, we use the novel methodological combination of single-unit intraneural microstimulation (INMS) with magnetoencephalography (MEG) to localise cortical representations of individual touch afferents in humans, by measuring the extracranial magnetic fields from neural currents. We found that, by assessing the modulation of the beta (13–30 Hz) rhythm during single-unit INMS, significant changes in oscillatory amplitude occur in the contralateral primary somatosensory cortex within and across a group of fast adapting type I mechanoreceptive afferents, corresponding well to the induced response from matched vibrotactile stimulation. Combining the spatiotemporal specificity of MEG with the selective single-unit stimulation of INMS enables the interrogation of the central representations of different aspects of tactile afferent signalling in the human cortex. The fundamental finding that single-unit INMS event-related desynchronisation (ERD) responses are robust and consistent with natural somatosensory stimuli will permit us to more dynamically probe the central nervous system responses in humans, to address questions about the processing of touch from the different classes of mechanoreceptive afferents and the effects of varying the stimulus frequency and patterning.
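The beta-band amplitude analysis described above can be sketched as band-pass filtering, envelope extraction via the analytic signal, and a percent change relative to baseline (a negative change being the ERD). The following numpy-only toy uses a simulated signal whose beta amplitude drops after stimulation onset; it is an illustration of the general technique, not the study's MEG pipeline:

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Zero-phase band-pass via FFT masking (toy; real pipelines
    typically use FIR/IIR filters with edge handling)."""
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X = np.fft.rfft(x)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, len(x))

def envelope(x):
    """Amplitude envelope via an FFT-based analytic signal."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

fs, dur = 250, 4.0
t = np.arange(0, dur, 1 / fs)
# Simulated 20 Hz beta rhythm whose amplitude drops after t = 2 s
amp = np.where(t < 2.0, 1.0, 0.4)
sig = amp * np.sin(2 * np.pi * 20 * t) \
    + 0.1 * np.random.default_rng(5).standard_normal(len(t))

beta = bandpass_fft(sig, fs, 13, 30)
env = envelope(beta)
baseline = env[t < 2.0].mean()
erd = 100 * (env[t >= 2.0].mean() - baseline) / baseline  # negative = ERD
```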