Electrophysiological research with event-related brain potentials (ERPs) is increasingly moving from simple, strictly orthogonal stimulation paradigms towards more complex, quasi-experimental designs and naturalistic situations that involve fast, multisensory stimulation and complex motor behavior. As a result, electrophysiological responses from subsequent events often overlap with each other. In addition, the recorded neural activity is typically modulated by numerous covariates, which influence the measured responses in a linear or non-linear fashion. Examples of paradigms where systematic temporal overlap variations and low-level confounds between conditions cannot be avoided include combined electroencephalogram (EEG)/eye-tracking experiments during natural vision, fast multisensory stimulation experiments, and mobile brain/body imaging studies. However, even “traditional,” highly controlled ERP datasets often contain a hidden mix of overlapping activity (e.g., from stimulus onsets, involuntary microsaccades, or button presses) and it is helpful or even necessary to disentangle these components for a correct interpretation of the results. In this paper, we introduce unfold, a powerful, yet easy-to-use MATLAB toolbox for regression-based EEG analyses that combines existing concepts of massive univariate modeling (“regression-ERPs”), linear deconvolution modeling, and non-linear modeling with the generalized additive model into one coherent and flexible analysis framework. The toolbox is modular, compatible with EEGLAB and can handle even large datasets efficiently. It also includes advanced options for regularization and the use of temporal basis functions (e.g., Fourier sets). We illustrate the advantages of this approach for simulated data as well as data from a standard face recognition experiment. 
In addition to traditional and non-conventional EEG/ERP designs, unfold can also be applied to other overlapping physiological signals, such as pupillary or electrodermal responses. It is available as open-source software at .
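The core idea behind linear deconvolution modeling, as implemented in toolboxes like unfold, can be illustrated with a minimal sketch. This is plain numpy, not unfold's actual MATLAB API: event onsets are time-expanded into a design matrix (one column per time lag), and the overlapping responses of all event types are estimated jointly in a single least-squares fit.

```python
import numpy as np

def time_expand(onsets, n_samples, n_lags):
    """Time-expanded design matrix: column `lag` holds a 1 wherever
    an event occurred `lag` samples earlier."""
    X = np.zeros((n_samples, n_lags))
    for t in onsets:
        for lag in range(n_lags):
            if t + lag < n_samples:
                X[t + lag, lag] = 1.0
    return X

rng = np.random.default_rng(0)
n, n_lags = 2000, 30

# Two event types with known, temporally overlapping responses
kernel_a = np.hanning(n_lags)           # "ERP" of event type A
kernel_b = -0.5 * np.hanning(n_lags)    # "ERP" of event type B
onsets_a = rng.choice(n - n_lags, 60, replace=False)
onsets_b = rng.choice(n - n_lags, 60, replace=False)

Xa = time_expand(onsets_a, n, n_lags)
Xb = time_expand(onsets_b, n, n_lags)
y = Xa @ kernel_a + Xb @ kernel_b + 0.1 * rng.standard_normal(n)

# One joint regression "unfolds" the overlap between the two responses
X = np.hstack([Xa, Xb])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
est_a, est_b = beta[:n_lags], beta[n_lags:]
```

Simple epoch averaging would smear the two responses into each other wherever events occur close in time; the joint regression recovers each kernel separately because the design matrix encodes exactly which events contribute to each sample.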
In everyday life, spatial navigation involving locomotion provides congruent visual, vestibular, and kinesthetic information that needs to be integrated. Yet, previous studies on human brain activity during navigation have focused on stationary setups, neglecting vestibular and kinesthetic feedback. The aim of our work is to uncover the influence of these sensory modalities on cortical processing. We developed a fully immersive virtual reality setup combined with high-density mobile electroencephalography (EEG). Participants traversed one leg of a triangle, turned on the spot, continued along the second leg, and finally indicated the location of their starting position. Vestibular and kinesthetic information was provided either in combination, as isolated sources of information, or not at all within a 2 × 2 full factorial within-subjects design. EEG data were processed by clustering independent components, and time-frequency spectrograms were calculated. In parietal, occipital, and temporal clusters, we detected alpha suppression during the turning movement, which is associated with a heightened demand on visuo-attentional processing and closely resembles results reported in previous stationary studies. This decrease is present in all conditions and therefore seems to generalize to more natural settings. Yet, in incongruent conditions, when different sensory modalities did not match, the decrease is significantly stronger. Additionally, in more anterior areas we found that providing only vestibular but no kinesthetic information results in an alpha increase. These observations demonstrate that stationary experiments omit important aspects of sensory feedback. Therefore, it is important to develop more natural experimental settings in order to capture a more complete picture of the neural correlates of spatial navigation.
There is growing awareness across the neuroscience community that the replicability of findings about the relationship between brain activity and cognitive phenomena can be improved by conducting studies with high statistical power that adhere to well-defined and standardised analysis pipelines. Inspired by recent efforts from the psychological sciences, and with the desire to examine some of the foundational findings using electroencephalography (EEG), we have launched #EEGManyLabs, a large-scale international collaborative replication effort. Since its discovery in the early 20th century, EEG has had a profound influence on our understanding of human cognition, but there is limited evidence on the replicability of some of the most highly cited discoveries. After a systematic search and selection process, we have identified 27 of the most influential and continually cited studies in the field. We plan to directly test the replicability of key findings from 20 of these studies in teams of at least three independent laboratories. The design and protocol of each replication effort will be submitted as a Registered Report and peer-reviewed prior to data collection. Prediction markets, open to all EEG researchers, will be used as a forecasting tool to examine which findings the community expects to replicate. This project will update our confidence in some of the most influential EEG findings and generate a large open access database that can be used to inform future research practices. Finally, through this international effort, we hope to create a cultural shift towards inclusive, high-powered multi-laboratory collaborations.
G-protein-coupled receptors (GPCRs) represent the major protein family for cellular modulation in mammals. Therefore, various strategies have been developed to analyze the function of GPCRs involving pharmaco- and optogenetic approaches [1, 2]. However, a tool that combines precise control of the activation and deactivation of GPCR pathways and/or neuronal firing with limited phototoxicity is still missing. We compared the biophysical properties and optogenetic application of a human and a mouse melanopsin variant (hOpn4L and mOpn4L) for the control of Gi/o and Gq pathways in heterologous expression systems and mouse brain. We found that GPCR pathways can be switched on/off by blue/yellow light. The proteins differ in their kinetics and wavelength dependence to activate and deactivate G protein pathways. Whereas mOpn4L is maximally activated by very short light pulses, leading to sustained G protein activation, G protein responses of hOpn4L need longer light pulses to be activated and decline in amplitude. Based on the different biophysical properties, brief light activation of mOpn4L is sufficient to induce sustained neuronal firing in cerebellar Purkinje cells (PC), whereas brief light activation of hOpn4L induces AP firing, which declines in frequency over time. Most importantly, mOpn4L-induced sustained firing can be switched off by yellow light. Based on these biophysical properties, hOpn4L and mOpn4L represent the first GPCR optogenetic tools which can be used to switch GPCR pathways/neuronal firing on and off with temporal precision and limited phototoxicity. We suggest naming these tools moMo and huMo for future optogenetic applications.
Perceptual decisions are biased toward previous decisions. Earlier research suggests that this choice repetition bias is increased after previous decisions of high confidence, as inferred from response time measures (Urai, Braun, & Donner, 2017), but also when previous decisions were based on weak sensory evidence (Akaishi, Umeda, Nagase, & Sakai, 2014). As weak sensory evidence is typically associated with low confidence, these previous findings appear conflicting. To resolve this conflict, we set out to investigate the effect of decision confidence on choice repetition more directly by measuring explicit confidence ratings in a motion coherence discrimination task. Moreover, we explored how choice and evidence history jointly affect subsequent perceptual choices. We found that participants were more likely to repeat previous choices of high subjective confidence, as well as previous fast choices, confirming the boost of choice repetition with decision confidence. Furthermore, we discovered that current choices were biased away from the previous evidence direction and that this effect grew with previous evidence strength. These findings point toward simultaneous biases of choice repetition, modulated by decision confidence, and evidence adaptation, modulated by the strength of evidence, which bias current perceptual decisions in opposite directions.
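The choice repetition effect described above can be illustrated with a toy simulation. This is not the paper's analysis code; it simply assumes a binary choice sequence in which the probability of repeating the previous choice is higher after high-confidence decisions, and then estimates the two repetition probabilities from the simulated data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 20000

# Assumed generative model: after a high-confidence choice, repeat with
# p = 0.65; after a low-confidence choice, repeat with p = 0.55.
confidence = rng.integers(0, 2, n_trials)   # 0 = low, 1 = high
choices = np.empty(n_trials, dtype=int)
choices[0] = rng.integers(0, 2)
for t in range(1, n_trials):
    p_repeat = 0.65 if confidence[t - 1] == 1 else 0.55
    choices[t] = choices[t - 1] if rng.random() < p_repeat else 1 - choices[t - 1]

# Estimate repetition probability conditioned on previous confidence
repeat = choices[1:] == choices[:-1]
prev_conf = confidence[:-1]
p_rep_high = repeat[prev_conf == 1].mean()
p_rep_low = repeat[prev_conf == 0].mean()
```

Conditioning the repetition rate on the previous trial's confidence rating is the simplest version of the comparison; a fuller analysis would use a logistic regression with previous choice, confidence, and evidence strength as joint predictors.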
Eye-tracking experiments rely heavily on the data quality of eye-trackers. Unfortunately, it is often the case that only spatial accuracy and precision values are available from the manufacturers. These two values alone are not sufficient to serve as a benchmark for an eye-tracker: eye-tracking quality deteriorates during an experimental session due to head movements, changing illumination, or calibration decay. Additionally, different experimental paradigms require the analysis of different types of eye movements, for instance smooth pursuit movements, blinks, or microsaccades, which themselves cannot readily be evaluated using spatial accuracy or precision alone. To obtain a more comprehensive description of eye-tracker properties, we developed an extensive eye-tracking test battery. In 10 different tasks, we evaluated eye-tracking related measures such as the decay of accuracy, fixation durations, pupil dilation, smooth pursuit movement, microsaccade classification, blink classification, and the influence of head motion. For some measures, true theoretical values exist. For others, a relative comparison to a reference eye-tracker is needed. Therefore, we collected our gaze data simultaneously from a remote EyeLink 1000 eye-tracker as the reference and compared it with the mobile Pupil Labs glasses. As expected, the average spatial accuracy of 0.57° for the EyeLink 1000 eye-tracker was better than the 0.82° for the Pupil Labs glasses (N = 15). Furthermore, we classified fewer fixations and shorter saccade durations for the Pupil Labs glasses. Similarly, we found fewer microsaccades using the Pupil Labs glasses. The accuracy over time decayed only slightly for the EyeLink 1000, but strongly for the Pupil Labs glasses. Finally, we observed that the measured pupil diameters differed between eye-trackers on the individual subject level but not on the group level.
To conclude, our eye-tracking test battery offers 10 tasks that allow us to benchmark the many parameters of interest in stereotypical eye-tracking situations and addresses a common source of confounds in measurement errors (e.g., yaw and roll head movements). All recorded eye-tracking data (including Pupil Labs’ eye videos), the stimulus code for the test battery, and the modular analysis pipeline are freely available (https://github.com/behinger/etcomp).
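The two headline data-quality measures can be made concrete with a short sketch. This is a generic illustration, not the test battery's pipeline: it assumes gaze samples and fixation targets are given in degrees of visual angle, and computes accuracy as the mean offset from the target and precision as the RMS of sample-to-sample distances.

```python
import numpy as np

def spatial_accuracy(gaze, targets):
    """Mean Euclidean offset (deg) between gaze samples and the
    fixation targets they belong to."""
    return np.linalg.norm(gaze - targets, axis=1).mean()

def spatial_precision_rms(gaze):
    """RMS of successive sample-to-sample distances (deg) within
    one fixation; a common precision measure."""
    d = np.diff(gaze, axis=0)
    return np.sqrt(np.mean(np.sum(d ** 2, axis=1)))

rng = np.random.default_rng(2)
target = np.array([5.0, 0.0])  # fixation target at 5 deg to the right

# Simulated gaze: a systematic calibration offset plus sample noise
gaze = target + np.array([0.5, 0.2]) + 0.05 * rng.standard_normal((200, 2))

acc = spatial_accuracy(gaze, np.tile(target, (200, 1)))
prec = spatial_precision_rms(gaze)
```

Separating the two matters for benchmarking: a systematic calibration offset degrades accuracy but leaves precision untouched, whereas sample noise degrades precision while the mean gaze position can still sit on the target.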