In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study, by causing the researcher to miss effects in the data or to detect spurious ones. In practice, to avoid inflating the Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this approach can be insensitive to experiment-specific variation in effect location (e.g., latency shifts), reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent: it uses the data under analysis to determine ROI positions. It therefore has the potential to select ROIs based on experiment-specific information and to increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate the Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than relying on a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can safely be used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies.
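For concreteness, the following is a minimal sketch of AGAT-style ROI selection at a single electrode, assuming paired conditions and a simple peak-centred window; the array shapes, the 100 ms window and the paired t-test are illustrative assumptions, not the authors' exact procedure:

```python
# Minimal sketch of AGAT-based ROI selection at one electrode: the analysis
# window is located on the aggregate grand average of all trials pooled
# across conditions, and the condition contrast is then tested inside that
# window. Shapes and the window half-width are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_rel

def agat_window(cond_a, cond_b, times, half_width=0.05):
    """cond_a, cond_b: (n_subjects, n_trials, n_times) single-electrode ERPs."""
    # Aggregate grand average: average over subjects and all pooled trials.
    agat = np.concatenate([cond_a, cond_b], axis=1).mean(axis=(0, 1))
    peak_time = times[np.argmax(np.abs(agat))]          # locate the effect
    return (times >= peak_time - half_width) & (times <= peak_time + half_width)

def test_contrast(cond_a, cond_b, times):
    roi = agat_window(cond_a, cond_b, times)
    # Mean amplitude per subject inside the data-driven window.
    a = cond_a.mean(axis=1)[:, roi].mean(axis=1)
    b = cond_b.mean(axis=1)[:, roi].mean(axis=1)
    return ttest_rel(a, b)
```

Because the window is located on data pooled across both conditions, the selection step carries no information about the sign of the condition difference; as the abstract notes, this only remains safe when the two conditions are comparably noisy.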
Software simulations of building evacuation during emergencies can provide rich qualitative and quantitative results for safety analysis. However, most of them do not take into account current research on human behavior under stressful situations, which highlights the important role of personality and emotions in crowd behavior during evacuations. In this paper, we propose a framework for designing evacuation simulations based on a multi-agent BDI architecture enhanced with the OCEAN model of personality and the OCC model of emotions.
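As an illustration only (the abstract does not specify implementation details), the sketch below shows one way OCEAN traits and an OCC-style appraisal could plug into a BDI evacuation agent; every class, attribute and threshold here is hypothetical:

```python
# Illustrative sketch of a BDI evacuation agent with OCEAN personality traits
# and an OCC-style emotional appraisal. All names and thresholds are
# hypothetical, not the proposed framework itself.
from dataclasses import dataclass, field

@dataclass
class Personality:                      # OCEAN traits in [0, 1]
    openness: float = 0.5
    conscientiousness: float = 0.5
    extraversion: float = 0.5
    agreeableness: float = 0.5
    neuroticism: float = 0.5

@dataclass
class EvacuationAgent:
    personality: Personality
    emotions: dict = field(default_factory=lambda: {"fear": 0.0})  # OCC-style state
    beliefs: dict = field(default_factory=dict)    # e.g. known exits, perceived danger
    desires: list = field(default_factory=list)    # e.g. "reach_exit", "help_others"
    intentions: list = field(default_factory=list)

    def appraise(self, perceived_danger: float) -> None:
        # OCC-style appraisal: fear grows with perceived danger, modulated by neuroticism.
        self.emotions["fear"] = min(1.0, perceived_danger * (0.5 + self.personality.neuroticism))

    def deliberate(self) -> None:
        # BDI deliberation: high fear and low conscientiousness favour fleeing over the plan.
        if self.emotions["fear"] > 0.7 and self.personality.conscientiousness < 0.5:
            self.intentions = ["flee_to_nearest_exit"]
        else:
            self.intentions = ["follow_planned_route"]
```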
Memory reactivation during sleep is critical for consolidation, but also extremely difficult to measure as it is subtle, distributed and temporally unpredictable. This article reports a novel method for detecting such reactivation in standard sleep recordings. During learning, participants produced a complex sequence of finger presses, with each finger cued by a distinct audio-visual stimulus. Auditory cues were then re-played during subsequent sleep to trigger neural reactivation through a method known as targeted memory reactivation (TMR). Next, we used electroencephalography data from the learning session to train a machine learning classifier, and then applied this classifier to sleep data to determine how successfully each tone had elicited memory reactivation. Above-chance classification was significantly higher in slow wave sleep than in stage 2, suggesting differential efficacy of TMR in these two sleep stages. Interestingly, classification success reduced across numerous repetitions of the tone cue, suggesting either a gradually reducing responsiveness to such cues or a plasticity-related change in the neural signature as a result of cueing. We believe this method will be invaluable for future investigations of memory consolidation. (Preprint; not peer-reviewed.)
Recently, we showed that presenting salient names (i.e., a participant's first name) on the fringe of awareness (in rapid serial visual presentation, RSVP) breaks through into awareness, resulting in the generation of a P3, which (if concealed information is presented) could be used to differentiate between deceivers and nondeceivers. The aim of the present study was to explore whether face stimuli can be used in an ERP-based RSVP paradigm to infer recognition of broadly familiar faces. To do this, we explored whether famous faces differentially break into awareness when presented in RSVP and, importantly, whether ERPs can be used to detect these breakthrough events on an individual basis. Our findings provide evidence that famous faces are differentially perceived and processed by participants' brains as compared to novel (or unfamiliar) faces. EEG data revealed large differences in brain responses between these conditions. Keywords: deception detection, EEG/ERP, familiarity, famous faces, P3, RSVP, time-frequency analyses.
Memory reactivation during sleep is critical for consolidation, but also extremely difficult to measure as it is subtle, distributed and temporally unpredictable. This article reports a novel method for detecting such reactivation in standard sleep recordings. During learning, participants produced a complex sequence of finger presses, with each finger cued by a distinct audio-visual stimulus. Auditory cues were then re-played during subsequent sleep to trigger neural reactivation through a method known as targeted memory reactivation (TMR). Next, we used electroencephalography data from the learning session to train a machine learning classifier, and then applied this classifier to sleep data to determine how successfully each tone had elicited memory reactivation. Neural reactivation was classified above chance in all participants when TMR was applied in SWS, and in 5 of the 14 participants to whom TMR was applied in N2. Classification success reduced across numerous repetitions of the tone cue, suggesting either a gradually reducing responsiveness to such cues or a plasticity-related change in the neural signature as a result of cueing. We believe this method will be valuable for future investigations of memory consolidation.
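A minimal sketch of the core classification idea, assuming epoched data as NumPy arrays and an LDA classifier (the specific classifier, features and preprocessing are assumptions, not the authors' reported pipeline):

```python
# Sketch: train a classifier on wake EEG epochs labelled by the cued finger,
# then score post-cue sleep epochs to ask whether the cued memory was
# reactivated. Epoch extraction, preprocessing and cross-validation omitted.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_wake_classifier(wake_epochs, wake_labels):
    """wake_epochs: (n_epochs, n_channels, n_times); wake_labels: cued finger per epoch."""
    X = wake_epochs.reshape(len(wake_epochs), -1)          # flatten channels x time
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    return clf.fit(X, wake_labels)

def score_sleep_reactivation(clf, sleep_epochs, cued_labels):
    """Proportion of post-cue sleep epochs classified as the finger that was cued."""
    X = sleep_epochs.reshape(len(sleep_epochs), -1)
    return np.mean(clf.predict(X) == cued_labels)
```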
There has been considerable debate and concern as to whether there is a replication crisis in the scientific literature. A likely cause of poor replication is the multiple comparisons problem. An important way in which this problem can manifest in the M/EEG context is through post hoc tailoring of analysis windows (a.k.a. regions of interest, ROIs) to landmarks in the collected data. Post hoc tailoring of ROIs is used because it allows researchers to adapt to inter-experiment variability and discover novel differences that fall outside of windows defined by prior precedent, thereby reducing Type II errors. However, this approach can dramatically inflate Type I error rates. One way to avoid this problem is to tailor windows according to a contrast that is orthogonal (strictly, parametrically orthogonal) to the contrast being tested. A key approach of this kind is to identify windows on a fully flattened average. On the basis of simulations, this approach has been argued to be safe for post hoc tailoring of analysis windows under many conditions. Here, we present further simulations and mathematical proofs to show exactly why the fully flattened average approach is unbiased, providing a formal grounding for the approach, clarifying the limits of its applicability, and resolving published misconceptions about the method. We also provide a statistical power analysis, which shows that, in specific contexts, the fully flattened average approach provides higher statistical power than FieldTrip cluster inference. This suggests that the fully flattened average approach will enable researchers to identify more effects from their data without incurring an inflation of the false positive rate.
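A compact way to see the orthogonality argument, sketched under standard assumptions (paired conditions A and B with Gaussian noise); this is an illustration, not the paper's full proof:

```latex
\operatorname{Cov}(A + B,\; A - B)
  = \operatorname{Var}(A) - \operatorname{Cov}(A,B) + \operatorname{Cov}(B,A) - \operatorname{Var}(B)
  = \operatorname{Var}(A) - \operatorname{Var}(B)
```

When the two conditions are equally noisy, Var(A) = Var(B), so the flattened average A + B (used to place the window) is uncorrelated with the tested contrast A - B and, under Gaussianity, independent of it; window selection therefore cannot bias the test. A noise difference between conditions makes this covariance nonzero, which is exactly the failure case flagged in the AGAT abstract above.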
Resampling techniques are widely used within the ERP community to assess statistical significance, especially in the deception detection literature. Here, we argue that, because of statistical bias, the bootstrap should not be used in combination with measures such as peak-to-peak amplitude. Instead, permutation tests provide a more appropriate alternative.
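The following is a minimal sketch of the recommended alternative, a permutation test applied to a peak-to-peak measure; the single-subject setting, the two-sided p-value and the 5,000 permutations are illustrative assumptions:

```python
# Sketch of a permutation test on a peak-to-peak measure: condition labels are
# shuffled and the statistic recomputed on each shuffle, so the measure's
# selection bias is present in the null distribution as well as in the
# observed value.
import numpy as np

def peak_to_peak(erp):
    """Peak-to-peak amplitude of an averaged waveform (1-D array)."""
    return erp.max() - erp.min()

def permutation_test(trials_a, trials_b, n_perm=5000, rng=None):
    """trials_a, trials_b: (n_trials, n_times) single-subject ERP trials."""
    rng = np.random.default_rng(rng)
    observed = peak_to_peak(trials_a.mean(0)) - peak_to_peak(trials_b.mean(0))
    pooled = np.vstack([trials_a, trials_b])
    n_a = len(trials_a)
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(pooled))
        null[i] = (peak_to_peak(pooled[idx[:n_a]].mean(0))
                   - peak_to_peak(pooled[idx[n_a:]].mean(0)))
    return np.mean(np.abs(null) >= np.abs(observed))   # two-sided p-value
```

The key point is that the peak-to-peak statistic, including its selection of maxima and minima, is recomputed on every relabelling, so any bias it introduces appears in the null distribution as well as in the observed value.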
Methods for measuring latency contrasts are evaluated against a new method based on the Dynamic Time Warping (DTW) algorithm. They are applied to simulated data, across different signal-to-noise ratios and two window sizes (broad vs. narrow). The results are subjected to statistical and ROC analyses. The analyses suggest that DTW performs better than the other methods, being less sensitive to noise as well as to the placement and width of the selected window.
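As an illustration of the general approach (not necessarily the specific DTW variant evaluated here), the sketch below computes a classic DTW warping path between two averaged waveforms and summarizes the latency shift as the mean offset along that path:

```python
# Sketch of using a DTW warping path to quantify a latency shift between two
# averaged ERP waveforms; the mean-shift summary is an illustrative choice.
import numpy as np

def dtw_path(x, y):
    """Classic O(n*m) DTW on 1-D signals; returns the optimal warping path."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (n, m) to (1, 1).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def latency_shift(erp_a, erp_b, sfreq):
    """Average warping-path offset (in seconds): positive if erp_b lags erp_a."""
    path = np.array(dtw_path(erp_a, erp_b))
    return np.mean(path[:, 1] - path[:, 0]) / sfreq
```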