The spatial Stroop task measures the ability to resolve interference between relevant and irrelevant spatial information. We recently proposed a four-choice spatial Stroop task that offers methodological advantages over the original color-word verbal Stroop task, requiring participants to indicate the direction of an arrow while ignoring its position in one of the screen corners. However, its peripheral spatial arrangement might represent a methodological weakness and could introduce experimental confounds. Thus, aiming to improve our “Peripheral” spatial Stroop, we designed and made available five novel spatial Stroop tasks (Perifoveal, Navon, Figure-Ground, Flanker, and Saliency), in which the stimuli appeared at the center of the screen. In a within-subjects online study, we compared the six versions to identify which task produced the largest, but also the most reliable and robust, Stroop effect. Indeed, although internal reliability is frequently overlooked, estimating it is fundamental, especially in light of the recently proposed reliability paradox. Data analyses were performed using both the classical general linear model approach and two multilevel modelling approaches (linear mixed models and random coefficient analysis), which allowed us to estimate the Stroop effect more accurately by modelling intra-subject, trial-by-trial variability. We then assessed our results based on their robustness to such analytic flexibility. Overall, our results indicate that the Perifoveal spatial Stroop is the best alternative task for its statistical properties and methodological advantages. Interestingly, the Peripheral and Perifoveal Stroop effects were not only the largest, but also those with the highest and most robust internal reliability.
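The multilevel approach mentioned above can be illustrated with a minimal sketch: a linear mixed model fitted to simulated trial-level reaction times, with a fixed congruency effect plus a by-subject random intercept and random congruency slope. All data, column names, and effect sizes below are hypothetical assumptions for illustration, not values from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 30, 100  # hypothetical sample sizes

rows = []
for s in range(n_subj):
    subj_intercept = rng.normal(0, 50)   # subject baseline shift (ms)
    subj_slope = rng.normal(0, 20)       # subject-specific Stroop effect (ms)
    for _ in range(n_trials):
        incong = int(rng.integers(0, 2))  # 0 = congruent, 1 = incongruent
        rt = 500 + subj_intercept + (80 + subj_slope) * incong + rng.normal(0, 60)
        rows.append({"subject": s, "incongruent": incong, "rt": rt})
df = pd.DataFrame(rows)

# Linear mixed model: fixed congruency effect, with random intercept and
# random congruency slope per subject (rt ~ incongruent + (incongruent | subject)).
model = smf.mixedlm("rt ~ incongruent", df, groups=df["subject"],
                    re_formula="~incongruent").fit()

# Population-level Stroop effect (ms), estimated while accounting for
# intra-subject, trial-by-trial variability.
stroop_effect = model.fe_params["incongruent"]
```

Unlike a per-subject difference of condition means fed into a second-level test, the mixed model partially pools subject-level Stroop effects toward the group mean, which is one reason trial-level modelling can yield more accurate (and more reliable) effect estimates.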
The Stroop task has been a seminal paradigm in experimental psychology, so much so that, in addition to the original color-word version, a multitude of alternative variants has been proposed. The spatial Stroop task, compared with many of the other variants, is potentially not only a more faithful variation but may also overcome some of the methodological limitations of the original paradigm. The present work therefore offers a methodological review of the spatial Stroop tasks used in the literature to verify whether they really exploit this potential. This was often not the case, because not all the tasks (1) were purely spatial, (2) had the dimensional overlap considered fundamental for yielding proper Stroop effects according to Kornblum’s theory, and/or (3) controlled for low-level binding and priming effects. Based on these methodological considerations, we put forward some examples of spatial Stroop tasks that, in our view, produce proper Stroop effects while controlling for sequence effects and excluding verbal stimuli. Overall, this review aims to emphasize the importance of designing methodologically rigorous Stroop paradigms and to offer some examples of spatial Stroop tasks satisfying these requirements. Indeed, methodological rigor is not an end in itself, but is fundamental for achieving a better understanding of interference resolution in the Stroop task, which has so far been hindered by methodological heterogeneity and limitations.
Given their ability to simultaneously model crossed random effects for subjects and items, mixed-effects models are the current standard for the analysis of behavioral studies in psycholinguistics and related fields. However, they are rarely applied in neuroimaging and psychophysiology, where mass univariate analyses in combination with permutation testing would be too computationally demanding to be practicable with mixed models. Here we propose and validate an analytical strategy that enables the use of linear mixed models with crossed random effects in mass univariate analyses of EEG data (lmeEEG), overcoming the computational costs of standard approaches (our method was ≈250 times faster). Data and code are available at osf.io/kw87a. Code and a tutorial are also available at github.com/antovis86/lmeEEG.