Abstract: Whenever parameter estimates are uncertain or observations are contaminated by measurement error, the Pearson correlation coefficient can severely underestimate the true strength of an association. Various approaches exist for inferring the correlation in the presence of estimation uncertainty and measurement error, but none are routinely applied in psychological research. Here we focus on a Bayesian hierarchical model proposed by Behseta, Berdyyeva, Olson, and Kass (2009) that allows researchers to infer the …
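The attenuation described in the abstract is easy to reproduce in a few lines. The sketch below is not the Behseta et al. (2009) model; it simply simulates latent scores with a known correlation, contaminates them with independent measurement error, and shows how far the Pearson coefficient shrinks (the sample size and noise levels are arbitrary choices for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent "true" scores with a known correlation of 0.8.
rho = 0.8
true_x, true_y = rng.multivariate_normal(
    [0, 0], [[1.0, rho], [rho, 1.0]], size=n).T

# Observed scores carry independent measurement error.
# Error variance 1 on unit-variance true scores gives reliability 0.5.
obs_x = true_x + rng.normal(0, 1.0, n)
obs_y = true_y + rng.normal(0, 1.0, n)

r_true = np.corrcoef(true_x, true_y)[0, 1]   # ~0.80
r_obs = np.corrcoef(obs_x, obs_y)[0, 1]      # ~0.40

print(r_true, r_obs)
```

The observed correlation lands near 0.4 because attenuation scales the true correlation by the geometric mean of the two reliabilities, here sqrt(0.5 × 0.5) = 0.5.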
“…In hierarchical models, trial noise and the true variability of experimental effects are estimated separately. As a consequence, the estimates of the true variability of the attentional control effect in one task can then be correlated with those of another task without attenuation (Matzke et al., 2017). However, applying hierarchical models alone cannot compensate for all of the problems associated with current implementations of attentional control paradigms.…”
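The separation of trial noise from true variability that this excerpt describes has a classical, non-hierarchical analogue: Spearman's correction for attenuation. The simulation below is only an illustration of that principle, not the cited hierarchical implementation; the reliabilities are known by construction here, whereas a hierarchical model would estimate them from trial-level data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj = 2000   # participants (large, so sampling error is small)
rho = 0.7       # latent correlation between the two task effects

# True per-participant effects for task A and task B.
true_a, true_b = rng.multivariate_normal(
    [0, 0], [[1, rho], [rho, 1]], size=n_subj).T

# Observed effects are trial means; with finitely many trials they
# carry noise (error variance 1 here, i.e. reliability 0.5 per task).
obs_a = true_a + rng.normal(0, 1, n_subj)
obs_b = true_b + rng.normal(0, 1, n_subj)

r_obs = np.corrcoef(obs_a, obs_b)[0, 1]       # attenuated, ~0.35

# Spearman's disattenuation: divide by the geometric mean of the
# two reliabilities (known by construction in this simulation).
rel_a = rel_b = 0.5
r_corrected = r_obs / np.sqrt(rel_a * rel_b)  # recovers ~0.70
```

The corrected value recovers the latent correlation on average, but inherits the sampling noise of the observed correlation, which is one motivation for estimating the disattenuated correlation within the model itself.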
Attentional control, the ability to regulate information processing during goal-directed behavior, is critical to many theories of human cognition and is thought to predict a large range of everyday behaviors. However, in recent years, failures to reliably assess individual differences in attentional control have sparked a debate concerning whether attentional control, as currently conceptualized and assessed, can be regarded as a valid psychometric construct. In this consensus paper, we summarize the current debate from theoretical, methodological, and analytical perspectives. First, we propose a consensus-based definition of attentional control and the cognitive mechanisms that potentially contribute to individual differences in attentional control. Next, guided by the findings of an in-depth literature survey, we discuss the psychometric considerations that are critical when assessing attentional control. We then provide suggestions for recent methodological and analytical approaches that can alleviate the most common concerns. We conclude that, to truly advance our understanding of the construct of attentional control, we must develop a theory-driven and empirically supported consensus on how we define, operationalize, and assess attentional control. This consensus paper presents a first step toward this goal; a shift toward transparent reporting, sharing of materials and data, and cross-laboratory efforts will further accelerate progress.
“…5). A key feature of this approach is that it incorporates uncertainty in the inferences about the parameters themselves (Matzke et al., 2017). That is, we do not use point estimates of the various risk and consistency parameters, but instead acknowledge that a participant's behavior is consistent with a range of possible values, given the limited behavioral data.…”
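One way to honor "a range of possible values", as the excerpt puts it, is to carry posterior samples of each participant's parameter through the correlation, yielding a distribution over plausible correlations rather than a single number. The sketch below uses simulated stand-in posterior draws and a hypothetical questionnaire covariate; it illustrates the principle only and is not the cited authors' model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_draws = 40, 1000

# Stand-in posterior samples of a "risk" parameter per participant
# (in practice these would come from an MCMC fit of a cognitive model).
true_risk = rng.normal(0, 1, n_subj)
post_risk = true_risk[:, None] + rng.normal(0, 0.5, (n_subj, n_draws))

# Hypothetical questionnaire covariate, weakly related to true risk.
questionnaire = 0.6 * true_risk + rng.normal(0, 0.8, n_subj)

# Point-estimate approach: one correlation, no uncertainty attached.
r_point = np.corrcoef(post_risk.mean(axis=1), questionnaire)[0, 1]

# Uncertainty-aware approach: correlate each posterior draw separately,
# producing a posterior distribution over the correlation.
r_draws = np.array([np.corrcoef(post_risk[:, d], questionnaire)[0, 1]
                    for d in range(n_draws)])
lo, hi = np.percentile(r_draws, [2.5, 97.5])
```

The width of the resulting interval makes explicit how much the limited behavioral data constrain the correlation, which a single point estimate hides.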
There are many ways to measure how people manage risk when they make decisions. A standard approach is to measure risk propensity using self-report questionnaires. An alternative approach is to use decision-making tasks that involve risk and uncertainty, and apply cognitive models of task behavior to infer parameters that measure people’s risk propensity. We report the results of a within-participants experiment that used three questionnaires and four decision-making tasks. The questionnaires are the Risk Propensity Scale, the Risk Taking Index, and the Domain-Specific Risk-Taking Scale. The decision-making tasks are the Balloon Analogue Risk Task, the preferential choice gambling task, the optimal stopping problem, and the bandit problem. We analyze the relationships between the risk measures and cognitive parameters using Bayesian inferences about the patterns of correlation, and using a novel cognitive latent variable modeling approach. The results show that people’s risk propensity is generally consistent within different conditions for each of the decision-making tasks. There is, however, little evidence that the way people manage risk generalizes across the tasks, or that it corresponds to the questionnaire measures.
“…Thirdly, a related problem concerns how associations between individual covariates and model parameters can be tested. While some work has addressed the problem of testing correlations between a single covariate and a specific model parameter (Matzke et al., 2017; Jeffreys, 1961), it is not clear how to test individual entries from a covariance matrix if several covariates are included in a model simultaneously. The regression framework presented here, on the other hand, allows for straightforward tests of individual regression weights.…”
Quantitative models that represent different cognitive variables in terms of model parameters are an important tool in the advancement of cognitive science. To evaluate such models, their parameters are typically tested for relationships with behavioral and physiological variables that are thought to reflect specific cognitive processes. However, many models do not come equipped with the statistical framework needed to relate model parameters to covariates. Instead, researchers often revert to classifying participants into groups depending on their values on the covariates, and subsequently comparing the estimated model parameters between these groups. Here we develop a comprehensive solution to the covariate problem in the form of a Bayesian regression framework. Our framework can be easily added to existing cognitive models and allows researchers to quantify the evidential support for relationships between covariates and model parameters using Bayes factors. Moreover, we present a simulation study that demonstrates the superiority of the Bayesian regression framework over the conventional classification-based approach.
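Bayes factors for individual regression weights, as described in this abstract, are commonly computed with the Savage–Dickey density ratio: the posterior density of the weight at zero divided by its prior density at zero. The sketch below is an illustration of that general technique, not the paper's actual framework; it uses a conjugate normal regression with known noise variance so the posterior is available in closed form, and the simulated data and prior are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 400

# Simulated covariate and a parameter that truly depends on it (beta = 0.4).
x = rng.normal(0, 1, n)
param = 0.4 * x + rng.normal(0, 1, n)

# Conjugate normal regression with known noise sd = 1 and prior
# beta ~ N(0, 1): the posterior for beta is N(mu_post, sd_post).
prior_sd = 1.0
prec_post = 1 / prior_sd**2 + x @ x
mu_post = (x @ param) / prec_post
sd_post = np.sqrt(1 / prec_post)

# Savage-Dickey density ratio:
# BF01 = posterior density at beta = 0  /  prior density at beta = 0.
bf01 = stats.norm.pdf(0, mu_post, sd_post) / stats.norm.pdf(0, 0, prior_sd)
bf10 = 1 / bf01   # evidence for a nonzero regression weight
```

With a genuinely nonzero weight, the posterior moves away from zero, its density at zero drops below the prior's, and BF10 grows accordingly; with no true relationship the posterior concentrates near zero and the Bayes factor favors the null instead.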