Lay Summary: Doubts over the credibility of science can be lifted by open research practices. Low reliability (the presence of biases) and low reproducibility (the lack of a transparent workflow) result in a low probability of independent studies reaching the same outcome (a lack of replicability). To circumvent these issues, we discuss how the Transparency and Openness Promotion guidelines, proposed by the Center for Open Science, along with a software engineering toolkit, allow researchers to embrace the open science process.
Response time and accuracy are fundamental measures of behavioural science, but discerning participants’ underlying abilities can be masked by speed-accuracy trade-offs (SATOs). Although a well-known possibility, SATOs are often inadequately addressed in experiment analyses that focus on a single variable (e.g. psychophysics paradigms analysing accuracy alone) or that involve a suboptimal analytic correction (e.g. dividing accuracy by response time). Models of decision making, such as the drift diffusion model (DDM), provide a principled account of the decision-making process, allowing the recovery of SATO-unconfounded decision parameters from observed behavioural variables. For plausible parameters of a typical between-groups experiment, we simulate experimental data, for both real and null group differences in participants’ ability to discriminate stimuli (represented by differences in the drift rate parameter of the DDM used to generate the simulated data), for both systematic and null SATOs. We then use the DDM to fit the generated data. This allows direct comparison of the specificity and sensitivity of different measures (accuracy, reaction time, and the drift rate from the model fitting) for testing group differences. Our purpose here is not to make a theoretical innovation in decision modelling, but to use established decision models to demonstrate and quantify the benefits of decision modelling for experimentalists. We show, in terms of reduction of required sample size, how decision modelling can allow dramatically more efficient data collection for a set statistical power; we confirm and depict the non-linear speed-accuracy relation; and we show how accuracy can be a more sensitive measure than response time given decision parameters that reasonably reflect a typical experiment. Our results are supported by an online interactive data explorer.
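The abstract's central mechanism, the speed-accuracy trade-off arising from the boundary separation of a drift diffusion model, can be illustrated with a minimal simulation. The sketch below is not the paper's code or parameter set; it uses an Euler-Maruyama random walk with illustrative values (drift rate 1.0, non-decision time 0.3 s, unit noise) purely to show how lowering the decision boundary speeds responses while reducing accuracy, leaving the underlying drift rate unchanged.

```python
import numpy as np

def simulate_ddm_trial(drift, boundary, ndt, dt=0.001, noise=1.0, rng=None):
    """Simulate one DDM trial by Euler-Maruyama integration.

    Evidence starts midway between the boundaries (+/- `boundary`) and
    accumulates with mean rate `drift` and diffusion `noise` until a
    boundary is crossed. Returns (response_time, correct), where the
    upper boundary is coded as the correct response and `ndt` is the
    non-decision time added to the decision time.
    """
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ndt + t, x > 0

# A SATO: the same drift rate (ability), two response-caution settings.
rng = np.random.default_rng(1)
for a in (0.6, 1.2):
    trials = [simulate_ddm_trial(drift=1.0, boundary=a, ndt=0.3, rng=rng)
              for _ in range(500)]
    rts, accs = zip(*trials)
    print(f"boundary={a}: mean RT={np.mean(rts):.2f}s, "
          f"accuracy={np.mean(accs):.2f}")
```

Running this shows the non-linear trade-off the abstract refers to: the low-boundary group looks worse on accuracy and better on response time despite identical drift rates, which is exactly the confound that fitting the DDM (rather than analysing either observed variable alone) is meant to remove.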