A classic discussion in the recognition-memory literature concerns whether recognition judgments are better described by continuous or discrete processes. These two hypotheses are instantiated by the signal detection theory (SDT) model and the two-high-threshold model, respectively. Their comparison has almost invariably relied on receiver operating characteristic data. A new model-comparison approach based on ranking judgments is proposed here. This approach has several advantages: It does not rely on particular distributional assumptions for the models, and it does not require costly experimental manipulations. These features permit the comparison of the models by means of simple paired-comparison tests instead of goodness-of-fit results and complex model-selection methods that are predicated on many auxiliary assumptions. Empirical results from 2 experiments are consistent with a continuous memory process such as the one assumed by SDT.
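The logic of the ranking comparison can be illustrated with a small simulation. The setup below (one old item ranked among three new items) and all parameter values are assumptions made for illustration, not details taken from the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_trials = 4, 100_000  # one old item ranked among m - 1 new items (assumed setup)

# SDT: continuous memory strengths; old ~ N(d', 1), new ~ N(0, 1)
d_prime = 1.0
old = rng.normal(d_prime, 1.0, n_trials)
new = rng.normal(0.0, 1.0, (n_trials, m - 1))
sdt_ranks = 1 + (new > old[:, None]).sum(axis=1)  # rank 1 = strongest item

# Two-high-threshold: the old item is detected with probability Do and ranked
# first; otherwise its rank is a uniform guess over all m positions
Do = 0.5
detected = rng.random(n_trials) < Do
tht_ranks = np.where(detected, 1, rng.integers(1, m + 1, size=n_trials))

for name, ranks in (("SDT", sdt_ranks), ("2HT", tht_ranks)):
    print(name, [round(float(np.mean(ranks == r)), 3) for r in range(1, m + 1)])
```

The diagnostic difference is that the discrete-state sketch predicts a flat probability profile over ranks 2 through m, whereas the continuous model predicts a monotonically decreasing profile.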
Acknowledgements: A. Szollosi and C. Donkin are supported by Australian Research Council grants (DP130100124 and DP190101675). The authors thank Jared Hotaling and Ben R. Newell for useful discussion, and various others for their constructive comments on a preprint of the current paper (entitled "Preregistration is redundant, at best"). Contribution statement: A. Szollosi and C. Donkin prepared the original outline, and A. Szollosi converted the outline into a draft. All authors contributed to improving the draft into its final version, and all authors reviewed the text for final revisions.
We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/.
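To give a sense of the model class MPTinR handles, consider the two-high-threshold MPT model for a yes/no recognition task. MPTinR itself is an R package; the following is a hypothetical Python sketch of the model, not MPTinR code, and the equal-detection parameterization and data counts are assumptions chosen because they admit closed-form maximum-likelihood estimates:

```python
import math

def fit_2htm(hits, misses, fas, crs):
    """Closed-form ML estimates for a two-high-threshold MPT model with a
    single detection parameter D and guessing rate g, where
    P(hit) = D + (1 - D) * g and P(false alarm) = (1 - D) * g."""
    h = hits / (hits + misses)    # observed hit rate
    f = fas / (fas + crs)         # observed false-alarm rate
    D = h - f                     # just-identified model: estimates match data
    g = f / (1 - D)
    return D, g

def log_likelihood(hits, misses, fas, crs, D, g):
    """Binomial log-likelihood of the category counts under (D, g)."""
    p_hit = D + (1 - D) * g
    p_fa = (1 - D) * g
    return (hits * math.log(p_hit) + misses * math.log(1 - p_hit)
            + fas * math.log(p_fa) + crs * math.log(1 - p_fa))

D, g = fit_2htm(hits=75, misses=25, fas=25, crs=75)  # illustrative counts
print(round(D, 3), round(g, 3))
```

Larger MPT models are not just-identified like this one, which is where numerical optimization of the multinomial likelihood, as performed by fitting software, becomes necessary.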
Traditional approaches within the framework of signal detection theory (SDT; Green & Swets, 1966), especially in the field of recognition memory, assume that the positioning of response criteria is not a noisy process. Recent work (Benjamin, Diaz, & Wee, 2009; Mueller & Weidemann, 2008) has challenged this assumption, arguing not only for the existence of criterion noise but also for its large magnitude and substantive contribution to individuals' performance. A review of these recent approaches to the measurement of criterion noise in SDT identifies several shortcomings and confounds. A reanalysis of Benjamin et al.'s (2009) data sets, as well as the results from a new experimental method, indicates that the different forms of criterion noise proposed in the recognition-memory literature are of very low magnitude and do not provide a significant improvement over the account already given by traditional SDT without criterion noise.
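The effect at issue can be sketched with a small simulation (all parameter values below are illustrative assumptions): trial-to-trial Gaussian noise on the criterion inflates the effective evidence variance, so the sensitivity recovered by a standard SDT analysis that ignores criterion noise shrinks toward d' / sqrt(1 + sigma_c**2):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
z = NormalDist().inv_cdf          # inverse standard-normal CDF
n = 200_000
d_prime, c = 1.5, 0.75            # assumed true sensitivity and mean criterion

def rates(sigma_c):
    """Hit and false-alarm rates when the criterion varies across trials."""
    crit = rng.normal(c, sigma_c, n)
    hit = float(np.mean(rng.normal(d_prime, 1.0, n) > crit))
    fa = float(np.mean(rng.normal(0.0, 1.0, n) > crit))
    return hit, fa

recovered = {}
for sigma_c in (0.0, 0.5, 1.0):
    hit, fa = rates(sigma_c)
    # d' as recovered by standard SDT, which assumes a fixed criterion
    recovered[sigma_c] = z(hit) - z(fa)
    print(sigma_c, round(recovered[sigma_c], 3))
```

This is also why criterion noise is hard to measure: in simple yes/no data its signature is largely absorbed by the other SDT parameters.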
Decisions under risk have been shown to differ depending on whether information on outcomes and probabilities is gleaned from symbolic descriptions or gathered through experience. To some extent, this description-experience gap is due to sampling error in experience-based choice. Analyses with cumulative prospect theory (CPT), investigating to what extent the gap is also driven by differences in people's subjective representations of outcome and probability information (taking into account sampling error), have produced mixed results. We improve on previous analyses of description-based and experience-based choices by taking advantage of both a within-subjects design and a hierarchical Bayesian implementation of CPT. This approach allows us to capture both the differences and the within-person stability of individuals' subjective representations across the two modes of learning about choice options. Relative to decisions from description, decisions from experience showed reduced sensitivity to probabilities and increased sensitivity to outcomes. For some CPT parameters, individual differences were relatively stable across modes of learning. Our results suggest that outcome and probability information translate into systematically different subjective representations in description- versus experience-based choice. At the same time, both types of decisions seem to tap into the same individual-level regularities.
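For reference, a minimal sketch of the CPT machinery involved, using the one-parameter Tversky-Kahneman (1992) probability weighting function for gains. The functional forms and parameter values here are illustrative assumptions, and the hierarchical Bayesian estimation itself is not reproduced; the point is that a lower curvature parameter gamma, the kind of change associated with reduced probability sensitivity, flattens the weighting curve:

```python
def weight(p, gamma):
    """Tversky-Kahneman (1992) one-parameter probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def value(x, alpha=0.88):
    """Power value function for gains."""
    return x**alpha

def cpt_value(x, p, gamma, alpha=0.88):
    """CPT value of a simple gamble: win x with probability p, else nothing."""
    return weight(p, gamma) * value(x, alpha)

# A lower gamma overweights rare events and underweights likely ones
for p in (0.05, 0.5, 0.95):
    print(p, round(weight(p, 1.0), 3), round(weight(p, 0.5), 3))
```

With gamma = 1 the weighting function is the identity; as gamma decreases, choices become less responsive to differences among intermediate probabilities.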
For many years the Diffusion Decision Model (DDM) has successfully accounted for behavioral data from a wide range of domains. Important contributors to the DDM's success are the across-trial variability parameters, which allow the model to account for the various shapes of response time distributions encountered in practice. However, several researchers have pointed out that estimating the variability parameters can be a challenging task. Moreover, the numerous fitting methods for the DDM each come with their own associated problems and solutions. This often leaves users in a difficult position. In this collaborative project we invited researchers from the DDM community to apply their various fitting methods to simulated data and provide advice and expert guidance on estimating the DDM's across-trial variability parameters using these methods. Our study establishes a comprehensive reference resource and describes methods that can help to overcome the challenges associated with estimating the DDM's across-trial variability parameters.
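For concreteness, a bare-bones Euler simulation of the DDM with its three across-trial variability parameters is sketched below. The parameter values and the naive simulation scheme are illustrative assumptions, not one of the fitting methods compared in the project:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(n_trials=2_000, a=1.0, v=1.0, t0=0.3,
                 sv=0.5, sz=0.2, st=0.1, dt=0.001, s=1.0):
    """Euler simulation of the DDM with across-trial variability in drift
    (sv, normal), start point (sz, uniform), and non-decision time
    (st, uniform). Returns response times and boundary choices."""
    rts = np.empty(n_trials)
    choices = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        drift = rng.normal(v, sv)                 # this trial's drift rate
        x = a / 2 + rng.uniform(-sz / 2, sz / 2)  # this trial's start point
        t = 0.0
        while 0.0 < x < a:                        # accumulate until a boundary
            x += drift * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = int(x >= a)                  # 1 = upper boundary
        rts[i] = t + t0 + rng.uniform(-st / 2, st / 2)
    return rts, choices

rts, choices = simulate_ddm()
print(round(float(choices.mean()), 3), round(float(rts.mean()), 3))
```

Simulating from the model is the easy direction; the estimation challenge discussed above arises because the variability parameters have subtle effects on the shape and relative speed of correct and error response time distributions.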