Refreshing and elaboration are cognitive processes assumed to underlie verbal working-memory maintenance and to support long-term memory formation. Whereas refreshing refers to attentional focusing on working-memory representations, elaboration refers to linking those representations into existing semantic networks. We measured the impact of instructed refreshing and elaboration on working and long-term memory separately, and investigated the extent to which the two processes make distinct contributions to each. Compared with a no-processing baseline, immediate memory was improved by repeating the items, but not by refreshing them. There was no credible effect of elaboration on working memory, except when items were repeated at the same time. Long-term memory benefited from elaborating the words, but not from refreshing them. The results replicate the long-term memory benefit of elaboration, but do not support its beneficial role for working memory. Further, refreshing preserves immediate memory, but does not improve it beyond the level achieved without any processing.
Statistical procedures such as Bayes factor model selection and Bayesian model averaging require the computation of normalizing constants (e.g., marginal likelihoods). These normalizing constants are notoriously difficult to obtain, as they usually involve high-dimensional integrals that cannot be solved analytically. Here we introduce an R package that uses bridge sampling (Meng and Wong 1996; Meng and Schilling 2002) to estimate normalizing constants in a generic and easy-to-use fashion. For models implemented in Stan, the estimation procedure is automatic. We illustrate the functionality of the package with three examples.
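As a minimal sketch of the workflow the abstract describes: for a model fitted in Stan, the package's `bridge_sampler()` function estimates the log marginal likelihood directly from the fitted object, and `bf()` turns two such estimates into a Bayes factor. The `stanfit_m1` and `stanfit_m2` objects below are hypothetical fitted Stan models, not part of the original text.

```r
# Sketch: marginal-likelihood estimation via bridge sampling
# (assumes rstan and bridgesampling are installed, and that
# stanfit_m1 / stanfit_m2 are hypothetical fitted Stan models)
library(bridgesampling)

# Bridge sampling estimate of each model's log marginal likelihood
ml_m1 <- bridge_sampler(samples = stanfit_m1)
ml_m2 <- bridge_sampler(samples = stanfit_m2)

print(ml_m1)       # reports the estimated log marginal likelihood

# Bayes factor comparing the two models
bf(ml_m1, ml_m2)
```

Because Stan retains the unnormalized log posterior of the fitted model, no further user input is needed, which is what makes the procedure "automatic" for Stan models.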
We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/.
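A brief hedged sketch of basic usage: MPT models are typically specified in an external model file (EQN syntax), and `fit.mpt()` fits the model to a vector of observed category frequencies. The file name `"2htm.eqn"` and the frequencies below are illustrative assumptions, not from the original text.

```r
# Sketch: fitting an MPT model with MPTinR
# (assumes MPTinR is installed and "2htm.eqn" is a hypothetical
# model file in EQN syntax describing the MPT's processing tree)
library(MPTinR)

# Hypothetical category frequencies, ordered as in the model file
freq <- c(75, 25, 30, 70)

fit <- fit.mpt(freq, model.filename = "2htm.eqn")

fit$goodness.of.fit   # G^2 fit statistic, df, p-value
fit$parameters        # maximum-likelihood parameter estimates
```

The same interface generalizes to the package's other features, such as fitting multiple data sets at once or restricting parameters via an additional restrictions file.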
Traditional approaches within the framework of signal detection theory (SDT; Green & Swets, 1966), especially in the field of recognition memory, assume that the positioning of response criteria is not a noisy process. Recent work (Benjamin, Diaz, & Wee, 2009; Mueller & Weidemann, 2008) has challenged this assumption, arguing not only for the existence of criterion noise but also for its large magnitude and substantive contribution to individuals' performance. A review of these recent approaches for the measurement of criterion noise in SDT identifies several shortcomings and confounds. A reanalysis of Benjamin et al.'s (2009) data sets as well as the results from a new experimental method indicate that the different forms of criterion noise proposed in the recognition memory literature are of very low magnitude, and that they do not provide a significant improvement over the account already given by traditional SDT without criterion noise.
For many years the Diffusion Decision Model (DDM) has successfully accounted for behavioral data from a wide range of domains. Important contributors to the DDM's success are the across-trial variability parameters, which allow the model to account for the various shapes of response time distributions encountered in practice. However, several researchers have pointed out that estimating the variability parameters can be a challenging task. Moreover, the numerous fitting methods for the DDM each come with their own associated problems and solutions. This often leaves users in a difficult position. In this collaborative project we invited researchers from the DDM community to apply their various fitting methods to simulated data and provide advice and expert guidance on estimating the DDM's across-trial variability parameters using these methods. Our study establishes a comprehensive reference resource and describes methods that can help to overcome the challenges associated with estimating the DDM's across-trial variability parameters.