This article critically examines the view that the signal detection theory (SDT) interpretation of the remember-know (RK) paradigm has been ruled out by the evidence. The author evaluates 5 empirical arguments against a database of 72 studies reporting RK data under 400 different conditions. These arguments concern (a) the functional independence of remember and know rates, (b) the invariance of estimates of sensitivity, (c) the relationship between remember rates and overall hit and false alarm rates, (d) the relationship between RK responses and confidence judgments, and (e) dissociations between remember and overall hit rates. Each of these arguments is shown to be flawed. Despite being open to refutation, the SDT interpretation is consistent with existing data from both the RK and remember-know-guess paradigms and offers a basis for further theoretical development.
A joint aim of cognitive psychology and neuropsychology has been the decomposition of mental function: the isolation and characterization of basic processes underlying behavior. By convention, the principal techniques used to identify such processes are based on functional dissociation, the observation of selective effects of variables on tasks. Yet, despite their widespread use, the inferential logic associated with these techniques is flawed in two ways. First, it is possible to posit single processes that mimic both single and double dissociation; second, observation and interpretation of both kinds of dissociation are limited by an assumption of selective influence that most, if not all, psychologists would now reject as false. The aims of this article are twofold: (a) to review and make explicit the inferential limits of single and double dissociation, and (b) to introduce a new technique that overcomes these limits. Called reversed association, this new technique is defined as any nonmonotonic relation between two tasks of interest. We argue that reversed association, in place of functional dissociation, offers a sounder basis for inferring the number of functionally independent processes underlying performance and, having fewer assumptions, offers researchers greater scope for discovering such processes and determining their nature and effects. This research was funded by a grant from the Australian Research Grants Scheme. We wish to thank Max Coltheart and Alistair Mees for their helpful comments on an early draft of this article.
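The reversed-association criterion defined in this abstract, any nonmonotonic relation between two tasks, can be made concrete with a minimal numerical sketch. The function name and the condition means below are hypothetical, invented only for illustration; they do not come from the article.

```python
def is_reversed_association(points, tol=1e-9):
    """Return True if the (task1, task2) performance points across
    conditions form a nonmonotonic relation, i.e. a reversed
    association in the sense described above.

    points: list of (task1_score, task2_score) pairs, one per condition.
    """
    ordered = sorted(points)                 # order conditions by task 1
    ys = [y for _, y in ordered]
    # Nonmonotonic means task 2 both rises and falls as task 1 increases.
    rises = any(ys[i + 1] > ys[i] + tol for i in range(len(ys) - 1))
    falls = any(ys[i + 1] < ys[i] - tol for i in range(len(ys) - 1))
    return rises and falls

# Hypothetical condition means (task1, task2):
monotonic = [(0.55, 0.60), (0.65, 0.70), (0.80, 0.85)]       # one process suffices
reversed_assoc = [(0.55, 0.70), (0.65, 0.60), (0.80, 0.85)]  # nonmonotonic
```

On this sketch, `monotonic` is consistent with a single underlying process, while `reversed_assoc` rises, falls, and rises again, and so would count as a reversed association.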
This article addresses the issue of whether the remember-know (RK) task is best explained by a single-process or a dual-process model. All single-process models propose that remember and know responses reflect different levels of a single strength-of-evidence dimension. Thus, across conditions in which response criteria are held constant, these models predict that the RK task is unidimensional. Many dual-process models propose that remember and know responses reflect two qualitatively distinct processes underlying recognition memory, often characterized as recollection and familiarity. These models predict that the RK task is bidimensional. Using data from 37 studies, the author conducted a state-trace analysis to determine the dimensionality of the RK task. In those studies, non-memory-related differences between conditions were eliminated via decision criteria constrained to be constant across all levels of the independent variables. The results reveal little or no evidence of bidimensionality and lend additional support to the unequal-variance signal detection model. Other arguments supporting a bidimensional interpretation are examined, and the author concludes there is insufficient evidence for the RK task to be used to identify qualitatively different memory components.
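The single-dimension account these abstracts evaluate can be written down directly. The sketch below assumes the unequal-variance signal detection model: old- and new-item strengths are normal distributions on one axis, and "remember" and "know" responses are defined by two fixed criteria on that axis. All parameter values are hypothetical, chosen only for illustration.

```python
from statistics import NormalDist

def rk_rates(d_prime, sigma_old, c_know, c_remember):
    """Predicted remember/know rates under a one-dimensional
    unequal-variance signal detection model.

    New items ~ N(0, 1); old items ~ N(d_prime, sigma_old).
    Respond "remember" if strength > c_remember, and "know" if
    strength falls between c_know and c_remember.
    """
    dists = {"old": NormalDist(d_prime, sigma_old),
             "new": NormalDist(0.0, 1.0)}
    rates = {}
    for label, dist in dists.items():
        p_remember = 1.0 - dist.cdf(c_remember)
        p_know = dist.cdf(c_remember) - dist.cdf(c_know)
        rates[label] = {"remember": p_remember, "know": p_know}
    return rates

# Hypothetical parameters: d' = 1.5, old-item SD = 1.25,
# know criterion at 0.5, remember criterion at 1.5.
rates = rk_rates(1.5, 1.25, 0.5, 1.5)
```

Because both response types are cut from the same strength axis, any pair of conditions with fixed criteria traces out a single monotonic curve, which is what the state-trace analysis above tests for.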
Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure.

Keywords: eyewitness identification | confidence-accuracy relationship | simultaneous vs. sequential lineups

Eyewitnesses to a crime are often called upon by police investigators to identify a suspected perpetrator from a lineup. A traditional police lineup in the United States consists of the simultaneous presentation of six people, one of whom is the suspect (who is either guilty or innocent) and five of whom are fillers who resemble the suspect but who are known to be innocent. Live lineups were once the norm, but, nowadays, photo lineups are much more commonly used (1). When presented with a photo lineup, an eyewitness can identify someone, either the suspect (a suspect ID) or one of the fillers (a filler ID), or can reject the lineup (no ID). A filler ID is a known error that does not imperil the identified individual, but a suspect ID (including a misidentification of an innocent suspect) does.
According to the Innocence Project, eyewitness misidentification is the single greatest cause of wrongful convictions in the United States, having played a role in over 70% of the 333 wrongful convictions that have been overturned by DNA evidence since 1989 (2). In an effort to reduce eyewitness misidentifications, several reforms based largely on the results of mock crime studies have been proposed. In a typical mock crime study, participants become witnesses to a staged crime (e.g., a purse snatching) and then later attempt to identify the perpetrator from a target-present lineup (containing a photo of the perpetrator) or a target-absent lineup (in which the photo of the perpetrator is replaced by a photo of the "innocent suspect"). The results of mock crime studies have often been interpreted to mean that (i) eyewitness confidence is an unreliable indicator of accuracy (3, 4) and (ii) suspect ID accuracy is enhanced, and the risk to innocent suspects is reduced, when the lineup members are presented sequentially (i.e., one at a time) rather than simultaneously (5-7). In light of such findings, the state of New J...
Estimator variables are factors that can affect the accuracy of eyewitness identifications but that are outside of the control of the criminal justice system. Examples include (1) the duration of exposure to the perpetrator, (2) the passage of time between the crime and the identification (retention interval), and (3) the distance between the witness and the perpetrator at the time of the crime. Suboptimal estimator variables (e.g., long distance) have long been thought to reduce the reliability of eyewitness identifications (IDs), but recent evidence suggests that this is not true of IDs made with high confidence and may or may not be true of IDs made with lower confidence. The evidence suggests that though suboptimal estimator variables decrease discriminability (i.e., the ability to distinguish innocent from guilty suspects), they do not decrease the reliability of IDs made with high confidence. Such findings are inconsistent with the longstanding "optimality hypothesis" and therefore require a new theoretical framework. Here, we propose that a signal-detection-based likelihood ratio account, which has long been a mainstay of basic theories of recognition memory, naturally accounts for these findings.
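The likelihood ratio account mentioned above can be sketched numerically. The sketch assumes equal-variance normal strength distributions and hypothetical parameter values; it is an illustration of the general idea, not the article's specific model. The key property is that a witness who responds only when the likelihood ratio exceeds a fixed threshold attains the same implied posterior accuracy whether discriminability is high or low; lower d' simply makes such high-confidence responses rarer.

```python
from statistics import NormalDist

def likelihood_ratio(x, d_prime):
    """Likelihood ratio of 'guilty suspect' vs. 'innocent suspect' at
    memory-strength value x, assuming equal-variance normals:
    innocent ~ N(0, 1), guilty ~ N(d_prime, 1)."""
    return NormalDist(d_prime, 1.0).pdf(x) / NormalDist(0.0, 1.0).pdf(x)

def posterior_guilty(lr, prior=0.5):
    """Posterior probability of guilt implied by likelihood ratio lr,
    given a prior probability that the suspect is guilty."""
    odds = lr * (prior / (1.0 - prior))
    return odds / (1.0 + odds)

# A fixed likelihood-ratio criterion of 9 implies the same posterior
# accuracy no matter what d' is; only the strength needed to reach
# that criterion (and hence the response rate) changes with d'.
p = posterior_guilty(9.0)
```

With a 50/50 prior, an LR criterion of 9 corresponds to a posterior of 0.9; reducing d' raises the strength value required to reach LR = 9 but leaves that posterior untouched, which is how the account reconciles lower discriminability with preserved high-confidence reliability.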
State-trace analysis was used to investigate the effect of concurrent working memory load on perceptual category learning. Initial reanalysis of Zeithamova and Maddox (2006, Experiment 1) revealed an apparently two-dimensional state-trace plot consistent with a dual-system interpretation of category learning. However, three modified replications of the original experiment found evidence of a single resource underlying the learning of both rule-based and information integration category structures. Follow-up analyses of the Zeithamova and Maddox data, restricted to only those participants who had learned the category task and performed the concurrent working memory task adequately, revealed a one-dimensional plot consistent with a single-resource interpretation and the results of the three new experiments. The results highlight the potential of state-trace analysis in furthering our understanding of the mechanisms underlying category learning.
Single-process accounts of reasoning propose that the same cognitive mechanisms underlie inductive and deductive inferences. In contrast, dual-process accounts propose that these inferences depend upon 2 qualitatively different mechanisms. To distinguish between these accounts, we derived a set of single-process and dual-process models based on an overarching signal detection framework. We then used signed difference analysis to test each model against data from an argument evaluation task, in which induction and deduction judgments are elicited for sets of valid and invalid arguments. Three data sets were analyzed: data from Singmann and Klauer (2011), a database of argument evaluation studies, and the results of an experiment designed to test model predictions. Of the large set of testable models, we found that almost all could be rejected, including all 2-dimensional models. The only testable model able to account for all 3 data sets was a model with 1 dimension of argument strength and independent decision criteria for induction and deduction judgments. We conclude that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences.