Two experiments investigated criterion setting and metacognitive processes underlying the strategic regulation of accuracy on the Scholastic Aptitude Test (SAT) using Type-2 signal detection theory (SDT). In Experiment 1, report bias was manipulated by penalizing participants either 0.25 (low incentive) or 4 (high incentive) points for each error. Best guesses to unanswered items were obtained so that Type-2 signal detection indices of discrimination and bias could be calculated. The same incentive manipulation was used in Experiment 2, only the test was computerized, confidence ratings were taken so that receiver operating characteristic (ROC) curves could be generated, and feedback was manipulated. The results of both experiments demonstrated that SDT provides a viable alternative to A. Koriat and M. Goldsmith's (1996c) framework of monitoring and control and reveals information about the regulation of accuracy that their framework does not. For example, ROC analysis indicated that the threshold model implied by formula scoring is inadequate. Instead, performance on the SAT should be modeled with an equal-variance Gaussian, Type-2 signal detection model.
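The scoring and index calculations described above can be sketched as follows. This is an illustrative sketch, not code from the article: the function names are hypothetical, the penalty values (0.25 and 4) come from the abstract's incentive manipulation, and the indices assume the equal-variance Gaussian, Type-2 model the abstract argues for, where a "hit" is volunteering a correct answer and a "false alarm" is volunteering an incorrect one.

```python
from statistics import NormalDist

def formula_score(n_correct, n_errors, penalty=0.25):
    """SAT-style formula score: one point per correct answer, minus a
    penalty per error (0.25 = low incentive, 4 = high incentive)."""
    return n_correct - penalty * n_errors

def type2_sdt(hit_rate, fa_rate):
    """Type-2 SDT indices under an equal-variance Gaussian model.
    hit_rate: proportion of correct answers reported;
    fa_rate: proportion of incorrect answers reported."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)             # monitoring (discrimination)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # report bias
    return d_prime, criterion

# Example: 70% of correct and 30% of incorrect answers are volunteered.
d, c = type2_sdt(0.70, 0.30)
```

Under this parameterization, a positive criterion reflects conservative responding (withholding more answers), which is what a 4-point penalty would be expected to induce.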
Performance on tests in which there is control over reporting (e.g., cued recall with the option to withhold responses) can be characterized by four parameters: free- and forced-report retrieval (correct responses retrieved from memory when the option to withhold responses is exercised and when it is not, respectively), monitoring (discrimination between correct and incorrect potential responses), and report bias (willingness to report responses). Typically, researchers do not examine all these components in cued-test performance; blanks are sometimes counted the same as errors, meaning that the (free-report) performance index is contaminated with report bias and monitoring ability. In this research, a two-stage testing procedure is described that allows measures of free- and forced-report retrieval, monitoring, and bias to be derived from the original encoding specificity experiments (Thomson & Tulving, 1970). The results show that their cue-reinstatement manipulation affected free-report retrieval, but once report bias and monitoring effects were removed by forcing output, retrieval was unaffected.
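The four-parameter decomposition from a two-stage (free then forced) test can be sketched as below. This is a hypothetical illustration, not the article's analysis: the function name and the particular monitoring index (report-rate difference for correct versus incorrect candidates) are assumptions for the sketch.

```python
def report_components(corr_rep, err_rep, corr_wh, err_wh):
    """Decompose two-stage test counts into four parameters.
    corr_rep / err_rep: correct / incorrect responses volunteered (stage 1);
    corr_wh / err_wh: correct / incorrect best guesses to initially
    withheld items, obtained by forcing output (stage 2)."""
    total = corr_rep + err_rep + corr_wh + err_wh
    free_report = corr_rep / total                 # correct when withholding allowed
    forced_report = (corr_rep + corr_wh) / total   # correct with output forced
    bias = (corr_rep + err_rep) / total            # willingness to report
    # Monitoring: how much more often correct candidates are reported
    # than incorrect ones (crude discrimination index).
    monitoring = (corr_rep / (corr_rep + corr_wh)
                  - err_rep / (err_rep + err_wh))
    return free_report, forced_report, bias, monitoring
```

A manipulation that raises forced-report retrieval affects memory itself; one that raises only free-report retrieval may instead be acting on monitoring or bias, which is the contamination the two-stage procedure is designed to expose.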
(2000, Exps. 1 & 3) demonstrated that a slight increase in the display duration of a briefly presented word prior to displaying it in the clear for a recognition response increased the bias to respond "old". In the current research, three experiments investigated the phenomenology associated with this illusion of memory using the standard remember-know procedure and a new, independent-scales methodology. Contrary to expectations based on the fluency heuristic, which predicts effects of display duration on subjective familiarity only, the results indicated that the illusion was reported as both familiarity and recollection. Furthermore, manipulations of prime duration induced reports of false recollection in all experiments. The results, in particular the implications of illusory recollection, are discussed in terms of dual-process, fuzzy-trace, two-criteria signal detection, and attribution models of recognition memory. Over the past 30 years, dual-process theory (e.g., Atkinson & Juola, 1973, 1974; Jacoby, 1991; Jacoby & Dallas, 1981; Mandler, 1979, 1980; Yonelinas, 1994, 1997) has remained a dominant account of recognition memory. The theory proposes that recognition memory is based on two qualitatively and quantitatively distinct processes, commonly referred to as recollection and familiarity (see Yonelinas, 2002, for a recent review). Recollection (with or without familiarity) involves the conscious retrieval of veridical episodic information from an earlier encounter with a stimulus and gives rise to a feeling of reliving a past event. On the other hand, familiarity is associated with fluent conceptual and perceptual processing, stimulus similarity, and a vague, source-nonspecific feeling of remembrance.
In two experiments, participants viewed a videotape of a simulated armed robbery, later answered misleading questions about it, and then finally completed a source monitoring test. For the test, participants were asked to indicate for each test item whether it was (1) seen in the video only, (2) read about in the questions only, (3) both seen and read about, (4) not remembered, or (5) known to have occurred but the source was unclear. The latter response category was included on the test to remove source guessing and to ensure that attributions to 'video', 'questions' or 'both' were caused by false conscious recollection. In Expt 1, robust misinformation effects were obtained with both 1- and 48-hour delays between receiving misinformation and the memory test. However, suggested objects were more likely to receive 'video only' attributions at long delay than at short. Experiment 2 verified that it was the interval between receiving the misinformation and the test, and not the interval between viewing the video and receiving the misinformation, that determined the effect of delay in Expt 1. The results are explained by assuming that, at short delay, participants remembered reading about the suggested objects and could discount the 'video only' category. However, despite accurately remembering the source of suggested information, the misinformation effect as measured by 'both' responses was not diminished. Thus, remembering that misinformation was suggested does not necessarily stop the creation of false memories.
Participants viewed a videotape of a simulated murder, and their recall (and confidence) was tested 1 week later with the cognitive interview. Results indicated that (a) the subset of statements assigned high confidence was more accurate than the full set of statements; (b) the accuracy benefit was limited to information that forensic experts considered relevant to an investigation, whereas peripheral information showed the opposite pattern; (c) the confidence-accuracy relationship was higher for relevant than for peripheral information; (d) the focused-retrieval phase was associated with a greater proportion of peripheral and a lesser proportion of relevant information than the other phases; and (e) only about 50% of the relevant information was elicited, and most of this was elicited in Phase 1.
We report two experiments that investigated the regulation of memory accuracy with a new regulatory mechanism: the plurality option. This mechanism is closely related to the grain-size option but involves control over the number of alternatives contained in an answer rather than the quantitative boundaries of a single answer. Participants were presented with a slideshow depicting a robbery (Experiment 1) or a murder (Experiment 2), and their memory was tested with five-alternative multiple-choice questions. For each question, participants were asked to generate two answers: a single answer consisting of one alternative and a plural answer consisting of the single answer and two other alternatives. Each answer was rated for confidence (Experiment 1) or for the likelihood of being correct (Experiment 2), and one of the answers was selected for reporting. Results showed that participants used the plurality option to regulate accuracy, selecting single answers when their accuracy and confidence were high, but opting for plural answers when they were low. Although accuracy was higher for selected plural than for selected single answers, the opposite pattern was evident for confidence or likelihood ratings. This dissociation between confidence and accuracy for selected answers was the result of marked overconfidence in single answers coupled with underconfidence in plural answers. We hypothesize that these results can be attributed to overly dichotomous metacognitive beliefs about personal knowledge states that cause subjective confidence to be extreme.
Three artificial grammar learning experiments investigated the memory processes underlying classification judgments. In Experiment 1, effects of grammaticality, specific item similarity, and chunk frequency were analogous between classification and recognition tasks. In Experiments 2A and 2B, instructions to exclude "old" and "similar" test items, under conditions that limited the role of conscious recollection, dissociated grammaticality and similarity effects in classification. Dividing attention at test also produced a dissociation in Experiment 3. It is concluded that a dual-process model of classification, whereby the grammaticality and specific similarity effects are based mostly on automatic and intentional memory processes, respectively, is consistent with the results, whereas a unitary mechanism account is not. This conclusion is further supported by evidence indicating that chunk frequency had both implicit and explicit influences on classification judgments.
Criterion- versus distribution-shift accounts of frequency and strength effects in recognition memory were investigated with Type-2 signal detection receiver operating characteristic (ROC) analysis, which provides a measure of metacognitive monitoring. Experiment 1 demonstrated a frequency-based mirror effect, with a higher hit rate and lower false alarm rate, for low frequency words compared with high frequency words. In Experiment 2, the authors manipulated item strength with repetition, which showed an increased hit rate but no effect on the false alarm rate. Whereas Type-1 indices were ambiguous as to whether these effects were based on a criterion- or distribution-shift model, the two models predict opposite effects on Type-2 distractor monitoring under some assumptions. Hence, Type-2 ROC analysis discriminated between potential models of recognition that could not be discriminated using Type-1 indices alone. In Experiment 3, the authors manipulated Type-1 response bias by varying the number of old versus new response categories to confirm the assumptions made in Experiments 1 and 2. The authors conclude that Type-2 analyses are a useful tool for investigating recognition memory when used in conjunction with more traditional Type-1 analyses.
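Constructing an ROC from confidence-binned response counts, as used in the analyses above, can be sketched as follows. This is a generic illustration rather than the authors' code: the function name is hypothetical, and the example counts are invented. For a Type-2 ROC, the "signal" responses are correct recognition decisions and the "noise" responses are incorrect ones, cumulated from highest to lowest confidence.

```python
def roc_points(signal_by_conf, noise_by_conf):
    """Cumulative (false alarm rate, hit rate) ROC points from response
    counts binned by confidence, ordered highest confidence first.
    For a Type-2 ROC, 'signal' = correct decisions, 'noise' = errors."""
    s_total, n_total = sum(signal_by_conf), sum(noise_by_conf)
    points, s_cum, n_cum = [], 0, 0
    for s, n in zip(signal_by_conf, noise_by_conf):
        s_cum += s
        n_cum += n
        points.append((n_cum / n_total, s_cum / s_total))
    return points

# Invented example: three confidence levels, high to low.
pts = roc_points([30, 20, 10], [5, 15, 40])
```

The curvature of the resulting points is what distinguishes models: a Gaussian signal detection model predicts a curvilinear ROC, whereas a threshold model predicts linear segments.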