A widely held assumption in metamemory is that better, more accurate metamemory monitoring leads to better, more efficacious restudy decisions, reflected in better memory performance--we refer to this causal chain as the restudy selectivity hypothesis. In 3 sets of experiments, we tested this hypothesis by factorially manipulating metamemory monitoring accuracy and self-regulation of study. To manipulate monitoring accuracy, we compared judgments of learning (JOLs) made contemporaneously with a delayed retrieval attempt to JOLs either made at a delay without attempting retrieval or made immediately after study; in previous studies, delayed retrieval-based JOLs have robustly predicted recall with greater relative accuracy than have the other JOL types. To manipulate self-regulation of study, in Experiments 1A-1C and 2A-2C, we compared conditions in which participants' restudy selections were honored with conditions in which they were completely or randomly dishonored; in Experiments 3A-3C, we randomly honored or dishonored half of the restudy selections and half of the nonselections. Results revealed that the benefit of delayed, retrieval-based JOLs for final memory performance was due largely to the selection of more items for restudy rather than to better discriminations between items that would benefit more versus less from restudy. In most cases, gains in recall due to greater self-regulation of study did not increase with better monitoring accuracy; when they did, the effect was extremely small. The surprising conclusion was that restudy decisions were not very much more efficacious under conditions that yield greater monitoring accuracy than under those that do not.
The multiple-response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple-response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit; however, there are several possible methods for computing raw scores. This research evaluates several approaches found in the literature, extending Wilson's approach to assess how scoring for the selection and nonselection of both relevant and irrelevant options is incorporated. Results indicated that all methods have potential, but the plus/minus and true/false methods seemed the most promising for items using the "select all that apply" instruction set. Additionally, these methods showed a large increase in information per unit of time over the dichotomous method.
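The abstract does not define the plus/minus and true/false methods; a minimal sketch, assuming the common conventions (plus/minus: +1 per keyed option selected, -1 per unkeyed option selected, floored at zero; true/false: one point per option whose selection state matches the key), might look like:

```python
def plus_minus_score(selected, key):
    """Plus/minus scoring (assumed convention): +1 for each keyed
    option selected, -1 for each unkeyed option selected, floored
    at zero."""
    hits = len(selected & key)
    false_alarms = len(selected - key)
    return max(hits - false_alarms, 0)

def true_false_score(selected, key, n_options):
    """True/false scoring (assumed convention): each option counts
    as correct when its selection state matches the key, i.e. it is
    selected-and-keyed or unselected-and-unkeyed."""
    all_options = set(range(n_options))
    correct_selections = selected & key
    correct_rejections = (all_options - selected) & (all_options - key)
    return len(correct_selections) + len(correct_rejections)

# Example: 5-option item with key {0, 2}; examinee selects {0, 2, 4}.
print(plus_minus_score({0, 2, 4}, {0, 2}))      # 2 hits - 1 false alarm = 1
print(true_false_score({0, 2, 4}, {0, 2}, 5))   # 4 of 5 options match the key
```

Under the dichotomous method, by contrast, the same response would simply score 0 because the selected set does not match the key exactly.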
The increasing use of innovative items in operational assessments has shed new light on polytomous testlet models. In this study, we examine the performance of several scoring models when polytomous items exhibit random testlet effects. Four models are considered: the partial credit model (PCM), the testlet-as-a-polytomous-item model (TPIM), the random-effect testlet model (RTM), and the fixed-effect testlet model (FTM). The performance of the models was evaluated in two adaptive testing settings in which testlets have nonzero random effects. The outcomes of the study suggest that, despite the manifest random testlet effects, PCM, FTM, and RTM perform comparably in trait recovery and examinee classification; the overall accuracy of PCM and FTM in trait inference was comparable to that of RTM. TPIM consistently underestimated the population variance and significantly overestimated measurement precision, showing limited utility for operational use. The results of the study provide practical implications for using polytomous testlet scoring models.
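For readers unfamiliar with the PCM, its category probabilities follow the standard cumulative-step-logit formulation: category k has numerator exp(sum over j <= k of (theta - delta_j)), with the empty sum for k = 0. A minimal sketch (the function name and example parameter values are illustrative, not from the study):

```python
import math

def pcm_probs(theta, deltas):
    """Category probabilities for one polytomous item under the
    partial credit model. `theta` is the examinee trait level;
    `deltas` are the step difficulties for steps 1..m, giving
    m + 1 response categories (0..m)."""
    # Cumulative logit for each category; category 0 has logit 0.
    logits = [0.0]
    for d in deltas:
        logits.append(logits[-1] + (theta - d))
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A 4-category item with illustrative step difficulties.
probs = pcm_probs(theta=0.0, deltas=[-1.0, 0.0, 1.0])
# probs has 4 entries and sums to 1; with these symmetric steps at
# theta = 0, the extreme categories are equally likely.
```

The testlet models in the study extend this kind of item response function with testlet-specific effect terms; this sketch shows only the baseline PCM.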
Two experiments investigated the effects of spreading semantic activation during a recognition test. In Experiment 1, activation spreading during testing from words that were thematic associates of unstudied critical words yielded a linear increase in false alarms to such critical words as the number of tested associates increased, regardless of whether the theme appeared during study or whether any thematic processing occurred during study at all. In Experiment 2, the number of tested associates was held constant, and false alarms to critical words from unstudied themes increased linearly with the strength of association between the critical word and its tested associates, consistent with predictions of spreading-activation theory. For studied themes, however, testing weaker or stronger associates yielded similar rates of such false alarms, contrary to spreading-activation theory. These results suggest that test-induced thematic priming is driven by spreading activation for unstudied themes but by thematic reactivation for studied themes.
The part-set cueing effect refers to the paradoxical memory impairment often observed when elements from a set of items appear as ostensibly helpful retrieval cues during testing of memory for the set. We tested predictions of a two-mechanism account of part-set cueing: that, without enhanced relational processing, standard encoding leaves items susceptible to cueing-induced inhibition that persists after cues are removed, and that increasing item-specific encoding increases this persisting inhibition. Experiment 1 used antonym generation during study to increase item-specific encoding relative to standard encoding. Tests using item-specific probes revealed greater cueing-induced impairment for the generation condition, as predicted. However, when part-set cues were later removed, this impairment abated significantly in the generation condition and even disappeared in the standard-encoding condition, effects not predicted by the two-mechanism account and challenging its completeness. In Experiment 2, we ruled out an artifactual explanation of these results by replicating previously reported persisting impairment on free recall tests.