When respondents do not understand the meaning of a survey question, they will not supply valid and reliable answers. Survey methodologists should therefore benefit from computer tools and other analytical schemes that help them identify questions that pose comprehension difficulties. We developed a Web facility called Question Understanding Aid (QUAID; www.psyc.memphis.edu/quaid.html) that assists survey methodologists in identifying problems with the wording, syntax, and semantics of questions on questionnaires. The survey methodologist enters the question into the Web facility, along with any context information and answer alternatives that accompany the question. QUAID quickly returns a list of potential problems with question comprehension, including unfamiliar technical terms, vague or imprecise relative terms, vague or ambiguous noun phrases, complex syntax, and working memory overload. This article describes QUAID and some empirical studies that have assessed the validity and utility of QUAID's critiques of questions. The output of QUAID was compared with the judgments of experts in language, discourse, and cognition during the development of the tool. In one evaluation, expert survey methodologists critiqued and revised problematic questions, whereas in a second evaluation survey methodologists evaluated the problems identified by QUAID.
Psychology researchers have long attempted to identify educational practices that improve student learning. However, experimental research on these practices is often conducted in laboratory contexts or in a single course, which threatens the external validity of the results. In this article, we establish an experimental paradigm for evaluating the benefits of recommended practices across a variety of authentic educational contexts—a model we call ManyClasses. The core feature is that researchers examine the same research question and measure the same experimental effect across many classes spanning a range of topics, institutions, teacher implementations, and student populations. We report the first ManyClasses study, in which we examined how the timing of feedback on class assignments, either immediate or delayed by a few days, affected subsequent performance on class assessments. Across 38 classes, the overall estimate for the effect of feedback timing was 0.002 (95% highest density interval = [−0.05, 0.05]), which indicates that there was no effect of immediate feedback compared with delayed feedback on student learning that generalizes across classes. Furthermore, there were no credibly nonzero effects for 40 preregistered moderators related to class-level and student-level characteristics. Yet our results provide hints that in certain kinds of classes, which were undersampled in the current study, there may be modest advantages for delayed feedback. More broadly, these findings provide insights regarding the feasibility of conducting within-class randomized experiments across a range of naturally occurring learning environments.
In the present experiments, we explored the relationship between individual differences in working memory (WM) capacity and susceptibility to false recognitions and their accompanying subjective experiences. Deese/Roediger-McDermott (DRM) associative lists were used to elicit false memories, and remember/know judgments were used to measure the recollective experiences accompanying recognition decisions. We found that WM capacity was related to false recognitions of nonpresented critical lures and to the proportion of remember responses given to critical lures, such that higher WM capacity was associated with lower false-recognition rates and with lower proportions of remember responses. Importantly, these WM differences were only found when participants were forewarned about the nature of the DRM task prior to encoding (Exp. 1). When the forewarning was absent, WM capacity was not related to false recognitions or to the proportion of remember responses given to critical lures (Exp. 2). These results support the controlled-attention view of WM and suggest that subjective experiences of falsely recognized lures fluctuate as a function of WM capacity.