Perceptual manipulations, such as changes in font type or figure-ground contrast, have been shown to increase judgments of difficulty or effort related to the presented material. Previous theory has suggested that this results either from changes in online processing or from the post-hoc influence of perceived difficulty recalled at the time of judgment. Two experiments examined which of these mechanisms (or both) produces the fluency effect. Results indicate that disfluency does in fact change in situ reading behavior, and this change significantly mediates judgments. Eye-movement analyses corroborate this account, revealing differences in how people read a disfluent presentation. These findings support the notion that readers use perceptual cues in their reading experiences to change how they interact with the material, which in turn produces the observed biases.
This study explored students' ability to evaluate their learning from a multimedia inquiry unit about the causes of global climate change. Participants were 90 sixth-grade students from four science classrooms. Students were provided with a text describing the causes of climate change as well as graphs showing average global temperature changes. Half of the students also received an analogy to help support their understanding of the topic. Results indicated that, overall, students were overconfident about how much they learned and how well they understood the topic. Further, the presence of an analogy led to higher levels of overconfidence. Results also indicated that students with better graph interpretation skills were less overconfident even when the analogy was present. These results suggest that the presence of graphs and analogies can negatively affect students' ability to accurately judge their own level of understanding and may lead to an illusion of comprehension.
Students tend to have poor metacomprehension when learning from text, meaning they are not able to distinguish between what they have understood well and what they have not. Although there are a good number of studies that have explored comprehension monitoring accuracy in laboratory experiments, fewer studies have explored this in authentic course contexts. This study investigated the effect of an instructional condition that encouraged comprehension-test-expectancy and self-explanation during study on metacomprehension accuracy in the context of an undergraduate course in research methods. Results indicated that when students received this instructional condition, relative metacomprehension accuracy was better than in a comparison condition. In addition, differences were also seen in absolute metacomprehension accuracy measures, strategic study behaviors, and learning outcomes. The results of the current study demonstrate that a condition that has improved relative metacomprehension accuracy in laboratory contexts may have value in real classroom contexts as well.
Effective use of the working memory system is critical for successful learning, and this assumption has motivated much of the work on multimedia instruction. Interestingly, the limited capacity of human working memory has been invoked as part of explanations for both advantages and disadvantages of multimedia learning in comparison with learning from text or pictures alone. This chapter reviews several lines of reasoning that have guided explorations of the role of working memory in multimedia learning, including approaches that have emphasized the modality-specific buffer system and the potential for overloading the limited resources that are available to learners, as well as a newer approach that considers working memory capacity as an individual differences variable representing attentional control.
This article describes several approaches to assessing student understanding using written explanations that students generate as part of a multiple-document inquiry activity on a scientific topic (global warming). The current work attempts to capture the causal structure of student explanations as a way to detect the quality of the students' mental models and understanding of the topic by combining approaches from Cognitive Science and Artificial Intelligence, and applying them to Education. First, several attributes of the explanations are explored by hand-coding and leveraging existing technologies (LSA and Coh-Metrix). Then, we describe an approach for inferring the quality of the explanations using a novel, two-phase machine learning approach for detecting causal relations and the causal chains that are present within student essays. The results demonstrate the benefits of using a machine learning approach for detecting content, but also highlight the promise of hybrid methods that combine ML, LSA and Coh-Metrix approaches for detecting student understanding. Opportunities to use automated approaches as part of Intelligent Tutoring Systems that provide feedback toward improving student explanations and understanding are discussed.
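The two-phase approach summarized above (detecting causal relations, then assessing the causal chains they form) can be illustrated with a deliberately simplified sketch. This is not the authors' implementation: their system used trained machine learning classifiers together with LSA and Coh-Metrix features, whereas here phase 1 is replaced by a toy cue-word heuristic and phase 2 by a simple longest-chain score; all names and cue words are hypothetical.

```python
# Hypothetical sketch of a two-phase causal-structure scorer.
# Phase 1 (stand-in for a trained classifier): flag causal sentences.
# Phase 2: chain detected cause->effect pairs; score an essay by the
# length of its longest causal chain.

CAUSAL_CUES = ("cause", "leads to", "results in", "traps")

def detect_causal_sentences(sentences):
    """Phase 1: return sentences containing a causal cue word."""
    return [s for s in sentences if any(cue in s.lower() for cue in CAUSAL_CUES)]

def longest_causal_chain(relations):
    """Phase 2: length of the longest cause->effect chain in the pairs."""
    graph = {}
    for cause, effect in relations:
        graph.setdefault(cause, []).append(effect)

    def depth(node, seen):
        if node in seen:  # guard against cycles
            return 0
        return 1 + max((depth(n, seen | {node}) for n in graph.get(node, ())),
                       default=0)

    return max((depth(c, set()) for c in graph), default=0)

essay = [
    "Burning fossil fuels releases CO2, which traps outgoing heat.",
    "Higher temperatures cause ice sheets to melt.",
    "The data were collected in 2010.",
]
print(len(detect_causal_sentences(essay)))  # 2

chain = [("fossil fuels", "co2"), ("co2", "warming"), ("warming", "ice melt")]
print(longest_causal_chain(chain))  # 4
```

An essay whose detected relations form one long connected chain would score higher than one asserting the same facts as disconnected pairs, which is the intuition behind using causal structure as a proxy for mental-model quality.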
Word problems embed a math equation within a short narrative. Due to their structure, both numerical and linguistic factors can contribute to problem difficulty. The present studies explored the role of irrelevant information in word problems, to determine whether its negative impact is due to numerical (foregrounding hypothesis) or linguistic (inconsistent-operations hypothesis) interference. Across three experiments, participants solved multiplication and division word problems containing irrelevant numerical information, which was either associated or disassociated with the protagonist. Results demonstrated increased solution errors on division problems when irrelevant numbers were disassociated with the protagonist. When memory for numerical information was emphasized, disassociation specifically impacted low-working-memory individuals. The effect of disassociation on division performance persisted even when irrelevant numbers, but not words, were removed from problems. These results suggest that, even in the presence of numerically interfering information, it is the language of word problems that often drives their difficulty.