In two experiments, understanding of historical subject matter was enhanced when students acted as historians and constructed their own models of a historical event. Providing students with information on a website with multiple sources instead of in a textbook chapter, and instructing them to write arguments instead of narratives, summaries, or explanations, produced the most integrated and causal essays with the most transformation of the original sources. Better performance on inference and analogy tasks provided converging evidence that students who wrote arguments from the web sources gained a better understanding than other students. A second experiment replicated the advantage of argument writing even when information was presented as an argument.
In two experiments, undergraduates' evaluation and use of multiple Internet sources during a science inquiry task were examined. In Experiment 1, undergraduates had the task of explaining what caused the eruption of Mt. St. Helens using the results of an Internet search. Multiple regression analyses indicated that source evaluation significantly predicted learning outcomes, with more successful learners better able to discriminate scientifically reliable from unreliable information. In Experiment 2, an instructional unit (SEEK) taught undergraduates how to evaluate the reliability of information sources. Undergraduates who used SEEK while working on an inquiry task about the Atkins low-carbohydrate diet displayed greater differentiation in their reliability judgments of information sources than a comparison group. Both groups then participated in the Mt. St. Helens task. Undergraduates in the SEEK conditions demonstrated better learning from the volcano task. The current studies indicate that the evaluation of information sources is critical to successful learning from Internet-based inquiry and amenable to improvement through instruction.
The subjective Aha! experience that problem solvers often report when they find a solution has been taken as a marker for insight. If Aha! is closely linked to insightful solution processes, then theoretically, an Aha! should only be experienced when the correct solution is found. However, little work has explored whether the Aha! experience can also accompany incorrect solutions ("false insights"). Similarly, although the Aha! experience is not a unitary construct, little work has explored the different dimensions that have been proposed as its constituents. To address these gaps in the literature, 70 participants were presented with a set of difficult problems (37 magic tricks) and rated each of their solutions for Aha! as well as for Suddenness in the emergence of the solution, Certainty of being correct, Surprise, Pleasure, Relief, and Drive. Solution times were also used as predictors of the Aha! experience. This study reports three main findings. First, false insights exist. Second, the Aha! experience is multidimensional and consists of the key components Pleasure, Suddenness, and Certainty. Third, although Aha! experiences for correct and incorrect solutions share these three common dimensions, they are also experienced differently in magnitude and quality: correct solutions emerged faster and led to stronger Aha! experiences and higher ratings of Pleasure, Suddenness, and Certainty. Solution correctness conferred a slightly different emotional coloring on the Aha! experience, with the additional perception of Relief for correct solutions and Surprise for incorrect ones. These results cast some doubt on the assumption that the occurrence of an Aha! experience can serve as a definitive signal that a true insight has taken place. On the other hand, the quantitative and qualitative differences in the experience of correct and incorrect solutions demonstrate that the Aha! experience is not a mere epiphenomenon. Strong Aha! experiences are clearly, but not exclusively, linked to correct solutions.
Previous work on learning from text has demonstrated that although illustrated text can enhance comprehension, illustrations can also sometimes lead to poor learning outcomes when they are not relevant to understanding the text. This phenomenon is known as the seductive details effect. The first experiment was designed to test whether the ability to control one's attention, as measured by working memory span tasks, would influence the processing of a scientific text that contained seductive (irrelevant) images, conceptually relevant images, or no illustrations. Understanding was evaluated using both an essay response and an inference verification task. Results indicated that low working memory capacity readers are especially vulnerable to the seductive details effect. In the second experiment, this issue was explored further, using eye-tracking methodology to evaluate the reading patterns of individuals who differed in working memory capacity as they read the same seductively illustrated scientific text. Results indicated that low working memory individuals attend to seductive illustrations more often, and for a longer duration, than do individuals high in working memory capacity.
Two studies attempt to determine the causes of poor metacomprehension accuracy and, in turn, to identify interventions that circumvent these difficulties to support effective comprehension monitoring. The first study explored the cues that both at-risk and typical college readers use as a basis for their metacomprehension judgments in the context of a delayed summarization paradigm. Improvement was seen in all readers, but at-risk readers did not reach the same level of metacomprehension accuracy as a sample of typical college readers. Further, while few readers reported using comprehension-related cues, more at-risk readers reported using surface-related cues as the basis for their judgments. To support the use of more predictive cues among the at-risk readers, a second study employed a concept map intervention, which was intended to make situation model-level representations more salient. Concept mapping improved both the comprehension and metacomprehension accuracy of at-risk readers. The results suggest that poor metacomprehension accuracy can result from a failure to use appropriate cues for monitoring judgments, and that less-able readers especially need interventions that direct them to predictive cues for comprehension.

This is an electronic version of an article published in Discourse Processes, 47(4). Discourse Processes is available online at: http://www.informaworld.com/smpp. DOI: 10.1080/01638530902959927

Poor Metacomprehension Accuracy as a Result of Inappropriate Cue Use

Learning from text is a standard adjunct to classroom instruction. Students are assigned reading for homework, where they are expected to study and understand textbook chapters or other texts. Models of self-regulated learning (e.g., Dunlosky & Thiede, 1998; Metcalfe, 2002; Nelson & Narens, 1990) suggest that metacognitive monitoring and regulation of study play an important role in such learning.
Thiede, Anderson, and Therriault (2003) showed that monitoring accuracy (operationalized as the intra-individual correlation between metacomprehension judgments and test performance computed across texts) influenced decisions about which texts to restudy, which in turn affected learning from text. In particular, they showed that participants who more accurately monitored their comprehension made better decisions about which texts to reread than did participants who less accurately monitored their comprehension. That is, in the group with higher monitoring accuracy, participants chose to restudy primarily the texts that they did not understand: their mean proportion correct on initial comprehension tests was .27 for the texts they selected to reread versus .78 for the texts they did not select to reread. By contrast, the group with lower monitoring accuracy showed less of a preference; their mean proportion correct was .43 for the texts they selected to reread versus .53 for those they did not select. The more effective regulation of study among the group with higher monitoring acc...