In 4 experiments, students received a lesson consisting of computer-based animation and narration or a lesson consisting of paper-based static diagrams and text. The lessons used the same words and graphics in the paper-based and computer-based versions to explain the process of lightning formation (Experiment 1), how a toilet tank works (Experiment 2), how ocean waves work (Experiment 3), and how a car's braking system works (Experiment 4). On subsequent retention and transfer tests, the paper group performed significantly better than the computer group on 4 of 8 comparisons, and there was no significant difference on the rest. These results support the static media hypothesis, in which static illustrations with printed text reduce extraneous processing and promote germane processing as compared with narrated animations.
Students received a personalized or nonpersonalized version of a narrated animation explaining how the human respiratory system works. The narration for the nonpersonalized version was in formal style, whereas the narration for the personalized version was in conversational style in which "the" was changed to "your" in 12 places. In 3 experiments, students who received the personalized version scored significantly higher than students who received the nonpersonalized version on transfer tests but not on retention tests. The results are consistent with a cognitive theory of multimedia learning in which personalization causes students to actively process the incoming material.
What can be done to improve student engagement and learning in college lectures? One approach is to ask questions that students answer during the lecture. In two lab experiments, students received a 25-slide PowerPoint lecture in educational psychology that included four inserted multiple-choice questions (questioning group) or four corresponding statements (control group). Students in the questioning group used a personal response system (PRS), in which they responded to questions using a hand-held remote control, saw a graph displaying the percentage of students voting for each answer, and heard the teacher provide an explanation for the correct answer. Students in the control group received the corresponding slide as a statement and heard the teacher provide an explanation. The questioning group outperformed the control group on a retention test in Experiment 1 (d = 1.23) and on a transfer test in Experiment 2 (d = 0.74), but not on other tests. The results are consistent with a generative theory of learning, and encourage the appropriate use of questioning as an instructional method.

Consider the following scenario: Students are attending a large college class where the teacher gives a lecture using PowerPoint slides. The teacher presents slides and talks about each one, while the students appear to sit passively. On a subsequent test, the students do not perform well on remembering the material or using it to solve problems. What can be done to increase student involvement in the class? In other words, how can a large lecture class be redesigned to enable student participation?

One approach to this problem is to use a questioning technique in which the instructor presents a multiple-choice question covering some of the presented material, each student selects an answer, the tally of votes for each alternative is presented, the instructor calls on a student to briefly justify the correct answer, and the instructor explains his or her reasoning in selecting the correct answer. In this way, all students are able to participate in a large lecture class, albeit in a fairly modest way.

Our goal in the present set of laboratory experiments is to determine whether incorporating this kind of questioning technique results in better learning from a lecture as compared to conventional practice. In particular, we compare the learning outcomes of students who receive a 25-slide PowerPoint lecture in educational psychology containing four inserted questions, each on a slide, that involve the questioning technique.
The authors analyzed self-reported SAT scores and actual SAT scores for five different samples of college students (N=650). Students overestimated their actual SAT scores by an average of 25 points (SD=81, d=0.31), with 10% under-reporting, 51% reporting accurately, and 39% over-reporting, indicating a systematic bias toward over-reporting. The amount of over-reporting was greater for lower-scoring than higher-scoring students, was greater for upper-division than lower-division students, and was equivalent for men and women. There was a strong correlation between self-reported and actual SAT scores (r=0.82), indicating high validity of students' memories of their scores. Results replicate previous findings (Kuncel, Credé, & Thomas, 2005) and are consistent with a motivated distortion hypothesis. Caution is suggested in using self-reported SAT scores in psychological research.
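For readers who want to see how the reported effect size relates to the other summary statistics, here is a minimal sketch, assuming Cohen's d here is the mean difference between self-reported and actual scores divided by the standard deviation of that difference; the variable names are illustrative, not from the paper.

```python
# Illustrative sketch (not the authors' analysis code): reproducing the
# reported effect size d = 0.31 from the summary statistics in the abstract,
# assuming d = mean difference / SD of the difference scores.
mean_difference = 25  # average over-report, in SAT points
sd_difference = 81    # standard deviation of the difference scores

cohens_d = mean_difference / sd_difference
print(f"Cohen's d = {cohens_d:.2f}")  # prints 0.31, matching the abstract
```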