In this study we examined the different functions of text and pictures during text-picture integration in multimedia learning. In Study 1, 144 secondary school students (ages 11 to 14 years; 72 females, 72 males) received six text-picture units under two conditions. In the delayed-question condition, students first read the units without a specific question (no-question phase) to stimulate initial coherence-oriented mental model construction. Afterward, a question was presented (question-answering phase) to stimulate task-adaptive mental model specification. In the preposed-question condition, students received a specific question from the beginning, stimulating both kinds of processing. Analyses of the participants' eye movement patterns confirmed the assumption that students allocated a higher percentage of available resources to text processing during initial mental model construction than during adaptive model specification. Conversely, students allocated a higher percentage of available resources to picture processing during adaptive mental model specification than during initial mental model construction. In Study 2 (N = 12; ages 12 to 16; seven females, five males), we ruled out the possibility that these findings were due to rereading effects by implementing the no-question phase either once or twice. In sum, texts seem to provide more explicit conceptual guidance for mental model construction than pictures do, whereas pictures support mental model adaptation more than texts do, by providing flexible access to specific information for task-oriented updates.

Keywords: Text-picture integration · Eye tracking · Initial mental model construction · Adaptive mental model specification

Text accompanied by static pictures is ubiquitous in textbooks, especially those for the natural sciences. Abundant research has shown that students learn better from text and pictures than from text alone (e.g., Carney & Levin, 2002; DeLeeuw & Mayer, 2008; Mayer, 2009).
Nevertheless, it is not yet well understood how text and pictures interact in their conjoint processing (Ortegren, Serra, & England, 2015). When both text and pictures are needed for comprehension and learning, students must integrate verbal and pictorial information into one coherent, task-appropriate mental representation, a process known as text-picture integration. An example of the need for text-picture integration is presented in Fig. 1, which originates from a biology textbook. The text describes the dynamic processes of blood circulation between mother and fetus during pregnancy, which are shown in a picture that points out the main parts using numbers. Global understanding, as well as answering specific questions, requires students to integrate the text and picture information. According to Wainer's (1992) taxonomy, integration requirements can differ in complexity. Low-complexity questions require only element mappings between text and picture. For example, to answer the question "What is the name of the pink area?", students have to s...
Conjoint processing of text and pictures is assumed to possess an inherent asymmetry, because text and pictures serve fundamentally different but complementary functions. Conjoint processing is assumed to start with general, coherence-oriented mental model construction. When specific tasks have to be solved, the mental model is adjusted to the task requirements through adaptive mental model elaboration. We hypothesized that, due to different constraints on cognitive processing, initial mental model construction is more text-driven than picture-driven, whereas adaptive mental model elaboration is more picture-driven than text-driven. We also hypothesized that there are more transitions between text and picture during initial model construction than during adaptive model elaboration, and more task-picture transitions than task-text transitions during adaptive mental model elaboration. To test these hypotheses, we selected six text-picture units from biology and geography textbooks, each combined with three comprehension items of different complexity. The units and corresponding items were presented to 204 students from Grades 5 to 8 from the higher and lower tiers of the German school system. The participants were required to answer the presented items one by one. Their eye movements were analyzed in terms of fixations and transitions between texts, pictures, and items as dependent variables; the independent variables were school tier, grade, and order of presentation. The results confirmed our hypotheses. We presume that the benefits of learning from text and pictures are due to this inherent asymmetry, which allows learners to combine the specific advantages of both forms of representation.
The German school system employs centrally organized performance assessments (some of which are called "VERA") as a way of promoting lesson development. In recent years, several German federal states introduced a computer-based performance testing system that will replace the paper-pencil testing system in the future. Scores from computer-based testing are required to be equivalent to paper-pencil testing scores so that the new testing medium does not disadvantage students. Therefore, the current study aimed at investigating the size of the mode effect and the moderating impact of students' gender, academic achievement, and mainly spoken language in everyday life. In addition, the variance of the mode effect across tasks was investigated. The study was conducted in four German federal states in 2019 using a field-experimental design. The test scores of 5140 eighth-graders from 165 schools in the subject German were analysed. The results of multi-level modelling revealed that students' test scores in the computerized version of the VERA test were significantly lower than in the paper-pencil version. Students with lower academic achievement were more disadvantaged by the computerized VERA test. The results were inconsistent regarding the interactions between testing mode and students' gender and mainly spoken language in everyday life. The variance of the mode effect across tasks was high. Research into different subjects, and in other federal states and countries under different testing conditions, might yield further evidence about the generalizability of these results.