Collaborative learning has often been associated with the construction of a shared understanding of the situation at hand. The psycholinguistic mechanisms at work while establishing common ground remain the object of scientific controversy. We postulate that collaborative tasks require some level of mutual modelling, i.e. that each partner needs some model of what the other partners know, want, and intend at a given time. We use the term "some model" to stress that this model is not necessarily detailed or complete, but that we acquire some representation of the persons we interact with. The question we address is: Does the quality of the partner model depend upon the modeler's ability to represent his or her partner? Upon the modelee's ability to make his or her state clear to the modeler? Or rather, upon the quality of their interactions? We address this question by comparing the respective accuracies of the models built by different team members. We report on five experiments on collaborative problem solving or collaborative learning that vary in terms of tasks (how important it is to build an accurate model) and settings (how difficult it is to build an accurate model). In four of these studies, the accuracy of the model that A built about B was correlated with the accuracy of the model that B built about A, which suggests that the quality of the partner model is a property of the pair, that is, of the quality of their interactions, rather than of the individual abilities of the modeler or the modelee.
The present study is part of a project aiming to empirically investigate the process of modeling a partner's knowledge (Mutual Knowledge Modeling, or MKM) in Computer-Supported Collaborative Learning (CSCL) settings. In this study, a macro-collaborative script was used to produce knowledge interdependence (KI) among co-learners by providing them with different but complementary information. Prior to collaboration, two students read the same text in the "Same Information" (SI) condition, while each of them read one of two complementary texts in the "Complementary Information" (CI) condition. After the collaboration phase, a knowledge modeling questionnaire asked participants to estimate both their own and their partner's outcome knowledge using Likert-type scales. The relation between learning and the accuracy with which co-learners assess their partner's knowledge was examined. In addition, we investigated the KI effect on (a) learning performance and (b) MKM accuracy. Finally, we examined to what extent MKM accuracy could mediate the KI effect on learning. Results showed no difference in learning performance between participants who worked on the same information and participants who worked on complementary information. We also found that participants were more accurate at assessing their partner's knowledge in the SI condition than in the CI condition. The discussion focuses on methodological limitations and provides new directions for investigating the KI effect on MKM accuracy.
Animated graphics are extensively used in multimedia instruction explaining how natural or artificial dynamic systems work. As animation directly depicts spatial changes over time, it is legitimate to believe that animated graphics will improve comprehension over static graphics. However, research has failed to find clear evidence in favour of animation. Animation may also be used to promote interactions in computer-supported collaborative learning. In this setting as well, empirical studies have not confirmed the benefits that one could intuitively expect from the use of animation. One explanation is that multimedia, including animated graphics, challenges human processing capacities, and in particular imposes a substantial working memory load. We designed an experimental study involving three between-subjects factors: the type of multimedia instruction (with static or animated graphics), the presence of snapshots of critical steps of the system (with or without snapshots), and the learning setting (individual or collaborative). The findings indicate that animation was overall beneficial to retention, while for transfer, only learners studying collaboratively benefited from animated over static graphics. Contrary to our expectations, the snapshots were marginally beneficial to learners studying individually and significantly detrimental to learners studying in dyads. The results are discussed within the multimedia comprehension framework in order to propose the conditions under which animation can benefit learning.