Argumentation is an important skill to learn. It is valuable not only in many professional contexts, such as the law, science, politics, and business, but also in everyday life. However, not many people are good arguers. In response to this, researchers and practitioners over the past 15-20 years have developed software tools both to support and teach argumentation. Some of these tools are used in individual fashion, to present students with the "rules" of argumentation in a particular domain and give them an opportunity to practice, while other tools are used in collaborative fashion, to facilitate communication and argumentation between multiple, and perhaps distant, participants. In this paper, we review the extensive literature on argumentation systems, both individual and collaborative, and both supportive and educational, with an eye toward particular aspects of the past work. More specifically, we review the types of argument representations that have been used, the various types of interaction design and ontologies that have been employed, and the system architecture issues that have been addressed. In addition, we discuss intelligent and automated features that have been imbued in past systems, such as automatically analyzing the quality of arguments and providing intelligent feedback to support and/or tutor argumentation. We also discuss a variety of empirical studies that have been done with argumentation systems, including, among other aspects, studies that have evaluated the effect of argument diagrams (e.g., textual versus graphical), different representations, and adaptive feedback on learning argumentation. Finally, we conclude by summarizing the "lessons learned" from this large and impressive body of work, particularly focusing on lessons for the CSCL research community and its ongoing efforts to develop computer-mediated collaborative argumentation systems.
We developed collaborative extensions to VLab, a web-based laboratory that supports students in conducting virtual chemistry experiments. While results from a recent study indicated that VLab promotes chemistry learning, they also revealed that there is room for improvement. We embedded VLab into a collaborative environment that implements a computer-supported collaboration script for guiding students through the stages of scientific experimentation. We describe our pedagogical approach, our collaboration script, and the collaborative learning environment that implements it. We present results from two small-scale studies and a contrasting-case analysis of how adaptive prompts, in addition to the fixed script, affected student behaviour.
During the past two decades a variety of approaches to support argumentation learning in computer-based learning environments have been investigated. We present an approach that combines argumentation diagramming and collaboration scripts, two methods that have each been used successfully in the past. The rationale for combining the methods is to capitalize on their complementary strengths: Argument diagramming has been shown to help students construct, reconstruct, and reflect on arguments. However, while diagrams can serve as valuable resources, or even guides, during conversations, they do not provide explicit support for the discussion itself. Collaboration scripts, on the other hand, can provide direct support for the discussion, e.g., through sentence openers that encourage high-quality discussion moves. Yet, students often struggle to comply with the rules of a script, as evidenced by both the misuse and nonuse of sentence openers. To try to benefit from the advantages of both of these instructional techniques, while minimizing their disadvantages, we combined and experimented with them within a single instructional environment. In particular, we designed a collaboration script that guides student dyads through a process of analyzing, interrelating, and evaluating opposing positions on a contentious topic, with the goal of jointly generating a well-reasoned conclusion. We compare a baseline version of the script, one that only involves argument diagramming, with an enhanced version that employs an additional peer critique script, implemented with sentence openers, in which student pairs were assigned the roles of a proponent and a constructive critic. The enhanced version of the script led to positive effects: student discussions contained a higher number of elaborative moves and students assessed their argumentation learning more positively.
This paper reports on an aspect of the EC-funded Argunaut project, which researched and developed awareness tools for moderators of online dialogues. In this study we report on an investigation into the nature of creative thinking in online dialogues and whether or not this creative thinking can be coded for and recognized automatically, such that moderators can be alerted when creative thinking is occurring or when it has not occurred after a period of time. We outline a dialogic theory of creativity, as the emergence of new perspectives from the interplay of voices, and the testing of this theory using a range of methods: a coding scheme that combined coding for creative thinking with more established codes for critical thinking, artificial intelligence pattern-matching techniques to see if our codes could be read automatically from maps, and 'key event recall' interviews to explore the experience of participants. Our findings are that: (1) the emergence of new perspectives in a graphical dialogue map can be recognized by our coding scheme supported by a machine pattern-matching algorithm in a way that can be used to provide awareness indicators for moderators; (2) the trigger events leading to the emergence of new perspectives in the online dialogues studied were most commonly disagreements; and (3) the spatial representation of messages in a graphically mediated synchronous dialogue environment such as Digalo may offer greater affordance for creativity than the much more common scrolling text chat environments. All these findings support the usefulness of our new account of creativity in online dialogues based on dialogic theory and demonstrate that this account can be operationalised through machine coding in a way that can be turned into alerts for moderators.