Critical reasoning has been recognized as a valuable educational goal since the end of the nineteenth century. However, the educational programs designed to reach this goal changed dramatically during the twentieth century, moving toward a dialogic approach. This shift to dialogism in programs that promote critical reasoning brings challenges concerning evaluation. We describe such a program here, based on the use of graphic tools for argumentation in e-discussions. We focus on one history teacher who implemented the program in his class over a period of 7 months. In a design-based research cycle, we investigate the process of finding proper criteria to evaluate the program and to improve it. We show that the criteria of coherence, decisiveness, and openness are appropriate for evaluating the program, as they stem from pedagogical principles (autonomy, collaboration, commitment to reasoning, ethical communication, procedural mediation, etc.) that are central to a dialogic approach to critical reasoning education. We show that the history course was successful according to those criteria, but not according to other, more traditional criteria. We discuss whether these differential performances suggest new standards for critical reasoning, actions to improve the program, or both.
Despite their potential value for learning purposes, e-discussions do not necessarily lead to desirable results, even when moderated. The study of the moderator's role, especially in synchronous, graphical e-discussions, and the development of appropriate tools to assist moderators are the objectives of the ARGUNAUT project. This project aims at unifying awareness and feedback mechanisms in e-discussion environments, presently implemented on two existing platforms. The system is primarily directed at human moderators, facilitating moderation, but might also help students monitor their own interactions. At the heart of the system are the interrelations between an offline AI analysis mechanism and an online monitoring module. This is done through a collaboration of technological and pedagogical teams, showing promising preliminary results.
Moderation of e-discussions can be facilitated by online feedback that promotes awareness and understanding of the ongoing discussion. Such feedback may be based on indicators, which combine structural and process-oriented elements (e.g., types of connectors, user actions) with textual elements (discussion content). In the ARGUNAUT project (IST-2005027728, partially funded by the EC, started 12/2005), we explore two main directions for generating such indicators, in the context of a synchronous tool for graphical e-discussion. One direction is the training of machine-learning classifiers to classify discussion units (shapes and paired shapes) into predefined theoretical categories, using structural and process-oriented attributes. The classifiers are trained with examples categorized by humans, based on content and some contextual cues. A second direction is the use of a pattern-matching tool in conjunction with e-discussion XML log files to generate "rules" that find "patterns" combining user actions (e.g., create shape, delete link) and structural elements with content keywords.
Abstract. This demonstration will highlight the pedagogy and functionality of the Metafora system as developed by the end of the second year of the EU-funded (ICT-257872) project. The Metafora system expands the teaching focus beyond domain-specific learning to enable the development of the 21st-century collaborative competencies necessary to learn in today's complex, fast-paced environment. These competencies, termed collectively "Learning to Learn Together" (L2L2), include: distributed leadership, planning/organizing the learning process, mutual engagement, seeking and providing help amongst peers, and reflection on the learning process. We summarise here the Metafora system, its learning innovation, and our plan for the demonstration and interaction session, during which participants will be introduced to L2L2 and Metafora through hands-on experience.