Formative assessment aims to improve teaching and learning by providing teachers and students with feedback designed to help them adapt their behavior. Technology-enhanced formative assessment tools have emerged to support this kind of activity and to cope with the growing number of students in higher education. These tools generate data that can serve as a basis for improving the processes and services they provide. Building on the literature and on a dataset gathered from a formative assessment tool used in higher education, whose process, inspired by Mazur's Peer Instruction, asks learners to answer a question before and after a confrontation with peers, we use learning analytics to provide evidence-based knowledge about formative assessment practices. Our results suggest that: (1) the benefits of formative assessment sequences increase when the proportion of correct answers during the first vote is close to 50%; (2) the benefits increase when the rationales of correct learners are rated higher than those of incorrect learners; (3) peer ratings are consistent when correct learners are more confident than incorrect ones; (4) self-rating is inconsistent in a peer-rating context; (5) the number of peer ratings makes no significant difference in terms of sequence benefits. Based on these results, we discuss recommendations for formative assessment and infer a data-informed formative assessment process.
To cite this version: Franck Silvestre, Philippe Vidal, Julien Broisin. Reflexive learning, socio-cognitive conflict and peer assessment to improve the quality of feedbacks in online tests.

Abstract. Our previous work introduced the Tsaap-notes platform, dedicated to the semi-automatic generation of multiple-choice questionnaires providing feedback: it reuses interactive questions asked by teachers during lectures, as well as the notes taken by students after the presentation of the results, as feedback integrated into the quizzes. In this paper, we introduce a new feature that aims at increasing the number of student contributions in order to significantly improve the quality of the feedback used in the resulting quizzes. This feature splits the submission of an answer into several distinct phases to harvest the explanations given by students, then applies an algorithm to filter the best contributions to be integrated as feedback into the tests. Our approach has been validated by a first experiment involving master's students enrolled in a computer science course.
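The filtering step described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm (whose details are not given here): it assumes contributions carry an author-correctness flag and an average peer rating, keeps only explanations written by correct answerers, and selects the top-rated ones as quiz feedback.

```python
# Hypothetical sketch of a contribution-filtering step: keep explanations
# written by students who answered correctly, then pick the top-rated ones.

def filter_best_contributions(contributions, top_n=3):
    """contributions: list of dicts with 'text', 'author_correct', 'avg_rating'."""
    eligible = [c for c in contributions if c["author_correct"]]
    eligible.sort(key=lambda c: c["avg_rating"], reverse=True)
    return [c["text"] for c in eligible[:top_n]]

sample = [
    {"text": "A stack is LIFO", "author_correct": True, "avg_rating": 4.5},
    {"text": "Queues are FIFO", "author_correct": True, "avg_rating": 3.0},
    {"text": "Wrong rationale", "author_correct": False, "avg_rating": 4.8},
]
print(filter_best_contributions(sample, top_n=2))
# → ['A stack is LIFO', 'Queues are FIFO']
# the incorrect author's text is excluded regardless of its rating
```

The selection criterion (author correctness plus average rating) is an assumption made for the sake of the sketch; other signals, such as rating counts, could be combined in the same way.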
When formative assessment involves a large number of learners, technology-enhanced formative assessment (TEFA) is one of the most popular solutions. However, current TEFA processes lack data-informed decision-making. By analyzing a dataset gathered from a formative assessment tool, we provide evidence about how to improve decision-making in processes that ask learners to answer the same question before and after a confrontation with peers. Our results suggest that learners' understanding increases when the proportion of correct answers before the confrontation is close to 50%, or when learners consistently rate their peers' rationales. Furthermore, peer ratings are more consistent when learners' confidence degrees are consistent. These results led us to design a decision-making model whose benefits will be studied in future work.
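The two quantities discussed across these abstracts, the proportion of correct answers on the first vote and the benefit of a sequence, can be sketched with minimal code. This is an illustration under assumed representations (lists of booleans, one per learner), not the tool's implementation; the benefit is measured here simply as the gain in the proportion of correct answers between the two votes.

```python
# Minimal sketch of the two-vote metrics: the first-vote correct proportion
# (the paper suggests benefits peak when it is near 50%) and a simple
# "sequence benefit" measured as the gain between the two votes.

def correct_proportion(votes):
    """votes: list of booleans (True = correct answer)."""
    return sum(votes) / len(votes)

def sequence_benefit(first_votes, second_votes):
    """Gain in the proportion of correct answers between the two votes."""
    return correct_proportion(second_votes) - correct_proportion(first_votes)

first = [True, False, True, False, False, True]   # before confrontation
second = [True, True, True, False, True, True]    # after confrontation

print(round(correct_proportion(first), 2))        # → 0.5
print(round(sequence_benefit(first, second), 2))  # → 0.33
```

A normalized gain (dividing by the room left for improvement) would be an equally plausible benefit measure; the absolute gain is used here only to keep the sketch short.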