While the independent contributions of synchronous and asynchronous interaction in online learning are clear, comparatively less is known about the pedagogical consequences of using both modes in the same environment. In this study, we examine relationships between students' use of asynchronous discussion forums and synchronous private messages (PM). We find that asynchronous notes contain more academic language and less social language, are more difficult to read, and are longer compared to PM. In addition, we find that the most active forum-posters are also the most active PM users, suggesting that PMing is not reducing their contribution to public discourse. Finally, we find that those who frequently PM are less likely to rapidly scan forum notes, and that they spend more time online than those who make less use of PM. We suggest that PM supports asynchronous discussions in the formation of a community of inquiry.
As enrollments and class sizes in postsecondary institutions have increased, instructors have sought automated and lightweight means to identify students who are at risk of performing poorly in a course. This identification must be performed early enough in the term to allow instructors to assist those students before they fall irreparably behind. This study describes a modeling methodology that predicts student final exam scores in the third week of the term by using the clicker data that is automatically collected for instructors when they employ the Peer Instruction pedagogy. The modeling technique uses a support vector machine binary classifier, trained on one term of a course, to predict outcomes in the subsequent term. We applied this modeling technique to five different courses across the computer science curriculum, taught by three different instructors at two different institutions. Our modeling approach includes a set of strengths not seen wholesale in prior work, while maintaining competitive levels of accuracy with that work. These strengths include using a lightweight source of student data, affording early detection of struggling students, and predicting outcomes across terms in a natural setting (different final exams, minor changes to course content), across multiple courses in a curriculum, and across multiple institutions.
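The pipeline described above — train a binary classifier on one term's early clicker data, then predict pass/fail outcomes in the next term — can be sketched in miniature. This is not the authors' implementation: the feature set (fraction of correct clicker responses in the first weeks plus an attendance rate), the synthetic data, and the plain subgradient-descent linear SVM are all illustrative assumptions standing in for the paper's actual support vector machine setup.

```python
import random

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Train a linear SVM via subgradient descent on hinge loss + L2 penalty.
    X: list of feature vectors; y: labels in {-1, +1}."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:
                # Point is misclassified or inside the margin: step toward it.
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:
                # Only the regularizer contributes to the subgradient.
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

def make_term(n, seed):
    """Hypothetical per-student data for one course offering.
    Feature 1: fraction of correct clicker responses in the first three weeks.
    Feature 2: attendance rate (noise here). Label: +1 = passed the final."""
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n):
        passed = rng.random() < 0.5
        correct = rng.uniform(0.6, 0.95) if passed else rng.uniform(0.1, 0.4)
        attendance = rng.uniform(0.5, 1.0)
        X.append([correct, attendance])
        y.append(1 if passed else -1)
    return X, y

# Mirror the cross-term setup: fit on one term, evaluate on the subsequent one.
X_train, y_train = make_term(60, seed=1)
X_test, y_test = make_term(60, seed=2)
w, b = train_linear_svm(X_train, y_train)
accuracy = sum(predict(w, b, x) == yi
               for x, yi in zip(X_test, y_test)) / len(y_test)
```

In practice one would use an established SVM implementation and real clicker logs; the point of the sketch is the shape of the data flow, from early-term response features to a next-term pass/fail prediction.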
Many factors have been cited for poor performance of students in CS1. To investigate how assessment mechanisms may impact student performance, nine experienced CS1 instructors reviewed final examinations from a variety of North American institutions. The majority of the exams reviewed were composed predominantly of high-value, integrative codewriting questions, and the reviewers regularly underestimated the number of CS1 concepts required to answer these questions. An evaluation of the content and cognitive requirements of individual questions suggests that in order to succeed, students must internalize a large amount of CS1 content. This emphasizes the need for focused assessment techniques to provide students with the opportunity to demonstrate their knowledge.
Recent research suggests that the first weeks of a CS1 course have a strong influence on end-of-course student performance. The present work aims to refine the understanding of this phenomenon by using in-class clicker questions as a source of student performance data. Clicker questions generate per-lecture and per-question data with which to assess student understanding. This work demonstrates that clicker question performance early in the term predicts student outcomes at the end of the term. The predictive nature of these questions applies to code-writing questions, multiple choice questions, and the final exam as a whole. The most predictive clicker questions are identified and the relationships between these questions and final exam performance are examined.
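One straightforward way to identify the most predictive clicker questions, as described above, is to correlate each question's per-student correctness with final exam scores and rank the questions. The sketch below does this with a hand-rolled Pearson correlation over hypothetical records; the data, the question count, and the ranking criterion are illustrative assumptions, not the paper's analysis.

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical records: each row is one student's correctness (1 = correct)
# on three early-term clicker questions, paired with a final exam score.
responses = [  # q1, q2, q3
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [1, 1, 1],
    [0, 0, 0],
    [0, 1, 0],
]
finals = [82, 64, 71, 95, 40, 58]

# Correlate each question's correctness column with the exam scores,
# then rank questions from most to least predictive.
n_questions = len(responses[0])
corrs = [(q, pearson([row[q] for row in responses], finals))
         for q in range(n_questions)]
ranking = sorted(corrs, key=lambda item: item[1], reverse=True)
```

A real analysis would use many more students and questions and would guard against multiple-comparison effects, but the ranking step itself is this simple.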
Recent work in computing suggests that Peer Instruction (PI) is a valuable interactive learning pedagogy: it lowers fail rates, increases retention, and is enjoyed by students and instructors alike. While these findings are promising, they are somewhat incidental if our goal is to understand whether PI is "better" than lecture in terms of student outcomes. Only one recent study in computing has made such a comparison, finding that PI students outperform traditionally-taught students on a CS0 final exam. That work was conducted in a CS0, where the same instructor taught both courses, and where the only outcome measure was final exam grade. Here, I offer a study that complements their work in two ways. First, I argue for and measure self-efficacy as a valued outcome, in addition to that of final exam grade. Second, I offer an inter-instructor CS1 study, whose biases differ from those of intra-instructor studies. I find evidence that PI significantly increases self-efficacy and suggestively increases exam scores compared to a traditional lecture-based CS1 class. I note validity concerns of such an in-situ study and offer a synthesis of this work with the extant PI literature.
We examine student difficulties with CS1 concepts by analyzing a dataset containing 266,852 student responses to weekly code-writing problems. We find that conditionals and loops prove particularly problematic, even when considering "second chance" data; and that, while we observe some evidence of improvement, certain straightforward applications of loops continue to be problematic at the end of the term. Our contribution is the corroboration of earlier findings, and a call to use online repositories of student submissions as rich sources of data on the student learning experience.