As programming is the basis of many CS courses, supporting students on their journey towards becoming better programmers is a matter of utmost importance. Programming is not only about learning simple syntax constructs and their applications, but about honing practical problem-solving skills in meaningful contexts. In this article, we describe our current work on an automated assessment system called Test My Code (TMC), which is one of the feedback and support mechanisms that we use in our programming courses. TMC is an assessment service that (1) enables building scaffolding into programming exercises; (2) retrieves and updates tasks in the students' programming environment as students work on them; and (3) causes no additional overhead to students' programming process. Instructors benefit from TMC as it can be used to perform code reviews, and to collect and send feedback even on fully online courses.
MOOCs (massive open online courses) became a hugely popular topic in both academic and non-academic discussions in 2012. Many of the offered MOOCs are somewhat "watered-down" versions of the actual courses given by the MOOC professors at their home universities. At the University of Helsinki, Department of Computer Science, our MOOC on introductory programming is exactly the same course as our first programming course on campus. Our MOOC uses the Extreme Apprenticeship (XA) model for programming education, thus ensuring that students proceed step-by-step in the desired direction. As an additional twist, we have used our MOOC as an entrance exam for studies at the University of Helsinki. In this paper, we compare student achievement after one year of studies between two cohorts: the MOOC intake (n=38) and the intake that started their studies during the fall (n=68). The results indicate that student achievement in the MOOC intake is at least as good as in the normal intake. An additional benefit is that students admitted via the MOOC are less likely to drop out of their studies during their first year.
We describe an automated assessment system called Test My Code (TMC) that is designed to support instructors' and students' work in programming courses. From the students' point of view, TMC is a transparent assessment service that is integrated into an industry-standard programming environment. TMC is used to scaffold students' learning during the working process, and to retrieve and update exercises as the students work on them, without causing additional overhead to the learning process. From the instructors' perspective, TMC makes collaborative crafting of exercises easier, supports building exercises with smaller goals that combine into bigger programs, gathers snapshots from students' programming process, collects feedback from students, and can export course submission data to external systems. TMC has been used in massive open online courses on programming as well as in courses on web development with hundreds of students.
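The abstract above describes exercises built from smaller goals that are checked automatically. As a language-neutral illustration (the names and the exercise itself are hypothetical, not taken from TMC's actual exercise format), a scaffolded exercise might pair a stub the student completes with instructor-written tests, where each test encodes one small goal:

```python
import unittest

# Hypothetical exercise function the student is asked to complete;
# the name and specification are illustrative only.
def mean(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

class MeanTest(unittest.TestCase):
    # Each test case corresponds to one small goal of the exercise,
    # so partial progress is visible to the student.
    def test_single_value(self):
        self.assertEqual(mean([3]), 3)

    def test_several_values(self):
        self.assertAlmostEqual(mean([1, 2, 3, 4]), 2.5)
```

Run with `python -m unittest` to check the goals locally; an assessment service in the spirit of TMC would run the same tests on the server side after submission.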
Systems that record students' programming process have become increasingly popular during the last decade. The granularity of stored data varies across these systems and ranges from storing the final state, e.g. a solution, to storing fine-grained event streams, e.g. every key-press made while working on a task. Researchers who study such data make assumptions based on the granularity. If no fine-grained data exists, the baseline assumption is that a student proceeds in a linear fashion from one recorded state to the next. In this work, we analyze three different granularities of data: (1) submissions, (2) snapshots (i.e. save, compile, run, and test events), and (3) keystroke events. Our study provides insight into the quantity of data lost when storing data at a specific granularity and shows how the lost data varies depending on previous programming experience and the type of programming assignment.
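The three granularities above form a hierarchy: each coarser level is a filtered view of the finer one. A minimal sketch of that relationship, with an invented event schema (the field names and event kinds are illustrative assumptions, not the paper's actual data format):

```python
from dataclasses import dataclass

# Hypothetical event record; the fields are illustrative, not a real TMC schema.
@dataclass
class Event:
    timestamp: float
    kind: str  # "keystroke", "save", "compile", "run", "test", or "submit"

SNAPSHOT_KINDS = {"save", "compile", "run", "test", "submit"}

def at_granularity(events, granularity):
    """Reduce a fine-grained event stream to one of the three granularities."""
    if granularity == "keystroke":
        return list(events)  # finest level: everything is kept
    if granularity == "snapshot":
        return [e for e in events if e.kind in SNAPSHOT_KINDS]
    if granularity == "submission":
        return [e for e in events if e.kind == "submit"]
    raise ValueError(f"unknown granularity: {granularity}")

# A toy stream: two edits, a save, another edit, a compile, then a submission.
stream = [
    Event(0.0, "keystroke"), Event(0.5, "keystroke"), Event(1.0, "save"),
    Event(1.5, "keystroke"), Event(2.0, "compile"), Event(3.0, "submit"),
]
for g in ("submission", "snapshot", "keystroke"):
    print(g, len(at_granularity(stream, g)))
```

Everything filtered out at a coarser level is exactly the data a researcher loses when only that level is stored, which is the quantity the study measures.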