In this paper, we present our experiences in using two automatic assessment tools, TRAKLA and TRAKLA2, in a second programming course in which 500--700 students were enrolled annually during the period 1993--2004. The tools are specifically designed for assessing algorithm simulation exercises, in which students simulate the working of algorithms at a conceptual level. Both tools allow students to resubmit their solutions after receiving feedback. However, the resubmission policy has changed considerably over this period, and those changes are reflected in the students' performance in the exercises. We conclude that an encouraging grading policy, i.e., one in which the more exercises students solve, the better grades they achieve, combined with an option to resubmit solutions, is an important factor promoting students' learning. However, in order to prevent aimless trial-and-error problem solving, the number of resubmissions allowed per assignment should be carefully controlled.
Executive Summary

Feedback is an essential element of learning. Students need feedback on their work and their solutions to assignments, both when they work manually and when they use a computer. A number of tools have been implemented to automatically assess and give feedback on, for example, programming exercises and algorithmic exercises. However, one problem with the feedback provided is that in most cases its scope is too narrow to support the needs of different types of learners. For example, many systems provide purely verbal feedback.

In this paper we consider how exercises with automatic feedback should be designed to support a broader range of learners. We use the Felder-Silverman learning model as the framework for our discussion. The model categorizes learners along four axes: sensing vs. intuitive learners, visual vs. verbal learners, active vs. reflective learners, and sequential vs. global learners. We discuss how all dimensions of the model can be taken into account when designing assignments and automatic feedback. Moreover, we use two modern automatic assessment systems, PILOT and TRAKLA2, as example systems to demonstrate our ideas.

We strongly believe that incorporating an analysis of learners' preferences into the design of courses, automatic feedback systems, and learning environments leads to better learning. As teachers, we should better support the needs of our students and also train their skills to process information in more versatile ways.

Our discussion concentrates on algorithmic assignments. However, in the conclusion we briefly illustrate how a similar approach could be used to design better assignments and feedback for programming exercises as well.
The idea of using visualization technology to enhance the understanding of abstract concepts, such as data structures and algorithms, has become widely accepted. Several attempts have been made to introduce systems that ease the burden of creating new visualizations. However, one of the main obstacles to fully taking advantage of algorithm visualization remains the time and effort required to design, integrate, and maintain the visualizations.

Effortlessness in the context of algorithm visualization is a highly subjective matter involving many factors. Thus, we first introduce a taxonomy to characterize effortlessness in algorithm visualization systems. Based on a survey conducted among CS educators, we have identified three main categories: i) scope, i.e., how wide a range of contexts the system can be applied to; ii) integrability, i.e., how easy it is for a third party to take the system into use; and iii) interaction techniques, i.e., how well the system supports the different use cases regularly applied by educators. We conclude that generic and effortless visualization systems are needed; such a system, however, needs to combine a range of characteristics currently implemented across many separate AV systems.