Raymond S. Pettit teaches courses in programming, artificial intelligence, object-oriented design, algorithms, theory of computation, and related subjects in ACU's School of Information Technology and Computing. Prior to joining the ACU faculty, he spent twenty years in software development, research, and training at the Air Force Research Laboratory, NASA's Langley Research Center, and in private industry. His current research focuses on how automated assessment tools interact with student learning in university programming courses.
Are Automated Assessment Tools Helpful in Programming Courses?
Abstract
Automated assessment tools (AATs) are growing in popularity in introductory programming courses, but researchers may have a difficult time synthesizing valid data to draw conclusions about the tools' usefulness. Our first step in addressing this issue was to break down our overriding question, "Are automated assessment tools helpful in programming courses?", into four more specific questions: (1) Have AATs proven helpful in improving student learning? (2) Do students think that AATs have improved their performance? (3) After having used the tools, do instructors think that the tools have improved their teaching experience? and (4) Is the assessment performed by AATs accurate enough to be helpful? In discussing the many AATs that exist, researchers have often reported results relevant to only one or two of these specific questions. We address each of our four questions separately and draw on data from 24 different tools to arrive at our conclusions. We determine that the literature demonstrates AATs' helpfulness in student learning, instructor support, and assessment accuracy. However, we found results about students' opinions regarding the helpfulness of AATs to be inconclusive. Given our findings, we make suggestions both to instructors using these tools and to researchers creating them.
As automated tools for grading programming assignments become more widely used, it is imperative that we better understand how students are utilizing them. Other researchers have provided helpful data on the role automated assessment tools (AATs) have played in the classroom. In order to investigate improved practices in using AATs for student learning, we sought to better understand how students iteratively modify their programs toward a solution by analyzing more than 45,000 student submissions over seven semesters in an introductory (CS1) programming course. The resulting metrics allowed us to study what steps students took toward solutions for programming assignments. This paper considers the incremental changes students make and the corresponding score change between sequential submissions, measured by metrics including source lines of code, cyclomatic (McCabe) complexity, state space, and the six Halstead complexity measures. We demonstrate the value of throttling and show that generating software metrics for analysis can serve to help instructors better guide student learning.
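One of the metrics named above, cyclomatic (McCabe) complexity, can be approximated directly from a program's syntax tree. The sketch below, which is an illustrative assumption rather than the paper's actual tooling, counts branching constructs in a Python submission using the standard `ast` module; the specific set of node types counted is a simplification of the full McCabe definition.

```python
import ast

# Assumed set of decision points; a fuller implementation would also
# weigh boolean operators by operand count, comprehensions, etc.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of branching constructs."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

# Example student-style submission (hypothetical):
snippet = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "other"
"""

# Base 1 + two ifs + one for = 4
print(cyclomatic_complexity(snippet))
```

Computing such a metric for each submission in a sequence makes it possible to chart how a student's program grows or is restructured between attempts, which is the kind of trajectory analysis the paper describes.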
Metacognition and self-regulation are important skills for successful learning and have been discussed and researched extensively in the general education literature for several decades. More recently, there has been growing interest in understanding how metacognitive and self-regulatory skills contribute to student success in the context of computing education. This paper presents a thorough systematic review of metacognition and self-regulation work in the context of computer programming and an in-depth discussion of the theories that this work has leveraged. We also discuss several prominent metacognitive and self-regulation theories from the literature outside of computing education (for example, from psychology and education) that have yet to be applied in the context of programming education.
In our investigation, we built a comprehensive corpus of papers on metacognition and self-regulation in programming education, and then employed backward snowballing to provide a deeper examination of foundational theories from outside computing education, some of which have been explored in programming education, and others that have yet to be explored but hold much promise. In addition, we make new observations about the way these theories are used by the computing education community, and present recommendations on how metacognition and self-regulation can help inform programming education in the future. In particular, we discuss exemplars of studies that have used existing theories to support their design and discussion of results, as well as studies that have proposed their own metacognitive theories in the context of programming education. Readers will also find the article a useful resource for helping students in programming courses develop effective strategies for metacognition and self-regulation.