Learning to program is a persistent challenge for students in introductory programming courses, and inadequate programming skills lead to high failure rates every semester. No student becomes a programmer overnight; such learning requires proper guidance and consistent practice with programming exercises. Instructors play a crucial role in developing students' skills by providing feedback on their errors and improving their knowledge accordingly. However, given large class sizes, instructors cannot realistically attend to each individual student's errors. To address these issues, researchers have developed numerous Automatic Assessment (AA) systems that evaluate students' programs, provide instant feedback on errors, and reduce instructors' workload. Because the pool of existing systems is large, it is difficult to cover every system in one study. Therefore, this paper provides a comprehensive overview of a selection of existing systems based on three analysis approaches: dynamic, static, and hybrid. It also discusses the strengths and limitations of these systems and offers recommendations regarding AA specifications for novice programming, which may help standardize these systems.
Assessing students in computer programming is a challenge for instructors, especially at the introductory level, where enrollment is typically high. This study therefore presents a novel approach to assessing students' programming competency using Bloom's taxonomy. Its novelty lies in a set of rules that quantify attained competencies with respect to the cognitive levels of Bloom's taxonomy. Unlike previous studies, in which cognitive levels served as a scale for writing questions while competency assessment was performed manually, the rule-based method proposed here uses an automatic decision-making process to map students' competency directly to the corresponding cognitive levels from the written code, without any prior mapping of questions to cognitive levels. For this reason, the study focuses on the basic topics of structured Java programming (i.e., selection, repetition, and modularity). The rule-based assessment method was applied to students' code in an introductory-level Java course. Data were collected through an empirical test in which the valid responses of 213 students were gathered and processed through the rule-based method for competency assessment. The quantitative results of the rule-based method were then validated by comparing them with the results of manual assessment, and several statistical methods were used to identify differences between the outcomes of the two assessment methods. The results of this comparative analysis demonstrate the reliability of the proposed rule-based assessment method.
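To make the idea concrete, the sketch below illustrates what a rule-based mapping from student Java code to competency labels *could* look like. The rules, patterns, and Bloom-level thresholds here are purely hypothetical illustrations (the abstract does not publish the paper's actual rule set); only the three course topics (selection, repetition, modularity) are taken from the study.

```python
import re

# Hypothetical rules (NOT the paper's actual rule set): each course topic is
# associated with a regular expression that detects its Java construct.
RULES = {
    "selection":  re.compile(r"\b(if|switch)\b"),           # branching
    "repetition": re.compile(r"\b(for|while|do)\b"),        # loops
    # a crude signature pattern for user-defined methods:
    "modular":    re.compile(r"\b(?:public|private|static)\b.*\w+\s*\([^)]*\)\s*\{"),
}

def detect_topics(java_source: str) -> set:
    """Return the set of course topics whose construct appears in the code."""
    return {topic for topic, pattern in RULES.items()
            if pattern.search(java_source)}

def competency_level(java_source: str) -> str:
    """Toy mapping from the number of topics exercised to a Bloom-style label.
    The thresholds are illustrative only."""
    n = len(detect_topics(java_source))
    if n == 0:
        return "Remember"      # no target construct used
    if n == 1:
        return "Understand"
    if n == 2:
        return "Apply"
    return "Analyze"           # all three topics combined in one solution

student_code = """
public static int sumEven(int[] xs) {
    int s = 0;
    for (int x : xs) {
        if (x % 2 == 0) s += x;
    }
    return s;
}
"""
print(sorted(detect_topics(student_code)))
print(competency_level(student_code))
```

A production system along these lines would parse the code into an abstract syntax tree rather than use regular expressions, but the automatic decision step (rules over detected constructs, with no per-question mapping) follows the same shape.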