Computer science and software engineering courses commonly use automated grading systems to evaluate student programming assignments. These systems provide various types of feedback, such as whether student code passes instructor test cases. The literature contains little data on the association between feedback policies and student learning. This work analyzes the association between different types of feedback and student learning, specifically on the topic of software testing. Our study examines a second-semester computer programming course with a total of 1,556 students over two semesters. The course contained five programming projects in which students wrote code according to a specification as well as test cases for their code. Students submitted their code and test cases to an automated grading system. The test cases were evaluated by running them against intentionally buggy instructor solutions. The first semester comprised the control group, while the second semester comprised the experiment group. The two groups received different kinds of feedback on their test cases. The control group was shown whether their tests were free of false positives. In addition to the same feedback as the control group, the experiment group was shown how many intentionally buggy instructor solutions their tests exposed. We measured the quality of test cases from both the control and experiment groups. After students in the experiment group completed two projects with the additional feedback on their test cases, they completed a final project without it. Despite not receiving the additional feedback, the experiment group's test cases were of higher quality, exposing on average 5% more buggy solutions than those of students from the control group. We found this difference to be statistically significant after controlling for GPA and whether students worked alone or with a partner.
An Autonomous Intelligent Radar System (AIRS) deployed on a surveillance aircraft is briefly described. A Net-Centric-compliant approach for integrating AIRS is presented. An overview of unmanned autonomous air vehicle research is provided, along with a discussion of some of the issues with integrating AIRS aboard these vehicles.