Proceedings of the 43rd ACM Technical Symposium on Computer Science Education 2012
DOI: 10.1145/2157136.2157202
Running students' software tests against each others' code

Cited by 32 publications (21 citation statements)
References 10 publications
“…These results clearly contrast from earlier experimental results published by the authors, where all student test suites were run against all student programs [5]. In that case, student test suites did find bugs in many student programs, leading to the conclusion that student test suites were effective at finding bugs.…”
Section: Master Suite Tests Duplicated By Students
confidence: 99%
“…These results suggest that educators should strive to reinforce test design techniques intended to find bugs, rather than simply confirming that features work as expected. Further, assessment approaches such as running all students' test suites against all other students' programs [5][6] might be a viable alternative for reinforcing the value of test design in the classroom.…”
Section: Discussion
confidence: 99%
“…As a result, these systems cannot evaluate partial or incomplete submissions and gives no credit in those situations. Edwards et al first presented a solution for assessing partial or incomplete Java programs by applying late binding to test cases [11].…”
Section: Automated Grading Systems and Their Evaluation Measures
confidence: 99%
“…Results of applying our solutions in 8 assignments from CS1 and CS2 courses, encompassing 147 student programs, shows the feasibility of the approach and provides insight into the quality of students' tests. To our knowledge, we are the first to successfully apply all-pairs testing [2] in automated grading. Outcomes on applying mutation analysis are under review.…”
Section: Discussion
confidence: 99%
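The citation statements above describe the paper's core technique: running every student's test suite against every other student's program (all-pairs testing). A minimal sketch of that idea, in Python, is below; the function names, the score representation, and the toy submissions are illustrative assumptions, not the authors' actual grading infrastructure.

```python
# Hedged sketch of all-pairs testing for automated grading:
# each student's test suite is run against each student's program,
# yielding a score matrix. Names here are hypothetical.

def run_suite(tests, program):
    """Run one test suite against one program.
    Returns the fraction of tests the program passes."""
    if not tests:
        return 0.0
    passed = sum(1 for test in tests if test(program))
    return passed / len(tests)

def all_pairs(submissions):
    """submissions: dict mapping student id -> (program, test_suite).
    Returns scores[tester][testee] = pass rate of tester's suite
    when run against testee's program."""
    scores = {}
    for tester, (_, tests) in submissions.items():
        scores[tester] = {}
        for testee, (program, _) in submissions.items():
            scores[tester][testee] = run_suite(tests, program)
    return scores

# Toy example: "programs" are two-argument functions, "tests" are
# predicates over a program. Bob's program is deliberately buggy.
subs = {
    "alice": (lambda a, b: a + b,
              [lambda p: p(1, 2) == 3, lambda p: p(0, 0) == 0]),
    "bob":   (lambda a, b: a - b,          # buggy "add"
              [lambda p: p(2, 2) == 0]),   # test that only his bug passes
}
matrix = all_pairs(subs)
```

In this toy run, Alice's suite catches Bob's bug (his program passes only half of her tests), while Bob's single confirmation-style test passes his own buggy program and fails Alice's correct one — mirroring the discussion above about rewarding bug-finding test design rather than feature confirmation.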