Proceedings of the ACM International Conference Companion on Object Oriented Programming Systems Languages and Applications Companion, 2010
DOI: 10.1145/1869542.1869567

Mutation analysis vs. code coverage in automated assessment of students' testing skills

Cited by 46 publications (35 citation statements)
References 12 publications
“…Although code coverage provides feedback about what percentage of solution code was executed, it does not assess test adequacy or test quality. As an alternative, Aaltonen et al [5] proposed using mutation analysis to assess the quality of student-written tests in Java assignments.…”
Section: Automated Grading Systems and Their Evaluation Measures
confidence: 99%
“…Aaltonen et al [5] proposed the use of mutation analysis for assessing the quality of student-written tests in Java assignments. They used Javalanche to generate mutants from a student's solution and to run the student's test cases to check how many mutants were detected.…”
Section: Mutation Analysis for Assessing Tests
confidence: 99%
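The workflow quoted above (generate mutants from the student's solution, run the student's tests against each mutant, and count how many are detected) can be illustrated with a minimal, self-contained sketch. It does not use Javalanche's actual API; the solution, the hand-written mutants, and the test oracle below are hypothetical stand-ins.

```java
import java.util.List;
import java.util.function.IntUnaryOperator;

// Minimal sketch of mutation-analysis scoring (not Javalanche's API).
// Each "mutant" is a behavioural variant of the solution; a mutant is
// killed when the student's tests reject its output.
public class MutationScoreSketch {

    // Hypothetical student solution: increment a value by one.
    static final IntUnaryOperator original = x -> x + 1;

    // Hand-written stand-ins for automatically generated mutants.
    static final List<IntUnaryOperator> mutants = List.of(
            x -> x - 1,   // '+' replaced by '-'
            x -> x + 2,   // constant 1 replaced by 2
            x -> x + 1    // equivalent mutant: behaves like the original
    );

    // Stand-in for the student's test cases: passes iff the given
    // implementation produces the expected results on the chosen inputs.
    static boolean studentTestsPass(IntUnaryOperator impl) {
        return impl.applyAsInt(5) == 6 && impl.applyAsInt(0) == 1;
    }

    public static void main(String[] args) {
        // Sanity check: the student's tests pass on the unmutated solution.
        assert studentTestsPass(original);

        long killed = mutants.stream()
                             .filter(m -> !studentTestsPass(m))
                             .count();
        // Prints "Mutation score: 2/3" - the equivalent mutant survives.
        System.out.printf("Mutation score: %d/%d%n", killed, mutants.size());
    }
}
```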
“…As tests written in compiled languages, such as Java, do not compile against a solution that differs in structure from the author's solution, no capability for all-pairs testing is available. Aaltonen et al [1] proposed using mutation analysis to evaluate the adequacy of the tests, but the computational overhead makes it impractical for generating real-time feedback and for use in assessment tools.…”
Section: Background and Related Work
confidence: 99%
“…The rationale behind code coverage is that the more code is executed during testing, the higher the chance of finding flaws in it. However, code coverage may falsely indicate test quality, as it does not check whether the executed code has been tested against its expected behavior [1,9]. Moreover, a student's solution may be incomplete or incorrect.…”
Section: Introduction
confidence: 99%
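The pitfall described in this last statement can be shown concretely. In the hedged JUnit 4 sketch below (the Discount class and both tests are hypothetical), the first test executes every line of the solution and so earns full line and branch coverage, yet it asserts nothing and would kill no mutants; the second test achieves the same coverage but actually checks the expected behaviour.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical student solution.
class Discount {
    double apply(double price) {
        if (price > 100.0) {
            return price * 0.9;   // 10% discount on large orders
        }
        return price;
    }
}

public class DiscountTest {

    // Executes both branches, so a coverage tool reports 100% line and
    // branch coverage, but nothing is asserted: any behaviour-changing
    // mutant of apply() would survive this test.
    @Test
    public void executesCodeWithoutCheckingIt() {
        Discount d = new Discount();
        d.apply(150.0);
        d.apply(50.0);
    }

    // Same coverage, but the expected behaviour is checked, so mutants
    // such as 0.9 -> 0.8 or '*' -> '/' would be detected.
    @Test
    public void executesAndChecksCode() {
        Discount d = new Discount();
        assertEquals(135.0, d.apply(150.0), 1e-9);
        assertEquals(50.0, d.apply(50.0), 1e-9);
    }
}
```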