2019
DOI: 10.1002/smr.2238
A machine learning approach for classification of equivalent mutants

Abstract: Mutation testing is a fault-based technique for assessing the quality of test suites by inducing artificial syntactic faults, or mutants, in a source program. However, some mutants have the same semantics as the original program and cannot be detected by any test input; these are known as equivalent mutants. The equivalent mutant problem (EMP) is undecidable, so manual human effort is required to classify a mutant as equivalent or killable. The constraint-based testing (CBT) theory suggests the use of mathematical constraints which…
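To illustrate the equivalent mutant problem the abstract describes, here is a minimal sketch (not drawn from the paper; Python is used purely for illustration): the mutated loop guard behaves identically to the original on every input, so no test case can kill the mutant.

def sum_list(xs):
    # Original program: sum the elements of a list.
    total = 0
    i = 0
    while i < len(xs):   # original loop guard
        total += xs[i]
        i += 1
    return total

def sum_list_mutant(xs):
    # Mutant: the relational operator '<' is replaced by '!='.
    # Because i increases by exactly 1 per iteration, the loop can only exit
    # when i reaches len(xs), so 'i != len(xs)' and 'i < len(xs)' are
    # indistinguishable on every input: this mutant is equivalent.
    total = 0
    i = 0
    while i != len(xs):  # mutated loop guard
        total += xs[i]
        i += 1
    return total

A killable mutant, by contrast, would be a change such as replacing '+=' with '-=', which any test with a non-empty input list detects.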

Cited by 6 publications (10 citation statements)
References 48 publications
“…The loss function of both training and validation is shown in Figure 9. The obtained accuracy of 94% outperforms the models proposed in [2,5,7]. Furthermore, our results show fewer False Negatives during testing and lower False Positives, which is significant as it implies that there is a decrease in the risk of labeling a mutant as equivalent when it is not.…”
Section: Discussion of Results
confidence: 65%
“…Furthermore, to adequately evaluate our model, we achieved 94% accuracy, higher than other models [2,5,17], which shows it predicted very few False Negatives during testing; see Figure 10. There were equally lower False Positives, thereby indicating a decrease in the risk of labelling a mutant as equivalent, contributing to the strengthening of the test suites.…”
Section: Discussion of Results
confidence: 86%