A probabilistic framework for mutation testing in deep neural networks
2023
DOI: 10.1016/j.infsof.2022.107129

Cited by 11 publications (5 citation statements)
References 18 publications

“…Heuristically, we just assume it's a potential point of interest, better than any random point if we are constrained by the number of test environments to be generated and tested. Finally, Tambon et al [36] study more in-depth the effect that the choice of Deep Learning model instances has over the result of Mutation Testing in Supervised Learning using DeepCrime's approach. They showed that this choice can affect the outcome of the Mutation Testing and that using the Bayesian approach can mitigate this issue.…”
Section: Related Work (mentioning, confidence: 99%)

“…Tambon et al [12] introduced Probabilistic Mutation Testing (PMT) for DNNs, considering the stochasticity during training. PMT provides consistent decisions on mutant killing and demonstrated effectiveness using three models and eight mutation operators.…”
Section: Mutation (mentioning, confidence: 99%)

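To make the idea in the statement above concrete, the sketch below phrases a probabilistic kill decision over several independently trained instances of the original model and of a mutant: the mutant is killed only if the estimated probability that a mutant instance underperforms an original instance exceeds a threshold. This is a minimal sketch of the general idea, not the formulation of Tambon et al; the function name, the resampling estimator, the accuracy values and the 0.9 threshold are all illustrative assumptions.

```python
import numpy as np

def prob_mutant_killed(orig_acc, mut_acc, n_samples=10_000, seed=0):
    """Estimate P(accuracy of a randomly drawn mutant instance is lower
    than that of a randomly drawn original instance) by resampling the
    observed per-run accuracies."""
    rng = np.random.default_rng(seed)
    orig = rng.choice(orig_acc, size=n_samples, replace=True)
    mut = rng.choice(mut_acc, size=n_samples, replace=True)
    return float(np.mean(mut < orig))

# Accuracies from 10 independent training runs of the original model and
# of one mutant (values are invented for illustration); the run-to-run
# spread is what training stochasticity looks like in practice.
original_runs = [0.912, 0.908, 0.915, 0.910, 0.913, 0.909, 0.911, 0.914, 0.907, 0.912]
mutant_runs = [0.896, 0.901, 0.889, 0.903, 0.894, 0.898, 0.900, 0.892, 0.897, 0.895]

p_kill = prob_mutant_killed(original_runs, mutant_runs)
# Kill the mutant only when the evidence is strong; the 0.9 threshold is
# an illustrative choice, not a value calibrated by the paper.
verdict = "killed" if p_kill > 0.9 else "not killed"
print(f"P(mutant instance worse than original instance) = {p_kill:.3f} -> {verdict}")
```

Basing the verdict on a distribution over instances, rather than on one arbitrary trained model, is what makes the decision consistent across repetitions of the experiment.
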
“…Mutation testing involves making small modifications to the model, training data, or source code to reveal potential vulnerabilities and mistakes that may not be identified by conventional methods. Various approaches have been proposed for evaluating test suites of neural networks using novel mutation operators [10,11,12]. Coverage criteria are used to define important areas for test suite evaluation.…”
Section: Introduction (mentioning, confidence: 99%)

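To illustrate the kind of small modification such operators make, the sketch below applies two toy mutation operators, label noise on the training data and Gaussian perturbation of model weights, over plain NumPy arrays. The operator names, the noise ratio and the sigma value are illustrative assumptions and do not correspond to a specific operator set such as DeepCrime's.

```python
import numpy as np

rng = np.random.default_rng(42)

def mutate_labels(y, ratio=0.05, num_classes=10):
    """Training-data mutation: flip a small fraction of labels to a
    different, randomly chosen class."""
    y_mut = y.copy()
    idx = rng.choice(len(y), size=max(1, int(ratio * len(y))), replace=False)
    y_mut[idx] = (y_mut[idx] + rng.integers(1, num_classes, size=len(idx))) % num_classes
    return y_mut

def mutate_weights(weights, sigma=0.01):
    """Model mutation: add small Gaussian noise to every weight tensor."""
    return [w + rng.normal(0.0, sigma, size=w.shape) for w in weights]

# Toy stand-ins for a labelled training set and a trained model's weights.
labels = rng.integers(0, 10, size=1000)
weights = [rng.normal(size=(784, 64)), rng.normal(size=(64, 10))]

mutated_labels = mutate_labels(labels)
mutated_weights = mutate_weights(weights)
print("labels flipped:", int(np.sum(mutated_labels != labels)))
print("largest weight shift:", float(np.max(np.abs(mutated_weights[0] - weights[0]))))
```

In a full workflow each mutated artifact is retrained or re-evaluated, and the mutant counts as detected when the test suite's verdict changes.
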
“…Tambon et al (2023) describe silent bugs in popular deep learning frameworks that escape notice due to undetected error propagation.…”
(mentioning, confidence: 99%)