2017 IEEE 24th International Conference on Software Analysis, Evolution and Reengineering (SANER)
DOI: 10.1109/saner.2017.7884636

Improving fault localization for Simulink models using search-based testing and prediction models

Abstract: One promising way to improve the accuracy of fault localization based on statistical debugging is to increase diversity among test cases in the underlying test suite. In many practical situations, adding test cases is not a cost-free option because test oracles are developed manually or running test cases is expensive. Hence, we need test suites that are both diverse and small to improve debugging. In this paper, we focus on improving fault localization of Simulink models by generating test…
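The "statistical debugging" the abstract refers to ranks program elements by how strongly their coverage correlates with failing tests. As a minimal illustrative sketch (not the paper's own implementation), the widely used Ochiai metric can be computed from a coverage spectrum like this; the block names and test data below are hypothetical:

```python
import math

def ochiai_suspiciousness(coverage, results):
    """Rank program elements by Ochiai suspiciousness.

    coverage: one set of covered elements per test case.
    results:  parallel list of booleans, True if the test passed.
    """
    total_failed = sum(1 for passed in results if not passed)
    elements = {e for cov in coverage for e in cov}
    scores = {}
    for e in elements:
        # ef/ep: failing/passing tests that executed element e
        ef = sum(1 for cov, passed in zip(coverage, results)
                 if e in cov and not passed)
        ep = sum(1 for cov, passed in zip(coverage, results)
                 if e in cov and passed)
        denom = math.sqrt(total_failed * (ef + ep))
        scores[e] = ef / denom if denom else 0.0
    # Most suspicious elements first
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical spectrum: three tests, three covered blocks; test 2 fails.
coverage = [{"b1", "b2"}, {"b1", "b3"}, {"b2", "b3"}]
results = [True, False, True]
ranking = ochiai_suspiciousness(coverage, results)
```

With this spectrum, "b1" and "b3" (both covered by the failing test) score equally high, while "b2" (covered only by passing tests) ranks last; the paper's point is that a more diverse test suite sharpens exactly this kind of ranking.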

Cited by 30 publications (26 citation statements)
References 75 publications (74 reference statements)
“…Our results show that using our optimal prediction model, on average, by generating only 11 test cases, we are able to obtain an accuracy improvement close to that obtained by 25 test cases when our strategy to stop test generation is not used. This paper extends a previous conference paper [43] published at the 24th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2017). This paper offers major extensions over our previous paper in the following areas: (1) We consider a new test objective, output diversity [53], and study its effectiveness for fault localization.…”
supporting
confidence: 72%
“…This paper offers major extensions over our previous paper in the following areas: (1) We consider a new test objective, output diversity [53], and study its effectiveness for fault localization. We further compare output diversity with the three test objectives (i.e., coverage dissimilarity, coverage density and number of dynamic basic blocks) discussed in our previous paper [43]. Our results do not indicate a significant difference in fault localization accuracy results obtained based on output diversity compared to the accuracy results obtained based on the test objectives discussed in our previous work.…”
mentioning
confidence: 58%
“…However, this is a black-box technique that does not attempt to open the model and explain the failure in terms of its internal signals and components. Other approaches are based on fault-localization [5,7,15,16,14], a statistical technique measuring the code coverage in the failed and successful tests. This method provides a limited explanation that does not often help the engineers to understand if the selected code is really faulty and how the fault has propagated across the components resulting on actual failure.…”
Section: Introduction
mentioning
confidence: 99%