Proceedings of the 48th Design Automation Conference 2011
DOI: 10.1145/2024724.2024777
Test-case generation for embedded Simulink via formal concept analysis

Abstract: Mutation testing suffers from the high computational cost of automated test-vector generation, due to the large number of mutants that can be derived from programs and the cost of generating test-cases in a white-box manner. We propose a novel algorithm for mutation-based test-case generation for Simulink models that combines white-box testing with formal concept analysis. By exploiting similarity measures on mutants, we are able to effectively generate small sets of short test-cases that achieve high coverage…
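The abstract's core idea — grouping similar mutants so that one test case can cover a whole group — can be illustrated with a minimal formal concept analysis (FCA) sketch. All names, the kill matrix, and the enumeration strategy below are illustrative assumptions; this is not a reproduction of the paper's algorithm.

```python
# Hypothetical sketch: mutants as FCA objects, test cases as attributes.
# A concept is a pair (mutants, tests) closed under both derivation operators.
from itertools import combinations

# Illustrative kill matrix: mutant -> set of test cases that detect (kill) it.
kills = {
    "m1": {"t1", "t2"},
    "m2": {"t2"},
    "m3": {"t1", "t2"},
    "m4": {"t3"},
}

all_tests = set().union(*kills.values())

def extent(tests):
    """Mutants killed by every test in `tests` (derivation operator)."""
    return {m for m, ts in kills.items() if tests <= ts}

def intent(mutants):
    """Tests that kill every mutant in `mutants` (dual derivation)."""
    if not mutants:
        return set(all_tests)
    return set.intersection(*(kills[m] for m in mutants))

# Naive concept enumeration: close every attribute subset both ways.
concepts = set()
for r in range(len(all_tests) + 1):
    for ts in combinations(sorted(all_tests), r):
        ms = extent(set(ts))
        concepts.add((frozenset(ms), frozenset(intent(ms))))

# Mutants sharing a concept (here m1 and m3) are "similar": a single
# test-generation effort for the group can kill all of them at once.
```

The brute-force enumeration is exponential in the number of tests and only serves to make the concept lattice tangible; a practical implementation would use an incremental FCA algorithm.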


Cited by 50 publications (37 citation statements). References 16 publications.
“…The notion of Simulink model mutations has been addressed previously by Zhan and Clark [18], He et al. [19], and Araujo et al. [20]. In these examples, they describe mutations that explicitly try to mutate a model's run-time properties.…”
Section: Discussion (confidence: 99%)
“…Other Simulink model mutation frameworks have been proposed by Zhan and Clark [9] and He et al. [10]. Unlike our framework, their motivation is testing and fault analysis of concrete Simulink models, and ensuring sufficient test-case coverage for the generated mutants.…”
Section: Related Work (confidence: 99%)
“…State-of-the-art formal methods tools, including static analysis, theorem proving, and model checking, are insufficient for tackling the challenges in CPS verification and validation [32]. Other verification techniques, including model-based testing [11] and simulation [16], have high learning curves, impractical development costs, and scalability issues. Domain-specific tools (e.g., passive distributed assertions [29] and symbolic execution [30]), though more scalable, fail to formally verify either qualitative constraints (e.g., the ordering of events), quantitative ones (e.g., timing), or both.…”
Section: Introduction (confidence: 99%)