2018 IEEE Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE)
DOI: 10.1109/maltesque.2018.8368454
How high will it be? Using machine learning models to predict branch coverage in automated testing

Abstract: Software testing is a crucial component in modern continuous integration development environments. Ideally, at every commit, all the system's test cases should be executed and, moreover, new test cases should be generated for the new code. This is especially true in a Continuous Test Generation (CTG) environment, where the automatic generation of test cases is integrated into the continuous integration pipeline. Furthermore, developers want to achieve a minimum level of coverage for every build of their syst…

Cited by 25 publications (19 citation statements)
References: 26 publications
“…Furthermore, other Machine Learning applications have been experimented with for software testing. Daka et al [22] adopted the readability prediction model originally proposed by Buse and Weimer [23] in the context of automatic test case generation with the goal of improving the comprehensibility of the generated tests, while Grano et al [24] preliminarily assessed the feasibility of branch coverage prediction models, showing promising results. Our work can be seen as complementary to these papers, as it aims at estimating test-case effectiveness as measured by mutation score.…”
Section: Related Work
confidence: 99%
“…At the end of such a procedure, the RANDOM FOREST REGRESSOR turns out to be the best algorithm for our prediction model: indeed, it achieves the best MAE across all the possible models built experimenting with the two tools and the four search budgets. It is worth noting that this is the first improvement over our previous work. In fact, in this study, we introduce the RFC, comparing it with the three algorithms previously evaluated, ie, Huber Regression, Support Vector Regression, and Multilayer Perceptron.…”
Section: Results
confidence: 99%
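The model-selection step quoted above picks the regressor with the lowest mean absolute error (MAE). A minimal sketch of that selection logic is shown below; the model names match the algorithms named in the excerpt, but all coverage values are illustrative, not the paper's actual data.

```python
# Hypothetical sketch: choosing the best regressor by mean absolute error (MAE).
# All numeric values are made up for illustration.

def mae(y_true, y_pred):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Branch coverage actually achieved by the test-generation tool (illustrative).
actual = [0.80, 0.55, 0.91, 0.40, 0.73]

# Coverage predicted by four candidate regressors (illustrative values).
predictions = {
    "RandomForestRegressor": [0.78, 0.57, 0.90, 0.42, 0.70],
    "HuberRegressor":        [0.70, 0.60, 0.85, 0.50, 0.65],
    "SVR":                   [0.75, 0.50, 0.95, 0.35, 0.80],
    "MLPRegressor":          [0.85, 0.45, 0.88, 0.55, 0.60],
}

# Lower MAE is better, so the selected model minimizes the score.
scores = {name: mae(actual, preds) for name, preds in predictions.items()}
best = min(scores, key=scores.get)
```

With these illustrative numbers the random forest wins, mirroring the study's reported outcome, though the real comparison was run per tool and per search budget.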
“…We averaged the results of different generations because of the nondeterministic nature of the algorithms underlying test‐data generation tools. It is worth noting that such multiple execution represents an improvement over our previous work, where we executed the tools once per class and with the default search budget only.…”
Section: Data Set and Features Description
confidence: 99%
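The averaging described in this excerpt can be sketched in a few lines: each class is run through the nondeterministic generation tool several times, and the per-run coverage values are averaged into a single label. Class names and coverage values below are illustrative, not from the paper's data set.

```python
# Hypothetical sketch: smoothing run-to-run variance of a nondeterministic
# test-generation tool by averaging coverage over repeated runs per class.
from statistics import mean

# Coverage observed in several independent runs per class (illustrative).
runs_per_class = {
    "org.example.Parser":   [0.72, 0.68, 0.75],
    "org.example.Renderer": [0.91, 0.89, 0.93],
}

# One averaged coverage value per class, used as the regression target.
avg_coverage = {cls: mean(runs) for cls, runs in runs_per_class.items()}
```

Averaging over repeated runs gives a more stable target variable than a single execution, which is exactly the improvement the excerpt claims over the earlier single-run setup.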