2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP) 2019
DOI: 10.1109/icse-seip.2019.00019
Assessing Transition-Based Test Selection Algorithms at Google

Abstract: Continuous Integration traditionally relies on testing every code commit with all impacted tests. This practice requires considerable computational resources, which, at Google's scale, result in delayed test results and high operational costs. To deal with this issue and provide fast feedback, test selection and prioritization methods aim to execute, as soon as possible, the tests most likely to reveal changes in test results. In this paper we present a simulation framework to support the study and evalu…
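The abstract's core idea, prioritizing tests by how likely they are to change outcome (i.e., produce a transition), can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the input format and function names are hypothetical, and real transition-based selection would weigh far more signal than raw historical flip counts:

```python
def count_transitions(history):
    """Count pass/fail flips in a chronological list of outcomes."""
    return sum(1 for a, b in zip(history, history[1:]) if a != b)

def prioritize(test_histories):
    """Rank tests by historical transition count, most volatile first.

    test_histories: dict mapping test name -> list of 'pass'/'fail'
    outcomes ordered by commit (a hypothetical input format).
    """
    return sorted(test_histories,
                  key=lambda t: count_transitions(test_histories[t]),
                  reverse=True)

histories = {
    "test_a": ["pass", "pass", "pass", "pass"],  # stable: 0 transitions
    "test_b": ["pass", "fail", "pass", "fail"],  # 3 transitions
    "test_c": ["pass", "pass", "fail", "fail"],  # 1 transition
}
print(prioritize(histories))  # ['test_b', 'test_c', 'test_a']
```

Under a limited test budget, a scheduler following this heuristic would run `test_b` first, since its history suggests it is the most likely to surface a new transition.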

Cited by 29 publications (26 citation statements)
References 20 publications
“…There are several approaches aiming at flakiness-aware [55] or flakiness-preventing [60] test selection. There are a few approaches to the prediction of flaky tests [56].…”
Section: Results and Analysis
confidence: 99%
“…Zhu et al [66] propose a regression test selection framework to check the output against rules inspired by existing test suites for three techniques. Leong et al [30] propose a test selection algorithm evaluation method and evaluate five potential regression test selection algorithms, finding that the test selection problem remains largely open. Najafi et al [41] studied the impact of test execution history on test selection and prioritization techniques.…”
Section: Evaluation Framework for Similar Techniques
confidence: 99%
“…The authors complemented the DeFlaker dataset, maintaining the information on flaky tests, by rerunning 100 times the test suites of each project in the most recent version present in GitHub at the time of the study; test cases that had a consistent outcome across all executions (e.g., the test passes 100 times) were flagged as non-flaky. The dataset is accessible in a replication package available online [7]. Note that the tests labeled as flaky could come from different versions of each software project [12], while the tests labeled as non-flaky all come from the same version (the last at the time of rerun) of each software project [21].…”
Section: B. Experimental Materials A: Evaluation Dataset
confidence: 99%
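The rerun-based labeling described in the excerpt above can be sketched as follows. This is an illustrative simplification under stated assumptions: `run_test` is a hypothetical zero-argument callable returning a pass/fail outcome, and the 100-rerun threshold mirrors the excerpt, not a fixed standard:

```python
def label_flakiness(run_test, reruns=100):
    """Label a test by rerunning it and checking outcome consistency.

    run_test: zero-argument callable returning True (pass) or False (fail).
    A test with a consistent outcome across all reruns is flagged
    'non-flaky'; any variation across reruns flags it 'flaky'.
    """
    outcomes = {run_test() for _ in range(reruns)}
    return "flaky" if len(outcomes) > 1 else "non-flaky"

print(label_flakiness(lambda: True))  # always passes -> 'non-flaky'
```

As the excerpt notes, this labeling only certifies behavior on the version that was rerun: a test that never flips in 100 executions of one version may still be flaky on another.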
“…Flakiness hinders regression testing in many ways [2]-[5], especially in a Continuous Integration (CI) environment where ideally all tests must pass before a change can be integrated, or in other words any failing test must be fixed before a release. Indeed, at Google, almost 16% of individual tests contain some form of flakiness [6], and these flaky tests are the cause of 84% of all observed transitions (i.e., changes from pass to fail or vice versa in the test results across project commits) [7]. A non-negligible percentage of flaky tests is also observed at Microsoft: while monitoring five projects over a one-month period, 4.6% of individual tests were identified as flaky [8].…”
Section: Introduction
confidence: 99%