2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)
DOI: 10.1109/icse-seip.2019.00009
Three Key Checklists and Remedies for Trustworthy Analysis of Online Controlled Experiments at Scale

Cited by 19 publications (11 citation statements). References 33 publications.
“…In addition, this project has proven the feasibility and usefulness of building data quality checks on top of existing third-party experimentation platforms when those platforms do not already perform such checks. We could extend this idea beyond SRM checks and implement additional monitoring capabilities, for example taking guidance from Fabijan et al [2] or Perrin [7].…”
Section: Discussion
confidence: 99%
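The SRM (sample ratio mismatch) checks referred to in the statement above are commonly implemented as a chi-squared goodness-of-fit test on variant assignment counts. The sketch below is an illustrative assumption, not the cited platform's implementation; the function name, the default 50/50 ratio, and the p < 0.001 alert threshold are all hypothetical choices.

```python
import math

def srm_check(control_users, treatment_users, expected_ratio=0.5, alpha=0.001):
    """Chi-squared goodness-of-fit test for sample ratio mismatch (SRM).

    Returns (chi2, p_value, srm_detected). A very small p-value means the
    observed traffic split is unlikely under the configured ratio, which
    usually signals an assignment or logging bug rather than a real effect.
    """
    total = control_users + treatment_users
    expected_control = total * expected_ratio
    expected_treatment = total * (1 - expected_ratio)
    chi2 = ((control_users - expected_control) ** 2 / expected_control
            + (treatment_users - expected_treatment) ** 2 / expected_treatment)
    # Survival function of the chi-squared distribution with 1 degree of
    # freedom: P(X > chi2) = erfc(sqrt(chi2 / 2)), available in the stdlib.
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value, p_value < alpha
```

For example, a 500,000 / 504,000 split under a configured 50/50 ratio yields a p-value far below 0.001 and would be flagged, while a 50,000 / 50,200 split would not.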
“…We also want our experiments to be trustworthy, without creating knowledge or process bottlenecks. Although we will partially rely on checklists such as those suggested by Fabijan et al [2] as well as a structured education curriculum, we are also investing in automation and infrastructure to improve consistency and reliability of our data quality checks.…”
Section: Introduction
confidence: 99%
“…The trustworthiness aspect of online experiments has been an active area of research [8,14,21,24,42]. Experiments that rely on violated assumptions or are susceptible to implementation or other design errors can lead to untrustworthy results that can compromise the conclusions and the value of the experiment.…”
Section: Experimentation Processes and Platforms
confidence: 99%
“…Kohavi et al [24] discuss lessons learned from online controlled experiments that can influence the experiment result, such as carryover effects, experiment duration, and statistical power. Fabijan et al [14] provide essential checklists to prevent companies from overlooking critical trustworthiness aspects of online experiments. In our work, we do not specifically focus on trustworthiness aspects of online experiments, but on how to make the experimentation process science-centric.…”
Section: Experimentation Processes and Platforms
confidence: 99%
“…Apart from lessons learned and common pitfalls reported by experts running experiments (e.g. [5,14]), could we establish an experiment improvement process within a specific company?…”
Section: Synergies Between Disciplines
confidence: 99%