Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM 2010)
DOI: 10.1145/1852786.1852789

Can we evaluate the quality of software engineering experiments?

Abstract: Context: The authors wanted to assess whether the quality of published human-centric software engineering experiments was improving. This required a reliable means of assessing the quality of such experiments. Aims: The aims of the study were to confirm the usability of a quality evaluation checklist, determine how many reviewers were needed per paper that reports an experiment, and specify an appropriate process for evaluating quality. Method: With eight reviewers and four papers describing human-centric softw…



Cited by 48 publications (37 citation statements)
References 21 publications (28 reference statements)
“…With respect to RQ1, these results suggest that using two reviewers is sufficient, provided that there is a period of discussion among the reviewers. These results were similar to those found using the unweighted Kappa [26].…”
Section: Figure 1, Comparison of the Weighted Kappa Null Distribution (supporting)
confidence: 88%
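The weighted kappa referenced in this statement measures chance-corrected agreement between two raters on an ordinal scale, penalizing large disagreements more than near-misses. As a minimal sketch (the ratings below are invented and scikit-learn is assumed to be available; this is not the study's own analysis code):

```python
# Minimal sketch: weighted Cohen's kappa for two reviewers scoring the
# same papers on an ordinal quality scale. Ratings are invented examples.
from sklearn.metrics import cohen_kappa_score

# Hypothetical quality scores from two reviewers for eight papers
reviewer_a = [3, 4, 2, 5, 4, 3, 1, 4]
reviewer_b = [3, 5, 2, 4, 4, 3, 2, 4]

# weights="linear" gives partial credit for near-misses, unlike the
# unweighted kappa mentioned in [26]
kappa = cohen_kappa_score(reviewer_a, reviewer_b, weights="linear")
print(f"Linearly weighted kappa: {kappa:.2f}")
```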
“…Initial results for Studies 1 and 2 have already been presented in [26]. However, for this paper the analyses have been changed to use the more appropriate weighted Kappa and ICC reliability metrics.…”
Section: Results for Studies 1 and 2 (mentioning)
confidence: 99%
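The ICC mentioned alongside the weighted kappa summarizes reliability when several reviewers rate the same set of papers. A rough numpy sketch of one common variant, ICC(2,1) (two-way random effects, absolute agreement, single rater, per Shrout & Fleiss), using an invented rating matrix:

```python
# Sketch of ICC(2,1); the 4x3 rating matrix is illustrative, not study data.
import numpy as np

# rows = papers being evaluated, columns = reviewers
ratings = np.array([
    [4.0, 4.0, 5.0],
    [2.0, 3.0, 2.0],
    [5.0, 4.0, 4.0],
    [3.0, 3.0, 4.0],
])

n, k = ratings.shape  # n papers, k reviewers
grand = ratings.mean()
ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
ss_total = ((ratings - grand) ** 2).sum()

ms_rows = ss_rows / (n - 1)  # between-paper variance
ms_cols = ss_cols / (k - 1)  # between-reviewer variance
ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

# Shrout & Fleiss ICC(2,1): reliability of a single reviewer's ratings
icc21 = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
)
print(f"ICC(2,1) = {icc21:.2f}")
```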
“…Some researchers realized the need to analyze the quality of experiments in an objective way. In [27], the authors described an attempt to develop a procedure for evaluating the quality of experiments by means of a quality checklist. This checklist is classified into three groups: • Questions on aims: the aims of our research are clearly stated for each experiment, specifying hypotheses in each case.…”
Section: Checklist for Evaluating the Quality of the Experiments (mentioning)
confidence: 99%
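As a purely illustrative sketch of how such a checklist might be encoded and scored: only the "questions on aims" group is named in the quoted statement, so the other group names and all items below are hypothetical, not the actual checklist from [27].

```python
# Illustrative only: one way to encode a quality checklist and compute a
# simple per-paper score. Groups beyond "aims" are hypothetical stand-ins.
checklist = {
    "aims": [
        "Are the aims of the research clearly stated?",
        "Are hypotheses specified for each experiment?",
    ],
    "conduct": ["Is the experimental procedure clearly described?"],
    "analysis": ["Are the analysis methods appropriate and justified?"],
}

def quality_score(answers: dict) -> float:
    """Fraction of checklist items answered 'yes', pooled across groups."""
    total = sum(len(items) for items in checklist.values())
    yes = sum(sum(vals) for vals in answers.values())
    return yes / total

# Hypothetical reviewer answers, one boolean per checklist question
answers = {"aims": [True, True], "conduct": [True], "analysis": [False]}
print(f"Quality score: {quality_score(answers):.2f}")  # 0.75
```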