There have been many changes in statistical theory in the past 30 years, including increased evidence that non-robust methods may fail to detect important results. The statistical advice available to software engineering researchers needs to be updated to address these issues. This paper aims both to explain the new results in the area of robust analysis methods and to provide a large-scale worked example of the new methods. We summarise the results of analyses of the Type 1 error efficiency and power of standard parametric and non-parametric statistical tests when applied to non-normal data sets. We identify parametric and non-parametric methods that are robust to non-normality. We present an analysis of a large-scale software engineering experiment to illustrate their use. We illustrate the use of kernel density plots, and parametric and non-parametric methods using four different software engineering data sets. We explain why the methods are necessary and the rationale for selecting a specific analysis. We suggest using kernel density plots rather than box plots to visualise data distributions. For parametric analysis, we recommend trimmed means, which can support reliable tests of the differences between the central location of two or more groups.
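The trimmed mean recommended in this abstract can be illustrated with a minimal sketch (not taken from the paper; the sample data and the 20% trim proportion are illustrative assumptions). Trimming discards the most extreme observations from each tail before averaging, so a single outlier cannot drag the estimate of central location:

```python
# Illustrative sketch of a trimmed mean, the robust location estimate
# the authors recommend for non-normal data. Not code from the paper.

def trimmed_mean(values, proportion=0.2):
    """Drop the lowest and highest `proportion` of values, then average the rest."""
    xs = sorted(values)
    g = int(len(xs) * proportion)          # observations trimmed from each tail
    kept = xs[g:len(xs) - g] if g else xs
    return sum(kept) / len(kept)

data = [1.0, 1.2, 1.1, 0.9, 1.3, 1.0, 9.5]   # 9.5 is an extreme outlier
plain = sum(data) / len(data)                 # ordinary mean, pulled up by 9.5
robust = trimmed_mean(data)                   # 1.12, unaffected by the outlier
```

Here the ordinary mean is roughly 2.29 while the 20% trimmed mean is 1.12, which better represents where most of the data lie; this is the kind of robustness to non-normality the abstract refers to.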
When conducting a systematic literature review, researchers usually determine the relevance of primary studies on the basis of the title and abstract. However, experience indicates that the abstracts for many software engineering papers are of too poor a quality to be used for this purpose. A solution adopted in other domains is to employ structured abstracts to improve the quality of information provided. This study consists of a formal experiment to investigate whether structured abstracts are more complete and easier to understand than non-structured abstracts for papers that describe software engineering experiments. We constructed structured versions of the abstracts for a random selection of 25 papers describing software engineering experiments. The 64 participants were each presented with one abstract in its original unstructured form and one in a structured form, and for each one were asked to assess its clarity (measured on a scale of 1 to 10) and completeness (measured with a questionnaire that used 18 items). Based on a regression analysis that adjusted for participant, abstract, type of abstract seen first, knowledge of structured abstracts, software engineering role, and preference for conventional or structured abstracts, the use of structured abstracts increased the completeness score by 6.65 (SE 0.37, p < 0.001) and the clarity score by 2.98 (SE 0.23, p < 0.001). 57 participants reported their preferences regarding structured abstracts: 13 (23%) had no preference; 40 (70%) preferred structured abstracts; four preferred conventional abstracts. Many conventional software engineering abstracts omit important information. Our study is consistent with studies from other disciplines and confirms that structured abstracts can improve both information content and readability.
Although care must be taken to develop appropriate structures for different types of article, we recommend that Software Engineering journals and conferences adopt structured abstracts.
A recent report on the state of the UK information technology (IT) industry based most of its findings and recommendations on expert opinion. It is surprising that the report was unable to incorporate more empirical evidence. This paper aims to assess whether it is necessary to base IT industry and academic policy on expert opinion rather than on empirical evidence. Current evidence related to the rate of project failure is identified and the methods used to accumulate that evidence discussed. This shows that the report failed to identify relevant evidence and that most evidence related to project failure is based on convenience samples. The status of empirical research in the computing disciplines is reviewed, showing that empirical evidence covers a restricted range of subjects and seldom addresses the 'Society' level of analysis. Other more robust designs that would address large-scale IT questions are discussed. We recommend adopting a more systematic approach to accumulating and reporting evidence. In addition, we propose using quasi-experimental designs developed and used in the social sciences to improve the methodology used for undertaking large-scale empirical studies in software engineering. The report's main concerns were:
- The low level of professionalism in software engineering (SE).
- The poor standard of education in UK universities and management schools.
- Lack of understanding of the importance of project management.
- Lack of appreciation of the need for risk management.
- Lack of appreciation of the critical role software architects play in IT projects.
- The urgent need to promote best practice among IT practitioners.
- The need for basic research into complexity and associated issues.