IEEE Software, published by the IEEE Computer Society, 0740-7459/10/$26.00 © 2010 IEEE

focus: management

For many years, researchers and practitioners have analyzed how to successfully manage IT projects. Among them is the Standish Group, which regularly publishes its findings in its Chaos reports. In 1994, Standish reported a shocking 16 percent project success rate; another 53 percent of the projects were challenged, and 31 percent failed outright. Such figures hardly help increase project success. Over the years, they have attracted tremendous attention. However, we question their validity. Robert Glass2,3 and Magne Jørgensen and his colleagues4 indicated that the only way to assess the Chaos results' credibility is to use Standish's data and reiterate their analyses. But there's another way: obtain your own data and reproduce Standish's research to assess its validity. We applied the Standish definitions to our extensive data set of 5,457 forecasts for 1,211 real-world projects totaling hundreds of millions of euros. Our research shows that the Standish definitions of successful and challenged projects have four major problems: they're misleading, one-sided, pervert the estimation practice, and result in meaningless figures.

Misleading Definitions

The Standish Group published the first Chaos report in 1994. It summarized the group's research findings and aimed to investigate the causes of software project failure and to find key ways to reduce such failures.1 The group also intended to identify the scope of software project failures by defining three project categories, which we recall verbatim:

■ Resolution Type 1, or project success. The project is completed on time and on budget, offering all features and functions as initially specified.
■ Resolution Type 2, or project challenged. The project is completed and operational but over budget and over the time estimate, and offers fewer features and functions than originally specified.
■ Resolution Type 3, or project impaired. The project is cancelled at some point during the development cycle.1

To find answers to their research questions, Standish sent out questionnaires. The total sample size was 365 respondents representing 8,380 applications. On the basis of the responses, Standish published overall percentages for each project category.
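Read literally, the three Standish resolution types described above are not even exhaustive: a completed project that is under budget but omits a promised feature matches none of the verbatim definitions. The following minimal sketch (function and field names are ours, not Standish's) makes this concrete by treating every completed project that is not a clear Type 1 as Type 2, which is only one possible interpretation:

```python
def resolution_type(completed, on_time, on_budget, all_features):
    """Classify a project per (one reading of) the Chaos report definitions."""
    if not completed:
        return "Type 3: impaired"    # cancelled during the development cycle
    if on_time and on_budget and all_features:
        return "Type 1: success"     # on time, on budget, all features delivered
    # Everything else is lumped into "challenged", even though the verbatim
    # definition requires over budget AND over time AND fewer features.
    return "Type 2: challenged"

print(resolution_type(completed=True, on_time=True, on_budget=True,
                      all_features=True))   # → Type 1: success
```

Note how the catch-all branch hides the ambiguity: a project finished early and under budget would still be reported as "challenged" under this reading.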
In this article, we show how to quantify the quality of IT forecasts. First, we analyze two metrics previously proposed for analyzing IT forecast data: Boehm's cone of uncertainty and DeMarco's Estimating Quality Factor. We show theoretical problems with the cone of uncertainty (for example, that the conical shape of Boehm's cone is not caused by improved estimation but can also be found when estimation accuracy decreases) and generalize it as a family of distributions that predict IT forecasts on the basis of expected accuracy and predictive bias. With these, we support decision making by providing critical information on IT forecasting quality to IT governors. We illustrate that plotting forecast-to-actual ratios against a predicted distribution reveals potential biases, for instance political ones, involved in IT forecasting. We illustrate our approach by applying it to four real-world organizations (1,824 projects, 12,287 forecasts, more than 1,059 million euros). We show that the distributions of forecast-to-actual ratios vary between organizations in at least three dimensions: in estimation accuracy, in the tendency of forecasts to converge toward the actual over the life of a project, and in systematic bias toward over- and underestimation. Moreover, we illustrate how to use this information to enrich forecast information for decision making. Finally, we point out that systematic biases, if not accounted for, render often-quoted rates of project success meaningless. We survey benchmarks related to forecasting and propose new benchmarks based on our extensive data.
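The bias analysis described above can be illustrated with a toy computation. A forecast-to-actual ratio f/a below 1 means the project was underestimated, so a median ratio well below 1 across many projects signals systematic underestimation. The data below is invented purely for illustration; real analyses of this kind use thousands of forecasts:

```python
from statistics import median

# Hypothetical (forecast, actual) cost pairs for five projects.
pairs = [(100, 140), (80, 95), (250, 240), (60, 90), (120, 160)]

ratios = [f / a for f, a in pairs]   # f/a < 1: forecast was too low
bias = median(ratios)

print(f"median forecast-to-actual ratio: {bias:.2f}")
if bias < 1:
    print("systematic underestimation")   # actuals tend to exceed forecasts
elif bias > 1:
    print("systematic overestimation")
```

With these made-up numbers the median ratio is 0.75, so four of the five forecasts fell short of the eventual actuals and the toy portfolio shows systematic underestimation.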