Empirical studies have shown complexity metrics to be good predictors of testing effort and maintainability in traditional, imperative programming languages. Empirical validation studies have also shown that complexity is a good predictor of initial quality and reliability in object-oriented (OO) software. To date, one of the most empirically validated OO complexity metrics is the Chidamber and Kemerer Weighted Methods per Class (WMC) metric. However, there are many more OO complexity metrics whose predictive power has not been as extensively explored. In this study, we explore the predictive ability of several complexity-related metrics for OO software that have not been heavily validated. We do this by examining their ability to measure quality in an evolutionary software process, correlating these metrics with defect data for six versions of Rhino, an open-source implementation of JavaScript written in Java.
Using statistical techniques such as Spearman's correlation, principal component analysis, binary logistic regression models, and their respective validations, we show that some lesser-known complexity metrics, including Michura et al.'s standard deviation method complexity and Etzkorn et al.'s average method complexity, are more consistent predictors of OO quality than any variant of the Chidamber and Kemerer WMC metric. We also show that these metrics are useful in identifying fault-prone classes in software developed using highly iterative or agile software development processes.
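As a minimal illustration of the first statistical technique mentioned above, the sketch below computes Spearman's rank correlation between a class-level complexity metric and per-class defect counts. The metric values and defect counts are invented placeholders, not data from the Rhino study; the implementation uses average ranks for ties and pure Python so it is self-contained.

```python
def ranks(xs):
    """Return 1-based average ranks of xs (ties share their mean rank)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over the run of equal values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-class metric values and defect counts (placeholders):
wmc = [5, 12, 3, 20, 8]
defects = [1, 4, 0, 7, 2]
print(spearman(wmc, defects))  # perfectly monotone example -> 1.0
```

A rho near +1 in a study like this would indicate that classes ranking high on the metric also tend to rank high in defect count, which is the monotone association the validation relies on.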
When comparing alternative courses of action, modern military decision makers often must consider both the military effectiveness and the ethical consequences of the available alternatives. The basis, design, calibration, and performance of a principles-based computational model of ethical considerations in military decision making are reported in this article. The relative ethical violation (REV) model comparatively evaluates alternative military actions based upon the degree to which they violate contextually relevant ethical principles. It is based on a set of specific ethical principles deemed by philosophers and ethicists to be relevant to military courses of action. A survey of expert and non-expert human decision makers regarding the relative ethical violation of alternative actions for a set of specially designed calibration scenarios was conducted to collect data used to calibrate the REV model. Perhaps unsurprisingly, the survey showed that people, even experts, disagreed greatly amongst themselves regarding the scenarios' ethical considerations. Despite this disagreement, two significant results emerged. First, after calibration the REV model performed very well in terms of replicating the ethical assessments of human experts for the calibration scenarios. The REV model outperformed an earlier model that was based on tangible consequences rather than ethical principles; that earlier model performed comparably to human experts, the experts outperformed human non-experts, and the non-experts outperformed random selection of actions. All of these performance comparisons were measured quantitatively and confirmed with suitable statistical tests. Second, although humans tended to value some principles over others, none of the ethical principles involved, not even the principle of not harming civilians, completely overshadowed all of the other principles.