Software security metrics that facilitate decision making at the enterprise design and operations levels are a topic of active research and debate. Such metrics are desirable for supporting deployment decisions, upgrade decisions, and similar choices; however, no single metric or set of metrics is known to provide universally effective and appropriate measurements. Instead, engineers must decide, for each software system, what to measure, how, and how much to measure, and must justify how those measurements map to stakeholder security goals. An assurance argument for security (i.e., a security argument) comprehensively documents the evidence and rationale that justify belief in a security claim about a software system. In this work, we motivate the need for security arguments to enable meaningful and comprehensive security metrics, and we present a novel framework for assessing security arguments in order to generate and interpret security metrics.
The challenges that linguistic factors pose to requirements are well known. This work presents an approach to communicating requirements among stakeholders with greater fidelity by accommodating human cognitive habits and limits. To instantiate this approach, we synthesized linguistic principles into a method for generating high-quality representations of domain concepts that form the base of a project lexicon. The representations are further organized into a knowledge base that records relationships of interest. We hypothesize that the method yields representations free of certain faults that compromise communicative fidelity. To investigate, we conducted a case study in which the method was applied to the domain semantics of a medical device. Our representations compared favorably to pre-existing versions; further, analysis of the artifact as a whole supported new assertions about the application domain. These results indicate that particular language issues can be managed systematically and that new information becomes available in the process.
Defect reports generated for faults found during testing are a rich source of information about problematic phrases used in requirements documents. These reports indicate that faults often derive from ambiguous, incorrect, or otherwise deficient language. In this paper, we report on a method that combines elements of linguistic theory and information retrieval to guide the discovery of problematic phrases throughout a requirements specification, using defect reports and correction requests generated during testing to seed the detection process. We found that phrases known from these materials to be problematic have occurrence properties in requirements documents that both allow resources to be directed toward prioritized correction and yield insights characterizing more general locations of difficulty within the requirements. Our findings lead to recommendations for managing certain natural-language issues more efficiently and effectively in the creation and maintenance of requirements specifications.
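The abstract leaves the detection mechanics unspecified. As a rough sketch of the seeding idea only, not the authors' actual method, the following Python fragment counts occurrences of hypothetical problem phrases (of the kind mined from defect reports) in a requirements text and ranks them for prioritized review. Every identifier and the sample phrase list are illustrative assumptions; a real implementation would add information-retrieval scoring such as TF-IDF and linguistic filtering.

```python
# Illustrative sketch only; all names and the seed list are hypothetical.
import re
from collections import Counter

def rank_seed_phrases(spec_text: str, seed_phrases: list[str]) -> list[tuple[str, int]]:
    """Count case-insensitive occurrences of each seed phrase in a
    requirements text and return the phrases sorted by frequency."""
    text = spec_text.lower()
    counts = Counter(
        {p: len(re.findall(re.escape(p.lower()), text)) for p in seed_phrases}
    )
    return counts.most_common()

if __name__ == "__main__":
    # Tiny stand-in for a requirements specification.
    spec = (
        "The system shall respond quickly and/or log the event as appropriate. "
        "If necessary, the operator may override the alarm as appropriate."
    )
    # Phrases of the kind a defect-report analysis might flag as problematic.
    seeds = ["as appropriate", "if necessary", "and/or"]
    for phrase, n in rank_seed_phrases(spec, seeds):
        print(f"{n:3d}  {phrase}")
```

Raw counts only localize frequent phrases; characterizing broader regions of difficulty, as the study does, would additionally require tracking where in the document the matches cluster.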