The UML provides a promising way to manage software system complexity. In particular, UML is a unified language that handles different aspects of software modeling. However, its features are not independent, which is the source of numerous inconsistencies. Existing consistency-checking techniques are limited either to certain UML features or to certain kinds of inconsistencies. Our study aims to develop a unified checker able to handle all kinds of inconsistencies across all UML features. This paper develops a translation from UML models to CLP (Constraint Logic Programming) clauses, taking advantage of meta-modeling techniques. CLP is also used to express the consistency rules. A CLP solver can then detect inconsistencies automatically.
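The paper encodes both models and consistency rules as CLP clauses so a solver can find violations. As a rough illustration of the kind of well-formedness rule such a checker enforces (not the paper's CLP encoding), here is a plain-Python sketch of the standard UML "no cyclic generalization" rule over a hypothetical list of (child, parent) pairs:

```python
def has_inheritance_cycle(generalizations):
    """Return True if the (child, parent) generalization pairs contain a cycle.

    Cyclic generalization violates a standard UML well-formedness rule;
    a CLP checker would express the same condition as logic clauses.
    """
    parents = {}
    for child, parent in generalizations:
        parents.setdefault(child, set()).add(parent)

    def reaches_itself(start):
        # Depth-first walk up the generalization hierarchy from `start`.
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for p in parents.get(node, ()):
                if p == start:
                    return True
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return False

    return any(reaches_itself(c) for c in parents)
```

For example, `has_inheritance_cycle([("A", "B"), ("B", "A")])` reports an inconsistency, while a simple chain `[("A", "B"), ("B", "C")]` does not.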
Safety arguments, also called safety cases, are commonly used to demonstrate that adequate efforts have been made to achieve safety goals. Assessing confidence in such arguments, and making decisions based on them, is usually done manually and depends heavily on subjective expertise. There is therefore an urgent need for an approach that can assess confidence in the arguments in order to support decision-making. We propose a quantitative approach, based on Dempster-Shafer (D-S) theory, to formalize and propagate confidence in safety cases. The Goal Structuring Notation is adopted to structure the arguments. The proposed approach focuses on the following issues in argumentation assessment: 1) formal definitions of confidence measures based on belief functions from D-S theory; and 2) the development of confidence aggregation rules for structured safety arguments with the help of Dempster's rule. Definitions of confidence measures and aggregation rules are derived for single-node, double-node, and n-node arguments. Finally, a sensitivity analysis of the aggregation rules provides a preliminary validation of the approach.
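The aggregation step relies on Dempster's rule of combination, which merges two belief mass functions and renormalizes by the conflicting mass. A minimal sketch, assuming a toy frame of discernment {"safe", "unsafe"} (the frame and mass values are illustrative, not taken from the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) via Dempster's rule.

    m12(A) = (1 / (1 - K)) * sum over B, C with B & C == A of m1(B) * m2(C),
    where K is the total mass assigned to conflicting (disjoint) pairs.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    # Renormalize so the combined masses sum to 1.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Illustrative opinions of two assessors on a claim:
m1 = {frozenset({"safe"}): 0.6, frozenset({"safe", "unsafe"}): 0.4}
m2 = {frozenset({"safe"}): 0.7, frozenset({"safe", "unsafe"}): 0.3}
m12 = dempster_combine(m1, m2)
```

Here the two partially committed opinions reinforce each other: the combined mass on {"safe"} is 0.88, with the remaining 0.12 left on the whole frame (uncertainty).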
Confidence in safety-critical systems is often justified by safety arguments. The excessive complexity of today's systems introduces additional uncertainty into argument review. This paper proposes a framework to support argumentation assessment based on experts' decisions, and their confidence in those decisions, for the lowest-level claims of the arguments. Expert opinion is elicited and converted into a quantitative model based on Dempster-Shafer theory. Several types of argument and their associated formulas are proposed. A preliminary validation of the framework is carried out through a survey of safety experts.
Safety is now a major concern in many complex systems, such as medical robots. One way to control the complexity of such systems is to manage risk, and the first, essential step of this activity is risk analysis. During risk analysis, two main studies concerning human factors must be integrated: task analysis and human error analysis. This multidisciplinary analysis often leads to work being shared among several stakeholders who use their own languages and techniques, which frequently produces consistency errors and mutual misunderstanding. Hence, this paper proposes to carry out risk analysis in the common modeling language UML (Unified Modeling Language) and to handle human-factors concepts for task analysis and human error analysis using the features of this language. The approach is applied to the development of a medical robot for tele-echography.