Cyber-physical systems (CPS) are composed of various embedded subsystems that rely on specialized software, firmware, and hardware to coordinate with the rest of the system. These multiple levels of integration expose attack surfaces that are susceptible to attack vectors requiring novel architectural methods to secure against effectively. We present a multilevel hierarchical monitor architecture for cybersecurity and apply it to a flight control system; the principles presented in this paper, however, apply to any CPS. Additionally, the real-time nature of these monitors allows for adaptable security, meaning that they can mitigate entire classes of attacks online. The result is an appealing bolt-on solution that is independent of the underlying system design. Consequently, employing such monitors strengthens the resiliency and dependability of safety-critical CPS.
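To make the monitoring idea concrete, here is a minimal sketch of what a single tier of such a hierarchy might look like: a real-time envelope check on one control output, escalating violations to a supervisory tier. All names and limits (monitor_pitch_cmd, PITCH_CMD_LIMIT_DEG) are hypothetical illustrations, not components of the flight control system described above.

```c
#include <stdio.h>

/* Hypothetical single tier of a hierarchical runtime monitor: an
 * envelope check on one control output, run once per control cycle.
 * The names and the limit are illustrative, not from the paper. */

#define PITCH_CMD_LIMIT_DEG 25.0f

typedef enum { MON_OK, MON_VIOLATION } mon_status_t;

/* Returns MON_VIOLATION so the next tier of the hierarchy (a
 * supervisory monitor) can decide on a mitigation, e.g. switching
 * to a trusted fallback controller. */
static mon_status_t monitor_pitch_cmd(float pitch_cmd_deg)
{
    if (pitch_cmd_deg > PITCH_CMD_LIMIT_DEG ||
        pitch_cmd_deg < -PITCH_CMD_LIMIT_DEG)
        return MON_VIOLATION;
    return MON_OK;
}

int main(void)
{
    /* A spoofed or corrupted command is caught online, regardless of
     * which controller produced it (the "bolt-on" property). */
    float cmds[] = { 3.0f, -12.5f, 41.0f };
    for (int i = 0; i < 3; i++)
        printf("cmd %+6.1f deg -> %s\n", cmds[i],
               monitor_pitch_cmd(cmds[i]) == MON_OK ? "OK" : "VIOLATION");
    return 0;
}
```

Because the check depends only on the observed output and a fixed envelope, not on the controller's internals, it can be bolted onto different system designs without modification.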
Under the Department of Energy's Light Water Reactor Sustainability Program, within the Plant Modernization research pathway, the Digital I&C Qualification Project is identifying new methods that would be beneficial in qualifying digital I&C systems and devices for safety-related usage. One such method that would be useful in qualifying field components such as sensors and actuators is the concept of testability. The Nuclear Regulatory Commission (NRC) considers testability to be one of two design attributes sufficient to eliminate consideration of software-based or software logic-based common cause failure (the other being diversity). The NRC defines acceptable testability as follows: "A system is sufficiently simple such that every possible combination of inputs and every possible sequence of device states are tested and all outputs are verified for every case (100% tested)."
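As a toy illustration of the 100% tested criterion, consider a hypothetical two-out-of-three trip voter: it has three binary inputs and no internal state, so all 2^3 = 8 input combinations can be exhaustively exercised and every output verified against an independent oracle. The voter and test harness below are illustrative sketches, not components from the project.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical two-out-of-three trip voter: trips when at least two
 * of the three channel inputs demand a trip. */
static bool vote_2oo3(bool a, bool b, bool c)
{
    return (a && b) || (a && c) || (b && c);
}

int main(void)
{
    int failures = 0;
    for (unsigned i = 0; i < 8u; i++) {        /* every input combination */
        bool a = i & 1u, b = (i >> 1) & 1u, c = (i >> 2) & 1u;
        bool expected = (a + b + c) >= 2;      /* independent oracle */
        if (vote_2oo3(a, b, c) != expected) {
            printf("FAIL at inputs a=%d b=%d c=%d\n", a, b, c);
            failures++;
        }
    }
    printf("%d of 8 combinations verified\n", 8 - failures);
    return failures;
}
```

The same exhaustive enumeration becomes infeasible once input width or internal state grows, which is why the NRC definition restricts the attribute to sufficiently simple devices.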
Understanding fault types can lead to novel approaches to debugging and runtime verification. Dealing with complex faults, particularly in the challenging area of embedded systems, calls for more powerful tools, which are now becoming available to engineers.
MISTAKES, ERRORS, DEFECTS, BUGS, FAULTS AND ANOMALIES

Embedded systems are everywhere, and they present unique challenges in verification and testing. The real-time nature of many embedded systems produces complex failure modes that are especially hard to detect and prevent. To prevent system failures, we need to understand the nature and common types of software anomalies. We also need to consider how mistakes lead to observable anomalies and how these can be differentiated according to their reproducibility. Another key need for efficient fault detection is comprehensive observability of a system. This kind of understanding leads to the principle of "scientific debugging."

In everyday language, we use words such as bug, fault, and error inconsistently and confusingly to describe the malfunctioning of a software-based system. This also happens to the authors of this paper in everyday life unless they pay strict attention to their choice of words. Therefore, we would like to start with a brief clarification based on the terminology of the "IEEE Standard Classification for Software Anomalies" [1].

• If coders notice a mistake themselves, it is called an error ("a human action that produces an incorrect result" [1]).
• If the tester is the first to notice an anomaly ("a product does not meet its requirements" [1]), it is called a defect. After confirmation by the developer, it becomes a bug.
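The following hypothetical snippet walks the chain of terms above: the coder's mistake (an error) leaves a defect in the code, and executing it produces output that does not meet its requirements, i.e., the anomaly a tester would observe. All names and the requirement are invented for illustration.

```c
#include <stdio.h>

#define N_SAMPLES 4

int main(void)
{
    int samples[N_SAMPLES] = { 10, 20, 30, 40 };
    int sum = 0;

    /* Defect: the loop bound skips the last sample, an off-by-one
     * introduced by a human error. */
    for (int i = 0; i < N_SAMPLES - 1; i++)
        sum += samples[i];

    /* Requirement (hypothetical): sum == 100. The printed value 60 is
     * the observable anomaly; once the developer confirms the tester's
     * report, it is called a bug. */
    printf("sum = %d (requirement: 100)\n", sum);
    return 0;
}
```

Note that this defect is deterministic and reproduces on every run; faults whose symptoms vary between runs are precisely the ones that make comprehensive observability so valuable.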