Evaluating the security of cyber-physical systems throughout their life cycle is necessary to assure that they can be deployed and operated in safety-critical applications, such as infrastructure, military, and transportation. Most safety and security decisions that can have major effects on mitigation strategy options after deployment are made early in the system's life cycle. To allow for a vulnerability analysis before deployment, a sufficiently well-formed model has to be constructed. To construct such a model, we produce a taxonomy of attributes; that is, a generalized schema for system attributes. This schema captures the specificity needed to characterize a possible real system and can also map to the attack vector space associated with the model's attributes. In this way, we can match possible attack vectors and provide architectural mitigation at the design phase. We present a model of a flight control system encoded in the Systems Modeling Language, commonly known as SysML, while also showing that the approach is agnostic to the modeling language and tool used.
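The idea of mapping a taxonomy of system attributes to an attack vector space can be illustrated with a minimal sketch. All attribute and vector names below are hypothetical examples, not taken from the paper:

```python
# Illustrative sketch only: a toy attribute taxonomy mapped to
# candidate attack vectors. Attribute and vector names are
# hypothetical, chosen for a UAV-like system.
ATTACK_VECTORS = {
    "wireless_link": ["jamming", "spoofing"],
    "gps_receiver": ["gps_spoofing"],
    "firmware_updatable": ["malicious_update"],
}

def match_vectors(component_attributes):
    """Return the attack vectors whose precondition attributes
    appear among the component's attributes."""
    vectors = []
    for attr in component_attributes:
        vectors.extend(ATTACK_VECTORS.get(attr, []))
    return sorted(set(vectors))

print(match_vectors({"wireless_link", "gps_receiver"}))
# → ['gps_spoofing', 'jamming', 'spoofing']
```

A design-phase analysis could then walk every component in the model, collect its matched vectors, and propose architectural mitigations where vectors cluster.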
The security of cyber-physical systems is first and foremost a safety problem, yet it is typically handled as a traditional security problem, meaning that solutions focus on defending against threats and are often implemented too late. This approach neglects the context in which the system is intended to operate, and system safety may therefore be compromised. This paper presents a systems-theoretic analysis approach that combines stakeholder perspectives with a modified version of the Systems-Theoretic Accident Model and Processes (STAMP) that allows decision-makers to strategically enhance the safety, resilience, and security of a cyber-physical system against potential threats. This methodology allows the capture of vital mission-specific information in a model, which then allows analysts to identify and mitigate vulnerabilities in the locations most critical to mission success. We present an overview of the general approach followed by a real example using an unmanned aerial vehicle conducting a reconnaissance mission.
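STAMP frames safety as a control problem rather than a component-failure problem: hazards arise when control actions are inadequate or feedback is missing. A minimal sketch of that framing, using illustrative entity names (e.g., "autopilot") that are not drawn from the paper:

```python
# Hedged sketch in the spirit of STAMP: a control structure is a set
# of control loops, and losing a feedback channel is a candidate
# source of unsafe control. Loop entries are illustrative.
control_structure = [
    # (controller, control_action, controlled_process, feedback)
    ("operator", "mission_command", "autopilot", "telemetry"),
    ("autopilot", "actuator_setpoint", "flight_surfaces", "imu_reading"),
]

def loops_with_missing_feedback(feedback_available):
    """Flag control loops whose feedback channel is unavailable --
    candidates for unsafe control in an STPA-style analysis."""
    return [loop for loop in control_structure
            if loop[3] not in feedback_available]

# If only telemetry survives an attack, the inner loop is flagged.
print(loops_with_missing_feedback({"telemetry"}))
```

In the mission-aware variant the paper describes, such flagged loops would be weighted by their criticality to mission success before mitigations are chosen.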
Despite “cyber” being in the name, cyber–physical systems possess unique characteristics that limit the applicability and suitability of traditional cybersecurity techniques and strategies. Furthermore, vulnerabilities in cyber–physical systems can have significant safety implications. The physical and cyber interactions inherent in these systems require that cyber vulnerabilities not only be defended against or prevented, but that the system also be resilient in the face of successful attacks. Given the complex nature of cyber–physical systems, the identification and evaluation of appropriate defense and resiliency strategies must be handled in a targeted and systematic manner. Specifically, which resiliency strategies are appropriate for a given system, where should they be applied, and which should be implemented given time and/or budget constraints? This paper presents two methodologies: (1) the cyber security requirements methodology and (2) a systems-theoretic, model-based methodology for identifying and prioritizing appropriate resiliency strategies for implementation in a given system and mission. These methodologies are demonstrated using a case study based on a hypothetical weapon system. An assessment and comparison of the results from the two methodologies suggest that the techniques presented in this paper can augment and enhance existing systems engineering approaches with model-based evidence.
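Prioritizing resiliency strategies under time or budget constraints is, at its core, a selection problem. The following is an illustrative greedy sketch, not the paper's actual algorithm; the strategy names, costs, and mission-benefit scores are made-up values:

```python
# Illustrative sketch: selecting resiliency strategies under a
# budget by benefit-to-cost ratio. All values are hypothetical.
strategies = [
    # (name, cost, mission_benefit)
    ("redundant_sensor", 3, 8),
    ("encrypted_link", 2, 5),
    ("watchdog_monitor", 4, 6),
]

def prioritize(strategies, budget):
    """Greedy benefit/cost selection; a simple stand-in for the
    model-based prioritization described in the abstract."""
    chosen, spent = [], 0
    for name, cost, benefit in sorted(
            strategies, key=lambda s: s[2] / s[1], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print(prioritize(strategies, budget=5))
# → ['redundant_sensor', 'encrypted_link']
```

A model-based approach would derive the benefit scores from mission-impact analysis rather than assigning them by hand, which is precisely what motivates building the system model first.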
Cyber-physical systems (CPS) present a unique modeling challenge due to their numerous heterogeneous components, complex physical interactions, and disjoint communication networks. Modeling CPS to aid security analysis further adds to these challenges, because securing CPS requires understanding not only the system architecture but also the system's role within its broader expected service. This is due to the infeasibility of completely securing every single component, network, and part within a CPS. As such, it is necessary to be cognizant of the system's expected service, or mission, so that the effects of an exploit can be mitigated and the system can perform its mission at least in a partially degraded manner — in other words, a mission-aware approach to security. A security analysis methodology based on this philosophy is therefore greatly aided by the creation of a model that combines system architecture information, its admissible behaviors, and its mission context. This paper presents a technique for creating such a model using the Systems Modeling Language.
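The three facets such a model combines — architecture, admissible behaviors, and mission context — can be sketched as a simple data structure. The field names and UAV example values below are illustrative assumptions, not the paper's SysML notation:

```python
from dataclasses import dataclass, field

# Hedged sketch of the three model facets the abstract names:
# system architecture, admissible behaviors, and mission context.
@dataclass
class Component:
    name: str
    connects_to: list = field(default_factory=list)

@dataclass
class MissionAwareModel:
    architecture: list   # components and their connections
    behaviors: list      # admissible behaviors (e.g., operating modes)
    mission: str         # the mission context the system serves

# Hypothetical reconnaissance-UAV instance.
uav = MissionAwareModel(
    architecture=[Component("flight_controller", ["gps", "radio"])],
    behaviors=["loiter", "return_to_base"],
    mission="reconnaissance",
)
print(uav.mission)
# → reconnaissance
```

In SysML these facets would instead appear as block definition diagrams (architecture), state machines or activities (behaviors), and requirements or use cases (mission context), all linked within one model.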