Abstract: In this work, we study the problem of synthesizing supervisors that are resilient against actuator attacks, for cyber-physical systems that can be modeled as discrete-event systems. A constraint-based approach for the bounded synthesis of resilient supervisors is developed. The supervisor obfuscation problem, which is proposed in a specific setting of actuator attack, can be naturally modeled and solved using the same approach.
“…There also exists a vast literature on robust control in discrete event systems [2,16,27,28,29,32,35,42,47,49,50]. Robustness in [2,28,29,32,35,42,49] is specific to communication delays, loss of information, or deception attacks.…”
Control systems should enforce a desired property for both expected/modeled situations as well as unexpected/unmodeled environmental situations. Existing methods focus on designing controllers to enforce the desired property only when the environment behaves as expected. However, these methods lack discussion on how the system behaves when the environment is perturbed. In this paper, we propose an approach for analyzing control systems with respect to their tolerance against environmental perturbations. A control system tolerates certain environmental perturbations when it remains capable of guaranteeing the desired property despite the perturbations. Each controller inherently has a level of tolerance against environmental perturbations. We formally define this notion of tolerance and describe a general technique to compute it, for any given regular property. We also present a more efficient method to compute tolerance with respect to invariance properties. Moreover, we introduce and solve new controller synthesis problems based on our notion of tolerance. We demonstrate the application of our framework on an autonomous surveillance example.
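The tolerance check underlying this notion can be made concrete on a toy finite model. The sketch below is purely illustrative and is not the paper's algorithm: states are integers, transitions are (src, dst) pairs, the desired property is an invariance property (a set of safe states), and perturbations are modeled as extra transitions that are assumed uncontrollable, so they bypass the controller. All names here are hypothetical.

```python
# Minimal sketch (not the paper's technique): a controller "tolerates" a
# perturbation if the perturbed, controlled system keeps every reachable
# state inside the safe set.  All names and the toy model are illustrative.

def reachable(transitions, init):
    """Return the set of states reachable from `init` via (src, dst) pairs."""
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for src, dst in transitions:
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

def tolerates(env, perturbation, allowed, safe, init):
    """Check the invariant on the perturbed, controlled system.

    The controller can only restrict the modeled environment (`env & allowed`);
    perturbation transitions are assumed uncontrollable and bypass it.
    """
    controlled = (env & allowed) | perturbation
    return reachable(controlled, init) <= safe

# Toy model: state 3 violates the invariant, and the controller
# disables the nominal transition (1, 3) that leads to it.
env = {(0, 1), (1, 2), (1, 3)}
allowed = {(0, 1), (1, 2), (2, 0)}
safe = {0, 1, 2}

print(tolerates(env, {(2, 0)}, allowed, safe, init=0))  # True: the added loop is harmless
print(tolerates(env, {(2, 3)}, allowed, safe, init=0))  # False: the perturbation reaches 3
```

Repeating this check over candidate perturbation sets is what makes a tolerance level computable; the paper's general technique operates on automata and handles arbitrary regular properties, whereas this sketch only covers invariance.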
“…However, robustness in the previous literature is related to communication delays [8], [9], loss of information [10], or model uncertainty [4], [5], [7], [6]. Exceptions to that are [11], [12], [13], [14], where the problem of synthesizing supervisors robust against attacks was investigated. The results of [13] are related to actuator deception attacks.…”
“…In [11], [12], [13], [15], [14], the problem of synthesizing supervisors robust against attacks was investigated. Our work differs from [11], [12], [14] as we provide a general game-theoretical framework that solves the problem of synthesizing supervisors robust against general classes of sensor deception attacks.…”
“…A methodology to synthesize the supremal controllable and normal robust supervisor against bounded sensor deception attacks is given in [12]. The results of [13] are related to actuator and sensor replacement deception attacks, while actuator and sensor deception attacks are considered in [15]. However, the supervisory control framework in [15] differs from the standard framework, since the authors assume that the supervisor can actively change the state of the physical process.…”
We consider feedback control systems where sensor readings may be compromised by a malicious attacker intent on causing damage to the system. We study this problem at the supervisory layer of the control system, using discrete event systems techniques. We assume that the attacker can edit the outputs from the sensors of the system before they reach the supervisory controller. In this context, we formulate the problem of synthesizing a supervisor that is robust against the class of edit attacks on the sensor readings and present a solution methodology for this problem. This methodology blends techniques from games on automata with imperfect information with results from supervisory control theory of partially-observed discrete event systems. Necessary and sufficient conditions are provided for the investigated problem.
A safety verification task involves verifying a system against a desired safety property under certain assumptions about the environment. However, these environmental assumptions may occasionally be violated due to modeling errors or faults. Ideally, the system guarantees its critical properties even under some of these violations, i.e., the system is robust against environmental deviations. This paper proposes a notion of robustness as an explicit, first-class property of a transition system that captures how robust it is against possible deviations in the environment. We model deviations as sets of transitions that may be added to the original environment. Our robustness notion then describes the safety envelope of the system, i.e., it captures all sets of extra environment transitions for which the system still guarantees a desired property. We show that being able to explicitly reason about robustness enables new types of system analysis and design tasks beyond the common verification problem stated above. We demonstrate the application of our framework on case studies involving a radiation therapy interface, an electronic voting machine, a fare collection protocol, and a medical pump device.
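The safety-envelope idea can be illustrated with a brute-force sketch on a tiny finite model. This is an assumption-laden illustration, not the paper's computation method: deviations are candidate extra (src, dst) transitions, the property is an invariant (a set of safe states), and the envelope is found here by plain subset enumeration, which the abstract does not prescribe.

```python
from itertools import combinations

# Illustrative sketch only: enumerate which sets of deviation transitions
# the system can absorb while keeping every reachable state safe.

def reachable(transitions, init):
    """States reachable from `init` via (src, dst) transition pairs."""
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for src, dst in transitions:
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

def safety_envelope(system, candidates, safe, init):
    """All subsets of candidate deviation transitions whose addition keeps
    every reachable state of the augmented system inside the safe set."""
    cands = list(candidates)
    envelope = []
    for r in range(len(cands) + 1):
        for subset in combinations(cands, r):
            if reachable(system | set(subset), init) <= safe:
                envelope.append(frozenset(subset))
    return envelope

# Toy model: state 3 is unsafe.  Each deviation is harmless on its own,
# but together they open the path 0 -> 1 -> 2 -> 3.
system = {(0, 1)}
safe = {0, 1, 2}
deviations = {(1, 2), (2, 3)}

tolerated = safety_envelope(system, deviations, safe, init=0)
print(len(tolerated))  # 3: the empty set and each single deviation, but not both together
```

The toy result shows why the envelope is a family of sets rather than one maximal set: two deviations that are individually tolerable can jointly violate the property. Subset enumeration is exponential and serves only to make the definition concrete; the paper's analysis is presumably far more structured.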