The world has recently undergone the most ambitious mitigation effort in a century [1], consisting of widespread quarantines aimed at preventing the spread of COVID-19 [2]. Influential epidemiological models of COVID-19 [3]-[6] helped encourage decision makers to adopt drastic non-pharmaceutical interventions. Yet these models often assume that the active interventions are static, e.g., that social distancing is enforced until infections are minimized, which can lead to inaccurate predictions that continually evolve as new data are assimilated. We present a methodology to dynamically guide active interventions by shifting the focus from viewing epidemiological models as systems that evolve autonomously to viewing them as control systems with an "input" that can be varied in time to change the evolution of the system. We show that a safety-critical control approach [7] to COVID-19 mitigation yields active intervention policies that formally guarantee the safe evolution of compartmental epidemiological models. This perspective is applied to current US case data while accounting for reduced mobility, and we find that it accurately describes current trends when the time delays [8] associated with incubation and testing are incorporated. Optimal active intervention policies are synthesized to determine the future mitigations necessary to bound infections, hospitalizations, and deaths, at both the national and state levels. We thereby provide a means to model and modulate active interventions with a view toward the phased reopenings currently beginning across the US and the world in a decentralized fashion. This framework can be converted into public policies that account, in a safety-critical fashion, for the fractured landscape of COVID-19 mitigation.

INDEX TERMS Safety-Critical Control, Epidemiology, Non-Pharmaceutical Intervention, COVID-19
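The idea of treating an epidemiological model as a control system can be illustrated on a minimal SIR model. The sketch below is not the paper's controller; the parameter values (transmission rate BETA, recovery rate GAMMA, infection bound I_MAX, and the barrier gain ALPHA) are illustrative assumptions. The transmission-reduction level u (e.g., social distancing intensity) is the control input, and a control barrier function condition dh/dt >= -ALPHA*h with h = I_MAX - I is enforced to keep the infected fraction below I_MAX:

```python
import numpy as np

BETA, GAMMA = 0.3, 0.1   # hypothetical transmission / recovery rates
I_MAX = 0.1              # safety bound on the infected fraction
ALPHA = 0.5              # barrier function gain
DT = 0.05                # Euler step (days)

def cbf_intervention(S, I):
    """Minimal transmission reduction u in [0, 1] such that the barrier
    condition dh/dt >= -ALPHA*h holds for h = I_MAX - I, where
    dI/dt = (1 - u)*BETA*S*I - GAMMA*I."""
    if S * I <= 0.0:
        return 0.0
    u_req = 1.0 - (GAMMA * I + ALPHA * (I_MAX - I)) / (BETA * S * I)
    return float(np.clip(u_req, 0.0, 1.0))

S, I = 0.99, 0.01
peak_I, max_u = I, 0.0
for _ in range(4000):  # 200 days
    u = cbf_intervention(S, I)
    dS = -(1 - u) * BETA * S * I
    dI = (1 - u) * BETA * S * I - GAMMA * I
    S, I = S + DT * dS, I + DT * dI
    peak_I, max_u = max(peak_I, I), max(max_u, u)
```

With these parameters the unmitigated epidemic (basic reproduction number 3) would greatly overshoot I_MAX; the filter instead applies no intervention while infections are low, then ramps u up so that I approaches but never exceeds the bound, and relaxes the intervention once S falls below the herd-immunity threshold GAMMA/BETA.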
There are two main approaches to safety-critical control. The first relies on the computation of control invariant sets and is presented in the first part of this work. The second draws from optimal control and relies on the ability to run model predictive controllers online to guarantee the safety of a system; here, safety is ensured at the planning stage by solving the control problem subject to explicitly defined constraints on the state and control input. Both approaches have distinct advantages but also major drawbacks that hinder their practical effectiveness: scalability for the first and computational complexity for the second. We therefore present an approach that combines the advantages of both to deliver efficient and scalable methods of ensuring safety for nonlinear dynamical systems. In particular, we show that identifying a backup control law that stabilizes the system is in fact sufficient to exploit some of the set-invariance conditions presented in the first part of this work. Indeed, one only needs to numerically integrate the closed-loop dynamics of the system over a finite horizon under this backup law to compute all the information necessary for evaluating the regulation map and enforcing safety. The effect of relaxing the stabilization requirements of the backup law is also studied, and weaker but more practical safety guarantees are brought forward. We then explore the relationship between the optimality of the backup law and the conservatism of the resulting safety filter. Finally, methods of selecting a safe input with varying trade-offs between conservatism and computational complexity are proposed and illustrated on multiple robotic systems, namely: a two-wheeled inverted pendulum (Segway), an industrial manipulator, a quadrotor, and a lower-body exoskeleton.
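The backup-law idea can be sketched on a double integrator, the simplest system where it is meaningful. This is an illustrative toy, not the paper's implementation: the system, the braking backup law, and all constants are assumptions. The filter accepts a nominal input only if, from the resulting state, numerically rolling out the closed-loop dynamics under the backup law keeps the trajectory inside the constraint set (position at most P_LIM) and ends near rest, a trivially invariant condition:

```python
import numpy as np

U_MAX, DT, HORIZON = 1.0, 0.02, 5.0
P_LIM = 1.0  # state constraint: position p <= P_LIM

def step(state, u, dt=DT):
    p, v = state
    return np.array([p + v * dt, v + u * dt])

def backup_law(state):
    # brake to rest: maximal deceleration opposing the velocity
    _, v = state
    return -U_MAX * np.sign(v) if abs(v) > 1e-6 else 0.0

def backup_rollout_safe(state):
    # numerically integrate the closed-loop backup dynamics over a
    # finite horizon, checking the constraint along the trajectory
    x = state.copy()
    for _ in range(int(HORIZON / DT)):
        if x[0] > P_LIM:
            return False
        x = step(x, backup_law(x))
    return x[0] <= P_LIM and abs(x[1]) < 1e-2  # ends (near) rest

def safety_filter(state, u_nom):
    # keep the nominal input only if the state it produces can still
    # be rendered safe by the backup law; otherwise apply the backup
    if backup_rollout_safe(step(state, u_nom)):
        return u_nom
    return backup_law(state)

x = np.array([0.0, 0.0])
history = []
for _ in range(500):
    u = safety_filter(x, u_nom=U_MAX)  # nominal: full throttle forward
    x = step(x, u)
    history.append(x[0])
```

Under a nominal controller that always accelerates toward the constraint, the filtered trajectory accelerates freely while the braking rollout remains safe, then brakes and settles just below P_LIM instead of violating it.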
A multi-agent partially observable Markov decision process (MPOMDP) is a modeling paradigm for high-level planning of heterogeneous autonomous agents subject to uncertainty and partial observation. Despite their modeling efficiency, MPOMDPs have received little attention in safety-critical settings. In this paper, we use barrier functions to design policies for MPOMDPs that ensure safety. Notably, our method relies on neither discretization of the belief space nor finite memory. To this end, we formulate necessary and sufficient conditions for the safety of a given set based on discrete-time barrier functions (DTBFs), and we demonstrate that our formulation also allows Boolean compositions of DTBFs to represent more complicated safe sets. We show that the proposed method can be implemented online by a sequence of one-step greedy algorithms, either as a standalone safe controller or as a safety filter for a nominal planning policy. We illustrate the efficiency of the proposed DTBF-based methodology using a high-fidelity simulation of heterogeneous robots.
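The one-step greedy filter can be sketched on a toy two-state belief model. This is a simplification of the setting above: it keeps only the belief prediction step (omitting observation updates and multiple agents), and the action set, transition matrices, safe set, and decay rate ALPHA are all illustrative assumptions. A DTBF condition of the form h(b') - h(b) >= -ALPHA*h(b) is checked greedily, keeping the nominal action when it is safe and substituting a safe action otherwise:

```python
import numpy as np

# two hidden states: 0 = nominal, 1 = hazardous; belief b is a row vector
T = {
    "explore": np.array([[0.80, 0.20], [0.10, 0.90]]),
    "retreat": np.array([[0.95, 0.05], [0.60, 0.40]]),
}
ALPHA = 0.9  # DTBF decay rate, 0 < ALPHA <= 1

def h(b):
    # safe set: probability of being in the hazardous state at most 0.3
    return 0.3 - b[1]

def dtbf_filter(b, nominal_action):
    """One-step greedy safety filter: keep the nominal action if the
    predicted belief satisfies h(b') - h(b) >= -ALPHA*h(b); otherwise
    try the remaining actions, falling back to the safest prediction."""
    candidates = [nominal_action] + [a for a in T if a != nominal_action]
    for a in candidates:
        if h(b @ T[a]) - h(b) >= -ALPHA * h(b):
            return a
    return max(T, key=lambda a: h(b @ T[a]))  # best effort

b = np.array([1.0, 0.0])
actions, hazard_beliefs = [], []
for _ in range(50):
    a = dtbf_filter(b, "explore")  # nominal policy: always explore
    b = b @ T[a]
    actions.append(a)
    hazard_beliefs.append(b[1])
```

Because the DTBF condition guarantees h(b') >= (1 - ALPHA) * h(b) >= 0 whenever h(b) >= 0, the hazard belief stays below 0.3 by induction, even though the nominal always-explore policy alone would drive it well above that bound.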