The last half decade has seen a steep rise in the number of contributions on safe learning methods for real-world robotic deployments from both the control and reinforcement learning communities. This article provides a concise but holistic review of the recent advances made in using machine learning to achieve safe decision-making under uncertainties, with a focus on unifying the language and frameworks used in control theory and reinforcement learning research. It includes learning-based control approaches that safely improve performance by learning the uncertain dynamics, reinforcement learning approaches that encourage safety or robustness, and methods that can formally certify the safety of a learned control policy. As data- and learning-based robot control methods continue to gain traction, researchers must understand when and how to best leverage them in real-world scenarios where safety is imperative, such as when operating in close proximity to humans. We highlight some of the open challenges that will drive the field of robot learning in the coming years, and emphasize the need for realistic physics-based benchmarks to facilitate fair comparisons between control and reinforcement learning approaches.
Numerous control applications, including robotic systems such as unmanned aerial vehicles and assistive robots, are expected to guarantee high performance despite being deployed in unknown and dynamic environments where they are subject to disturbances, unmodeled dynamics, and parametric uncertainties. The fast feedback of adaptive controllers makes them an effective approach for compensating for disturbances and unmodeled dynamics, but adaptive controllers seldom achieve high performance, nor do they guarantee state and input constraint satisfaction. In this article we propose a robust adaptive model predictive controller for guaranteed fast and accurate stabilization in the presence of model uncertainties. The proposed approach combines robust model predictive control (RMPC) with an underlying discrete-time ℓ1 adaptive controller; we refer to this combination as the RMPC-ℓ1 controller. The ℓ1 adaptive controller forces the system to behave close to a linear reference model despite parametric uncertainties; however, the true dynamics of the ℓ1-controlled system may still deviate from this reference model. We prove that this deviation is bounded and treat it as the modeling error of the linear reference model, and we combine ℓ1 adaptive control with an RMPC that plans over the reference model while accounting for that modeling error. We prove stability and recursive feasibility of the proposed RMPC-ℓ1 controller. Furthermore, we validate its feasibility, performance, and accuracy on a stabilization task in a numerical experiment, demonstrating that the proposed RMPC-ℓ1 controller outperforms ℓ1 adaptive control, robust MPC, and other baseline controllers across all metrics.
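To make the cascade described in this abstract concrete, the following is a minimal Python sketch of its structure: an inner discrete-time ℓ1 adaptive loop keeps a scalar uncertain plant close to a linear reference model, and an outer MPC plans over that reference model with constraints tightened by the bound on the deviation. This is an illustrative reconstruction, not the authors' formulation: the scalar plant, all numeric values, the deadbeat-style stand-in for the piecewise-constant adaptation law, the first-order low-pass filter, the constraint-tightening rule, and names such as L1InnerLoop and robust_mpc_step are assumptions, and cvxpy is assumed to be available for the MPC quadratic program.

import cvxpy as cp

# True (partially unknown) scalar plant: x+ = a*x + b*(u + theta*x), theta unknown.
a, b = 1.05, 0.5
theta_true = 0.3
k_ref = -0.5                 # baseline feedback chosen so the reference model is stable
a_m = a + b * k_ref          # reference model: x+ = a_m*x + b*u_mpc  (here a_m = 0.8)


class L1InnerLoop:
    """Simplified discrete-time L1 adaptive inner loop: state predictor,
    deadbeat-style adaptation, and a low-pass-filtered cancellation term."""

    def __init__(self, a_m, b, k_ref, x0=0.0, filter_pole=0.6):
        self.a_m, self.b, self.k_ref = a_m, b, k_ref
        self.x_hat = x0          # state predictor, initialized at the measured state
        self.u_ad = 0.0          # filtered adaptive input
        self.p = filter_pole     # pole of the first-order low-pass filter C(z)

    def control(self, x, u_mpc):
        x_tilde = self.x_hat - x                     # prediction error
        sigma_hat = -(self.a_m / self.b) * x_tilde   # simplified adaptation law
        # Low-pass filter the cancellation so only low-frequency uncertainty
        # estimates are fed back into the plant.
        self.u_ad = self.p * self.u_ad + (1.0 - self.p) * (-sigma_hat)
        u = self.k_ref * x + u_mpc + self.u_ad
        # Propagate the predictor with the current uncertainty estimate.
        self.x_hat = self.a_m * self.x_hat + self.b * (u_mpc + self.u_ad + sigma_hat)
        return u


def robust_mpc_step(x0, N=10, q=1.0, r=0.1, x_max=1.0, u_max=2.0, w_max=0.05):
    """Outer loop: finite-horizon MPC over the reference model, with state
    constraints tightened by the accumulated worst-case deviation w_max
    (a simple constraint-tightening stand-in for the paper's robust MPC)."""
    z = cp.Variable(N + 1)       # predicted reference-model states
    v = cp.Variable(N)           # planned MPC inputs
    cost, cons, tighten = 0, [z[0] == x0], 0.0
    for k in range(N):
        cost += q * cp.square(z[k]) + r * cp.square(v[k])
        cons += [z[k + 1] == a_m * z[k] + b * v[k]]
        tighten += w_max * abs(a_m) ** k
        cons += [cp.abs(z[k + 1]) <= x_max - tighten, cp.abs(v[k]) <= u_max]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return float(v.value[0])     # apply only the first planned input


# Closed-loop simulation: the MPC plans over the reference model, and the L1
# inner loop keeps the true uncertain plant close to that model.
x = 0.8
inner = L1InnerLoop(a_m, b, k_ref, x0=x)
for t in range(30):
    u_mpc = robust_mpc_step(x)
    u = inner.control(x, u_mpc)
    x = a * x + b * (u + theta_true * x)     # true plant with unknown theta
    print(f"t={t:2d}  x={x:+.4f}  u={u:+.4f}")

The separation of roles mirrors the abstract: the inner loop is responsible for making the plant behave like the reference model, so the outer MPC only needs to be robust to the (bounded) residual deviation rather than to the full parametric uncertainty.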