Machine learning techniques have been shown to outperform many rule-based systems for decision-making in autonomous vehicles. However, applying machine learning is challenging due to the possibility of executing unsafe actions and slow learning rates. We address these issues by presenting a reinforcement learning (RL)-based approach combined with formal safety verification, ensuring that only safe actions are chosen at any time. We let a deep RL agent learn to drive as close as possible to a desired velocity by executing reasonable lane changes on simulated highways with an arbitrary number of lanes. By using a minimal state representation of only 13 continuous features together with a Deep Q-Network (DQN), we achieve fast learning rates. Our RL agent learns the desired task without causing collisions and outperforms a complex, rule-based agent that we use for benchmarking.
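The key safety mechanism described here, letting the learning agent choose only among formally verified actions, can be sketched as follows. This is a minimal toy illustration: the three-action lane-change space, the 13-feature state layout, and the `is_safe` check are illustrative assumptions, not the paper's actual verifier.

```python
import numpy as np

# Toy discrete action space for highway driving.
ACTIONS = ["keep", "left", "right"]

def is_safe(state, action):
    """Placeholder safety check; a real verifier would formally prove
    collision-freedom of the resulting maneuver. Assumed convention:
    state[0] is the ego lane index, state[1] the number of lanes,
    with lane indices increasing to the left."""
    lane, n_lanes = int(state[0]), int(state[1])
    if action == 1 and lane == n_lanes - 1:  # no lane to the left
        return False
    if action == 2 and lane == 0:            # no lane to the right
        return False
    return True

def masked_greedy_action(q_values, state):
    """Pick the highest-valued action among the verified-safe ones,
    so an unsafe action is never executed regardless of its Q-value."""
    safe = [a for a in range(len(q_values)) if is_safe(state, a)]
    return max(safe, key=lambda a: q_values[a])

state = np.array([0.0, 3.0] + [0.0] * 11)  # 13 continuous features
q = np.array([0.2, 0.9, 0.5])
print(ACTIONS[masked_greedy_action(q, state)])  # → left ("right" is masked)
```

During training, the same mask is applied to exploration, so the agent never gathers experience from unsafe actions.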
Validating the safety of self-driving vehicles requires an enormous amount of testing. By applying formal verification methods, we can prove the correctness of the vehicles' behavior, which reduces both the remaining risk and the need for extensive testing. However, current safety approaches do not consider the liability of traffic participants if a collision occurs. Verifying motion plans against formalized traffic rules solves this problem. We present a novel approach for verifying the safety of lane change maneuvers using traffic rules formalized according to the Vienna Convention on Road Traffic. This allows us to provide the additional guarantee that if a collision occurs, the self-driving vehicle is not responsible. Furthermore, we consider misbehavior of other traffic participants during lane changes and propose feasible solutions to avoid or mitigate a potential collision. The approach has been evaluated using real traffic data provided by the NGSIM project as well as simulated lane changes.
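A central building block of such rule-based verification is a safe-distance condition for lane changes. The sketch below shows one common formalization (rear vehicle must be able to stop without hitting a fully braking front vehicle); the gains, reaction time, and function names are illustrative assumptions, not the paper's formalization.

```python
def safe_distance(v_rear, v_front, a_max=8.0, t_react=0.3):
    """Minimal gap such that the rear vehicle, after a reaction time,
    can brake to a stop without rear-ending a fully braking front
    vehicle (all parameters illustrative)."""
    return max(0.0, v_rear * t_react
               + v_rear**2 / (2 * a_max)
               - v_front**2 / (2 * a_max))

def lane_change_permitted(gap_front, gap_rear, v_ego, v_front, v_rear):
    """Ego may cut in only if safe distances to both the new leader
    and the new follower hold at the moment of the lane change."""
    return (gap_front >= safe_distance(v_ego, v_front) and
            gap_rear >= safe_distance(v_rear, v_ego))

# Ego at 20 m/s, new leader at 20 m/s, new follower at 25 m/s.
print(lane_change_permitted(gap_front=10.0, gap_rear=30.0,
                            v_ego=20.0, v_front=20.0, v_rear=25.0))  # → True
```

If the ego respects such a condition when merging, any subsequent rear-end collision caused by the follower is attributable to the follower, which is the liability argument formalized rules enable.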
Falsification aims to disprove the safety of systems by providing counterexamples that lead to a violation of safety properties. In this work, we present two novel falsification methods to reveal safety flaws in adaptive cruise control (ACC) systems of automated vehicles. Our methods use rapidly-exploring random trees to generate motions for a leading vehicle such that the ACC under test causes a rear-end collision. By considering unsafe states and searching backward in time, we drastically improve computation times and falsify even sophisticated ACC systems. The obtained collision scenarios reveal safety flaws of the ACC under test and can be directly used to improve the system's design. We demonstrate the benefits of our methods by successfully falsifying the safety of state-of-the-art ACC systems and comparing the results to those of existing approaches.
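The falsification idea can be illustrated with a much simpler search than the paper's backward rapidly-exploring random trees: randomly sample leader braking profiles, simulate the ACC under test, and report any profile that produces a negative gap. All dynamics, gains, and limits below are toy assumptions chosen to make a deliberately naive ACC falsifiable.

```python
import random

DT = 0.1      # simulation step [s]
A_MIN = -8.0  # leader's maximum braking [m/s^2]

def simulate(brake_intervals, t_end=10.0):
    """Leader brakes hard during the given intervals; a naive ACC that
    regulates a fixed 10 m gap follows. Returns the minimum gap."""
    x_l, v_l = 10.0, 20.0   # leader position and speed
    x_f, v_f = 0.0, 20.0    # follower (the ACC under test)
    min_gap, t = x_l - x_f, 0.0
    while t < t_end:
        a_l = A_MIN if any(t0 <= t < t1 for t0, t1 in brake_intervals) else 0.0
        gap = x_l - x_f
        # Naive ACC: fixed 10 m distance gap -- unsafe at highway speed.
        a_f = 0.5 * (gap - 10.0) + 0.8 * (v_l - v_f)
        a_f = max(-6.0, min(2.0, a_f))       # actuator limits
        v_l = max(0.0, v_l + a_l * DT); x_l += v_l * DT
        v_f = max(0.0, v_f + a_f * DT); x_f += v_f * DT
        min_gap = min(min_gap, x_l - x_f)
        t += DT
    return min_gap

def falsify(trials=100, seed=0):
    """Random search over leader braking profiles for a collision."""
    rng = random.Random(seed)
    for _ in range(trials):
        t0 = rng.uniform(0.0, 3.0)
        profile = [(t0, t0 + rng.uniform(3.0, 6.0))]
        if simulate(profile) < 0.0:          # negative gap = collision
            return profile
    return None

print(falsify() is not None)  # → True: a collision-inducing profile exists
```

Random forward search like this scales poorly for well-designed controllers; searching backward from unsafe states, as the paper proposes, avoids wasting samples on benign leader motions.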
Ensuring the safety of self-driving vehicles is a challenging task, especially if other traffic participants severely deviate from the predicted behavior. One solution is to ensure that the vehicle is able to execute a collision-free evasive trajectory at any time. However, a fast method to plan these so-called fail-safe trajectories does not yet exist. Our new approach plans fail-safe trajectories in arbitrary traffic scenarios by incorporating convex optimization techniques. By integrating safety verification in the planner, we are able to generate fail-safe trajectories in real time, which are guaranteed to be safe. At the same time, we minimize jerk to provide enhanced comfort for passengers. The proposed benefits are demonstrated in different urban and highway scenarios using the CommonRoad benchmark suite and compared to a widely-used sampling-based planner.
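A minimal instance of jerk-aware trajectory generation is the classic minimum-jerk boundary-value problem: a quintic polynomial fixing position, velocity, and acceleration at both ends, solvable as a small linear system. The paper solves a richer constrained convex program; the sketch below only illustrates the jerk-minimization idea, with all numbers chosen arbitrarily.

```python
import numpy as np

def min_jerk_stop(x0, v0, a0, x_stop, T):
    """Quintic x(t) = sum c_i t^i connecting (x0, v0, a0) at t=0 to a
    standstill (x_stop, 0, 0) at t=T; this is the minimum-jerk solution
    for fixed boundary conditions."""
    A = np.array([
        [1, 0, 0,    0,      0,       0],        # x(0)
        [0, 1, 0,    0,      0,       0],        # v(0)
        [0, 0, 2,    0,      0,       0],        # a(0)
        [1, T, T**2, T**3,   T**4,    T**5],     # x(T)
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],   # v(T)
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],  # a(T)
    ], dtype=float)
    b = np.array([x0, v0, a0, x_stop, 0.0, 0.0])
    return np.linalg.solve(A, b)  # polynomial coefficients c_0..c_5

# Stop from 15 m/s at a point 40 m ahead within 5 s.
c = min_jerk_stop(x0=0.0, v0=15.0, a0=0.0, x_stop=40.0, T=5.0)
ts = np.linspace(0.0, 5.0, 51)
x = sum(ci * ts**i for i, ci in enumerate(c))
print(round(float(x[-1]), 3))  # → 40.0, the commanded stop point
```

In a full fail-safe planner, such smooth stopping profiles would additionally be constrained to stay inside the collision-free region predicted for other traffic participants.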
Safe motion planning requires that a vehicle reaches a set of safe states at the end of the planning horizon. However, safe states of vehicles have not yet been systematically defined in the literature, nor does a computationally efficient way to obtain them for online motion planning exist. To tackle the aforementioned issues, we introduce invariably safe sets. These are regions that allow vehicles to remain safe for an infinite time horizon. We show how invariably safe sets can be computed and propose a tight under-approximation which can be obtained efficiently in linear time with respect to the number of traffic participants. We use invariably safe sets to lift safety verification from finite to infinite time horizons. In addition, our sets can be used to determine the existence of feasible evasive maneuvers and the criticality of scenarios by computing the time-to-react metric.
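The time-to-react metric mentioned above can be shown in its simplest one-dimensional form: the latest moment at which the ego vehicle can still start a full-brake evasive maneuver and stop before a static obstacle. This toy instance assumes constant speed until braking and a single braking maneuver; the paper handles general maneuvers and dynamic traffic.

```python
def time_to_react(d_obstacle, v, a_max):
    """Latest time the ego may delay full braking (deceleration a_max)
    and still stop before a static obstacle d_obstacle ahead, while
    driving at constant speed v until braking begins. Returns 0 if the
    last safe state has already been passed."""
    braking_distance = v**2 / (2 * a_max)
    if braking_distance > d_obstacle:
        return 0.0
    return (d_obstacle - braking_distance) / v

# 20 m/s, obstacle 60 m ahead, 8 m/s^2 braking: 25 m needed to stop,
# so braking can be delayed by (60 - 25) / 20 seconds.
print(time_to_react(d_obstacle=60.0, v=20.0, a_max=8.0))  # → 1.75
```

In the terminology of the abstract, the stopped state behind the obstacle is invariably safe, and the time-to-react measures how long the ego can postpone steering into that set.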
Set-based predictions can ensure the safety of planned motions, since they provide a bounded region which includes all possible future states of nondeterministic models of other traffic participants. However, while autonomous vehicles are tested in urban environments, a set-based prediction tailored to pedestrians does not yet exist. This paper addresses this problem and presents an approach for set-based predictions of pedestrians using reachability analysis. We obtain tight over-approximations of pedestrians' reachable occupancy by incorporating the dynamics of pedestrians, contextual information, and traffic rules. In addition, since pedestrians often disregard traffic rules, our constraints automatically adapt so that such behaviors are included in the prediction. Using datasets of recorded pedestrians, we validate our proposed method and demonstrate its use for evasive maneuver planning of automated vehicles.
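The coarsest possible over-approximation of a pedestrian's reachable occupancy is a disc that grows with an assumed maximum walking speed. The sketch below shows only this baseline idea; the paper's reachability analysis is far tighter, incorporating dynamics, context, and traffic rules, and the speed bound here is an illustrative assumption.

```python
import math

V_MAX = 2.0  # assumed maximum pedestrian speed [m/s]

def occupancy(center, t):
    """Disc over-approximating every position a pedestrian starting at
    `center` can reach within t seconds, ignoring all constraints."""
    return center, V_MAX * t

def may_collide(ped_pos, ego_point, t):
    """True if ego_point lies inside the pedestrian's predicted
    occupancy at time t, i.e., a collision cannot be excluded."""
    (cx, cy), r = occupancy(ped_pos, t)
    return math.hypot(ego_point[0] - cx, ego_point[1] - cy) <= r

print(may_collide((0.0, 0.0), (3.0, 0.0), 1.0))  # → False: 3 m > 2 m radius
print(may_collide((0.0, 0.0), (3.0, 0.0), 2.0))  # → True: within 4 m radius
```

Tighter predictions shrink such occupancies, which is what leaves the ego vehicle enough free space to plan evasive maneuvers in dense urban scenes.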