Abstract. Deep learning classifiers are known to be inherently vulnerable to manipulation by intentionally perturbed inputs, known as adversarial examples. In this work, we establish that reinforcement learning techniques based on Deep Q-Networks (DQNs) are also vulnerable to adversarial input perturbations, and we verify the transferability of adversarial examples across different DQN models. Furthermore, we present a novel class of attacks based on this vulnerability that enables policy manipulation and induction in the learning process of DQNs. We propose an attack mechanism that exploits the transferability of adversarial examples to implement policy induction attacks on DQNs, and we demonstrate its efficacy and impact through an experimental study of a game-learning scenario.
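To illustrate the building block such attacks rely on, the sketch below shows a targeted FGSM-style perturbation that nudges a DQN's greedy action toward an adversary-chosen target action. This is a minimal illustration, not the paper's implementation: the PyTorch interface, the function name, and the `epsilon` budget are assumptions, and treating Q-values as logits for a cross-entropy loss is one common heuristic for attacking value networks.

```python
import torch
import torch.nn.functional as F

def fgsm_policy_perturbation(q_network, state, target_action, epsilon=0.01):
    """Targeted FGSM-style perturbation against a DQN (illustrative sketch).

    q_network: module mapping a state batch to Q-values of shape (batch, n_actions).
    target_action: LongTensor of adversary-chosen action indices, shape (batch,).
    epsilon: L-infinity perturbation budget (inputs assumed to lie in [0, 1]).
    """
    state = state.clone().detach().requires_grad_(True)
    q_values = q_network(state)
    # Treat Q-values as logits: minimizing cross-entropy w.r.t. the target
    # action raises its Q-value relative to the other actions.
    loss = F.cross_entropy(q_values, target_action)
    loss.backward()
    # Targeted FGSM: step *against* the gradient to minimize the loss.
    adv_state = state - epsilon * state.grad.sign()
    return adv_state.clamp(0.0, 1.0).detach()
```

In a policy induction setting, the adversary would craft such perturbations on a replica of the target network and rely on the transferability of adversarial examples for them to remain effective against the victim.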
Intelligent Transportation Systems (ITS) aim to integrate sensing, control, analysis, and communication technologies into travel infrastructure and transportation in order to improve mobility, comfort, safety, and efficiency. Car manufacturers are continuously creating smarter vehicles, and advancements in roadways and infrastructure are changing how people travel. With a range of novel technologies and ongoing research and development in ITS, traveling is becoming more efficient and reliable. Safer vehicles are introduced every year with greater consideration for passenger and pedestrian safety; nevertheless, the new technology and increasing connectivity in ITS present unique attack vectors for malicious actors. Smart cities with connected public transportation systems introduce new privacy concerns arising from the data collected about passengers and their travel habits. In this paper, we provide a comprehensive classification of security and privacy vulnerabilities in ITS. Furthermore, we discuss the challenges of addressing security and privacy issues in ITS and contemplate potential mitigation techniques. Finally, we highlight future research directions to make ITS safer, more secure, and privacy-preserving.
With the rapidly growing interest in autonomous navigation, the body of research on motion planning and collision avoidance techniques has seen an accelerating rate of novel proposals and developments. However, the complexity of new techniques and their safety requirements render the bulk of current benchmarking frameworks inadequate, leaving the need for efficient comparison techniques unmet. This work proposes a novel framework based on deep reinforcement learning for benchmarking the behavior of collision avoidance mechanisms under the worst-case scenario of facing an optimal adversarial agent trained to drive the system into unsafe states. We describe the architecture and flow of this framework as a benchmarking solution, and we demonstrate its efficacy via a practical case study comparing the reliability of two collision avoidance mechanisms in response to intentional collision attempts.
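As a rough illustration of how such a benchmark could be wired up, the sketch below frames the adversary's side as a Gym-style environment in which the learned agent controls an attacker vehicle and is rewarded for forcing a defender, running the collision avoidance mechanism under test, into an unsafe state. The class name `AdversarialBenchmarkEnv`, the toy dynamics, and the reward shaping are all illustrative assumptions rather than the paper's actual framework.

```python
import gymnasium as gym
import numpy as np

class AdversarialBenchmarkEnv(gym.Env):
    """Illustrative environment: the RL agent is an attacker vehicle rewarded
    for driving a defender (the collision avoidance mechanism under test)
    into an unsafe state. Dynamics here are deliberately toy placeholders.
    """
    def __init__(self, defender_policy, unsafe_distance=1.0):
        self.defender_policy = defender_policy  # callable: obs -> 2D velocity
        self.unsafe_distance = unsafe_distance
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(2,))  # 2D control
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(4,))
        self.attacker = np.zeros(2)
        self.defender = np.array([5.0, 0.0])

    def _obs(self):
        return np.concatenate([self.attacker, self.defender]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.attacker = np.zeros(2)
        self.defender = np.array([5.0, 0.0])
        return self._obs(), {}

    def step(self, action):
        self.attacker += 0.1 * np.asarray(action)                  # toy attacker dynamics
        self.defender += 0.1 * self.defender_policy(self._obs())   # defender evades
        distance = np.linalg.norm(self.attacker - self.defender)
        collided = bool(distance < self.unsafe_distance)
        # Dense reward for closing distance, bonus on reaching an unsafe state.
        reward = -distance + (100.0 if collided else 0.0)
        return self._obs(), reward, collided, False, {}
```

A comparison between two mechanisms then reduces to training the same adversary against each and contrasting metrics such as attack success rate or episodes-to-collision.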