The automotive industry has seen rapid development over the past decades, moving from manufacturing manually operated vehicles to manufacturing vehicles with a high level of automation. With recent developments in Artificial Intelligence (AI), automotive companies now employ black-box AI models to enable vehicles to perceive their environments and make driving decisions with little or no input from a human. With the prospect of deploying autonomous vehicles (AVs) on a commercial scale, the acceptance of AVs by society becomes paramount and may depend largely on their degree of transparency, trustworthiness, and compliance with regulations. Assessing the compliance of AVs with these acceptance requirements can be facilitated by providing explanations for AVs' behaviour. Explainability is therefore seen as an important requirement for AVs: they should be able to explain what they have 'seen', done, and might do in the environments in which they operate. In this paper, we provide a comprehensive survey of the existing body of work on explainable autonomous driving. First, we motivate the need for explanations by highlighting the importance of transparency, accountability, and trust in AVs, and by examining existing regulations and standards related to AVs. Second, we identify and categorise the different stakeholders involved in the development, use, and regulation of AVs, and elicit their explanation requirements for AVs. Third, we provide a rigorous review of previous work on explanations for the different AV operations (i.e., perception, localisation, planning, control, and system management). Finally, we identify pertinent challenges and provide recommendations, such as a conceptual framework for AV explainability. This survey aims to provide the fundamental knowledge required by researchers who are interested in explainability in AVs.
Autonomous vehicles operating in dynamic environments are required to account for other traffic participants. By interpreting sensor information and assessing the collision risk with vehicles, cyclists, and pedestrians, near-misses and accidents can be prevented. Moreover, by explaining risk factors to developers and engineers, the overall safety of autonomous driving can be increased in future deployments. In this paper, we design, develop, and evaluate an approach for predicting the collision risk with other road users based on a planar 2D collision model. To this end, we train interpretable machine learning models to classify and predict the risk of collisions from a range of features extracted from sensor data. Further, we present methods for inferring and explaining the factors that contribute most to the risk. Using counterfactual inference, our approach determines the factors that most strongly influence the risk and should therefore be minimised. Experimental results on real-world driving data show that collision risk can be effectively predicted and explained for different time horizons as well as for different types of traffic participants such as cars, cyclists, and pedestrians.
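The combination of an interpretable risk classifier and a counterfactual query can be sketched in a few lines. The following is a minimal illustration only, not the paper's implementation: the synthetic data, the time-to-collision feature, the 2-second risk rule, and the decision-stump model are all assumptions made for the example.

```python
# Illustrative sketch (assumed features and data, not the paper's method):
# fit an interpretable decision stump on a time-to-collision (TTC) feature,
# then answer a counterfactual query about the closing speed.
import random

random.seed(0)

def ttc(dist, speed):
    """Distance (m) and closing speed (m/s) -> time-to-collision (s)."""
    return dist / max(speed, 1e-6)

# Synthetic "sensor" data, labelled risky (1) when TTC < 2 s (assumed rule).
samples = []
for _ in range(400):
    d, s = random.uniform(1, 50), random.uniform(0.5, 20)
    t = ttc(d, s)
    samples.append((t, 1 if t < 2.0 else 0))

def fit_stump(data):
    """Pick the TTC threshold that best separates risky from safe samples."""
    best_t, best_err = None, float("inf")
    for cand, _ in data:
        err = sum((t < cand) != bool(y) for t, y in data)
        if err < best_err:
            best_t, best_err = cand, err
    return best_t

threshold = fit_stump(samples)   # recovers a threshold close to 2 s

def risky(dist, speed):
    return ttc(dist, speed) < threshold

def counterfactual_speed(dist, speed, step=0.1):
    """Smallest closing-speed reduction that flips the prediction to safe."""
    while risky(dist, speed) and speed > 0:
        speed -= step
    return speed

# A scene 10 m away, closing at 15 m/s, is classified risky; the
# counterfactual reports the closing speed at which it becomes safe.
print(risky(10.0, 15.0))
print(round(counterfactual_speed(10.0, 15.0), 1))
```

A single-threshold stump stands in here for the paper's interpretable models, and the brute-force speed search stands in for its counterfactual inference; both keep the explanation human-readable ("the scene is risky because TTC is below the learned threshold, and would become safe if the closing speed dropped to X").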
Autonomous vehicles (AVs) have the potential to change the way we commute, travel, and transport our goods. The deployment of AVs in society, however, requires that people understand, accept, and trust them. Intelligible explanations can help different AV stakeholders to assess AVs' behaviours and, in turn, increase their confidence and foster trust. In a user study (N = 101), we examined different explanation types (based on investigatory queries) provided by an AV and their effects on people, measured using trust determinant factors. Our quantitative and qualitative analysis shows that explanations with causal attributions improved task performance and understanding when assessing driving events but did not directly improve perceived trust. This underlines the potential need for additional measures and research to enhance trust in AVs.