Explainable artificial intelligence for autonomous driving: An overview and guide for future research directions
Preprint, 2021
DOI: 10.48550/arxiv.2112.11561

Cited by 28 publications (35 citation statements)
References: 0 publications
“…However, we need to test our system on many more interesting scenarios so that we can generate a more varied set of explanations if we want to be certain that XAVI does indeed work properly and is useful for passengers. Besides the scenarios by Albrecht et al (2021), we can base further evaluation on the scenarios presented by Wiegand et al (2020) which were specifically collected to evaluate XAI in autonomous driving scenarios. In the future, it will also be important to run a user study on how the generated explanations affect trust and knowledge levels in humans, as our ultimate goal is to achieve trustworthy autonomous driving.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
“…We propose a human-centric explanation generation method called eXplainable Autonomous Vehicle Intelligence (XAVI), focusing on high-level motion planning and prediction. XAVI is based on an existing transparent, inherently interpretable motion planning and prediction system called IGP2 [Albrecht et al, 2021]. An example of the output of our system is shown in Figure 1.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
“…Longo et al [60] and Ahmed et al [61] also reviewed XAI applied in industry. In the automotive field, explainable AI is used to create transparency and understand model decisions [62], but it turns out that XAI by itself is not sufficient to increase trust [63]. Other explanations need to be provided, not only for developers but also for end-users.…”
Section: Review of Explainable Industrial AI Applications (citation type: mentioning)
confidence: 99%