Year: 2023
DOI: 10.1109/tvcg.2022.3201101
When, Where and How Does it Fail? A Spatial-Temporal Visual Analytics Approach for Interpretable Object Detection in Autonomous Driving

Cited by 14 publications (3 citation statements)
References 45 publications
“…It is possible to empirically evaluate the Scorecard levels using methods and metrics that have been tailored to XAI evaluation (see . The work of Wang et al (2022) (see above) included an evaluation by experts, indicating that the generation of explorable explanations was successful and that the visual elements included with the spatial-temporal feature selection and querying had explanatory value.…”
Section: Discussion
confidence: 99%
“…None of the systems was scored at Level 5 (Diagnosis of failures). The system described by Wang et al (2022) achieved Level 5 but was scored at Level 6 by default, because it also achieved that higher level. We suspect that there are now more XAI systems that would be scorable at Level 5.…”
Section: Limitation of the Methodology
confidence: 99%
“…As for XAI systems for NLP, Li et al [45] provided a unified interpretive method for interpreting NLP models for text classification. Attempts have also been made in broader application scenarios of AI, such as healthcare [9] and autonomous driving [28], [83].…”
Section: Visual Explanation for Machine Learning
confidence: 99%