2021
DOI: 10.1002/ail2.40

XAITK: The explainable AI toolkit

Abstract: Recent advances in artificial intelligence (AI), driven mainly by deep neural networks, have yielded remarkable progress in fields such as computer vision, natural language processing, and reinforcement learning. Despite these successes, the inability to predict how AI systems will behave "in the wild" impacts almost all stages of planning and deployment, including research and development, verification and validation, and user trust and acceptance. The field of explainable artificial intelligence (XAI) seeks…

Cited by 18 publications (5 citation statements)
References 17 publications (16 reference statements)
“…A possible extension could include the utilization of recent state-of-the-art models [41] with more advanced and complex architectures, as well as the use of different saliency algorithms for heatmap calculation. Interesting works by RichardWebster et al [42] and Hu et al [43] used and proposed several algorithms for calculating saliency maps. Adapting the proposed approach to their frameworks could yield useful conclusions from the factual and counterfactual explanations.…”
Section: Discussion
confidence: 99%
“…Explainability can be achieved by visualizing the attention maps of the model, which show which parts of the image the model focuses on when making a prediction. Other methods include saliency maps (Petsiuk et al 2021), which highlight the pixels most important for a prediction, and decision trees, which provide a simple and interpretable representation of the model's decision process (Hu et al 2023). Therefore, few-shot object detection methods have shown promising results in detecting novel objects in aerial images with limited annotated samples.…”
Section: Datasets
confidence: 99%
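
As a concrete illustration of the perturbation-style saliency maps discussed in this statement, the sketch below masks image regions one at a time and scores each region by how much the target class probability drops when it is hidden. This is a minimal sketch in the spirit of the occlusion/RISE family of methods cited above, not code from XAITK itself; `predict_fn` is a hypothetical black-box classifier mapping a batch of images to per-class probabilities.

```python
import numpy as np

def occlusion_saliency(image, predict_fn, class_idx, patch=16, stride=8):
    """Occlusion-style saliency: hide one region at a time and record
    how much the target class score drops when that region is masked.

    Assumes a hypothetical predict_fn taking a batch of images
    (N, H, W, C) and returning per-class probabilities (N, num_classes).
    """
    h, w = image.shape[:2]
    base = predict_fn(image[None])[0, class_idx]  # score on the intact image
    sal = np.zeros((h, w), dtype=np.float32)
    cnt = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0  # zero-fill occluder
            drop = base - predict_fn(occluded[None])[0, class_idx]
            sal[y:y + patch, x:x + patch] += drop
            cnt[y:y + patch, x:x + patch] += 1
    return sal / np.maximum(cnt, 1)  # average over overlapping windows
```

Perturbation-based approaches like this need only black-box access to the model, which is why black-box saliency generation features prominently in toolkits such as xaitk-saliency.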
“…They found that both SHAP and LIME lacked all three of these qualities, leading to the conclusion that more investigation is needed in the area of explainability. In this context, it is also worth mentioning the explainable AI (XAI) toolkit [10], which is built on top of DARPA's efforts in XAI. The toolkit is purported to be a resource for everyone wanting to use AI responsibly.…”
Section: Introduction
confidence: 99%
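
For context on the SHAP/LIME comparison above, the snippet below shows the kind of local, per-prediction explanation LIME produces: it perturbs the input around one instance and fits an interpretable surrogate model whose weights rank feature contributions. This is a generic sketch using the public lime and scikit-learn packages, not the evaluation setup of the cited study.

```python
# Minimal LIME example: explain one prediction of a black-box classifier.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
# Perturb the instance locally and fit an interpretable surrogate model.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, weight) pairs for the explained label
```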