2022
DOI: 10.1016/j.cag.2021.09.002
A survey of visual analytics for Explainable Artificial Intelligence methods

Cited by 107 publications
(36 citation statements)
References 46 publications
“…It is often difficult to decide whether a paper is closer to SciVis than CV or CG. Ultimately, we check if the paper meets one of the two conditions for possible inclusion: (1) does the publication appear in a VIS venue? (2) has the publication significantly influenced SciVis work?…”
Section: Scope Of Survey
Mentioning, confidence: 99%
“…Generally speaking, there are two AI+VIS directions: AI4VIS (i.e., designing AI solutions for solving VIS problems) and VIS4AI (i.e., applying VIS techniques for explainable AI). We refer interested readers to recent surveys on AI+VIS [1], [22], [34], [69], [104], [150], [165], [172] to gain a comprehensive overview of this research area. These prior surveys focus on visual analytics (VA) and information visualization (InfoVis).…”
Section: Introduction
Mentioning, confidence: 99%
“…A more elaborate taxonomy was proposed by Das et al. [46], who divided XAI techniques along three criteria: scope (global or local explanations), methodology (whether the technique focuses on the input data or on the model parameters), and usage (whether it is model-agnostic or model-specific). Regarding scope, local explanations provide insights into a particular prediction, while global explanations attempt to describe the overall model behavior [47].…”
Section: Explainable Artificial Intelligence
Mentioning, confidence: 99%
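The local/global distinction in the quoted taxonomy can be made concrete with a small sketch. The snippet below uses a hypothetical linear model (not taken from the survey or from Das et al.): a local explanation decomposes one prediction into additive per-feature contributions, while a global explanation aggregates absolute contributions over the whole dataset.

```python
import numpy as np

# Hypothetical linear model: prediction = x @ w + b (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))      # 100 instances, 3 features
w = np.array([2.0, -0.5, 0.0])     # weights; feature 2 is irrelevant
b = 1.0

# Local explanation: per-feature contribution to ONE prediction
x = X[0]
local = x * w                      # additive attribution for this instance
print("local contributions:", local)
print("reconstructed prediction:", local.sum() + b)

# Global explanation: mean absolute contribution of each feature
# over the dataset, i.e. overall model behavior
global_importance = np.abs(X * w).mean(axis=0)
print("global importance:", global_importance)
```

For a linear model the two views agree on which features matter (the irrelevant feature gets zero importance both locally and globally); for nonlinear models they can diverge, which is why the taxonomy separates them.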
“…Scatterplots are frequently used to visualize the data distribution, applying a dimensionality-reduction technique to map the high-dimensional dataset into two dimensions [57,58]. Color-coded instances are frequently used in classification tasks, and interactive interfaces are provided to let the user focus on specific instances and investigate them further [47]. To represent feature contributions, horizontal bar plots [52,59,60], breakdown plots [61,62], heatmaps [63,64], Partial Dependence Plots [65], or Accumulated Local Effects plots [66] are used.…”
Section: Explainable Artificial Intelligence
Mentioning, confidence: 99%
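The color-coded projection scatterplot described above can be sketched as follows. This is a minimal illustration on a hypothetical synthetic two-class dataset, with PCA (via SVD) standing in for whichever dimensionality-reduction technique a given system uses; none of it is taken from the cited works.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Hypothetical 10-dimensional, two-class dataset (illustrative only)
rng = np.random.default_rng(1)
class0 = rng.normal(loc=0.0, size=(50, 10))
class1 = rng.normal(loc=2.0, size=(50, 10))
X = np.vstack([class0, class1])
labels = np.array([0] * 50 + [1] * 50)

# PCA by SVD: center the data, decompose, keep the 2 leading components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X2d = Xc @ Vt[:2].T                # project to two dimensions

# Color-code each instance by its class label, as in the surveyed systems
plt.scatter(X2d[:, 0], X2d[:, 1], c=labels, cmap="coolwarm", s=15)
plt.xlabel("component 1")
plt.ylabel("component 2")
plt.savefig("projection_scatter.png")
```

In an interactive visual-analytics tool the same 2D coordinates would be linked to selection and detail views, so clicking a point retrieves the underlying instance for further inspection.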