2020 IEEE Pacific Visualization Symposium (PacificVis) 2020
DOI: 10.1109/pacificvis48177.2020.7090
ExplainExplore: Visual Exploration of Machine Learning Explanations

Abstract: [not captured in this record; the abstract field contains only the publisher's general-rights notice]

Cited by 30 publications (22 citation statements). References 52 publications (77 reference statements).
“…This approach is popular in Machine Learning research [3,4], but adoption in the visualization community has so far been limited. Notable exceptions include Prospector [19], which uses 1D partial dependence as a means to explore the prediction space; ExplainExplore [20], which uses 2D partial dependence and incorporates feature contribution methods; and the What-If Tool [21], which enables testing hypotheses by means of data perturbations. These systems enable the understanding of single predictions (a local perspective), whereas STRATEGYATLAS aims to build an understanding of the model as a whole (multiple instances at once, or a global perspective).…”
Section: Visualization For Model Analysis
confidence: 99%
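The 1D partial dependence technique mentioned above can be sketched in a few lines: for each grid value of one feature, that feature is overwritten in every instance and the model's predictions are averaged. The model, data, and function names below are illustrative, not taken from Prospector or ExplainExplore.

```python
# Minimal sketch of 1D partial dependence (hypothetical model and data).
def partial_dependence_1d(predict, X, feature, grid):
    """For each grid value v, set column `feature` to v in every row of X
    and average the model's predictions over the dataset."""
    pd_values = []
    for v in grid:
        preds = [predict([v if j == feature else x for j, x in enumerate(row)])
                 for row in X]
        pd_values.append(sum(preds) / len(preds))
    return pd_values

# Toy model: quadratic in feature 0, linear in feature 1.
model = lambda x: x[0] ** 2 + 0.5 * x[1]

X = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
grid = [0.0, 1.0, 2.0]
print(partial_dependence_1d(model, X, feature=0, grid=grid))
# → [2.0, 3.0, 6.0]  (the averaged effect of feature 0, marginalising feature 1)
```

A 2D variant, as used in ExplainExplore, would iterate over a grid of value pairs for two features instead of one.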
“…Most of them focus on the visual presentation of such results. This includes saliency maps for Deep Neural Networks [57], task-specific visualisations [58], and more general frameworks [59], which are still narrowed to only one phase of the DM process and hardly use any domain knowledge to enhance the explanations or the interpretability of the models. For example, in [60] the authors demonstrate how the combination of deep tensor and knowledge graph embedding methods can be used to generate explanations for a model in intrusion detection and genomic medicine.…”
Section: Overview Of Semantic Data Mining Approaches
confidence: 99%
“…Color-coded instances are frequently used in classification tasks, and interactive interfaces are provided to enable the user to focus on specific instances and conduct further research [47]. To represent feature contributions, horizontal bar plots [52,59,60], breakdown plots [61,62], heatmaps [63,64], Partial Dependence Plots [65], or Accumulated Local Effects plots [66] are used.…”
Section: Explainable Artificial Intelligence
confidence: 99%
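The per-feature contribution values that the bar and breakdown plots above visualise can be sketched for the simplest case, a linear model, where each feature's contribution is its weight times its deviation from a reference value. The weights, reference means, and baseline below are made up for illustration; they do not come from any cited system.

```python
# Hedged sketch: feature contributions for a linear model, the kind of
# per-feature values shown in horizontal bar or breakdown plots.
def linear_contributions(weights, x, means):
    """Contribution of feature i = w_i * (x_i - mean_i); the prediction
    then decomposes as baseline + sum of contributions."""
    return [w * (xi - mi) for w, xi, mi in zip(weights, x, means)]

weights = [2.0, -1.0, 0.5]   # illustrative model coefficients
means = [1.0, 0.0, 4.0]      # reference (mean) instance
baseline = 3.0               # model output at the reference instance
x = [2.0, 1.0, 4.0]          # instance to explain

contribs = linear_contributions(weights, x, means)
prediction = baseline + sum(contribs)
print(contribs, prediction)
# → [2.0, -1.0, 0.0] 4.0
```

For non-linear models, methods such as SHAP generalise this additive decomposition; the visual encodings cited above (bars, breakdowns, heatmaps) then apply unchanged.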