2019
DOI: 10.1109/tvcg.2019.2934629
explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning

[Teaser figure: XAI pipeline — ML Input → ML Model → ML Output; Model State → XAI Method → XAI Output: explanations (visualizations, verbalizations, surrogate models, etc.)]

Abstract: We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods; and (3) refine and optimize the models. Our framework combines an iterative XAI pipeline with eight global monitoring and steering mechanisms, including qu…
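The abstract's pipeline (model state fed into an XAI method, whose explanation informs refinement of the model) can be sketched as a minimal loop. This is an illustrative assumption, not the paper's actual API: the names `XAIPipeline`, `Explanation`, and `verbalize` are hypothetical, and the `history` list stands in loosely for the framework's provenance-tracking mechanism.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Explanation:
    """XAI output: a visualization spec, verbalization, surrogate model, etc."""
    method: str
    content: str

@dataclass
class XAIPipeline:
    """Sketch of an iterative understand -> diagnose -> refine loop."""
    model_state: dict
    history: List[Explanation] = field(default_factory=list)  # provenance tracking

    def explain(self, xai_method: Callable[[dict], str], name: str) -> Explanation:
        # XAI input is the current model state; the XAI method produces an explanation.
        exp = Explanation(method=name, content=xai_method(self.model_state))
        self.history.append(exp)
        return exp

    def refine(self, update: dict) -> None:
        # Steering: the user adjusts the model based on insights from explanations.
        self.model_state.update(update)

# Toy XAI method: verbalize which parameter currently dominates the model.
def verbalize(state: dict) -> str:
    top = max(state, key=state.get)
    return f"parameter '{top}' has the largest weight"

pipeline = XAIPipeline(model_state={"w1": 0.9, "w2": 0.1})
e1 = pipeline.explain(verbalize, "verbalization")   # understand/diagnose
pipeline.refine({"w2": 1.5})                         # refine after diagnosis
e2 = pipeline.explain(verbalize, "verbalization")    # re-check the model
```

The point of the sketch is the cycle itself: each explanation is recorded, and refinement feeds back into the next round of explanation, mirroring the iterative pipeline the abstract describes.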

Cited by 174 publications (163 citation statements)
References 61 publications
“…Spinner et al [SSSE20] develop a pipeline for model analysis and comparison that organizes such methods into three categories that focus on model input, model output, and model internals. In the latter category, a range of methods have been proposed to “open up” those black boxes and provide a view of the inner workings of ML models.…”
Section: Related Work
confidence: 99%
“…Aiming at a user-centric view of decision-making for fraud detection, the domain of visual analytics has also contributed visualization tools and capabilities for fraud detection [21]. Indeed, some researchers have acknowledged the importance of human-computer interaction and human-in-the-loop perspectives in XAI research, and the need to investigate new human-AI interfaces for explainable AI [22][23][24][25][26]. Therefore, a user-centric perspective is essential for reviewing fraud cases, whether in XAI or visual analytics research, as every wrong decision causes financial harm to customers and organizations.…”
Section: Introduction
confidence: 99%
“…However, most approaches rely only on data analysis and use machine learning solely for recommendations [11,22,25]. Decision support that uses machine learning techniques to provide explanations is a recent development, and as such, work in this area remains scarce [9,12,50].…”
Section: Visual Analytics
confidence: 99%