2019
DOI: 10.1109/tvcg.2019.2934619

The What-If Tool: Interactive Probing of Machine Learning Models

Abstract: A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs. To address this challenge, we created the What-If Tool, an open-source application that allows practitioners to probe, visualize, and analyze ML systems, with minimal coding. The What-If Tool lets practitioners test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models and subsets of i…
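The "hypothetical situations" mentioned in the abstract amount to counterfactual probing: edit one feature of a data point and compare the model's output before and after. A minimal sketch of that idea, using a toy linear scorer rather than the What-If Tool's own API (all names and weights here are illustrative):

```python
# Minimal "what-if" probe: perturb one feature of a data point and
# compare model outputs. The model is a stand-in linear scorer, not
# the What-If Tool's API.

def predict(point, weights, bias=0.0):
    """Toy linear model: weighted sum of features plus a bias term."""
    return sum(point[k] * w for k, w in weights.items()) + bias

def what_if(point, feature, new_value, weights):
    """Return (original score, score after editing a single feature)."""
    edited = dict(point)
    edited[feature] = new_value
    return predict(point, weights), predict(edited, weights)

weights = {"age": 0.02, "income": 0.5}       # illustrative feature weights
applicant = {"age": 40, "income": 1.2}       # one data point
before, after = what_if(applicant, "income", 2.0, weights)
# before = 0.02*40 + 0.5*1.2 = 1.4; after = 0.02*40 + 0.5*2.0 = 1.8
```

The What-If Tool exposes this same edit-and-rescore loop interactively, without requiring the practitioner to write the probing code themselves.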

Citations: Cited by 311 publications (292 citation statements)
References: References 21 publications
“…In contrast, we provide a range of views that are linked through subsets of the classification data and allow in‐depth comparisons between models. The What‐If‐Tool [WPB*20] enables users to compose a range of visualizations, including bar charts and confusion matrices on subsets of their data by slicing sets based on feature values. In contrast, we enable the creation of more complex subsets based on set algebra, and enable comparisons between models based on these sets.…”
Section: Related Workmentioning
confidence: 99%
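The distinction this citing work draws — feature-value slicing versus set algebra over slices — can be illustrated concretely. In the sketch below (all data and field names are invented for illustration), a slice is a set of row indices, and compound subsets are built with set union, intersection, and difference:

```python
# Illustrative contrast between feature-value slicing and set algebra.
# Slices are sets of row ids; set operations combine them into subsets
# that a single feature-value filter cannot express.

rows = [
    {"id": 0, "sex": "F", "pred": 1, "label": 1},
    {"id": 1, "sex": "M", "pred": 0, "label": 1},
    {"id": 2, "sex": "F", "pred": 0, "label": 0},
    {"id": 3, "sex": "M", "pred": 1, "label": 0},
]

def slice_by(rows, key, value):
    """Feature-value slice: ids of rows where row[key] == value."""
    return {r["id"] for r in rows if r[key] == value}

female = slice_by(rows, "sex", "F")
positive = slice_by(rows, "pred", 1)
correct = {r["id"] for r in rows if r["pred"] == r["label"]}

# Set algebra over slices: false positives among non-female rows.
false_positive_males = (positive - correct) - female
```

Here `female == {0, 2}` and `false_positive_males == {3}`; the second subset requires combining three slices and cannot be produced by filtering on one feature value alone.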
“…There has been development of semi-automated tools to help practitioners detect subgroup biases [2,3,4,7,9,20,21,22]. However, these tools are often designed in isolation from users [18].…”
Section: Background and Related Workmentioning
confidence: 99%
“…To address these types of issues, several approaches exist. For instance, researchers have recently released a tool called 'What-If', an open-source application that lets practitioners not only visualize their data but also test the performance of their ML model in hypothetical situations, for instance by modifying some characteristics of data points and analyzing the subsequent model behavior, or by measuring fairness metrics such as Equal Opportunity and Demographic Parity [Wexler et al, 2019]. Other approaches address bias by changing the training procedure or the structure of ML models themselves, for instance by transforming the raw data into a space in which discriminatory information cannot be found [Zemel et al, 2013], or by using a variational autoencoder to learn the latent structure of the dataset and using this structure to re-weight the importance of specific data points during model training [Ribeiro et al, 2016].…”
Section: Numerical Biasmentioning
confidence: 99%
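The two fairness metrics named in the citation above have standard definitions that are easy to state in code: Demographic Parity compares positive-prediction rates across groups, while Equal Opportunity compares true-positive rates. A minimal sketch, with invented group names and (prediction, label) data:

```python
# Sketch of the two fairness metrics named above, computed from per-group
# (prediction, label) pairs with 1 = positive. Data is illustrative.

def demographic_parity_rates(groups):
    """Positive-prediction rate per group: P(pred = 1 | group)."""
    return {g: sum(p for p, _ in pl) / len(pl) for g, pl in groups.items()}

def equal_opportunity_rates(groups):
    """True-positive rate per group: P(pred = 1 | label = 1, group)."""
    out = {}
    for g, pl in groups.items():
        pos = [(p, y) for p, y in pl if y == 1]
        out[g] = sum(p for p, _ in pos) / len(pos) if pos else float("nan")
    return out

groups = {
    "A": [(1, 1), (1, 0), (0, 1), (0, 0)],  # (prediction, label) pairs
    "B": [(1, 1), (0, 1), (0, 0), (0, 0)],
}
dp = demographic_parity_rates(groups)  # A: 0.5, B: 0.25 -> parity violated
eo = equal_opportunity_rates(groups)   # A: 0.5, B: 0.5  -> equal opportunity holds
```

This example also shows why the two metrics can disagree: groups A and B here have equal true-positive rates but different overall positive-prediction rates.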