2019
DOI: 10.1109/tvcg.2019.2934262
FairSight: Visual Analytics for Fairness in Decision Making

Abstract: Fig. 1. We propose a design framework to protect individuals and groups from discrimination in algorithm-assisted decision making. A visual analytic system, FairSight, is implemented based on our proposed framework, to help data scientists and practitioners make fair decisions. The decision is made through ranking individuals who are either members of a protected group (orange bars) or a non-protected group (green bars). (a) The system provides a pipeline to help users understand the possible bias in a machine…

Cited by 83 publications (92 citation statements). References 30 publications (32 reference statements).
“…Cabrera et al. [CEH*19] present a method for analyzing fairness by generating subsets of the data with different prediction performance, whereas Ahn and Lin [AL20] focus on identifying instance‐level bias. In contrast, our approach supports a broad range of tasks with flexible mechanisms to analyze different aspects of data and classifier performance.…”
Section: Related Work
confidence: 99%
“…Using different textures to encode true/false positives/negatives, this tool allows fast and accurate estimation of performance metrics at multiple levels of detail. Recently, the issue of model fairness has drawn growing attention [80,83,97]. For example, Ahn et al. [80] proposed a framework named FairSight and implemented a visual analytics system to support the analysis of fairness in ranking problems. They divided the machine learning pipeline into three phases (data, model, and outcome) and then measured the bias at both individual and group levels using different measures.…”
Section: Analyzing Training Results
confidence: 99%
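The distinction between group- and individual-level bias in a ranking can be made concrete with a small sketch. This is an illustrative example, not FairSight's actual measures or code: `group_parity_at_k` is a simple statistical-parity-style check (share of protected-group members in the top-k), and `individual_inversions` counts pairs where a lower-scoring candidate is ranked above a higher-scoring one. All names and the toy data are hypothetical.

```python
# Illustrative sketch only (not FairSight's implementation): two toy fairness
# measures over a ranking of candidate IDs.

def group_parity_at_k(ranking, protected, k):
    """Group-level check: fraction of the top-k that belongs to the protected group."""
    top_k = ranking[:k]
    return sum(1 for item in top_k if item in protected) / k

def individual_inversions(ranking, scores):
    """Individual-level check: count pairs ranked against their underlying scores
    (i.e., a lower-scoring candidate placed above a higher-scoring one)."""
    inversions = 0
    for i in range(len(ranking)):
        for j in range(i + 1, len(ranking)):
            if scores[ranking[i]] < scores[ranking[j]]:
                inversions += 1
    return inversions

# Hypothetical toy data: four candidates ranked a > b > c > d.
ranking = ["a", "b", "c", "d"]
protected = {"c", "d"}                                   # assumed protected group
scores = {"a": 0.9, "b": 0.6, "c": 0.8, "d": 0.4}        # assumed merit scores

print(group_parity_at_k(ranking, protected, k=2))        # 0.0: no protected member in top-2
print(individual_inversions(ranking, scores))            # 1: b (0.6) ranked above c (0.8)
```

A ranking can look acceptable on one level and biased on the other, which is why the framework inspects both: here the group-level parity at k=2 is zero even though only a single pairwise inversion exists at the individual level.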
“…The paper presents an interactive visualization technique to assist novice users of ML in understanding, examining, and verifying the performance of predictive models. FairSight [AL20] is another tool, designed to support different concepts of fairness in ranking decisions. To achieve that, FairSight distinguishes the required actions (understanding, computing, and others) that can lead to fairer decision making.…”
Section: In‐depth Categorization of Trust Against Facets of Intera…
confidence: 99%
“…With the comparison of various models and learning methods, it allows users to become knowledgeable about their usability. FairSight [AL20], which was discussed earlier, and the What‐If Tool [WPB*20] are two recent examples of how fairness is a trending yet underexplored subject in the community. The What‐If Tool (not discussed yet) enables domain experts to assess the performance of models in hypothetical scenarios, analyze the significance of several data features, and visualize model functionality across many ML models and batches of input data.…”
Section: In‐depth Categorization of Trust Against Facets of Intera…
confidence: 99%