2019
DOI: 10.48550/arxiv.1909.05167
Preprint

FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency

Kacper Sokol, Raul Santos-Rodriguez, Peter Flach

Abstract: Machine learning algorithms can take important decisions, sometimes legally binding, about our everyday life. In most cases, however, these systems and decisions are neither regulated nor certified. Given the potential harm that these algorithms can cause, qualities such as fairness, accountability and transparency of predictive systems are of paramount importance. Recent literature suggested voluntary self-reporting on these aspects of predictive systems - e.g., data sheets for data sets - but their scope is ofte…

Cited by 8 publications (15 citation statements)
References 12 publications
“…To show the importance of selecting a good surrogate model and the difference in explanations that it can produce, we explain a carefully selected data point from the two moons data set. The two moons data set - shown in Figure 3 and generated with scikit-learn - is a synthetic 2-dimensional, binary classification data set with a complex decision boundary. It is suitable for this type of experiment as, depending on which data point is chosen, the resulting explanations can be quite diverse.…”
Section: A3: Decision Tree-based Surrogate Explainer for Tabular Data
Mentioning confidence: 99%
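The quoted setup is straightforward to reproduce. Below is a minimal sketch, assuming an SVC black box and Gaussian local sampling (neither is specified in the quote), of a decision tree-based surrogate explainer on the scikit-learn two moons data set; it is an illustration, not the cited paper's exact procedure.

```python
# Minimal surrogate-explainer sketch on the two moons data set.
# The black box (SVC) and the local sampling scheme are assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic 2-dimensional, binary data set with a complex decision boundary.
X, y = make_moons(n_samples=500, noise=0.25, random_state=42)

# Black-box model to be explained (any classifier would do here).
black_box = SVC(gamma=2.0).fit(X, y)

# Data point chosen for explanation (a hypothetical choice).
x_star = X[0]

# Sample around x_star and label the samples with the black box.
rng = np.random.default_rng(42)
local_X = x_star + rng.normal(scale=0.5, size=(1000, 2))
local_y = black_box.predict(local_X)

# Shallow decision tree as the interpretable local surrogate.
surrogate = DecisionTreeClassifier(max_depth=3).fit(local_X, local_y)
print(f"Local fidelity: {surrogate.score(local_X, local_y):.2f}")
```

Keeping the tree shallow is what makes the surrogate readable; its fidelity on the local sample indicates how well the explanation mimics the black box around the chosen point.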
“…Sokol, Santos-Rodriguez, and Flach (2019) already showed how counterfactual explanations can be used to check individual fairness. They consider an instance to be treated unfairly if that instance received the undesirable label and there exists a counterfactual explanation for that instance that includes at least one protected attribute change (Sokol et al. (2019)). We follow this approach when we use counterfactual explanations to identify explicit bias for an individual.…”
Section: Counterfactual Fairness (Bis)
Mentioning confidence: 99%
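Read literally, that criterion is easy to express in code. The sketch below is one possible reading, with an assumed logistic model, feature layout and `unfairly_treated` helper; it is illustrative, not the authors' implementation.

```python
# Flag an instance as unfairly treated if it received the undesirable
# label and flipping a (binary) protected attribute alone changes the
# prediction. Model, data and feature indices are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: column 0 plays the role of a binary protected attribute.
X = rng.normal(size=(200, 3))
X[:, 0] = rng.integers(0, 2, size=200)
y = (X[:, 1] + 0.8 * X[:, 0] > 0).astype(int)  # deliberately biased labels

model = LogisticRegression().fit(X, y)

def unfairly_treated(x, protected_idx=0, undesirable=0):
    """True if x got the undesirable label and changing only the
    protected attribute yields a counterfactual with a different label."""
    if model.predict(x.reshape(1, -1))[0] != undesirable:
        return False
    counterfactual = x.copy()
    counterfactual[protected_idx] = 1.0 - counterfactual[protected_idx]
    return model.predict(counterfactual.reshape(1, -1))[0] != undesirable

flagged = [i for i in range(len(X)) if unfairly_treated(X[i])]
print(f"{len(flagged)} of {len(X)} instances flagged as unfairly treated")
```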
“…However, our metric will be different as it does not need to assume a causal graph (Kusner et al. (2017)), and does not use the distance to the counterfactual like Sharma et al. (2019), but will look at the actual explanations of decisions instead. Furthermore, we will use counterfactual explanations not only to show explicit bias, as done by Sokol et al. (2019), but also to get insights into the implicit bias, which is arguably the more challenging problem.…”
Section: (X) ∈ {+, −}
Mentioning confidence: 99%
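To make that contrast concrete, here is a small sketch of the two signals side by side, with made-up numbers and no claim to match either paper's method: a distance-based measure looks at how far the counterfactual is, while an explanation-content measure looks at which features had to change.

```python
# Contrast two bias signals: distance to the counterfactual versus the
# content of the counterfactual explanation. All values are illustrative.
import numpy as np

x = np.array([0.2, 1.0, 0.0])    # original instance
cf = np.array([0.2, 1.0, 1.0])   # its counterfactual
protected = {2}                   # index of a protected attribute (assumed)

# Distance-based signal (as in distance-to-counterfactual metrics).
distance = np.linalg.norm(cf - x)

# Explanation-content signal: did a protected attribute have to change?
changed = set(np.flatnonzero(cf != x).tolist())
explicit_bias = bool(changed & protected)

print(f"distance={distance:.2f}, protected attribute changed={explicit_bias}")
```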
“…These works propose methods to generate appropriate test data inputs for the model, and the predictions on those inputs characterize fairness. Some research has been conducted to build automated tools [2,64,67] and libraries [8] for fairness. In addition, empirical studies have been conducted to compare and contrast fairness aspects, interventions, trade-offs, developer concerns, and human aspects of fairness [10,26,33,35,77].…”
Section: Related Work
Mentioning confidence: 99%