FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency
Kacper Sokol, Raul Santos-Rodriguez, Peter Flach
Abstract: Machine learning algorithms can take important decisions, sometimes legally binding, about our everyday life. In most cases, however, these systems and decisions are neither regulated nor certified. Given the potential harm that these algorithms can cause, qualities such as fairness, accountability and transparency of predictive systems are of paramount importance. Recent literature suggested voluntary self-reporting on these aspects of predictive systems, e.g., data sheets for data sets, but their scope is often…
“…To show the importance of selecting a good surrogate model and the difference in explanations that it can produce, we explain a carefully selected data point from the two moons data set. The two moons data set, shown in Figure 3 and generated with scikit-learn, is a synthetic 2-dimensional, binary classification data set with a complex decision boundary. It is suitable for this type of experiment as, depending on which data point is chosen, the resulting explanations can be quite diverse.…”
Section: A3: Decision Tree-based Surrogate Explainer For Tabular Data (mentioning)
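The two moons set-up mentioned in the quote is easy to reproduce. Below is a minimal sketch using scikit-learn's make_moons; the sample size, noise level and choice of black-box classifier are illustrative assumptions rather than the configuration used in the paper.

```python
# Hedged sketch: generate the two moons data set and fit a black-box model
# whose individual predictions a surrogate explainer could later explain.
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

# Two interleaving half-circles: 2-D features, binary labels.
X, y = make_moons(n_samples=1000, noise=0.25, random_state=42)

# A non-linear black box with a complex decision boundary (illustrative choice).
black_box = RandomForestClassifier(n_estimators=100, random_state=42)
black_box.fit(X, y)

# Pick a single instance to explain; as the quote notes, which point is
# chosen strongly affects the resulting local explanation.
instance = X[0]
print(black_box.predict_proba(instance.reshape(1, -1)))
```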
Surrogate explainers of black-box machine learning predictions are of paramount importance in the field of eXplainable Artificial Intelligence since they can be applied to any type of data (images, text and tabular), are model-agnostic and are post-hoc (i.e., can be retrofitted). The Local Interpretable Model-agnostic Explanations (LIME) algorithm is often mistakenly unified with a more general framework of surrogate explainers, which may lead to a belief that it is the solution to surrogate explainability. In this paper we empower the community to "build LIME yourself" (bLIMEy) by proposing a principled algorithmic framework for building custom local surrogate explainers of black-box model predictions, including LIME itself. To this end, we demonstrate how to decompose the surrogate explainers family into algorithmically independent and interoperable modules and discuss the influence of these component choices on the functional capabilities of the resulting explainer, using the example of LIME.
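To make the modular view concrete, here is a hedged sketch of a decision tree-based local surrogate assembled from the kind of independent components the abstract describes: a data sampler, the black-box labelling step, and an interpretable surrogate model. The Gaussian sampler, sample size and tree depth are assumptions for illustration and do not reproduce any particular bLIMEy configuration or the FAT Forensics API.

```python
# Hedged sketch of a modular local surrogate explainer (bLIMEy-style).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text


def tree_surrogate_explanation(black_box, instance, feature_names,
                               scale=0.5, n_samples=2000, random_state=0):
    """Explain one black-box prediction with a shallow decision tree surrogate."""
    rng = np.random.default_rng(random_state)

    # Module 1: sample data in the neighbourhood of the explained instance.
    local_sample = rng.normal(loc=instance, scale=scale,
                              size=(n_samples, instance.shape[0]))

    # Module 2: label the sample with the black box being explained.
    labels = black_box.predict(local_sample)

    # Module 3: fit an interpretable surrogate; its structure (here, the
    # printed tree) serves as the explanation of the local behaviour.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=random_state)
    surrogate.fit(local_sample, labels)
    return export_text(surrogate, feature_names=feature_names)


# Usage with the black box and instance from the previous sketch:
# print(tree_surrogate_explanation(black_box, instance, ["x0", "x1"]))
```

Swapping the sampler (module 1) or the surrogate family (module 3) changes the explanation without touching the other components, which is the point the abstract makes about interoperable modules.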
“…Sokol, Santos-Rodriguez, and Flach (2019) already showed how counterfactual explanations can be used to check individual fairness. They consider an instance to be treated unfairly if that instance received the undesirable label and there exists a counterfactual explanation for that instance that includes at least one protected attribute change (Sokol et al. (2019)). We follow this approach when we use counterfactual explanations to identify explicit bias for an individual.…”
Section: Counterfactual Fairness (Bis) (mentioning, confidence: 99%)
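The individual-fairness criterion quoted above (an undesirable label plus a counterfactual that changes a protected attribute) can be expressed in a few lines. The dictionary representation of counterfactuals and the function name below are hypothetical, for illustration only.

```python
# Hedged sketch of the explicit-bias check for an individual: the instance
# received the undesirable label and at least one of its counterfactual
# explanations changes a protected attribute.

def treated_unfairly(label, counterfactuals, protected_attributes,
                     undesirable_label=0):
    """Counterfactuals are dicts mapping changed attributes to new values."""
    if label != undesirable_label:
        return False
    return any(
        any(attr in protected_attributes for attr in cf)
        for cf in counterfactuals
    )


# Example: a rejected applicant whose counterfactual flips 'sex' is flagged.
cfs = [{"sex": "male"}, {"income": 55000}]
print(treated_unfairly(0, cfs, protected_attributes={"sex", "race"}))  # True
```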
“…However, our metric will be different as it does not need to assume a causal graph (Kusner et al. (2017)), and does not use the distance to the counterfactual like Sharma et al. (2019), but will look at the actual explanations of decisions instead. Furthermore, we will use counterfactual explanations not only to show explicit bias, as done by Sokol et al. (2019), but also to get insights into the implicit bias, which is arguably the more challenging problem.…”
Section: (X) ∈ {+, −} (mentioning, confidence: 99%)
“…As already highlighted by Sokol et al. (2019), counterfactual explanations can be used to highlight explicit bias in a model, by searching for explanations that contain the sensitive attribute. We detect explicit bias by searching for counterfactual explanations that consist only of the sensitive attribute.…”
This paper studies how counterfactual explanations can be used to assess the fairness of a model. Using machine learning for high-stakes decisions is a threat to fairness as these models can amplify bias present in the dataset, and there is no consensus on a universal metric to detect this. The appropriate metric and method to tackle the bias in a dataset will be case-dependent and require insight into the nature of the bias first. We aim to provide this insight by integrating explainable AI (XAI) research with the fairness domain. More specifically, apart from being able to use (Predictive) Counterfactual Explanations to detect explicit bias when the model directly uses the sensitive attribute, we show that they can also be used to detect implicit bias when the model does not use the sensitive attribute directly but does use other correlated attributes, leading to a substantial disadvantage for a protected group. We call this metric PreCoF, or Predictive Counterfactual Fairness. Our experimental results show that our metric succeeds in detecting occurrences of implicit bias in the model by assessing which attributes are more present in the explanations of the protected group compared to the unprotected group. These results could help policymakers decide whether this discrimination is justified.
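A hedged sketch of both checks described above follows: the strict explicit-bias test (a counterfactual that consists only of the sensitive attribute) and a PreCoF-style implicit-bias assessment that compares how often each attribute appears in the explanations of the protected versus the unprotected group. The data structures and function names are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter


def explicit_bias(changed_attributes, sensitive_attribute):
    """Strict check: the counterfactual consists solely of the sensitive attribute."""
    return changed_attributes == {sensitive_attribute}


def attribute_frequencies(explanations):
    """Fraction of instances whose counterfactual changes each attribute."""
    counts = Counter(attr for changed in explanations for attr in changed)
    total = max(len(explanations), 1)
    return {attr: n / total for attr, n in counts.items()}


def precof_style_gap(protected_explanations, unprotected_explanations):
    """Per-attribute difference in explanation frequency between the groups."""
    prot = attribute_frequencies(protected_explanations)
    unprot = attribute_frequencies(unprotected_explanations)
    return {a: prot.get(a, 0.0) - unprot.get(a, 0.0)
            for a in set(prot) | set(unprot)}


# Attributes with a large positive gap have to change disproportionately
# often for the protected group, hinting at implicit bias.
gap = precof_style_gap([{"education"}, {"education", "hours"}],
                       [{"income"}, {"hours"}])
print(sorted(gap.items(), key=lambda kv: kv[1], reverse=True))
```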
“…These works propose methods to generate appropriate test inputs for the model, and the predictions on those inputs characterize fairness. Some research has been conducted to build automated tools [2,64,67] and libraries [8] for fairness. In addition, empirical studies have been conducted to compare and contrast fairness aspects, interventions, trade-offs, developers' concerns, and human aspects of fairness [10,26,33,35,77].…”
In recent years, many incidents have been reported where machine learning models exhibited discrimination among people based on race, sex, age, etc. Research has been conducted to measure and mitigate unfairness in machine learning models. For a machine learning task, it is common practice to build a pipeline that includes an ordered set of data preprocessing stages followed by a classifier. However, most of the research on fairness has considered a single-classifier prediction task. What are the fairness impacts of the preprocessing stages in a machine learning pipeline? Furthermore, studies have shown that the root cause of unfairness is often ingrained in the data itself rather than in the model, yet no research has been conducted to measure the unfairness caused by a specific transformation made in the data preprocessing stage. In this paper, we introduce a causal method of fairness to reason about the fairness impact of data preprocessing stages in an ML pipeline. We leverage existing metrics to define the fairness measures of the stages and conduct a detailed fairness evaluation of the preprocessing stages in 37 pipelines collected from three different sources. Our results show that certain data transformers cause the model to exhibit unfairness. We identify a number of fairness patterns in several categories of data transformers. Finally, we show how the local fairness of a preprocessing stage composes into the global fairness of the pipeline, and we use this fairness composition to choose an appropriate downstream transformer that mitigates unfairness in the machine learning pipeline.
CCS Concepts: • Software and its engineering → Software creation and management; • Computing methodologies → Machine learning.
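As a rough illustration of measuring a single preprocessing stage's fairness impact, the sketch below trains the same classifier with and without a given transformer and compares statistical parity difference on the predictions. The metric, the transformer and the synthetic data are assumptions for illustration and do not reproduce the paper's causal method or experimental setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def statistical_parity_difference(y_pred, privileged_mask):
    """P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged)."""
    return y_pred[~privileged_mask].mean() - y_pred[privileged_mask].mean()


def stage_fairness_impact(transformer, X_train, y_train, X_test, priv_test):
    """Change in statistical parity difference caused by adding one stage."""
    baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    staged = make_pipeline(transformer, LogisticRegression(max_iter=1000))
    staged.fit(X_train, y_train)

    spd_without = statistical_parity_difference(baseline.predict(X_test), priv_test)
    spd_with = statistical_parity_difference(staged.predict(X_test), priv_test)
    return spd_with - spd_without


# Synthetic example; the privileged-group mask is hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
privileged = X[:, 1] > 0
print(stage_fairness_impact(StandardScaler(), X[:400], y[:400],
                            X[400:], privileged[400:]))
```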