2020 · DOI: 10.21105/joss.01904

FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems


Cited by 33 publications (20 citation statements) · References 6 publications
“…Many testing strategies have been developed [3,17,49] to detect unfairness in software systems. Recently, a few tools have been proposed [2,4,44,48] to enhance the fairness of ML classifiers. However, it remains unclear how prevalent fairness issues are in ML models in practice.…”
Section: Introduction
confidence: 99%
“…3. LIME (Local Interpretable Model-agnostic Explanations) (25) is a standard tool of explainable AI and is implemented in (22). It provides a local measure of feature contribution that can be used with any machine learning classifier.…”
Section: Interpretability Methods
confidence: 99%
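As a rough illustration of the local, model-agnostic explanations this citation describes, the sketch below uses the standalone lime package on a scikit-learn classifier. The dataset, model, and parameter choices are assumptions for illustration, not the cited study's setup.

```python
# Minimal sketch: a LIME tabular explanation for one prediction.
# Assumes the standalone `lime` and `scikit-learn` packages; the
# dataset and model here are illustrative, not from the cited study.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain a single instance: which features pushed its prediction?
explanation = explainer.explain_instance(
    data.data[0], clf.predict_proba, num_features=4
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Because LIME only queries the classifier's prediction function, the same pattern works for any model that exposes class probabilities.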
“…In this study we developed graphical methods to explain which textual elements contribute to the classification of health records from CAP by a machine learning algorithm. We also used methods from the FAT Forensics toolbox (22) and the TreeInterpreter Python package (23) to quantify feature contributions. These contributions were then displayed to the user in an interpretable and visually engaging format.…”
Section: Introduction
confidence: 99%
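To make the "quantify feature contributions" step concrete, here is a minimal sketch of the TreeInterpreter decomposition mentioned in the citation. The data and model are illustrative stand-ins, not the cited study's health-record pipeline.

```python
# Minimal sketch: per-feature contributions with TreeInterpreter.
# Assumes the `treeinterpreter` and `scikit-learn` packages; the data
# and model are illustrative substitutes for the cited study's setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from treeinterpreter import treeinterpreter as ti

data = load_breast_cancer()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Decompose one prediction into a bias term (the training-set prior)
# plus one additive contribution per feature.
instance = data.data[:1]
prediction, bias, contributions = ti.predict(clf, instance)

print("prediction:", prediction[0])
print("bias:", bias[0])
for name, contrib in zip(data.feature_names, contributions[0]):
    # For classifiers, each contribution has one entry per class.
    print(f"{name}: {contrib}")
```

The bias and per-feature contributions sum to the prediction, which is what makes this decomposition suitable for the kind of visual display the citation describes.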
“…These works propose methods to generate appropriate test data inputs for the model, and the predictions on those inputs characterize fairness. Some research has been conducted to build automated tools [2,64,67] and libraries [8] for fairness. In addition, empirical studies have been conducted to compare and contrast fairness aspects, interventions, trade-offs, developer concerns, and human aspects of fairness [10,26,33,35,77].…”
Section: Related Work
confidence: 99%
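The test-generation idea this citation alludes to can be sketched very simply: perturb only a protected attribute and check whether the prediction flips. The model, data, and column index below are hypothetical; real fairness-testing tools search for such inputs far more systematically.

```python
# Minimal sketch of fairness testing by input generation: flip only a
# binary protected attribute and flag rows where the prediction changes.
# The model, data, and protected-column index are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 0] = rng.integers(0, 2, size=500)  # column 0: binary protected attribute
y = (X[:, 1] + X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def discriminatory_inputs(model, X, protected_col=0):
    """Return rows whose prediction changes when only the protected
    attribute is flipped (a simple individual-fairness check)."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    return X[model.predict(X) != model.predict(X_flipped)]

found = discriminatory_inputs(model, X)
print(f"{len(found)} of {len(X)} inputs change prediction with the protected attribute")
```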