Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering 2020
DOI: 10.1145/3324884.3418932

Making fair ML software using trustworthy explanation

Abstract: Machine learning software is being used in many applications (finance, hiring, admissions, criminal justice) that have a huge social impact. Sometimes, however, the behavior of this software is biased and it discriminates based on sensitive attributes such as sex and race. Prior works concentrated on finding and mitigating bias in ML models. A recent trend is using instance-based, model-agnostic explanation methods such as LIME [36] to find bias in model predictions. Our work concentrates on finding sho…
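As an illustration of the instance-based, model-agnostic explanation approach the abstract refers to, here is a minimal sketch using the lime package's LimeTabularExplainer on a tabular classifier. The feature names, synthetic data, and choice of classifier are assumptions for illustration only, not the paper's actual setup.

# Minimal sketch (assumed setup): explain one prediction of a tabular
# classifier with LIME and check whether a sensitive feature ("sex")
# receives a large weight in the local explanation.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = ["age", "education_num", "hours_per_week", "sex"]  # hypothetical names
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["deny", "grant"],
    mode="classification",
)
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)

# Each pair is (feature condition, local weight); a large weight on "sex"
# would flag a potentially biased individual prediction.
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")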

Cited by 26 publications (12 citation statements). References 31 publications.

“…• Baseline: We used a logistic regression model for creating baseline results. Logistic regression is widely used in the fairness domain as a baseline model [36, 42-44, 68]. We used the scikit-learn implementation with 'l2' regularization, the 'lbfgs' solver, and a maximum of 1000 iterations.…”
Section: Methodology, 3.1 Models (mentioning)
confidence: 99%
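As a rough sketch of the baseline configuration described in this citation statement (scikit-learn logistic regression with 'l2' regularization, the 'lbfgs' solver, and a maximum of 1000 iterations), the snippet below shows one way to set it up; the synthetic data and train/test split are placeholders, not the citing study's data.

# Sketch of the quoted baseline: scikit-learn LogisticRegression with
# l2 regularization, the lbfgs solver, and max_iter=1000 (placeholder data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = LogisticRegression(penalty="l2", solver="lbfgs", max_iter=1000)
baseline.fit(X_train, y_train)
print("baseline accuracy:", baseline.score(X_test, y_test))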
“…Previous studies showed that certain features contribute more to the predictive quality of the model [28, 56]. Feature importance in prediction and correlation of features with the sensitive attribute have also led to bias detection [16, 31] in ML models. However, whether creating new features (by removing certain semantics) from a potentially biased feature increases fairness is an open question.…”
Section: Fairness Analysis Of Stages (mentioning)
confidence: 99%
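One simple way to operationalize the bias-detection idea in this statement is to compare each feature's importance with its correlation to a protected attribute. The sketch below does this with scikit-learn and pandas on synthetic data with an assumed 'sex' column; it illustrates the general idea, not the cited work's method.

# Illustrative sketch (assumed data and column names): relate feature
# importance to each feature's correlation with a protected attribute.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(40, 10, 1000),
    "hours_per_week": rng.normal(40, 5, 1000),
    "sex": rng.integers(0, 2, 1000).astype(float),  # protected attribute (assumed)
})
y = (df["age"] + 5 * df["sex"] + rng.normal(0, 5, 1000) > 45).astype(int)

model = RandomForestClassifier(random_state=0).fit(df, y)
importance = permutation_importance(model, df, y, random_state=0).importances_mean

for name, imp in zip(df.columns, importance):
    corr = df[name].corr(df["sex"])  # correlation with the protected attribute
    print(f"{name:15s} importance={imp:.3f} corr_with_sex={corr:+.3f}")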
“…Table 1 shows seven fairness datasets used in this work. These datasets are very popular in the fairness domain and have been used by many prior researchers [9, 11-14]. All of these datasets contain at least one protected attribute.…”
Section: Fairness In ML Software (mentioning)
confidence: 99%
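To illustrate the "dataset plus protected attribute" pattern this statement describes, the sketch below records a few benchmarks that are commonly used in fairness research together with typical protected attributes; this is a generic illustration, not the citing paper's exact list of seven datasets.

# Illustrative mapping (not the citing paper's exact seven datasets):
# common fairness benchmarks and the protected attributes typically studied.
FAIRNESS_DATASETS = {
    "adult":         {"protected": ["sex", "race"], "label": "income>50K"},
    "compas":        {"protected": ["sex", "race"], "label": "two_year_recid"},
    "german_credit": {"protected": ["sex", "age"],  "label": "good_credit"},
}

def protected_columns(name: str) -> list[str]:
    """Return the protected attributes recorded for a dataset name."""
    return FAIRNESS_DATASETS[name]["protected"]

for name in FAIRNESS_DATASETS:
    print(name, "->", protected_columns(name))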