2023
DOI: 10.1016/j.ipm.2022.103224
Auditing fairness under unawareness through counterfactual reasoning

Cited by 18 publications (12 citation statements)
References 17 publications
“…Bias auditing tools typically rely on a combination of several methods to detect and analyze bias in AI systems. These methods can include fairness metrics, counterfactual analysis, sensitivity analysis, algorithmic transparency, and adversarial testing [25][26][27]. For example, a bias auditing tool may apply fairness metrics to spotlight potential biases in a model and then use counterfactual analysis to understand the underlying causes of the bias.…”
Section: Bias Auditing Tools
confidence: 99%
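The two-step audit pattern described in the statement above (a fairness metric flags a disparity, then counterfactual analysis probes its cause) can be sketched as follows. This is a minimal illustration, not any tool's actual implementation: the toy scoring rule, the applicant data, and the attribute names are all hypothetical.

```python
# Sketch of a bias audit combining a fairness metric with counterfactual
# analysis. The model and all data below are hypothetical illustrations.

def approval_model(applicant):
    # Toy scoring rule standing in for a trained classifier.
    score = applicant["income"] / 1000 + (5 if applicant["group"] == "A" else 0)
    return score >= 50

applicants = [
    {"income": 48000, "group": "A"},
    {"income": 48000, "group": "B"},
    {"income": 52000, "group": "A"},
    {"income": 46000, "group": "B"},
]

# Step 1: a fairness metric (demographic parity difference) flags disparity.
def approval_rate(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(approval_model(a) for a in members) / len(members)

parity_gap = approval_rate("A") - approval_rate("B")

# Step 2: counterfactual analysis probes the cause by flipping only the
# sensitive attribute and counting how many decisions change.
def counterfactual_flips():
    flips = 0
    for a in applicants:
        cf = dict(a, group="B" if a["group"] == "A" else "A")
        if approval_model(a) != approval_model(cf):
            flips += 1
    return flips

print(f"demographic parity gap: {parity_gap:.2f}")
print(f"decisions changed by flipping the sensitive attribute: {counterfactual_flips()}")
```

Here the metric alone only reveals *that* group approval rates differ; the counterfactual flips show the disparity is driven directly by the sensitive attribute rather than by correlated features.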
“…However, defining what constitutes fairness in AI is a complex and multifaceted task. So far, seven main types of definitions have been proposed, including individual fairness [40,41], group fairness [42], equality of opportunity [11], disparate treatment [43], fairness through unawareness [44,45], disparate impact [46], and subgroup fairness [47].…”
Section: Definition
confidence: 99%
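Two of the definitions listed above, group fairness (demographic parity) and equality of opportunity, are easy to contrast on toy data: a model can satisfy one while violating the other. The records below are hypothetical, chosen only to make that contrast visible.

```python
# Sketch contrasting two fairness definitions on hypothetical data.
# Each record is (group, true_label, predicted_label).
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
]

def positive_rate(group):
    # Fraction of the group predicted positive, regardless of true label.
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def true_positive_rate(group):
    # Fraction of truly qualified group members predicted positive.
    rows = [r for r in records if r[0] == group and r[1] == 1]
    return sum(r[2] for r in rows) / len(rows)

# Group fairness (demographic parity): equal positive prediction rates.
parity_gap = abs(positive_rate("A") - positive_rate("B"))

# Equality of opportunity: equal true positive rates among the qualified.
opportunity_gap = abs(true_positive_rate("A") - true_positive_rate("B"))

print(f"demographic parity gap: {parity_gap:.2f}")
print(f"equality of opportunity gap: {opportunity_gap:.2f}")
```

On these records both groups receive positive predictions at the same rate (parity gap of zero), yet qualified members of group A are approved half as often as qualified members of group B, which is precisely why multiple definitions coexist.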
“…Once the model performs the prediction task, the customer is provided with an explanation, especially in the case of rejection. In previous work [17][18][19], the authors describe different pipelines for generating natural-language explanations, using both Shapley values and counterfactual reasoning. As a game-theoretic approach, Shapley values provide a ranking of the features most discriminative for the decision task.…”
Section: A Trustworthy Credit Assessment Platform
confidence: 99%
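The Shapley-value mechanism mentioned in the statement above, attributing a prediction to its input features by averaging marginal contributions over feature orderings, can be computed exactly for a small model. This is a generic sketch, not the cited authors' pipeline: the toy credit model, feature names, and values are hypothetical.

```python
from itertools import permutations

# Exact Shapley values for a tiny scoring function, illustrating how
# Shapley-based explanations rank the features driving a credit decision.
# The model and feature values below are hypothetical.

features = {"income": 60.0, "debt": -20.0, "history": 10.0}
baseline = {"income": 0.0, "debt": 0.0, "history": 0.0}

def score(x):
    # Toy credit model: additive terms plus an income/history interaction.
    return x["income"] + x["debt"] + 0.5 * x["income"] * (x["history"] > 0)

def shapley_values():
    # Average each feature's marginal contribution over all orderings in
    # which features are switched from their baseline to their actual value.
    names = list(features)
    contrib = {n: 0.0 for n in names}
    orders = list(permutations(names))
    for order in orders:
        x = dict(baseline)
        prev = score(x)
        for name in order:
            x[name] = features[name]
            cur = score(x)
            contrib[name] += cur - prev
            prev = cur
    return {n: contrib[n] / len(orders) for n in names}

ranked = sorted(shapley_values().items(), key=lambda kv: abs(kv[1]), reverse=True)
print(ranked)
```

By the efficiency property, the values sum to the gap between the full prediction and the baseline, so the ranking accounts for the entire decision; exact enumeration is exponential in the number of features, which is why practical tools rely on sampling-based approximations.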