2021
DOI: 10.48550/arxiv.2112.05700
Preprint

A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions

Abstract: In a world of daily emerging scientific inquiry and discovery, the prolific launch of machine learning across industries comes as little surprise to those familiar with the potential of ML. Nor should the congruent expansion of ethics-focused research that emerged in response to issues of bias and unfairness stemming from those very same applications. Fairness research, which focuses on techniques to combat algorithmic bias, is now more supported than ever before. A large portion of fairness …

Cited by 12 publications (13 citation statements). References 60 publications.
“…While AI fairness checklists are important in organisations to formalise ad-hoc processes, the checklists need to be aligned with existing workflows to avoid their misuse [62]. Richardson and Gilbert [63] review various popular software toolkits and checklists as solutions for tackling algorithmic bias. Seedat et al. [64] provide checklists for the four stages of the ML pipeline: Data, Training, Testing, and Deployment.…”
Section: AI Ethics Checklists and Guidelines
confidence: 99%
“…Thus, systematic selection of appropriate variables and collection of relevant data is necessary to reduce algorithmic bias [63].…”
Section: Layer 2: Data Collection and Selection Layer
confidence: 99%
“…Allocative harms encompass problems arising from how algorithmic decisions are distributed unevenly to different groups of people [22,162]. These harms occur when a system withholds information, opportunities, or resources [22] from historically marginalized groups in domains that affect material well-being [127], such as housing [41], employment [176], social services [15,176], finance [100], education [102], and healthcare [139].…”
Section: Allocative Harms: Inequitable Distribution of Resources
confidence: 99%
“…Other research examples, once informed by practitioners' needs, focused on designing different AI fairness solutions: checklists aligned with teams' workflows and organizational ad-hoc processes, and fairness frameworks or internal algorithmic auditing protocols designed for industrial applications [61,91]. Recently, Richardson and Gilbert [97] proposed a complete industry framework of stakeholders and fairness recommendations while specifying operationalization pitfalls. Ibáñez and Olmeda [47] distinguished two main perspectives on operationalizing fairness practices in organizations: a bottom-up, reactive approach, where prior organizational processes constrain best practices; or a top-down, proactive approach, where principles and methods are translated into actionable, iterative steps designed with stakeholders' needs and concerns in mind.…”
Section: An Industry Perspective on Bias and Fairness in AI
confidence: 99%