2022
DOI: 10.48550/arxiv.2207.10223
Preprint

Fairness Testing: A Comprehensive Survey and Analysis of Trends

Abstract: Software systems are vulnerable to fairness bugs and frequently exhibit unfair behaviors, making software fairness an increasingly important concern for software engineers. Research has focused on helping software engineers to detect fairness bugs automatically. This paper provides a comprehensive survey of existing research on fairness testing. We collect 113 papers and organise them based on the testing workflow (i.e., the testing activities) and the testing components (i.e., where to find fairness bugs) for…

Cited by 7 publications (8 citation statements) | References 127 publications
“…Additionally, we compare our approaches with two state-of-the-art bias mitigation methods for word embeddings: Hard Debiasing (HD) by Bolukbasi et al. [20] and Linear Projection (LP) by Dev and Phillips [29], as they are the most representative and their implementation is publicly available. Moreover, in Section 5.4, we provide information on the runtime and space complexity of the search approaches.…”
Section: RQ3 Multi-objective Optimisation: Are Multi-objective Search...
confidence: 99%
“…Fairness in software systems aims to provide algorithms that operate in a non-discriminatory manner [1] with respect to protected attributes such as gender, race, or age. Ensuring fairness is a crucial non-functional property of modern software systems [2][3][4][5][6], especially those that rely on Artificial Intelligence (AI) and Machine Learning (ML) algorithms to make decisions that can dramatically affect people's lives, such as criminal justice [7,8], finance [9], and recruitment [10]. For example, it has been found that software systems used for recidivism assessment in justice courts falsely label black defendants as future criminals at almost twice the rate of white defendants [7].…”
Section: Introduction
confidence: 99%
“…Finally, concerning the assessment of quality attributes in ML systems, there is intense research activity, primarily in the fairness-testing domain [20]. In general, the problem of fairness assurance can be defined as a search-based problem over different ML algorithms and fairness methods [20]. Many tools have been proposed for automatic fairness testing, such as [18,63,69], to name a few.…”
Section: Related Work
confidence: 99%
“…Many tools have been proposed for automatic fairness testing, such as [18,63,69], to name a few. However, these tools tend to require programming skills and are thus unfriendly to non-technical stakeholders [20]. In our work, we aim to fill this gap by proposing a low-code framework that, by generating and executing suitable experiments, supports users (including non-experts) in the quality-based development of ML systems, returning the trained ML model with the best quality.…”
Section: Related Work
confidence: 99%
“…Of the solutions proposed to deal with these challenges, the one most frequently adopted by the ML community is the use of statistical equivalency metrics, such as an equal rate of errors between demographic groups [7]. This includes work to enable the detection and analysis of fairness concerns [8], as well as structured processes for the analysis of fairness concerns within a data set [9]. This emerging research area of statistical analysis of fairness exists as part of a growing body of work focused on ethical computing [10], and stands alongside established work in the Human-Computer Interaction (HCI) field.…”
confidence: 99%
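The last excerpt mentions "statistical equivalency metrics such as an equal rate of errors between demographic groups." A minimal sketch of that idea, assuming binary labels, binary predictions, and a single protected attribute (the function name and toy data are illustrative, not from any of the cited papers):

```python
# Sketch: per-group error rates as a simple "equal error rate" fairness check.
# A large gap between groups signals a potential fairness bug.
def group_error_rates(y_true, y_pred, group):
    """Return {group_value: misclassification rate} over the samples."""
    rates = {}
    for g in sorted(set(group)):
        idx = [i for i, gi in enumerate(group) if gi == g]
        errors = sum(y_true[i] != y_pred[i] for i in idx)
        rates[g] = errors / len(idx)
    return rates

# Toy example: group "a" has 2 errors out of 4, group "b" has 1 out of 4.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(group_error_rates(y_true, y_pred, group))
# → {'a': 0.5, 'b': 0.25}
```

In practice, surveyed fairness-testing tools refine this idea by comparing false positive and false negative rates separately per group (as in the recidivism example above), rather than a single overall error rate.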