Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2019)
DOI: 10.1145/3338906.3338937
Black box fairness testing of machine learning models

Cited by 136 publications (176 citation statements). References 12 publications.
“…Besides coverage criteria, a large body of testing methods has been proposed for testing machine learning models, such as fuzzing [18,25,44,62,63,68,71,80], symbolic execution [3,23,51,55], runtime validation [54,64], and fairness testing [3,62,77]. DeepTest [59] uses nine types of realistic image transformations to generate test images, which uncovered more than 1,000 erroneous behaviors of DNNs used in autonomous driving systems.…”
Section: Related Work
confidence: 99%
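The metamorphic-testing idea behind DeepTest, as described in the excerpt above, can be sketched as follows. The transformation, the toy model, and all names here are illustrative assumptions for this sketch, not DeepTest's actual implementation:

```python
# Hedged sketch of metamorphic testing in the style of DeepTest: apply a
# semantics-preserving image transformation and flag an erroneous behavior
# if the model's prediction changes. The model and transform are toy
# assumptions, not DeepTest's real components.
import numpy as np

def brightness_shift(img, delta=30):
    """A label-preserving transformation: uniformly shift pixel brightness."""
    return np.clip(img.astype(int) + delta, 0, 255).astype(np.uint8)

def metamorphic_check(model, img, transform):
    """Return True if the prediction is consistent under the transformation."""
    return model(img) == model(transform(img))

# Toy "model": predicts 1 when the image is mostly bright.
toy_model = lambda img: int(img.mean() > 127)

img = np.full((8, 8), 100, dtype=np.uint8)
print(metamorphic_check(toy_model, img, brightness_shift))  # prints False
```

Here the check fails because the toy model's decision boundary sits between the original and the shifted brightness; DeepTest reports exactly such prediction changes under realistic transformations as candidate erroneous behaviors.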
“…Intuitively, "model fairness" is highly correlated with attributes of people. In fact, this design choice is consistently taken by the majority, if not all, of previous research in this field [Galhotra et al, 2017;Udeshi et al, 2018;Aggarwal et al, 2019].…”
Section: Sentence Perturbator P
confidence: 99%
“…Existing research has proposed various techniques to expose fairness violations in NLP models. However, these works suffer from various drawbacks, such as relying on heavyweight (unscalable) statistical or symbolic analysis tools, using pre-defined templates for input generation, or processing only structured data tables [Galhotra et al, 2017;Udeshi et al, 2018;Aggarwal et al, 2019]. Inspired by relevant research in software engineering, we advocate formulating the analysis of model fairness as a specific software testing task in which typical NLP models are treated as a "black box."…”
Section: Biased Review
confidence: 99%
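The black-box fairness tests these excerpts refer to (Galhotra et al.'s THEMIS, Udeshi et al., Aggarwal et al.) share a simple core: flip only the protected attribute of an input and report an individual-discrimination instance if the prediction changes. A minimal sketch under assumed names and a toy feature layout, not any of those tools' actual APIs:

```python
# Hedged sketch of black-box individual fairness testing: vary only the
# protected attribute and check whether the model's decision flips.
# The feature layout and the biased toy model are assumptions.
def is_discriminatory(model, x, protected_idx, protected_values):
    """Return True if changing only x[protected_idx] changes the prediction."""
    base = model(x)
    for v in protected_values:
        if v == x[protected_idx]:
            continue
        x2 = list(x)
        x2[protected_idx] = v
        if model(x2) != base:
            return True
    return False

# Toy biased model: the decision depends on the protected attribute (index 0).
biased = lambda x: int(x[1] > 5 and x[0] == 1)
print(is_discriminatory(biased, [0, 7], 0, [0, 1]))  # prints True
```

Because the check needs only the model's predictions, it works on any model exposed as a function, which is what makes the approach "black box."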
“…For all these reasons, this paper explores ML bias mitigation. In the recent software engineering literature, we found some works that identify bias in machine learning software systems [7,11], but no prior work explains the reason behind the bias or removes it from the software.…”
Section: Introduction
confidence: 99%
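Identifying bias, as the works cited in this excerpt do, typically starts from a group-fairness metric. One common choice is the disparate impact ratio: the favorable-outcome rate for the unprivileged group divided by that of the privileged group, with values well below 1 often flagged. The data and group labels below are illustrative assumptions:

```python
# Hedged sketch of one common bias measure, the disparate impact ratio.
# The outcomes and group assignments are made-up illustrative data.
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """P(favorable | unprivileged) / P(favorable | privileged)."""
    def rate(g):
        favorable = sum(o for o, grp in zip(outcomes, groups) if grp == g)
        total = sum(1 for grp in groups if grp == g)
        return favorable / total
    return rate(unprivileged) / rate(privileged)

outcomes = [1, 0, 0, 1, 1, 1]            # 1 = favorable decision
groups   = ["a", "a", "a", "b", "b", "b"]  # "a" = unprivileged group
print(disparate_impact(outcomes, groups, "a", "b"))  # prints 0.3333333333333333
```

A ratio this far below 1 would indicate group-level bias worth explaining and mitigating, which is the gap this excerpt's paper sets out to address.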