2020
DOI: 10.1007/978-3-030-58475-7_49

Towards Formal Fairness in Machine Learning

Abstract: One of the challenges of deploying machine learning (ML) systems is fairness. Datasets often include sensitive features, which ML algorithms may unwittingly use to create models that exhibit unfairness. Past work on fairness offers no formal guarantees in its results. This paper proposes to exploit formal reasoning methods to tackle fairness. Starting from an intuitive criterion for fairness of an ML model, the paper formalises it, and shows how fairness can be represented as a decision problem, given some l…
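The abstract's decision-problem view of fairness can be illustrated with a minimal brute-force sketch over a toy boolean feature space: the model is deemed unfair iff two points that agree on every non-sensitive feature receive different predictions. All names below are hypothetical; the paper itself formalises this as a query for a formal reasoning engine rather than enumerating points.

```python
from itertools import product

def is_fair(model, n_features, sensitive):
    """Decide fairness of `model` over {0,1}^n_features by brute force:
    unfair iff two points agreeing on all non-sensitive features
    receive different predictions."""
    for x in product([0, 1], repeat=n_features):
        for y in product([0, 1], repeat=n_features):
            if all(x[i] == y[i] for i in range(n_features) if i not in sensitive):
                if model(x) != model(y):
                    return False  # witness pair found: model is unfair
    return True

# Toy models over 3 boolean features; feature 2 is the sensitive one.
uses_sensitive = lambda x: x[0] and x[2]    # prediction depends on feature 2
ignores_sensitive = lambda x: x[0] or x[1]  # prediction ignores feature 2
```

In practice the feature space is too large to enumerate, which is why a symbolic encoding (e.g. for a SAT/SMT solver) is the natural route; the sketch only fixes the semantics of the decision problem.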

Cited by 23 publications (17 citation statements) · References 51 publications (82 reference statements)
“…The notation used throughout the paper is summarized in Table 1 (see Page 18). (We should note that some of the notation introduced in this paper has also been used in a number of recent works [93,139,94,86,118,87,98,119,100,89,55,83,121,82,88]. )…”
Section: Summary Of Notation
confidence: 99%
“…A dataset is consistent if, for any point x in feature space, the dataset contains an instance (x, c) for at most one class c ∈ K. A classifier is exact if the training data is consistent and the classifier correctly classifies every instance in it. (A classifier is perfect if it is exact and of smallest size [87,90].) Furthermore, we assume that a DT learner will not branch on variables that take a constant value on all the instances in training data that are consistent with the already chosen literals.…”
Section: Path Explanation Redundancy In Theory
confidence: 99%
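The quoted definitions of consistency and exactness translate directly into code. A minimal sketch, assuming instances are hashable feature tuples paired with class labels (function names are hypothetical):

```python
def is_consistent(dataset):
    """A dataset is consistent if each feature-space point x appears
    with at most one class label c."""
    label = {}
    for x, c in dataset:
        if label.setdefault(x, c) != c:
            return False  # same point x seen with two distinct labels
    return True

def is_exact(classifier, dataset):
    """A classifier is exact if the training data is consistent and the
    classifier correctly classifies every training instance."""
    return is_consistent(dataset) and all(classifier(x) == c for x, c in dataset)
```

Note that exactness is a property of the classifier relative to a given (consistent) training set; "perfect" additionally requires minimality of size, which this sketch does not attempt to check.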
“…Although recent years have witnessed a growing interest in finding explanations of machine learning (ML) models (Lipton 2018; Guidotti et al 2019; Weld and Bansal 2019; Monroe 2021), explanations have been studied from different perspectives and in different branches of AI at least since the 80s (Shanahan 1989; Falappa, Kern-Isberner, and Simari 2002; Pérez and Uzcátegui 2003), including more recently in constraint programming (Amilhastre, Fargier, and Marquis 2002; Bogaerts et al 2020; Gamba, Bogaerts, and Guns 2021). In the case of ML models, non-heuristic explanations have been studied in recent years (Shih, Choi, and Darwiche 2018; Ignatiev, Narodytska, and Marques-Silva 2019a; Shih, Choi, and Darwiche 2019; Narodytska et al 2019; Ignatiev, Narodytska, and Marques-Silva 2019b,c; Darwiche and Hirth 2020; Ignatiev et al 2020a; Ignatiev 2020; Audemard, Koriche, and Marquis 2020; Marques-Silva et al 2020; Barceló et al 2020; Ignatiev et al 2020b; Izza, Ignatiev, and Marques-Silva 2020; Wäldchen et al 2021; Izza and Marques-Silva 2021; Malfa et al 2021; Ignatiev and Marques-Silva 2021; Cooper and Marques-Silva 2021; Huang et al 2021; Audemard et al 2021; Marques-Silva and Ignatiev 2022; Ignatiev et al 2022; Shrotri et al 2022). Some of these earlier works studied explanations for classifiers represented wit...…”
Section: Related Work
confidence: 99%
“…Backurs et al (2019), Chierichetti et al (2017), Mahabadi and Vakilian (2020), and Huang et al (2019). Feldman et al (2015), Friedler et al (2019), Ignatiev et al (2020), Schelter et al (2020), Valdivia et al (2021).…”
Section: Dutch Census Dataset
confidence: 99%