Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI 2018)
DOI: 10.1145/3173574.3174230

A Qualitative Exploration of Perceptions of Algorithmic Fairness

Abstract: Algorithmic systems increasingly shape the information people are exposed to as well as influence decisions about employment, finances, and other opportunities. In some cases, algorithmic systems may be more or less favorable to certain groups or individuals, sparking substantial discussion of algorithmic fairness in public policy circles, academia, and the press. We broaden this discussion by exploring how members of potentially affected communities feel about algorithmic fairness. We conducted workshops and interviews…

Cited by: 192 publications (119 citation statements)
References: 52 publications
“…Significant effort in the fair ML community has focused on the development of statistical definitions of fairness [14,25,32,83] and algorithmic methods to assess and mitigate biases in relation to these definitions [3,17,49,68]. In contrast, the HCI community has studied unfairness in ML systems through political, social, and psychological lenses, among others (e.g., [15,47,94,108]). For example, HCI researchers have empirically studied users' expectations and perceptions related to fairness in algorithmic systems, finding that these do not always align with existing statistical definitions [16,71,72,108].…”
Section: Background and Related Work
Citation type: mentioning (confidence: 99%)
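The "statistical definitions of fairness" referenced in this excerpt are typically group-level parity conditions. As a minimal sketch (toy data and names of my own, not drawn from the cited works), here is how two common definitions, demographic parity and equal opportunity, reduce to comparing simple rates across groups:

```python
# Illustrative toy example only; the data and names are hypothetical.

def rate(values):
    """Mean of a list of 0/1 indicators."""
    return sum(values) / len(values)

# Hypothetical binary predictions and ground-truth labels for two groups.
group_a = {"pred": [1, 0, 1, 1, 0, 1], "label": [1, 0, 1, 0, 0, 1]}
group_b = {"pred": [0, 0, 1, 0, 0, 1], "label": [1, 0, 1, 0, 1, 1]}

def positive_rate(g):
    # Demographic parity compares the rate of positive predictions.
    return rate(g["pred"])

def true_positive_rate(g):
    # Equal opportunity compares predictions among truly positive cases.
    return rate([p for p, y in zip(g["pred"], g["label"]) if y == 1])

dp_gap = abs(positive_rate(group_a) - positive_rate(group_b))
eo_gap = abs(true_positive_rate(group_a) - true_positive_rate(group_b))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.33 on this toy data
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.50 on this toy data
```

A mitigation method of the kind the excerpt cites would then adjust the model or its decision threshold until gaps like these fall below some tolerance.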
“…In contrast, the HCI community has studied unfairness in ML systems through political, social, and psychological lenses, among others (e.g., [15,47,94,108]). For example, HCI researchers have empirically studied users' expectations and perceptions related to fairness in algorithmic systems, finding that these do not always align with existing statistical definitions [16,71,72,108]. Other work has focused on auditing widely-used ML products from the outside [8,21,29,62], and often concluded with high-level calls to action aimed at those responsible for developing and maintaining these systems or for regulating their use [36,93,97].…”
Section: Background and Related Work
Citation type: mentioning (confidence: 99%)
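The outside-in audits this excerpt mentions generally treat the product as a black box: probe it with matched inputs that differ only in a protected attribute and compare the outputs. A hedged sketch of that idea, with a hypothetical query_model standing in for the real product under audit:

```python
# Illustrative only: a minimal black-box audit in the spirit of the
# external-auditing work cited above. query_model is a hypothetical
# stand-in; a real audit would call the deployed product instead.

def query_model(profile):
    # Toy decision rule standing in for the system under audit.
    return 1 if profile["years_experience"] >= 5 else 0

# Matched pair: identical profiles differing only in a protected attribute.
base_profile = {"years_experience": 6, "education": "BSc"}
outcomes = {}
for group in ("group_x", "group_y"):
    profile = dict(base_profile, demographic=group)
    outcomes[group] = query_model(profile)

# Any disagreement between matched pairs flags the input for closer review.
print("matched-pair outcomes:", outcomes)
```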
“…This is supported also by recent impossibility results that show some fairness definitions cannot coexist (Kleinberg, Mullainathan, and Raghavan 2016). Since the public is affected by these algorithmic systems, it is important to investigate public views of algorithmic fairness (Lee and Baykal 2017; Lee, Kim, and Lizarondo 2017; Lee 2018; Binns et al. 2018; Woodruff et al. 2018).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
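The impossibility result cited here (Kleinberg, Mullainathan, and Raghavan 2016) shows that, when groups have different base rates, a non-perfect classifier cannot simultaneously be calibrated and equalize error rates across groups. A toy numerical sketch of that tension (hypothetical numbers, not from the cited paper):

```python
# Illustrative only: two groups receive identical, perfectly calibrated
# scores (among people scored s, a fraction s are truly positive), but
# differ in how many members get each score, i.e., in their base rates.
groups = {
    "A": {0.6: 80, 0.2: 20},  # base rate (0.6*80 + 0.2*20) / 100 = 0.52
    "B": {0.6: 20, 0.2: 80},  # base rate (0.6*20 + 0.2*80) / 100 = 0.28
}

threshold = 0.5  # classify as positive when the score exceeds this

for name, scores in groups.items():
    tp = fp = fn = tn = 0.0
    for s, n in scores.items():
        pos, neg = s * n, (1 - s) * n  # calibration: fraction s is positive
        if s > threshold:
            tp, fp = tp + pos, fp + neg
        else:
            fn, tn = fn + pos, tn + neg
    print(f"group {name}: FPR = {fp / (fp + tn):.2f}, "
          f"FNR = {fn / (fn + tp):.2f}")
```

Both groups receive identical, calibrated scores, yet their false positive and false negative rates diverge (FPR 0.67 vs. 0.11 here) purely because their base rates differ.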
“…Lee and Baykal [20] investigate people's perceptions of fair division algorithms (e.g., those designed to divide rent among tenants) compared to discussion-based group decision-making methods. Woodruff et al [31] conduct workshops and interviews with participants belonging to certain marginalized groups (by race or class) in the US to understand their reactions to algorithmic unfairness.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)