Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization 2019
DOI: 10.1145/3320435.3320442
What Makes an Image Tagger Fair?

Cited by 12 publications (6 citation statements)
References 21 publications
“…Annotators (16 papers). In this work, human annotators assign gender for users, providers or subjects [3,30,70]. In [20], the authors use a dataset where the gender of book authors was annotated by library professionals.…”
Section: Gender Determination
confidence: 99%
“…While all these mathematical fairness definitions and metrics exist, they tend to conflict, and it is impossible to comply with all of them simultaneously, as shown by Chouldechova et al. [38]. Consequently, few papers [18,62,105,106,195] study how the fairness of data-driven decision-support systems is perceived, in order to choose the most relevant definitions taking into account stakeholders' preferences and mathematical trade-offs. Srivastava et al. [173] show that one simple definition of fairness (demographic parity) solely matches the expectations of users of hypothetical systems.…”
Section: Conflicting Perceptions of Fairness
confidence: 99%
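The statement above contrasts competing mathematical fairness definitions and singles out demographic parity. As a minimal, hypothetical sketch (not code from the cited papers), demographic parity can be measured as the gap in positive-prediction rates between groups defined by a protected attribute:

# Illustrative sketch of demographic parity (assumed inputs, not the cited
# papers' implementation): compare positive-prediction rates across groups.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    # predictions: iterable of 0/1 model outputs (e.g. whether a tag is assigned)
    # groups:      iterable of protected-group labels, same length
    totals, positives = defaultdict(int), defaultdict(int)
    for y_hat, g in zip(predictions, groups):
        totals[g] += 1
        positives[g] += int(y_hat)
    rates = {g: positives[g] / totals[g] for g in totals}
    # Largest difference in positive rate between any two groups; 0 means parity.
    return max(rates.values()) - min(rates.values())

# Example: rate 0.5 for "f" vs 1.0 for "m", so the gap is 0.5.
print(demographic_parity_gap([1, 0, 1, 1], ["f", "f", "m", "m"]))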
“…After choosing a fairness definition, deciding how to transform age into a categorical attribute can have direct bias consequences. Defining protected classes (male, black, [10-23]) or (male, black, [10-25]) as protected attributes would both surface and measure different biases. Different mappings of age to its protected class "young" can create different system behaviours: the granularity of the categories chosen would influence both the performance and fairness of the trained inference model.…”
Section: Bias-Aware Schema Design
confidence: 99%
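As a small illustration of the granularity point in the statement above (hypothetical cut-offs, not the cited papers' schema), mapping the same ages to the protected class "young" under two different upper bounds yields different group memberships, and therefore potentially different fairness measurements:

# Illustrative sketch: the same ages bucketed into "young" under two assumed
# upper bounds give different protected groups, so any group-fairness metric
# computed over them can disagree.
def to_protected_class(age, upper):
    # Treat ages in [10, upper] as "young"; everything else as "other".
    return "young" if 10 <= age <= upper else "other"

ages = [12, 19, 24, 30, 41]
schema_a = [to_protected_class(a, 23) for a in ages]  # upper bound 23
schema_b = [to_protected_class(a, 25) for a in ages]  # upper bound 25
print(schema_a)  # ['young', 'young', 'other', 'other', 'other']
print(schema_b)  # ['young', 'young', 'young', 'other', 'other']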
“…Another set of works in the HCI domain analyzes crowdsourced data from OpenStreetMap to detect potential biases such as gender and geographic information bias [30,109]. In a similar vein, two other studies use crowdsourcing to detect bias in human versus algorithmic decision making [52,7]. Green and Chen [52] run a crowdsourcing study to examine the influence of algorithmic risk assessment on human decision making, while Barlas et al. [7] compared human and algorithmically generated descriptions of people images in a crowdsourcing study, in an attempt to identify what is perceived as fair when describing the depicted person. The execution of a crowdsourcing study for discrimination detection has also been used in IR systems [40,88].…”
Section: Direct or Indirect Discrimination Discovery
confidence: 99%