Proceedings of the 2017 ACM on Web Science Conference
DOI: 10.1145/3091478.3091512

Young People's Policy Recommendations on Algorithm Fairness


Cited by 12 publications (10 citation statements)
References 5 publications
“…Models can be biased based on a lack of representations in the training data or how the model makes decisions, e.g., the selected input variables. The model outcomes should be traceable back to input characteristics [2, 23, 50-52]. Model values or choices become obsolete.…”
Section: Training Data Qualities (mentioning)
confidence: 99%
“…The literature suggests that human-related factors influence the perception of AI systems. For example, empirical studies have found that age (Grgić-Hlača et al., 2020; Helberger et al., 2020; Vallejos et al., 2017), educational level (Helberger et al., 2020), self-interest (Grgić-Hlača et al., 2020; Wang et al., 2020), familiarity with algorithms (Saha et al., 2020), and concerns about data collection (Araujo et al., 2020) have effects on the perception of algorithmic fairness. Hancock et al. (2011) performed a meta-analysis of factors influencing trust in human-robot interaction and identified, among others, demographics and attitudes toward robots as possible predictors.…”
Section: Public Preferences for AI Ethics Guidelines (mentioning)
confidence: 99%
“…For example, the Fairness Concerned group should be addressed in different ways than the Safety Concerned or the Human in the Loop groups. Several studies have already been conducted on the inclusion of stakeholders in the design process, especially for fairness (Vallejos et al., 2017; Webb et al., 2018).…”
Section: Ethical Design and Demands of Potential Stakeholder Groups (mentioning)
confidence: 99%
“…They are used here to explore the impact of algorithmic biases on young people and to generate their recommendations for a fairer online world that is best aligned with young people's expectations and concerns. The first series of juries produced a rich dataset that is continuing to showcase concerns and provide recommendations [29,31,51,52].…”
Section: Engaging Young People in Research (mentioning)
confidence: 99%