2022 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3531146.3533138
Robots Enact Malignant Stereotypes

Abstract: Stereotypes, bias, and discrimination have been extensively documented in Machine Learning (ML) methods such as Computer Vision (CV) [18,80], Natural Language Processing (NLP) [6], or both, in the case of large image and caption models such as OpenAI CLIP [14]. In this paper, we evaluate how ML bias manifests in robots that physically and autonomously act within the world. We audit one of several recently published CLIP-powered robotic manipulation methods, presenting it with objects that have pictures of human…

Cited by 15 publications (9 citation statements)
References 58 publications
“…Similarly, Hundt et al. (2022) reported that many training datasets have been shown to spew racism, sexism, and other detrimental biases. They called for measures and regulations governing the design and training of robots and other AI systems.…”
Section: Some Worries and Concerns
Mentioning confidence: 97%
“…Social hierarchy. Like data-driven algorithms, robots are beginning to show behaviors that reproduce malicious stereotypes [12] existing in the social world (racism, patriarchy, etc.). These behaviors, which reproduce logics stemming from authoritarian structures of the social world and which are largely underestimated, must become a major concern of robotics.…”
Section: Robots As Means Of Production
Mentioning confidence: 99%
“…1) produced by robots with ethical and socio-political impacts. Initial experiments are underway [12], but this area of research remains largely under-investigated.…”
Section: Erin (Data and Embodiment)
Mentioning confidence: 99%

Guidelines for Robotics
Fiolet, Topart, Lahleb et al., 2024 (Preprint)
“…[6][7][8] Famously, ML-driven tools mirror the structural racism and cultural bias of the system from which the data are sourced. [9][10][11] Further, in the automated model selection process, the overall quality of model prediction may be achieved by sacrificing the quality of prediction for marginalized persons and other underrepresented minority groups in exchange for superior quality of prediction for majority groups. 12 Ultimately, the use of an ML tool to direct decisions may create homogeneity in decisions among similar patients, and as those patients become part of the data set for future predictions, they may enrich the data set in unbalanced and potentially self-fulfilling ways.…”
Section: Challenges of Implementing ML-Based Prediction in Practice
Mentioning confidence: 99%