2022
DOI: 10.48550/arxiv.2201.10853
Preprint
Feminist Perspective on Robot Learning Processes

Abstract: As various research works report and daily-life experiences confirm, learning models can produce biased outcomes. Biased learned models usually replicate historical discrimination in society and typically affect less-represented identities negatively. Robots are equipped with these models, which allow them to operate and perform increasingly complex tasks. The learning process consists of different stages that depend on human judgments. Moreover, the resulting learned models for robot decisions rely …

Cited by 1 publication (1 citation statement)
References 10 publications (17 reference statements)
“…Discrimination in relation to artificial systems has been analysed from different points of view, including a feminist perspective, as a phenomenon that may depend on several causal factors and have various consequences (Bardzell 2010; Esposito et al. 2020). It has been pointed out, for example, that it is important to ensure that artificial agents implemented with learning algorithms do not have biases within those algorithms regarding characteristics of the target populations (Hurtado and Mejia 2022). Otherwise, such agents could exclude some contexts, or parts of the population, from their learning algorithms, and thus perpetuate or emphasise some features and domains at the expense of others.…”
Section: Discussion
confidence: 99%