2022
DOI: 10.1007/s11948-022-00376-3

Gender Bias and Conversational Agents: an ethical perspective on Social Robotics

Abstract: The increasing spread of conversational agents urgently requires tackling the ethical issues linked to their design. In fact, developers frequently include in their products cues that trigger social biases in order to maximize the performance and quality of human-machine interactions. The present paper discusses whether and to what extent it is ethically sound to intentionally trigger gender biases through the design of virtually embodied conversational agents. After outlining the complex dynamics in…

Cited by 20 publications (12 citation statements)
References 53 publications
“…Thus, given the results reported here, and in the light of the other studies cited above, it can be said that there is probably a need for a proactive approach, i.e. one that involves ethically acceptable design of artificial systems (Fossa and Sucameli 2022). It would then be possible to implement technologies that counteract discrimination towards people suffering from negative stereotypes.…”
Section: Discussion
confidence: 59%
“…Participants were told that they would be participating in a survey about how people react to different voices, and following the completion of a consent form, they heard each of the 4 Siri voices reading the Rainbow Passage (Fairbanks, 1960), in randomized order. Listeners were initially asked if they heard the clip well, and then were permitted to play the clip as many times as they wanted.…”
Section: Methods
confidence: 99%
“…Additionally, listeners in studies of synthesized voices also reproduced the types of social biases that researchers have observed in studies of natural voices, especially with respect to gender. For example, several studies have found that humans are more likely to be abusive to digital assistants with female names and voices than those with male names and voices (Penny, 2016; Fossa and Sucameli, 2022). Similarly, Jackson et al (2020) found that listeners judge "female-sounding" assistants more harshly than "male-sounding" robots when they do not comply with user directions, indicating gendered expectations about robot compliance (Jackson et al, 2020).…”
Section: Perception Of Synthesized Voices
confidence: 99%
“…Our efforts culminated in finding a wide array of areas for improvement, particularly concerning the role of psychological factors [91], gender and intersectionality considerations, and protective stops that play a crucial role in ensuring user safety [61, 70]. Diversity, inclusion, and intersectional considerations have been identified in the literature as potential access and discriminatory problems for 'non-average' body sizes, genders, and pediatric populations [18, 36, 42, 54, 98]. Persons differ considerably in size and body form, so accommodating robots to each potential user is a complicated task, yet there seems to be no alternative to embracing complexity and Fig.…”
Section: Findings: Diversity Observations Concerning Lower-limb Exosk...
confidence: 99%