CUI 2021 - 3rd Conference on Conversational User Interfaces 2021
DOI: 10.1145/3469595.3469597
LGBTQ-AI? Exploring Expressions of Gender and Sexual Orientation in Chatbots

Abstract: Chatbots are popular machine partners for task-oriented and social interactions. Human-human computer-mediated communication research has explored how people express their gender and sexuality in online social interactions, but little is known about whether and in what way chatbots do the same. We conducted semi-structured interviews with 5 text-based conversational agents to explore this topic. Through these interviews, we identified 6 common themes around the expression of gender and sexual identity: identity…

Cited by 7 publications (4 citation statements)
References 23 publications (29 reference statements)
“…After removing 98.23% (1163/1184) of irrelevant items, 21 studies were examined in depth. Of these 21 studies, 12 (57%) [17][18][19][20][21][22][23][24][25][26][27][28] were excluded with reasons (Textbox 3): 3 (25%) [21,23,25] were excluded as they were chatbot acceptance studies only without implementation of the chatbot project; 3 (25%) [17,18,22] were excluded as they described digital tools, which, despite being AI-enhanced, did not fully meet our inclusion criteria (not being able to dynamically interact with the client or patient); 1 (8%) [17] was a usability study that lacked the implementation part; 1 (8%) [19] was not retained as it performed a content analysis of health-related information concerning HIV pre-exposure prophylaxis (PrEP); another study (8%) [20] was excluded as it explored how text-based conversational agents express their gender and sexual identity, which was not related to our topic of interest and research aims or objectives; and 3 (25%) further studies [24,27,28] were excluded as the population also comprised non-LGBTQ individuals. Finally, an app called Rainbow Austin [26] was excluded because of lack of details.…”
Section: Search Results
confidence: 99%
“…Furthermore, young individuals favored using LGBTQ-specific communities within their existing social media accounts while maintaining a level of privacy regarding their personal information. After the formative work, the authors conducted a week-long exploratory study involving a group of 20 LGBTQ youth aged 14 to 20 years from rural areas, drawn from an initial list of 348 survey respondents recruited via social media, and gauged their social media self-efficacy, perceived isolation, and depressive symptoms through pre- and posttest assessments. Half of the participants self-identified as transgender, and 35% identified as cisgender gay men or lesbians.…”
Section: Realbot
confidence: 99%
“…There are many aspects of safety problems, and the most commonly considered issues include toxicity and offensive words in generation (Baheti et al, 2021;Cercas Curry and Rieser, 2018;, bias (Henderson et al, 2018;Barikeri et al, 2021;Lee et al, 2019), privacy (Weidinger et al, 2021), sensitive topics (Xu et al, 2020;, etc. In conversational unsafety measurement (Cercas Curry and Rieser, 2018;Edwards et al, 2021;, adversarial learning for safer bots (Xu et al, 2020;Gehman et al, 2020), and bias mitigation (Xu et al, 2020;Thoppilan et al, 2022) strategies, the unsafe behaviour detection task plays an important role. Additionally, recent works on large-scale language models (Rae et al, 2021;Thoppilan et al, 2022) show that increasing model scale has no substantial relationship with the bias safety level.…”
Section: Dialogue Safety
confidence: 99%
“…Inheriting from pre-trained language models, dialog safety issues, including toxicity and offensiveness (Baheti et al, 2021;Cercas Curry and Rieser, 2018;, bias (Henderson et al, 2018;Barikeri et al, 2021;Lee et al, 2019), privacy (Weidinger et al, 2021), and sensitive topics (Sun et al, 2021), are extensively studied and increasingly drawing attention. In conversational unsafety measurement (Cercas Curry and Rieser, 2018;Sun et al, 2021;Edwards et al, 2021), adversarial learning for safer bots (Gehman et al, 2020), and bias mitigation (Thoppilan et al, 2022) strategies, the unsafe behaviour detection task plays an important role.…”
Section: Dialog Safety and Social Bias
confidence: 99%