Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3544548.3580970
Conceptualizing Algorithmic Stigmatization

Cited by 6 publications (2 citation statements) · References 150 publications
“…AI conversational agents could be helpful in identifying users at risk of suicide and self-harm, but it is imperative that we not rely on AI alone to detect imminent risks. Furthermore, there are open questions about monitoring stigmatized topics like suicide, as it can perpetuate algorithmic stigmatization [5] for individuals struggling with mental health issues and create an environment where they feel constantly surveilled. Researchers should investigate the social implications of deploying AI in sensitive contexts such as mental health support based on four algorithmic stigma elements (i.e., labeling, stereotyping, separation, status loss/discrimination) [5] which include representational and/or allocative harms from the perspective of youth.…”
Section: AI as Complementary Response Based on the Context and Sensit...
Mentioning confidence: 99%
“…The notion of education as a solution to the lack of expertise of support givers was validated by a later study where researchers [81] developed an AI system called 'Hailey' which helped develop more empathetic responses to support seekers in a mental health peer support platform. AI does have many limitations, such as its inability to understand human nuances [13], ethical risks of misclassification for sensitive topics [71], and potential algorithmic stigmatization and harms [5]. This has led to continued efforts to explore the potential of AI as a supplementary, rather than a primary, source of support for youth.…”
Section: Introduction
Mentioning confidence: 99%