2023
DOI: 10.2196/50696
Ethical Challenges in AI Approaches to Eating Disorders

Abstract: The use of artificial intelligence (AI) to assist with the prevention, identification, and management of eating disorders and body image concerns is exciting, but it is not without risk. Technology is advancing rapidly, and ensuring that responsible standards are in place to mitigate risk and protect users is vital to the success and safety of technologies and users.

Cited by 8 publications (5 citation statements)
References 14 publications (18 reference statements)
“…The first ED prevention-oriented chatbot, "KIT" (the updated version is called "JEM"), demonstrated initial feasibility with positive reviews from young help-seeking individuals and their caregivers (Beilharz et al, 2021). More recently, variations in AI algorithms have highlighted potential limitations and ethical considerations, given examples of chatbots providing inappropriate messages to users (Sharp et al, 2023). In a highly publicized incident in 2023, a chatbot named "Tessa," utilized by a not-for-profit ED organization, was taken down shortly after being made available to the public when reports emerged that it had made inappropriate comments to users seeking support for an ED, including reinforcing risk-associated language and encouraging weight loss (Sankaran, 2023).…”
Section: Models for Unstructured Data
confidence: 99%
“…Although rapid progress in technology offers the promise to advance prevention and treatment efforts, clinicians and researchers must ensure that ethical safeguards are in place to protect the health and well-being of end users. As experts work to expand standards of care and guidelines on implementation and best-practice protocols, safety considerations should be prioritized to reduce any chance of harm or added risk (Buruk et al, 2020; Sharp et al, 2023).…”
Section: Governance of AI: Issues Relating to Bias, Ethics, and Safety
confidence: 99%
“…With a considerable rise in people seeking treatment for eating disorders and subsequent increases in waitlist times [30], there is a critical need for an evidence-based resource that provides more timely support for those seeking treatment. However, as technology rapidly develops, it is crucial that advances in the field maintain ethical and safe conduct [31]. For example, the aforementioned new generative artificial intelligence feature of the Tessa chatbot led to the agent providing harmful dieting and weight loss advice [29].…”
Section: Recent Research
confidence: 99%
“…Thus, careful consideration is required in the development of a digital resource, as there is potential for significant harm if the technology malfunctions [32]. It is recommended that a multidisciplinary team approach be implemented in the design process: working with experts from different disciplines, such as mental health clinicians, researchers, developers, individuals with lived experience, and ethicists, ensures that appropriate safeguards are in place [31].…”
Section: Recent Research
confidence: 99%
“…The potential to offer differential diagnosis based on multimodal data sources (e.g., medical records, genetic results, neuroimaging data) remains appealing but as yet untested. Evidence of the true potential for supporting care remains elusive, and the harm caused to the eating disorder community by the public release (and rapid repudiation within one week) of the Tessa chatbot highlights that more robust evidence is necessary than that currently collected [9]. Like other medical devices, evidence of clinical claims should be supported by high-quality randomized controlled trials that employ digital placebo groups (e.g., a non-therapeutic chatbot).…”
confidence: 99%