AI in Clinical Medicine 2023
DOI: 10.1002/9781119790686.ch28
AI in Surgery

Cited by 3 publications (4 citation statements). References 5 publications.
“…Atomwise [123,124], BenevolentAI [125], Insilico Medicine [126]. Diagnostics and imaging: analysis of medical images, such as X-rays, MRIs, or CT scans, with faster and more accurate detection of abnormalities than human specialists. Aidoc [127,128], Zebra Medical Vision [129]…”
Section: Drug Discovery (mentioning)
confidence: 99%
“…For instance, if the training data skews the representativeness of one demographic group versus another, LLM-based suggestions in telehealth could lead to or exacerbate gender, racial, or ethnic inequities. This concern is not merely theoretical; commercially accessible LLMs have exhibited racial and gender biases in non-medical contexts, and these very models have been found to propagate stereotypes related to race within the field of medicine [23]. Biases may also arise from LLMs learning stigmatizing representations of language in training data [153], and propagating those through inappropriate language or portraying mental health issues in a negative light.…”
Section: Potential Harms (mentioning)
confidence: 99%
“…Several mental healthcare organizations and companies [18][19][20] have also begun to research the integration of LLMs into the design of their services. This increased use of LLMs in service delivery has been met with excitement [21], but also with justified skepticism, given potential racial or gender biases [22,23] and unexpected outputs [24] from LLM-based chatbots. For example, in June 2023, the National Eating Disorder Association was forced to shut down a chatbot created to provide clinically validated information after the chatbot provided harmful and dangerous advice to users, including diet and weight loss advice [24,25].…”
Section: Introduction (mentioning)
confidence: 99%
“…This paper sets out to investigate the far-reaching impact of LLMs on the healthcare landscape. We aim to clarify how these models are redefining the future of medicine and to address the ethical considerations that accompany this transformative technology [3][4][5].…”
Section: Introduction (mentioning)
confidence: 99%