2024
DOI: 10.1101/2024.04.10.24305470
Preprint

Simulated Misuse of Large Language Models and Clinical Credit Systems

James Anibal,
Hannah Huth,
Jasmine Gunkel
et al.

Abstract: Large language models (LLMs) have been proposed to support many healthcare tasks, including disease diagnostics and treatment personalization. While AI models may be applied to assist or enhance the delivery of healthcare, there is also a risk of misuse. LLMs could be used to allocate resources based on unfair, inaccurate, or unjust criteria. For example, a social credit system uses big data to assess trustworthiness in society, punishing those who score poorly based on evaluation metrics defined only by a pow…

Cited by 0 publications
References 50 publications