Proceedings of the 5th International Conference on Conversational User Interfaces 2023
DOI: 10.1145/3571884.3603754
Deceptive AI Ecosystems: The Case of ChatGPT

Abstract: ChatGPT, an AI chatbot, has gained popularity for its capability to generate human-like responses. However, this capability carries several risks, most notably its deceptive behaviour, such as offering users misleading or fabricated information, which can in turn raise ethical issues. To better understand the impact of ChatGPT on our social, cultural, economic, and political interactions, it is crucial to investigate how ChatGPT operates in the real world, where various societal pressures influence its dev…

Cited by 12 publications (7 citation statements). References 52 publications.
“…Additionally, papers explore the potential for generative models to aid in criminal activities 82, incidents of self-harm 72, identity theft 35, or impersonation 83. Furthermore, the literature investigates risks posed by LLMs when generating advice in high-stakes domains such as health 84, safety-related issues 85, as well as legal or financial matters 86.…”
Section: Safety (mentioning, confidence: 99%)
“…Papers not only critically analyze various types of reasoning errors in LLMs 87 but also examine risks associated with specific types of misinformation, such as medical hallucinations 97. Given the propensity of LLMs to produce flawed outputs accompanied by overconfident rationales 93 and fabricated references 86, many sources stress the necessity of manually validating and fact-checking the outputs of these models 92,98,99.…”
Section: Safety (mentioning, confidence: 99%)
“…Trust also plays a crucial role in human-AI interactions. Individuals' trust in AI may be compromised if they feel deceived (Evans et al., 2021; Zhan et al., 2023). In a chess puzzle task where participants collaborated with an AI teammate, participants were less likely to accept the AI teammate's decisions if they were deceived into thinking they were working with another human rather than an AI (Zhang et al., 2023).…”
Section: Trust (mentioning, confidence: 99%)
“…When AI systems generate false or misleading information that appears authentic and reliable, humans may unknowingly accept and propagate it. The possibility of AI producing misleading and deceptive information at scale is a serious concern, as it could have adverse impacts on users unable to distinguish between 'fact' and 'fiction', leading to detrimental consequences (Zhan et al., 2023). While deception in human contexts has received extensive study, empirical research on deception by artificial intelligence remains relatively underexplored.…”
Section: Introduction (mentioning, confidence: 99%)