2023
DOI: 10.7759/cureus.37432

Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References

Abstract: Background: Chatbots are computer programs that use artificial intelligence (AI) and natural language processing (NLP) to simulate conversations with humans. One such chatbot is ChatGPT, which uses the third-generation generative pre-trained transformer (GPT-3) developed by OpenAI. ChatGPT has been praised for its ability to generate text, but concerns have been raised about its accuracy and precision in generating data, as well as legal issues related to references. This study aims to investigate the frequency …

Cited by 108 publications (66 citation statements); references 5 publications.
“…This suggests that whilst ChatGPT can identify patterns and organize data, it has limitations in fully understanding the underlying meaning and context of the information (Sinha et al, 2023). To mitigate the problems of generating 'hallucinatory' or fabricated responses (Athaluri et al, 2023;Masters, 2023;Thirunavukarasu et al, 2023), this study took a cautious and specific approach to eliciting answers about endodontics. This was achieved using a prompt that clearly indicated the desired type of answer: 'Only yes or no answer'.…”
Section: Discussion | Citation type: mentioning | Confidence: 99%
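
The cautious prompting approach described in the statement above, restricting the model to a bare yes/no reply, can be illustrated with a short sketch. The snippet below is not taken from the cited study; the OpenAI Python client usage, the model name, and the example question are assumptions included only to show how such a constrained prompt might be issued.

```python
# Minimal sketch (assumed setup, not the cited study's code): constrain a ChatGPT
# query to a yes/no answer to limit open-ended, potentially hallucinated output.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_yes_no(question: str) -> str:
    """Ask a factual question and restrict the reply to 'yes' or 'no'."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # assumed model; the original study used GPT-3-era ChatGPT
        temperature=0,           # deterministic decoding reduces run-to-run variability
        messages=[
            {"role": "system",
             "content": "Answer with only 'yes' or 'no'. Do not add explanations."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip().lower()


# Hypothetical endodontics-style question, in the spirit of the study quoted above.
print(ask_yes_no("Is sodium hypochlorite commonly used as a root canal irrigant?"))
```

Constraining the answer format does not guarantee factual accuracy, but it narrows the space of possible responses and makes fabricated elaboration easier to detect.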
“…Then, there were questions that prompted rating the appropriateness of LLM use for 23 different tasks in clinical practice, research and education on a 5-point Likert scale (i.e., highly appropriate to highly inappropriate). Those tasks represented a sample of proposed LLM uses that were synthesized from the literature and included, but were not limited to, optimizing alerts for clinical decision support, providing a differential diagnosis, writing a discharge summary, recommending treatment options, translating radiology reports into layperson language, writing scientific manuscripts, and generating personalized study plans for students or trainees, among others 2,6–8,15–25.…”
Section: Study Design and Sampling | Citation type: mentioning | Confidence: 99%
“…The phenomenon of AI hallucination, wherein an LLM generates false information based on patterns that do not exist in the source material, provides evidence of this problem. Why AI tools create content that is false or misleading is not fully understood and reflects an underlying degree of uncertainty (Athaluri et al, 2023).…”
Citation type: mentioning | Confidence: 99%