2023
DOI: 10.1001/jamaophthalmol.2023.2754
Performance of an Upgraded Artificial Intelligence Chatbot for Ophthalmic Knowledge Assessment

Abstract: This cross-sectional study assesses the accuracy of answers generated by an updated version of a popular chatbot to board certification examination preparation questions.

Cited by 40 publications (35 citation statements)
References 4 publications
“…Lin and colleagues make the argument that since this time, the accuracy of ChatGPT has increased. We completely agree and note the findings of our recent investigation published in JAMA Ophthalmology. Here, we found that ChatGPT-4 correctly answered 105 of 125 questions of the same question bank in March 2023 (84%), remarkably outperforming the previous model of this chatbot…”
supporting
confidence: 91%
“…In March 2023, OpenAI unveiled the latest iteration of the chatbot, which is built on the generative pretrained transformer (GPT)-4 architecture. In JAMA Ophthalmology, Mihalache et al examine the performance of the updated version of this chatbot on a question bank designed to help candidates prepare for ophthalmology board examinations, building on previous work evaluating the older GPT-3.5 architecture…”
mentioning
confidence: 99%
“…Mihalache et al expand on this idea by evaluating performance of the updated version of this chatbot on the OphthoQuestions bank, which contains questions relevant to the Ophthalmic Knowledge Assessment Program and Written Qualifying Examination. Their earlier study using GPT-3 reported that 73 of 125 text-based multiple-choice questions (58%) were answered correctly, which is fairly moderate performance considering that 25% would be answered correctly on random chance alone…”
mentioning
confidence: 99%