2024
DOI: 10.1080/02602938.2023.2299059
ChatGPT performance on multiple choice question examinations in higher education. A pragmatic scoping review

Philip Newton,
Maira Xiromeriti
Cited by 7 publications (6 citation statements)
References 41 publications
“…Authentic extended conversations like these, using the chatbot as an agent to think with, have been called for, but, to our knowledge, have not been reported and are a novel aspect of this investigation. These experiences diverge from early investigations that described the limitations of a chatbot for answering exam questions, and differ from more recent investigations that describe improved exam performance, because they shift the focus from the capability of a chatbot to answer standalone questions to its role in fostering reflection on the part of the user. Unlike earlier investigations, this work included an element of user introspection that highlights the technology’s potential to enrich a teacher’s lesson planning.…”
Section: Discussion (contrasting)
confidence: 65%
“…Historical data corroborate the extensive nature of academic cheating, with studies revealing a significant proportion of students admitting to such behaviour [16,17]. These concerns around negative trends in academic integrity in online exams were already well advanced when ChatGPT was released, and a new and significant compounding factor was introduced [18-21]. ChatGPT's ability to answer exam questions has already been demonstrated in multiple studies [21-23].…”
Section: Introduction (mentioning)
confidence: 77%
“…30 Both studies were conducted using an earlier version of ChatGPT, and more recent iterations are likely to exhibit enhanced performance. 27,31 A scoping review by Newton and Xiromeriti 32 revealed that ChatGPT’s performance varied across different evaluation methods and subjects, with ChatGPT 3 passing 20.3% and ChatGPT 4 passing 92.9% of exams. Notably, ChatGPT 3 outperformed human students in 10.9% of exams, while ChatGPT 4 did so in 35%, indicating significant performance improvement in the more advanced version.…”
Section: Discussion (mentioning)
confidence: 99%