2023
DOI: 10.1002/widm.1487
Review of artificial intelligence‐based question‐answering systems in healthcare

Abstract: Use of conversational agents, like chatbots, avatars, and robots is increasing worldwide. Yet, their effectiveness in health care is largely unknown. The aim of this advanced review was to assess the use and effectiveness of conversational agents in various fields of health care. A literature search, analysis, and synthesis were conducted in February 2022 in PubMed and CINAHL. The included evidence was analyzed narratively by employing the principles of thematic analysis. We reviewed articles on artificial int…


Cited by 25 publications (9 citation statements). References 75 publications.
“…Considering the rising prevalence of psychiatric disorders and concomitant challenges in providing care, it seemed likely that nonprofessionals would also turn to the chatbot for mental health issues at the time of GPT-3.5's release [8,49,50]. Hence, it is conceivable that GPT-3.5's training data set includes not only a substantial and reliable portion of psychiatric data, but also its developers might have first fine-tuned ChatGPT specifically in this domain in anticipation of its high demand [51][52][53]. Thus, the developers might have also fine-tuned GPT-4 specifically in internal medicine and surgery, possibly reacting to a high demand in this area from users of its predecessor.…”
Section: Principal Findings
confidence: 99%
“…A specific subset of AI in healthcare is patient-facing conversational AI agents and chatbots, which directly interact with patients to perform tasks ranging from symptom self-diagnosis and treatment recommendations to medication management [6]. These include various modalities such as text-based chatbots [7], voice assistants [8], and wearable devices [9].…”
Section: Introduction
confidence: 99%
“…To bridge this gap, question-answering (QA) systems have emerged as a tool to enhance knowledge and understanding on numerous topics by providing short and precise answers to questions posed in natural language. 24 This is achieved through natural language processing (NLP), a branch of artificial intelligence (AI) with rapid developments and vast applications using large language models (LLMs) for QA. Question-answering systems possess an abundance of domain knowledge, where biomedical QA systems can be trained on evidence-based medical information to increase the accessibility of expert opinions.…”
Section: Introduction
confidence: 99%
“…This mimics direct access to an expert by providing timely and accurate responses to users' queries, allowing them to access evidence-based information in real time. These QA systems have been applied for use in clinical decision support, 25,26 medical examinations, 27,28 consumer health questions 29 and to improve numerous health outcomes, 24,25 including sleep outcomes in university settings. 30 However, despite the QA system's abundance of knowledge, providing information to patients in this form alone is unlikely to be sufficient to promote behavioural change.…”
Section: Introduction
confidence: 99%