2020
DOI: 10.1136/bmjopen-2020-040269

How accurate are digital symptom assessment apps for suggesting conditions and urgency advice? A clinical vignettes comparison to GPs

Abstract: Objectives: To compare breadth of condition coverage, accuracy of suggested conditions and appropriateness of urgency advice of eight popular symptom assessment apps. Design: Vignettes study. Setting: 200 primary care vignettes. Intervention/comparator: For eight apps and seven general practitioners (GPs): breadth of coverage and condition-suggestion and urgency advice accuracy measured against the vignettes' gold standard. Primary outcome measures: (1) Proportion of conditions 'covered' by an app, that is, not excluded bec…


Cited by 130 publications (225 citation statements)
References 27 publications
“…The results of this study are in line with previous SC analyses [12, 17, 18]. Research supported by Ada Health GmbH shows that Ada had the highest top-3 suggestion diagnostic accuracy (70.5%) compared with other SCs [19], and the correct condition was among the first three results in 83% of cases in an Australian assessment study [20]. Consistent with our results, the majority of patients (85.3%) would recommend Ada to friends or relatives [21].…”
Section: Discussion
confidence: 99%
“…In contrast to Rheport, Ada is supported by artificial intelligence and does not use a fixed questionnaire. Ada covers a wide variety of conditions [19] and is not limited to IRDs, whereas Rheport is meant exclusively for the triage of patients with newly suspected IRDs. The study setting was deliberately chosen to be risk-averse, so the use of the SCs did not have any clinical implications.…”
Section: Discussion
confidence: 99%
“…Despite the growing number of symptom checkers available and the adoption of this technology by credible health institutions such as the UK National Health Service (NHS) and the government of Australia [9, 10], knowledge surrounding this technology is limited [11]. The scarce literature on symptom checker accuracy suggests that the quality of diagnostic and triage advice differs based on the digital platform used [12], with those enabled by artificial intelligence more often listing the correct diagnosis first [13].…”
Section: Introduction
confidence: 99%
“…The use of artificial intelligence (AI) is expected to reduce diagnostic errors in outpatient care [6, 7]. However, online symptom checkers that generate AI-driven differential-diagnosis lists on their own have failed to show high diagnostic accuracy [8, 9, 10]. On the other hand, a previous study demonstrated that providing AI-driven differential-diagnosis lists together with basic patient information such as age, sex, risk factors, past medical history, and current reason for the medical appointment could improve physicians' diagnostic accuracy [11].…”
Section: Introduction
confidence: 99%