2020 | Preprint | DOI: 10.1101/2020.05.07.20093872
Original research: How accurate are digital symptom assessment apps for suggesting conditions and urgency advice?: a clinical vignettes comparison to GPs

Abstract:
Objectives: To compare the breadth of condition coverage, the accuracy of suggested conditions, and the appropriateness of urgency advice of 8 popular symptom assessment apps with each other and with 7 General Practitioners.
Design: Clinical vignettes study.
Setting: 200 clinical vignettes representing real-world scenarios in primary care.
Intervention/comparator: Condition coverage, suggested condition accuracy, and urgency advice performance were measured against the vignettes' gold-standard diagnoses and triage levels. Primary o…


Cited by 12 publications (9 citation statements) | References 27 publications
“…The finding of this study, that the Ada app has relatively high condition suggestion performance and urgency advice performance compared to other available symptom assessment applications, reflects the finding of other studies [11–14]. The Ada app was recently compared to general practitioners (GPs) and competitor apps in a 200 vignettes study [11].…”
Section: Discussion (supporting)
confidence: 75%
“…In addition to usability, novel digital approaches must undergo rigorous evaluation of diagnostic coverage, accuracy, and safety. In a preprint from our group (currently undergoing peer review), we evaluated the performance of 8 popular symptom checkers against one another and 7 human GP raters, as well as a gold-standard diagnostic suggestion, using 200 clinical vignettes [31]. There was a range of coverage from the apps, with up to half of potential users being ineligible to use the symptom checker because they were too young, too old, or were pregnant; Ada offered 99.0% of users a suggested condition diagnosis.…”
Section: Discussion (mentioning)
confidence: 99%
“…Despite the mounting number of symptom checkers available and the adoption of this technology by various credible health institutions and entities such as the UK National Health Service (NHS) and the government of Australia [9, 10], knowledge surrounding this technology is limited [11]. The scarce literature on symptom checker accuracy suggests that the quality of diagnostic and triage advice differs based on the digital platform used [12], with those enabled by artificial intelligence having a higher percentage of listing the correct diagnosis first [13].…”
Section: Introduction (mentioning)
confidence: 99%