2013
DOI: 10.1186/1472-6947-13-105
Using value of information to guide evaluation of decision supports for differential diagnosis: is it time for a new look?

Abstract: Background: Decision support systems for differential diagnosis have traditionally been evaluated on the basis of how sensitively and specifically they are able to identify the correct diagnosis established by expert clinicians. Discussion: This article questions whether evaluation criteria pertaining to identifying the correct diagnosis are the most appropriate or useful. Instead, it advocates evaluation of decision support systems for differential diagnosis based on the criterion of maximizing value of inform…

Cited by 3 publications (5 citation statements)
References 13 publications
“…The following recommendations could be considered for HTA of precision medicine technologies, including those advanced by the ACEMID consortium to deal with specific issues:
- Clarification of the intended position (or positions) of the diagnostic test in the clinical pathway, and assessment of the cost‐effectiveness of each position (eg, triage, add‐on, replacement), with estimates of the proportionate use in each position.
- Use of “base case” models that are updated with test performance characteristics (eg, sensitivity and specificity) as they learn and develop. These models could be created during AI algorithm testing, with preliminary inputs from software developers.
- Use of value of information analysis [24] to determine whether meta‐analyses could reduce the uncertainty in economic models associated with small denominators of subpopulations.
- Use of observational cohorts, indirect evidence comparisons, and registry data to assess comparative effectiveness where randomised trials are not possible.
- Consideration of distributional cost‐effectiveness analysis to provide an equity weighting, with a higher willingness-to-pay threshold if the technology can reduce inequities in access to dermatology or other specialist services and improve early detection among disadvantaged populations.
- Incorporation of patient and clinician preferences for imaging, biomarker or AI‐assisted diagnoses, assessed through quantitative methods such as discrete choice experiments. …”
Section: Recommendations
Citation type: mentioning (confidence: 99%)
“…Or would you use the software if it reduced the number of diagnostic errors that are made without unduly increasing time and resource burden? (4) Suppose you are not faced with a complex model but rather a simple scenario: comparing two RCTs for a new cancer drug. Drug A compared to standard therapy increased life expectancy by 1 month at a p of 0.01; Drug B increased life expectancy by 10 years but at a p of 0.06 – Does satisfying statistical criteria really impact which drug you should choose?…”
Section: Dear Editor
Citation type: mentioning (confidence: 99%)
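The drug A versus drug B thought experiment above has a simple decision-theoretic reading: if the goal is expected benefit, the choice follows from expected life-years gained under each drug's uncertain effect, not from which trial crossed a significance threshold. A minimal sketch of that comparison, assuming normal sampling distributions whose standard errors are back-calculated from the quoted point estimates and two-sided p-values (all numbers are hypothetical, taken only from the scenario in the quote):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Back out standard errors from effect / z, where z matches the two-sided
# p-value under a normal approximation: p = 0.01 -> z ~= 2.576; p = 0.06 -> z ~= 1.881.
gain_a = rng.normal(loc=1 / 12, scale=(1 / 12) / 2.576, size=n)  # +1 month, p = 0.01
gain_b = rng.normal(loc=10.0, scale=10.0 / 1.881, size=n)        # +10 years, p = 0.06

print(f"E[gain] drug A: {gain_a.mean():.3f} life-years")
print(f"E[gain] drug B: {gain_b.mean():.3f} life-years")
print(f"P(B beats A):   {(gain_b > gain_a).mean():.2f}")
```

Despite failing the conventional 0.05 threshold, drug B dominates on expected life-years by roughly two orders of magnitude; statistical significance answers a different question than the treatment decision does.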
“…5. as a guide to evaluate decision support for differential diagnosis [14]; 6. as a decision analysis technique to identify the most beneficial factors in health economic models [5,15,16,17].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
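The "decision analysis technique" role noted in item 6 is commonly operationalized as the expected value of perfect information (EVPI): the expected gain from resolving all parameter uncertainty before committing to a strategy, which bounds what further research could be worth. A minimal Monte Carlo sketch, with entirely hypothetical net-monetary-benefit distributions for two strategies:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical per-patient net monetary benefit (e.g. QALYs x willingness-to-pay
# minus cost), simulated under parameter uncertainty for two strategies.
nb = np.column_stack([
    rng.normal(10_000, 2_000, n),  # strategy 1: standard care
    rng.normal(10_500, 3_000, n),  # strategy 2: new diagnostic pathway
])

ev_current_info = nb.mean(axis=0).max()  # commit to the strategy best on average
ev_perfect_info = nb.max(axis=1).mean()  # pick the true best in every scenario
evpi = ev_perfect_info - ev_current_info

print(f"EVPI per patient: {evpi:,.0f}")
```

EVPI is non-negative by construction; a large value relative to research costs signals that collecting more evidence could pay for itself before adopting the seemingly better strategy.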