Background: When data needed to inform parameters in decision models are lacking, formal elicitation of expert judgement can be used to characterise parameter uncertainty. Although numerous methods for eliciting expert opinion as probability distributions exist, there is little research to suggest whether one method is more useful than any other. This study had three objectives: (i) to obtain subjective probability distributions characterising parameter uncertainty in the context of a health technology assessment; (ii) to compare two elicitation methods by eliciting the same parameters in different ways; and (iii) to collect the experts' preferences for the different elicitation methods used.

Methods: Twenty-seven clinical experts were invited to participate in an elicitation exercise to inform a published model-based cost-effectiveness analysis of alternative treatments for prostate cancer. Participants were individually asked to express their judgements as probability distributions using two different methods, the histogram and hybrid elicitation methods, presented in a random order. Individual distributions were mathematically aggregated across experts, with and without weighting. The resulting combined distributions were used in the probabilistic analysis of the decision model, and mean incremental cost-effectiveness ratios (ICERs) and the expected value of perfect information (EVPI) were calculated for each method and compared with the original cost-effectiveness analysis. Scores on the ease of use of the two methods, and on the extent to which the probability distributions obtained from each method accurately reflected the expert's opinion, were also recorded.

Results: Six experts completed the task. Mean ICERs from the probabilistic analysis ranged from £162,600 to £175,500 per quality-adjusted life year (QALY), depending on the elicitation and weighting methods used. Compared to having no information, use of expert opinion decreased decision uncertainty: the EVPI at the £30,000 per QALY threshold decreased by 74–86% from the original cost-effectiveness analysis. Experts indicated that the histogram method was easier to use, but perceived the hybrid method as more accurate.

Conclusions: Inclusion of expert elicitation can decrease decision uncertainty. Here, the choice of method did not affect the overall cost-effectiveness conclusions, but researchers intending to use expert elicitation need to be aware of the impact different methods could have.

Electronic supplementary material: The online version of this article (doi:10.1186/s12874-016-0186-3) contains supplementary material, which is available to authorized users.
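The two computational steps this abstract describes, mathematical aggregation of experts' distributions and EVPI calculation, can be sketched as follows. This is an illustrative sketch under assumed conventions, not the study's actual code: `linear_pool` averages experts' elicited bin probabilities (a linear opinion pool, optionally weighted), and `evpi` computes the expected value of perfect information from a matrix of simulated net-benefit draws. All function names and numbers are invented for illustration.

```python
import numpy as np

def linear_pool(expert_probs, weights=None):
    """Combine experts' elicited bin probabilities by a (weighted)
    linear opinion pool: a weighted average across experts."""
    expert_probs = np.asarray(expert_probs, dtype=float)
    if weights is None:
        weights = np.ones(len(expert_probs))  # equal weighting
    pooled = np.average(expert_probs, axis=0, weights=weights)
    return pooled / pooled.sum()  # renormalise to a valid distribution

def evpi(net_benefit):
    """EVPI from an (n_samples, n_options) matrix of net-benefit draws:
    the mean payoff of choosing optimally per draw, minus the payoff of
    the single option that is best on average."""
    nb = np.asarray(net_benefit, dtype=float)
    return nb.max(axis=1).mean() - nb.mean(axis=0).max()
```

For example, pooling two experts' two-bin histograms `[0.2, 0.8]` and `[0.6, 0.4]` with equal weights gives `[0.4, 0.6]`; a positive EVPI indicates that resolving parameter uncertainty before deciding would, on average, change the preferred option.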
BACKGROUND: Elicitation is a technique that can be used to obtain probability distributions from experts about unknown quantities. We conducted a methodology review of reports where probability distributions had been elicited from experts for use in model-based health technology assessments.

METHODS: Databases including MEDLINE, EMBASE and the CRD database were searched from inception to April 2013. Reference lists were checked and citation mapping was also used. Studies describing their approach to the elicitation of probability distributions were included. Data were abstracted on pre-defined aspects of the elicitation technique. Reports were critically appraised on their consideration of the validity, reliability and feasibility of the elicitation exercise.

RESULTS: Fourteen articles were included. Across these studies, the most marked features were heterogeneity in elicitation approach and failure to report key aspects of the elicitation method. The most frequently used approaches to elicitation were the histogram technique and the bisection method. Only three papers explicitly considered the validity, reliability and feasibility of the elicitation exercises.

CONCLUSION: Judged by the studies identified in the review, reports of expert elicitation are insufficiently detailed, and this impacts the perceived usability of expert-elicited probability distributions. In this context, the wider credibility of elicitation will only be improved by better reporting and greater standardisation of approach. Until then, the advantages of eliciting probability distributions from experts may be lost.
Keywords: elicitation, expert opinion, subjective probabilities, health technology assessment, decision-analytic models, uncertainty

The elicitation of probability distributions from experts to enhance the modelling process is an integral part of health technology assessment. The majority of reports presenting expert elicitation of probability distributions were incomplete, making critical appraisal of these exercises difficult. By disseminating reports of such exercises conducted in health technology assessment, research is encouraged towards building a framework for conducting and evaluating the elicitation of probability distributions.
In the drive towards faster patient access to treatments, health technology assessment (HTA) agencies are increasingly faced with reliance on evidence from surrogate endpoints, leading to increased decision uncertainty. This study undertook an updated survey of methodological guidance for using surrogate endpoints across international HTA agencies. We reviewed HTA and economic evaluation methods guidance from European, Australian and Canadian HTA agencies. We considered how guidelines addressed the methods for handling surrogate endpoints, including (1) level of evidence, (2) methods of validation, and (3) thresholds of acceptability. Across the 73 HTA agencies surveyed, 29 (40%) had methodological guidelines that made specific reference to consideration of surrogate outcomes. Of the 45 methods documents analysed, the majority [27 (60%)] were non-technology specific, 15 (33%) focused on pharmaceuticals and three (7%) on medical devices. The principles of the European network for Health Technology Assessment (EUnetHTA) guidelines published in 2015 on the handling of surrogate endpoints appear to have been adopted by many European HTA agencies, i.e. a preference for final patient-relevant outcomes, and reliance on surrogate endpoints only where there is biological plausibility and epidemiological evidence of the association between the surrogate and final endpoint. Only a small number of HTA agencies (the UK National Institute for Health and Care Excellence; the German Institute for Medical Documentation and Information and Institute for Quality and Efficiency in Health Care; the Australian Pharmaceutical Benefits Advisory Committee; and the Canadian Agency for Drugs and Technologies in Health) have developed more detailed prescriptive criteria for the acceptance of surrogate endpoints, e.g. meta-analyses of randomised controlled trials showing a strong association between the treatment effect on the surrogate and on the final outcome.
As the decision uncertainty associated with reliance on surrogate endpoints carries a risk to patients and society, there is a need for HTA agencies to develop more detailed methodological guidance for consistent selection and evaluation of health technologies that lack definitive final patient-relevant outcome evidence at the time of the assessment.
Expert elicitation can provide valuable information for decisions informed by health technology assessment (HTA), particularly where the evidence base is less developed at the point of market access. In these circumstances, formal methods to elicit expert judgements are preferred: they improve the accountability and transparency of the decision-making process, help reduce bias and the use of heuristics, and provide a structure that allows uncertainty to be expressed. Expert elicitation is the process of transforming the subjective and implicit knowledge of experts into quantifiable expressions. The use of expert elicitation in HTA is gaining momentum, and there is particular interest in its application to diagnostics, medical devices and complex interventions such as those in public health or social care. Compared with the gathering of experimental evidence, elicitation constitutes a reasonably low-cost source of evidence. Given its inherently subjective nature, however, the potential biases in elicited evidence cannot be ignored and, because elicitation is in its infancy in HTA, there is little guidance for the analyst wishing to conduct a formal elicitation exercise. This article attempts to summarise the stages of designing and conducting an expert elicitation, drawing on key literature and examples, most of which are not in HTA, and critiques their applicability to HTA given its distinguishing features. There are a number of issues that the analyst should be mindful of, in particular the need to appropriately characterise the uncertainty associated with model inputs, and the fact that numerous parameters are often required, not all of which can be defined using the same quantities. This increases the need for the elicitation task to be as straightforward as possible for the expert to complete.
Objectives: Medical devices are potentially good candidates for coverage with evidence development (CED) schemes, as clinical data at market entry are often sparse and (cost-)effectiveness depends on real-world use. The objective of this research was to explore the diffusion of CED schemes for devices in Europe, and the factors that favour or hamper their utilization. Methods: We conducted structured interviews with 25 decision-makers from 22 European countries to explore the characteristics of existing CED programmes for devices, and how decision-makers perceived 13 pre-identified challenges associated with initiating and operating CED schemes for devices. We also collected data on individual schemes that were either initiated or still ongoing in the last 5 years. Results: We identified seven countries with CED programmes for devices and 78 ongoing schemes. The characteristics of CED programmes varied across countries, including eligibility criteria, roles and responsibilities of stakeholders, funding arrangements, and the type of decisions being contemplated at the outset of each scheme. We observed high variability in how decision-makers perceived CED-related challenges, possibly reflecting country-specific arrangements and different experiences with CED. One general finding across all countries was that relatively little attention was paid to the evaluation of schemes, both during and at their completion. Conclusions: CED programmes for devices with different characteristics exist in Europe. Decision-makers' perceptions differ on the challenges associated with these schemes. More exchange of knowledge and experience will help decision-makers anticipate the likely challenges in CED schemes for devices, and learn from good practices existing elsewhere.
Background: Expert opinion is often sought to complement available information needed to inform model-based economic evaluations in health technology assessments. In this context, we define expert elicitation as the process of encoding expert opinion on a quantity of interest, together with its associated uncertainty, as a probability distribution. When availability for face-to-face expert elicitation with a facilitator is limited, elicitation can be conducted remotely, overcoming the challenges of finding an appropriate time to meet and allowing access to experts situated too far away for practical face-to-face sessions. However, distance elicitation is associated with reduced response rates and limited assistance for the expert during the elicitation session. The aim of this study was to inform the development of a remote elicitation tool by exploring the influence of the mode of elicitation on elicited beliefs.

Methods: An Excel-based tool (EXPLICIT) was developed to assist the elicitation session, including the preparation of the expert and the recording of their responses. General practitioners (GPs) were invited to provide expert opinion about population alcohol consumption behaviours. They were randomised to complete the elicitation either in a face-to-face meeting or by email. EXPLICIT was used in the elicitation sessions for both arms.

Results: Fifteen GPs completed the elicitation session. Sessions conducted by email were longer than the face-to-face sessions (13 min 30 s vs 10 min 26 s, p = 0.1), and the email-elicited estimates contained less uncertainty. However, the resulting aggregated distributions were comparable.

Conclusions: EXPLICIT was useful both in facilitating the elicitation task and in obtaining expert opinion from experts via email. The findings support the view that remote, self-administered elicitation is a viable approach within the constraints of HTA to inform policy making, although poor response rates may be observed and additional time for individual sessions may be required.
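Several of these abstracts mention the histogram ("chips and bins") technique. As a hedged illustration of how such elicited responses are typically encoded, the sketch below normalises an expert's chip counts per bin into a discrete probability distribution over bin midpoints. This is an assumed, generic sketch, not code from EXPLICIT or any of the cited studies.

```python
import numpy as np

def chips_to_distribution(bin_edges, chips):
    """Turn an expert's chip counts per value bin into a discrete
    probability distribution over bin midpoints, with its implied
    mean (illustrative sketch of the histogram method)."""
    chips = np.asarray(chips, dtype=float)
    probs = chips / chips.sum()            # normalise counts to probabilities
    edges = np.asarray(bin_edges, dtype=float)
    mids = (edges[:-1] + edges[1:]) / 2.0  # midpoint of each bin
    mean = float(np.sum(mids * probs))     # mean of the discrete distribution
    return mids, probs, mean
```

For example, four chips placed as 1/2/1 across bins [0, 10), [10, 20), [20, 30) encode probabilities 0.25/0.5/0.25 with an implied mean of 15. Per-expert distributions encoded this way can then be aggregated and propagated through a decision model.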
Background: Tools based on diagnostic prediction models are available to help general practitioners diagnose cancer. It is unclear whether or not these tools expedite diagnosis or affect patient quality of life and/or survival. Objectives: The objectives were to evaluate the evidence on the validation, clinical effectiveness, cost-effectiveness, and availability and use of cancer diagnostic tools in primary care. Methods: Two systematic reviews were conducted to examine the clinical effectiveness (review 1) and the development, validation and accuracy (review 2) of diagnostic prediction models for aiding general practitioners in cancer diagnosis. Bibliographic searches were conducted on MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Library and Web of Science in May 2017, with updated searches conducted in November 2018. A decision-analytic model explored the tools' clinical effectiveness and cost-effectiveness in colorectal cancer. The model compared patient outcomes and costs between strategies that included the use of the tools and those that did not, from the NHS perspective. We surveyed 4600 general practitioners in randomly selected UK practices to determine the proportions of general practices and general practitioners with access to, and using, cancer decision support tools. The association between access to these tools and practice-level cancer diagnostic indicators was explored. Results: Systematic review 1: five studies, of different design and quality, reporting on three diagnostic tools, were included. We found no evidence that using the tools was associated with better outcomes. Systematic review 2: 43 studies were included, reporting on prediction models, in various stages of development, for 14 cancer sites (including multiple cancers). Most studies relate to QCancer® (ClinRisk Ltd, Leeds, UK) and risk assessment tools.
Decision model: In the absence of studies reporting their clinical outcomes, QCancer and risk assessment tools were evaluated against faecal immunochemical testing. A linked-data approach was used, which translates diagnostic accuracy into time to diagnosis and treatment, and stage at diagnosis. Given the current lack of evidence, the model showed that the cost-effectiveness of diagnostic tools in colorectal cancer relies on demonstrating patient survival benefits. The sensitivity of faecal immunochemical testing and the specificity of QCancer and risk assessment tools in a low-risk population were the key uncertain parameters. Survey: Practitioner- and practice-level response rates were 10.3% (476/4600) and 23.3% (227/975), respectively. Cancer decision support tools were available in 83 out of 227 practices (36.6%, 95% confidence interval 30.3% to 43.1%), and were likely to be used in 38 out of 227 practices (16.7%, 95% confidence interval 12.1% to 22.2%). The mean 2-week-wait referral rate did not differ between practices that do and do not have access to QCancer or risk assessment tools (mean difference of 1.8 referrals per 100,000 referrals, 95% confidence interval –6.7 to 10.3). Limitations: There is little good-quality evidence on the clinical effectiveness and cost-effectiveness of diagnostic tools. Many diagnostic prediction models are limited by a lack of external validation. There are limited data on current UK practice and on the clinical outcomes of diagnostic strategies, and there is no evidence on the quality-of-life outcomes of diagnostic results. The survey was limited by low response rates. Conclusion: The evidence base on the tools is limited. Research on how general practitioners interact with the tools may help to identify barriers to implementation and uptake, and the potential for clinical effectiveness. Future work: Continued model validation is recommended, especially for risk assessment tools.
Assessment of the tools' impact on time to diagnosis and treatment, stage at diagnosis, and health outcomes is also recommended, as is further work to understand how the tools are used in general practitioner consultations. Study registration: This study is registered as PROSPERO CRD42017068373 and CRD42017068375. Funding: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 24, No. 66. See the NIHR Journals Library website for further project information.
Background: Surrogate endpoints (i.e., intermediate endpoints intended to predict patient-centered outcomes) are increasingly common. However, little is known about how surrogate evidence is handled in the context of health technology assessment (HTA). Objectives: 1) To map methodologies for the validation of surrogate endpoints and 2) to determine their impact on the acceptability of surrogates and on coverage decisions made by HTA agencies. Methods: We sought HTA reports where evaluation relied on a surrogate from 8 HTA agencies. We extracted data on the methods applied for surrogate validation. We assessed the level of agreement between agencies and fitted mixed-effects logistic regression models to test the impact of validation approaches on the agency's acceptance of the surrogate endpoint and its coverage recommendation. Results: Of the 124 included reports, 61 (49%) discussed the level of evidence to support the relationship between the surrogate and the patient-centered endpoint, 27 (22%) reported a correlation coefficient/association measure, and 40 (32%) quantified the expected effect on the patient-centered outcome. Overall, the surrogate endpoint was deemed acceptable in 49 (40%) reports (kappa coefficient 0.10, P = 0.004). Any consideration of the level of evidence was associated with accepting the surrogate endpoint as valid (odds ratio [OR], 4.60; 95% confidence interval [CI], 1.60–13.18; P = 0.005). However, we did not find strong evidence of an association between accepting the surrogate endpoint and the agency's coverage recommendation (OR, 0.71; 95% CI, 0.23–2.20; P = 0.55). Conclusions: Handling of surrogate endpoint evidence in reports varied greatly across HTA agencies, with inconsistent consideration of the level of evidence and statistical validation. Our findings call for careful reconsideration of the issue of surrogacy and for harmonization of practices across international HTA agencies.
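The odds ratios with 95% confidence intervals reported in this abstract can be illustrated with a minimal sketch. Note the assumptions: the counts below are invented, and the study itself fitted mixed-effects logistic regression models rather than computing a Wald-type OR from a simple 2x2 table as shown here.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald-type odds ratio and 95% CI from a 2x2 table:
    a/b = outcome yes/no in the exposed group,
    c/d = outcome yes/no in the unexposed group.
    Illustrative only; all counts are hypothetical."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi
```

For instance, `odds_ratio_ci(10, 10, 5, 20)` returns an OR of 4.0 with a CI spanning it; an OR whose CI excludes 1 (as for the 4.60 result above) indicates an association at the 5% level.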