We update the 2008 TU Delft structured expert judgment database with data from 33 professionally contracted Classical Model studies conducted between 2006 and March 2015 to evaluate its performance relative to other expert aggregation models. We briefly review alternative mathematical aggregation schemes, including harmonic weighting, before focusing on linear pooling of expert judgments with equal weights and with performance-based weights. In-sample, performance weighting outperforms equal weighting in all but 1 of the 33 studies. True out-of-sample validation is rarely possible for Classical Model studies, so cross-validation techniques that split the calibration questions into a training set and a test set are used instead. Performance weighting incurs an “out-of-sample penalty”: its statistical accuracy out-of-sample is lower than that of equal weighting. However, as a function of training set size, the statistical accuracy of performance-based combinations reaches 75% of the equal weight value when the training set includes 80% of the calibration variables. At this point the training set is powerful enough to resolve differences in individual expert performance. The information of performance-based combinations is double that of equal weighting when the training set contains at least 50% of the calibration variables. Previous out-of-sample validation work used a Total Out-of-Sample Validity Index based on all splits of the calibration questions into training and test subsets, which is expensive to compute and includes small training sets of dubious value. As an alternative, we propose an Out-of-Sample Validity Index based on averaging the product of statistical accuracy and information over all training sets sized at 80% of the calibration set.
Performance weighting outperforms equal weighting on this Out-of-Sample Validity Index in 26 of the 33 post-2006 studies; the probability of 26 or more successes in 33 trials, if there were no difference between performance weighting and equal weighting, is 0.001.
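The quoted probability can be checked as a one-sided binomial (sign) test: under the null hypothesis that performance weighting and equal weighting are equally likely to win each study, each of the 33 studies is a fair coin flip. A minimal check:

```python
import math

# One-sided binomial tail: P(X >= 26) for X ~ Binomial(33, 0.5),
# i.e. 26 or more performance-weighting "wins" under the null of no difference.
n, wins = 33, 26
p_value = sum(math.comb(n, k) for k in range(wins, n + 1)) / 2**n
# p_value ≈ 0.00066, consistent with the reported value of 0.001
```

The exact tail probability is about 0.00066, which rounds to the 0.001 reported above.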
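The proposed index can be sketched as follows. This is a structural illustration only: `score` is a hypothetical stand-in for computing the product of statistical accuracy and information of the performance-weighted combination trained on one 80% subset and evaluated on the held-out questions; the real Classical Model scoring is not reproduced here.

```python
import itertools

def oosvi(calibration_ids, score):
    """Out-of-Sample Validity Index sketch: average score(train, test)
    over all training sets sized at 80% of the calibration set.

    `score(train, test)` is a hypothetical callable returning
    statistical accuracy * information for one split.
    """
    n = len(calibration_ids)
    k = -((-4 * n) // 5)  # ceil(0.8 * n), using exact integer arithmetic
    splits = list(itertools.combinations(calibration_ids, k))
    total = 0.0
    for train in splits:
        test = tuple(q for q in calibration_ids if q not in train)
        total += score(train, test)
    return total / len(splits)
```

With, say, 10 calibration questions, this averages over all C(10, 8) = 45 training sets of size 8, rather than over every possible split as the Total Out-of-Sample Validity Index does.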
Summary

Objective: An estimated 6–10 million people in India live with active epilepsy, and less than half are treated. We analyze the health and economic benefits of three scenarios of publicly financed national epilepsy programs that provide: (1) first-line antiepilepsy drugs (AEDs), (2) first- and second-line AEDs, and (3) first- and second-line AEDs and surgery.

Methods: We model the prevalence and distribution of epilepsy in India using IndiaSim, an agent-based simulation model of the Indian population. Agents in the model are disease-free or in one of three disease states: untreated with seizures, treated with seizures, and treated without seizures. Outcome measures include the proportion of the population that has epilepsy and is untreated, disability-adjusted life years (DALYs) averted, and cost per DALY averted. Economic benefit measures estimated include out-of-pocket (OOP) expenditure averted and the money-metric value of insurance.

Results: All three scenarios represent a cost-effective use of resources and would avert 800,000–1 million DALYs per year in India relative to the current scenario. However, especially in poor regions and populations, scenario 1 (which publicly finances only first-line therapy) does not decrease OOP expenditure or provide financial risk protection if we include care-seeking costs. The OOP expenditure averted increases from scenario 1 through scenario 3, and the money-metric value of insurance follows a similar trend across scenarios and typically decreases with wealth. In the first 10 years of scenarios 2 and 3, households avert on average over US$80 million per year in medical expenditure.

Significance: Expanding and publicly financing epilepsy treatment in India averts substantial disease burden. A universal public finance policy that covers only first-line AEDs may not provide significant financial risk protection. Covering costs for both first- and second-line therapy and other medical costs alleviates the financial burden from epilepsy and is cost-effective across wealth quintiles and in all Indian states.