Objectives To identify factors that differentiate between effective and ineffective computerised clinical decision support systems in terms of improvements in the process of care or in patient outcomes.
Design Meta-regression analysis of randomised controlled trials.
Data sources A database of features and effects of these support systems derived from 162 randomised controlled trials identified in a recent systematic review. Trialists were contacted to confirm the accuracy of data and to help prioritise features for testing.
Main outcome measures "Effective" systems were defined as those that improved primary (or 50% of secondary) reported outcomes of process of care or patient health. Simple and multiple logistic regression models were used to test characteristics for association with system effectiveness, with several sensitivity analyses.
Results Systems that presented advice in electronic charting or order entry system interfaces were less likely to be effective (odds ratio 0.37, 95% confidence interval 0.17 to 0.80). Systems more likely to succeed provided advice for patients in addition to practitioners (2.77, 1.07 to 7.17), required practitioners to supply a reason for over-riding advice (11.23, 1.98 to 63.72), or were evaluated by their developers (4.35, 1.66 to 11.44). These findings were robust across different statistical methods, in internal validation, and after adjustment for other potentially important factors.
Conclusions We identified several factors that could partially explain why some systems succeed and others fail. Presenting decision support within electronic charting or order entry systems is associated with failure compared with other ways of delivering advice. Odds of success were greater for systems that required practitioners to provide reasons when over-riding advice than for systems that did not. Odds of success were also better for systems that provided advice concurrently to patients and practitioners. Finally, most systems were evaluated by their own developers, and such evaluations were more likely to show benefit than those conducted by a third party.
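For a single binary system feature, the "simple" (univariable) logistic regression reported above reduces to an odds ratio computed from a 2×2 table of effective versus ineffective systems. A minimal sketch of that calculation using the standard Woolf (log) confidence interval; the function name and all counts are hypothetical, not taken from the trial database:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf/log method) for a 2x2 table:
    a = feature present, system effective;  b = feature present, ineffective;
    c = feature absent,  system effective;  d = feature absent,  ineffective."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10 effective / 5 ineffective systems with the feature,
# 4 effective / 8 ineffective systems without it.
print(odds_ratio_ci(10, 5, 4, 8))
```

A confidence interval whose lower bound exceeds 1 (or whose upper bound falls below 1, as for the electronic-charting finding) is what marks a feature as significantly associated with effectiveness; the multivariable models in the study additionally adjust each feature for the others.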
Background Computerized clinical decision support systems (CCDSSs) are claimed to improve processes and outcomes of primary preventive care (PPC), but their effects, safety, and acceptance must be confirmed. We updated our previous systematic reviews of CCDSSs and integrated a knowledge translation approach in the process. The objective was to review randomized controlled trials (RCTs) assessing the effects of CCDSSs for PPC on process of care, patient outcomes, harms, and costs.
Methods We conducted a decision-maker-researcher partnership systematic review. We searched MEDLINE, EMBASE, Ovid's EBM Reviews Database, Inspec, and other databases, as well as reference lists through January 2010. We contacted authors to confirm data or provide additional information. We included RCTs that assessed the effect of a CCDSS for PPC on process of care and patient outcomes compared to care provided without a CCDSS. A study was considered to have a positive effect (i.e., CCDSS showed improvement) if at least 50% of the relevant study outcomes were statistically significantly positive.
Results We added 17 new RCTs to our 2005 review for a total of 41 studies. RCT quality improved over time. CCDSSs improved process of care in 25 of 40 (63%) RCTs. Cumulative scientifically strong evidence supports the effectiveness of CCDSSs for screening and management of dyslipidaemia in primary care. There is mixed evidence for effectiveness in screening for cancer and mental health conditions, multiple preventive care activities, vaccination, and other preventive care interventions. Fourteen (34%) trials assessed patient outcomes, and four (29%) reported improvements with the CCDSS. Most trials were not powered to evaluate patient-important outcomes. CCDSS costs and adverse events were reported in only six (15%) and two (5%) trials, respectively. Information on study duration was often missing, limiting our ability to assess sustainability of CCDSS effects.
Conclusions Evidence supports the effectiveness of CCDSSs for screening and treatment of dyslipidaemia in primary care, with less consistent evidence for CCDSSs used in screening for cancer and mental health-related conditions, vaccinations, and other preventive care. CCDSS effects on patient outcomes, safety, costs of care, and provider satisfaction remain poorly supported.
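The review's effectiveness criterion, counting a study as positive when at least 50% of its relevant outcomes are statistically significantly improved, can be expressed directly. A sketch in which the function name and the boolean encoding of outcomes are illustrative, not from the review:

```python
def study_positive(outcomes):
    """Classify a study as positive if at least 50% of its relevant
    outcomes were statistically significantly improved by the CCDSS.
    `outcomes` is a list of booleans, one per relevant study outcome."""
    if not outcomes:
        raise ValueError("at least one outcome is required")
    return sum(outcomes) >= 0.5 * len(outcomes)

print(study_positive([True, False]))         # exactly 50% counts as positive
print(study_positive([True, False, False]))  # 1 of 3 does not
```

Note that this is a vote-counting rule over outcomes within a study, not a pooled effect estimate, which is one reason such reviews hedge conclusions about patient-important outcomes.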
This article is part of a series written for people responsible for making decisions about health policies and programmes and for those who support these decision makers. Differences between health systems may often mean that a policy or programme option used in one setting is not feasible or acceptable in another. These differences may also mean that an option does not work in the same way, or achieves different impacts, in another setting. A key challenge that policymakers and those supporting them must face is therefore the need to understand whether research evidence about an option can be applied to their setting. Systematic reviews make this task easier by summarising the evidence from studies conducted in a variety of different settings. Many systematic reviews, however, do not provide adequate descriptions of the features of the actual settings in which the original studies were conducted. In this article, we suggest questions to guide those assessing the applicability of the findings of a systematic review to a specific setting. These are:
1. Were the studies included in a systematic review conducted in the same setting, or were the findings consistent across settings or time periods?
2. Are there important differences in on-the-ground realities and constraints that might substantially alter the feasibility and acceptability of an option?
3. Are there important differences in health system arrangements that may mean an option could not work in the same way?
4. Are there important differences in the baseline conditions that might yield different absolute effects even if the relative effectiveness was the same?
5. What insights can be drawn about options, implementation, and monitoring and evaluation?
Even if there are reasonable grounds for concluding that the impacts of an option might differ in a specific setting, insights can almost always be drawn from a systematic review about possible options, as well as approaches to the implementation of options and to monitoring and evaluation.