Background
Providing health professionals with quantitative summaries of their clinical performance when treating specific groups of patients ("feedback") is a widely used quality improvement strategy, yet systematic reviews show it has varying success. Theory could help explain what factors influence feedback success and guide approaches to enhance effectiveness. However, existing theories lack comprehensiveness and specificity to health care. To address this problem, we conducted the first systematic review and synthesis of qualitative evaluations of feedback interventions, using the findings to develop a comprehensive new health care-specific feedback theory.

Methods
We searched MEDLINE, EMBASE, CINAHL, Web of Science, and Google Scholar from inception until 2016 inclusive. Data were synthesised by coding individual papers, building on pre-existing theories to formulate hypotheses, iteratively testing and improving hypotheses, assessing confidence in hypotheses using the GRADE-CERQual method, and summarising high-confidence hypotheses into a set of propositions.

Results
We synthesised 65 papers evaluating 73 feedback interventions from countries spanning five continents. From our synthesis we developed Clinical Performance Feedback Intervention Theory (CP-FIT), which builds on 30 pre-existing theories and has 42 high-confidence hypotheses. CP-FIT states that effective feedback works in a cycle of sequential processes; it becomes less effective if any individual process fails, thus halting progress around the cycle. Feedback's success is influenced by several factors operating via a set of common explanatory mechanisms: the feedback method used, the health professional receiving feedback, and the context in which feedback takes place.
CP-FIT summarises these effects in three propositions: (1) health care professionals and organisations have a finite capacity to engage with feedback, (2) these parties have strong beliefs regarding how patient care should be provided that influence their interactions with feedback, and (3) feedback that directly supports clinical behaviours is most effective.

Conclusions
This is the first qualitative meta-synthesis of feedback interventions, and the first comprehensive theory of feedback designed specifically for health care. Our findings contribute new knowledge about how feedback works and the factors that influence its effectiveness. Internationally, practitioners, researchers, and policy-makers can use CP-FIT to design, implement, and evaluate feedback. Doing so could improve care for large numbers of patients, reduce opportunity costs, and improve returns on financial investments.

Trial registration: PROSPERO, CRD42015017541

Electronic supplementary material: The online version of this article (10.1186/s13012-019-0883-5) contains supplementary material, which is available to authorized users.
Audit and feedback (A&F) is a commonly used quality improvement (QI) approach. A Cochrane review indicates that A&F is generally effective and leads to modest improvements in professional practice, but with considerable variation in the observed effects. While we have some understanding of the factors that enhance the effects of A&F, further research needs to explore when A&F is most likely to be effective and how to optimise it. To do this, we need to move away from two-arm trials of A&F compared with control in favour of head-to-head trials of different ways of providing A&F. This paper describes implementation laboratories: collaborations between healthcare organisations providing A&F at scale and researchers, designed to embed head-to-head trials into routine QI programmes. This approach can improve effectiveness while producing generalisable knowledge about how to optimise A&F. We also describe an international meta-laboratory that aims to maximise cross-laboratory learning and facilitate coordination of A&F research.
Background
Audit and feedback (A&F) is a common quality improvement strategy with highly variable effects on patient care. It is unclear how A&F effectiveness can be maximised. Since the core mechanism of action of A&F depends on drawing attention to a discrepancy between actual and desired performance, we aimed to understand current and best practices in the choice of performance comparator.

Methods
We described current choices for performance comparators by conducting a secondary review of randomised trials of A&F interventions and identifying the associated mechanisms that might have implications for effective A&F by reviewing theories and empirical studies from a recent qualitative evidence synthesis.

Results
We found across 146 trials that feedback recipients' performance was most frequently compared against the performance of others (benchmarks; 60.3%). Other comparators included recipients' own performance over time (trends; 9.6%) and target standards (explicit targets; 11.0%), and 13% of trials used a combination of these options. In studies featuring benchmarks, 42% compared against mean performance. Eight (5.5%) trials provided a rationale for using a specific comparator. We distilled mechanisms of each comparator from 12 behavioural theories, 5 randomised trials, and 42 qualitative A&F studies.

Conclusion
Clinical performance comparators in the published literature were poorly informed by theory and did not explicitly account for mechanisms reported in qualitative studies.
Based on our review, we argue that there is considerable opportunity to improve the design of performance comparators by (1) providing tailored comparisons rather than benchmarking everyone against the mean, (2) limiting the number of comparators displayed while providing more comparative information upon request, to balance the feedback's credibility and actionability, (3) providing performance trends, but not trends alone, and (4) encouraging feedback recipients to set personal, explicit targets guided by relevant information.

Electronic supplementary material: The online version of this article (10.1186/s13012-019-0887-1) contains supplementary material, which is available to authorized users.