Background: Clinical practice guidelines are typically written for healthcare providers, but there is increasing interest in producing versions for the public, patients and carers. The main objective of this review is to identify and synthesise evidence on the public's attitudes towards clinical practice guidelines and evidence-based recommendations written for providers or the public, together with their awareness of guidelines.

Methods: We included quantitative and qualitative studies of any design reporting on public, patient (and carer) attitudes towards, and awareness of, guidelines written for providers or for patients and the public. We searched electronic databases including MEDLINE, PsycINFO, ERIC, ASSIA and the Cochrane Library from 2000 to 2012. We also searched relevant websites, reviewed citations and contacted experts in the field. At least two authors independently screened records, abstracted data and assessed the quality of studies. We conducted a thematic analysis of first- and second-order themes and performed a separate narrative synthesis of patient and public awareness of guidelines.

Results: We reviewed 5415 records and included 26 studies (10 qualitative studies, 13 cross-sectional studies and 3 randomised controlled trials) involving 24,887 individuals. Studies were mostly of good to fair quality. The thematic analysis produced four overarching themes: applicability of guidelines; purpose of guidelines for patients; purpose of guidelines for the healthcare system and physician; and properties of guidelines. Overall, participants had mixed attitudes towards guidelines: some found them empowering, but many saw them as a way of rationing care. Patients were also concerned that the information might not apply to their own healthcare situations. Awareness of guidelines ranged from 0% to 79%, with greater awareness among participants surveyed on national guideline websites.

Conclusion: There are many factors, not only formatting, that may affect the uptake and use of guideline-derived material by the public, although there were problems with data quality in the included studies. Producers need to make clear how the information is relevant to the reader and how it can be used to improve healthcare. Awareness of guidelines is generally low, and guideline producers cannot assume that the public has a more positive perception of their material than of alternative sources of health information.
Randomised trials are at the heart of evidence-based healthcare, but the methods and infrastructure for conducting these sometimes complex studies are largely evidence free. Trial Forge (www.trialforge.org) is an initiative that aims to increase the evidence base for trial decision making and, in doing so, to improve trial efficiency.

This paper summarises a one-day workshop held in Edinburgh on 10 July 2014 to discuss Trial Forge and how to advance this initiative. We first outline the problem of inefficiency in randomised trials and go on to describe Trial Forge. We then present participants' views on the processes in the life of a randomised trial that should be covered by Trial Forge.

There was general support at the workshop for the Trial Forge approach to increasing the evidence base for making randomised trial decisions and for improving trial efficiency. Agreed key processes included choosing the right research question; logistical planning for delivery, training of staff, recruitment, and retention; data management and dissemination; and close down. Linking to existing initiatives where possible was considered crucial. Trial Forge will not be a guideline or a checklist but a 'go to' website for research on randomised trial methods, with a linked programme of applied methodology research, coupled to an effective evidence-dissemination process. Moreover, it will support an informal network of interested trialists who meet virtually (online) and occasionally in person to build capacity and knowledge in the design and conduct of efficient randomised trials.

Some of the resources invested in randomised trials are wasted because of the limited evidence upon which to base many aspects of the design, conduct, analysis, and reporting of clinical trials. Trial Forge will help to address this lack of evidence.
Background: Systematic reviews have shown uncertainty about the size or direction of any 'trial effect' for patients treated in trials compared with those treated outside trials. We are not aware of any systematic review of whether there is a 'trial effect' related to being treated by healthcare practitioners or institutions that take part in research.

Methods: We searched the Cochrane Methodology Register and MEDLINE (most recently in January 2009) for studies in which patients were allocated to treatment in one or other setting, and for cohort studies reporting the outcomes of patients from different settings. We independently assessed study quality, including the control of bias in the generation of the comparison groups, and extracted data.

Results: We retrieved and checked more than 15,000 records. Thirteen articles were eligible: five practitioner studies and eight institution studies. Meta-analyses were not possible because of heterogeneity. Two practitioner studies were judged to be 'controlled' or better. A Canadian study among nurses found that use of research evidence was higher for those who took part in research working groups, and a Danish study of general practitioners found that trial doctors were more likely to prescribe in accordance with research evidence and guidelines. Five institution studies were 'controlled' but provided mixed results. A study of North American patients at hospitals that had taken part in trials for myocardial infarction found no statistically significant difference in treatment for patients in trial and non-trial hospitals. A Canadian study of myocardial infarction patients found that trial participants had better survival than patients in the same hospitals who were not in trials, or than those in non-trial hospitals. A study of general practices in Denmark did not detect differences in guideline adherence between trial and non-trial practices, but found that trial practices were more likely to prescribe the trial sponsor's drugs. The other two 'controlled' studies of institutions found lower mortality in trial than in non-trial hospitals.

Conclusions: The available findings from existing research suggest that there might be a 'trial effect' of better outcomes, greater adherence to guidelines and more use of evidence by practitioners and institutions that take part in trials. However, the consequences for patient health are uncertain, and the most robust conclusion may be that there is no apparent evidence that patients treated by practitioners or in institutions that take part in trials do worse than those treated elsewhere.
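The decision that "meta-analyses were not possible because of heterogeneity" is usually grounded in statistics such as Cochran's Q and I². As an illustration only, here is a minimal Python sketch of that calculation; the five effect estimates and variances are invented, not taken from the review above.

```python
def cochran_q_and_i2(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic under a
    fixed-effect (inverse-variance) pooling model.

    effects:   per-study effect estimates (e.g. log odds ratios)
    variances: per-study sampling variances
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted squared deviations of study effects from the pooled effect
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: percentage of total variation due to heterogeneity (floored at 0)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log odds ratios and variances from five studies
q, i2 = cochran_q_and_i2([0.1, 0.8, -0.3, 1.2, 0.5],
                         [0.04, 0.06, 0.05, 0.08, 0.05])
```

With these invented inputs, I² comes out well above the conventional 50% threshold for substantial heterogeneity, the situation in which pooling is often judged inappropriate.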
Background: If you want to know which of two or more healthcare interventions is most effective, the randomised controlled trial is the design of choice. Randomisation, however, does not itself promote the applicability of the results to situations other than the one in which the trial was done. A tool published in 2009, PRECIS (PRagmatic Explanatory Continuum Indicator Summaries), aimed to help trialists design trials that produced results matched to the aim of the trial, be that supporting clinical decision-making or increasing knowledge of how an intervention works. Though generally positive, groups evaluating the tool have also found weaknesses, mainly that its inter-rater reliability is not clear, that it needs a scoring system and that some new domains might be needed. The aims of the study are to: (1) produce an improved and validated version of the PRECIS tool; and (2) use this tool to compare the internal validity of, and effect estimates from, a set of explanatory and pragmatic trials matched by intervention.

Methods: The study has four phases. Phase 1 involves brainstorming and a two-round Delphi survey of authors who cited PRECIS. In Phase 2, the Delphi results will be discussed and alternative versions of PRECIS-2 developed and user-tested by experienced trialists. Phase 3 will evaluate the validity and reliability of the most promising PRECIS-2 candidate using a sample of 15 to 20 trials rated by 15 international trialists. We will assess inter-rater reliability, and will compare raters' subjective global ratings of pragmatism with PRECIS-2 scores to assess convergent and face validity. Phase 4, to determine whether pragmatic trials sacrifice internal validity in order to achieve applicability, will compare the internal validity and effect estimates of matched explanatory and pragmatic trials of the same intervention, condition and participants. Effect sizes for the trials will then be compared in a meta-regression, and Cochrane Risk of Bias scores will be compared with PRECIS-2 scores of pragmatism.

Discussion: We have concrete suggestions for improving PRECIS and a growing list of enthusiastic individuals interested in contributing to this work. By early 2014 we expect to have a validated PRECIS-2.
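Inter-rater reliability of the kind Phase 3 will assess is commonly quantified with agreement statistics such as Cohen's kappa. The sketch below is illustrative only: it uses unweighted kappa for two raters (a weighted kappa or an intraclass correlation would suit ordinal 1-5 domain scores better, and the protocol above does not specify its statistic), and the ratings are invented.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: chance-corrected agreement
    between two raters assigning categorical scores."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of exact agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if raters scored independently at their
    # own marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 1-5 pragmatism scores from two raters over ten trials
kappa = cohens_kappa([5, 4, 4, 2, 1, 3, 5, 4, 2, 3],
                     [5, 4, 3, 2, 1, 3, 5, 4, 2, 2])
```

On these invented ratings kappa lands in the 0.61-0.80 band conventionally read as "substantial" agreement.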
Background: Guideline producers increasingly publish versions of guidelines for the public. The aim of this study was to explore what patients and the public understand about the purpose and production of clinical guidelines, and what they want from clinical guidelines to support their healthcare decisions.