Randomised trials are a central component of all evidence-informed health care systems, and the evidence they generate helps health care users, health professionals and others to make more informed decisions about treatment. The evidence available to trialists to support decisions on the design, conduct and reporting of randomised trials is, however, sparse. Trial Forge is an initiative that aims to increase the evidence base for trial decision-making and, in doing so, to improve trial efficiency. One way to fill gaps in evidence is to run Studies Within A Trial, or SWATs. This guidance document provides a brief definition of SWATs, an explanation of why they are important and some practical ‘top tips’ drawn from existing experience of doing SWATs. We hope the guidance will be useful to trialists, methodologists, funders, approvals agencies and others in making clear what a SWAT is, as well as what is involved in doing one.
The evidence base available to trialists to support trial process decisions (e.g. how best to recruit and retain participants, how to collect data or how to share the results with participants) is thin. One way to fill gaps in evidence is to run Studies Within A Trial, or SWATs. These are self-contained research studies embedded within a host trial that aim to evaluate or explore alternative ways of delivering or organising a particular trial process. SWATs are increasingly being supported by funders and considered by trialists, especially in the UK and Ireland. At some point, the growing body of SWAT evidence will lead funders and trialists to ask: given the current evidence for a SWAT, do we need a further evaluation in another host trial? A framework for answering such a question is needed to avoid SWATs themselves contributing to research waste. This paper presents criteria for deciding when enough evidence is available for SWATs that use randomised allocation to compare different interventions.
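The abstract does not spell out the criteria themselves, but the kind of judgement involved can be illustrated with a standard inverse-variance meta-analysis of SWAT replications across host trials: as replications accumulate, the pooled estimate tightens, and at some point further replication adds little. The sketch below is a minimal illustration assuming hypothetical effect sizes and an arbitrary precision threshold; none of it reflects the paper's actual criteria.

```python
import math

def pooled_log_rr(log_rrs, ses):
    """Fixed-effect inverse-variance pooling of log risk ratios
    from replications of the same SWAT across host trials."""
    weights = [1 / se**2 for se in ses]
    est = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)
    return est, math.sqrt(1 / sum(weights))

# Hypothetical log risk ratios and standard errors from three host trials
log_rrs = [0.10, 0.15, 0.08]
ses = [0.12, 0.09, 0.15]

est, se = pooled_log_rr(log_rrs, ses)
lo, hi = est - 1.96 * se, est + 1.96 * se
print(f"Pooled RR {math.exp(est):.2f} (95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")

# One possible (assumed) heuristic: stop replicating once the pooled
# estimate is precise enough to support a clear recommendation.
if se < 0.10:
    print("Further SWAT replication may add little.")
```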
Background: Retention of participants is essential to ensure the statistical power and internal validity of clinical trials. Poor participant retention reduces power and can bias estimates of intervention effect. There is sparse evidence from randomised comparisons of effective strategies to retain participants in randomised trials. Non-randomised evaluations of retention interventions embedded in host clinical trials are currently excluded from the Cochrane review of strategies to improve retention, which includes only randomised evaluations. However, systematic assessment of non-randomised evaluations may inform trialists’ decision-making about retention methods that have been evaluated in a trial context. We therefore performed a systematic review to synthesise evidence from non-randomised evaluations of retention strategies in order to supplement existing randomised trial evidence. Methods: We searched MEDLINE, EMBASE and Cochrane CENTRAL from 2007 to October 2017. Two reviewers independently screened abstracts and full-text articles for non-randomised studies that compared two or more strategies to increase participant retention in randomised trials. The retention studies had to be nested in real ‘host’ trials (including feasibility studies), not hypothetical trials. Two investigators independently rated the risk of bias of included studies using the ROBINS-I tool and determined the certainty of evidence using the GRADE (Grading of Recommendations Assessment, Development and Evaluation) framework. Results: Fourteen non-randomised studies of retention were included in this review. Most retention strategies (in 10 studies) aimed to increase questionnaire response rates. Strategies that appeared to increase questionnaire response rates included telephone follow-up rather than postal questionnaire completion, online rather than postal questionnaires, shortened rather than longer questionnaires, electronically transferred monetary incentives rather than cash, cash rather than no incentive, and telephone or text-message reminders to non-responders. However, each retention strategy was evaluated in a single observational study. This, together with risk-of-bias concerns, meant that the overall GRADE certainty was low or very low for all included studies. Conclusions: This systematic review provides low- or very-low-certainty evidence on the effectiveness of retention strategies evaluated in non-randomised studies. Some strategies need further evaluation to provide confidence about the size and direction of the underlying effect.
Background: Data collection consumes a large proportion of clinical trial resources. Each data item requires time and effort for collection, processing and quality control. In general, more data means a heavier burden for trial staff and participants, and is also likely to increase costs. Knowing the types of data being collected, and in what proportion, will help to ensure that limited trial resources and participant goodwill are used wisely. Aim: The aim of this study is to categorise the types of data collected across a broad range of trials and to assess what proportion of collected data each category represents. Methods: We developed a standard operating procedure to categorise data into primary outcome, secondary outcome and 15 other categories. We categorised all variables collected on trial data collection forms from 18, mainly publicly funded, randomised superiority trials, including trials of an investigational medicinal product and of complex interventions. Categorisation was done independently in pairs: one person with in-depth knowledge of the trial, the other independent of it. Disagreement was resolved through reference to the trial protocol and discussion, with the project team consulted if necessary. Key results: Primary outcome data accounted for 5.0% (median)/11.2% (mean) of all data items collected. Secondary outcomes accounted for 39.9% (median)/42.5% (mean) of all data items. Non-outcome data such as participant identifiers and demographic data represented 32.4% (median)/36.5% (mean) of all data items collected. Conclusion: A small proportion of the data collected in our sample of 18 trials related to the primary outcome. Secondary outcomes accounted for eight times the volume of data of the primary outcome. A substantial amount of data collection is not related to trial outcomes. Trialists should work to make sure that the data they collect are only those essential to support the health and treatment decisions of those the trial is designed to inform.
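To make the median/mean reporting above concrete, here is a minimal Python sketch of how per-category proportions of data items could be tallied across trials and then summarised. The category names and counts are hypothetical, not figures from the study.

```python
from statistics import median, mean

# Hypothetical counts of data items per category for three trials;
# the real study used 17 categories across 18 trials.
trials = [
    {"primary": 4, "secondary": 35, "non_outcome": 28, "other": 13},
    {"primary": 6, "secondary": 48, "non_outcome": 40, "other": 20},
    {"primary": 3, "secondary": 30, "non_outcome": 25, "other": 15},
]

for cat in trials[0]:
    # Proportion of all data items in this category, computed per trial
    props = [100 * t[cat] / sum(t.values()) for t in trials]
    print(f"{cat}: median {median(props):.1f}%, mean {mean(props):.1f}%")
```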
Background: Randomised controlled trials are regarded as the gold standard for evaluating the effectiveness and efficacy of healthcare interventions, with thousands of trials published every year. Despite significant investment in infrastructure, many clinical trials continue to face challenges with retention. Dropouts can lead to serious consequences, from lengthy delays to missing data that undermine the results and integrity of the trial. Summarising evidence from non-randomised evaluations of retention strategies could provide information complementary to randomised evaluations and guide trialists to the most effective ways of increasing participant retention in clinical trials. Methods: The following electronic databases will be searched for relevant studies: EMBASE, MEDLINE, the Cochrane Controlled Trials Register and the Cochrane Methodology Register. The search will be limited to English-language studies published during the last 10 years to increase relevance to current trials. Non-randomised (observational) studies comparing two or more strategies to increase participant retention in randomised trials, or comparing one or more strategies with no strategy, will be included. The primary outcome will be the proportion of participants remaining at the primary analysis, as defined in each retention study. Discussion: This review aims to gather and evaluate evidence on the effect of retention strategies examined in non-randomised studies. Collecting evidence from observational studies is needed to judge whether they offer a practical way to complement, or even replace, the generally preferred randomised design. If the non-randomised studies included in this review prove to be of high quality, with adequate control of bias, we will recommend that trialists and others not rely exclusively on randomised studies and instead pay close attention to the plentiful evidence that non-randomised studies can provide. Should the results suggest that evaluating retention strategies in observational studies gives trialists insufficient evidence for planning their retention strategies, we will conclude that there is little point in conducting non-randomised evaluations and that trialists would do better to invest their time and resources in a randomised evaluation where possible. Where a non-randomised design is chosen, the review authors will offer recommendations on how to conduct such studies in a way that minimises the risk of bias and increases confidence in the findings. Systematic review registration: PROSPERO 2017: CRD42017072775. Electronic supplementary material: The online version of this article (10.1186/s13643-018-0696-7) contains supplementary material, which is available to authorized users.
Background: Data collection is a substantial part of trial workload for participants and staff alike. How these hours of work are spent matters because stakeholders are more interested in some outcomes than others. The ORINOCO study compared the time spent collecting primary outcome data with the time spent collecting secondary outcome data in a cohort of trials. Methods: We searched PubMed for phase III trials indexed between 2015 and 2019. From these, we randomly selected 120 trials evaluating a therapeutic intervention plus an additional random selection of 20 trials evaluating a public health intervention. We also added eligible trials from a cohort of 189 trials in rheumatology that had used the same core outcome set. We then obtained the time taken to collect primary and secondary outcomes in each trial, using a hierarchy of methods that included data in trial reports, contacting the trial team and approaching individuals with experience of using the identified outcome measures. We calculated the primary:secondary data collection time ratio and a notional data collection cost for each included trial. Results: We included 161 trials (120 Phase III; 21 Core outcome set; 20 Public health), which together collected 230 primary and 688 secondary outcomes. Full primary and secondary timing data were obtained for 134 trials. The median time spent on primary outcomes was 56 hours (range 0.0 to 10,747) and the median time spent on secondary outcomes was 191 hours (range 0.0 to 1,356,833). The median primary:secondary data collection time ratio was 1:3.0 (i.e. for every minute spent on primary outcomes, 3.0 were spent on secondaries). The ratio varied by trial type: Phase III trials were 1:3.1, Core outcome set trials 1:3.4 and Public health trials 1:2.2. The median notional overall data collection cost was £8,016 (range £53 to £31,899,141). Conclusions: Depending on trial type, between two and three times as much time is spent collecting secondary outcome data as primary outcome data. Trial teams should explicitly consider how long it will take to collect the data for an outcome and decide whether that time is worth it given the importance of the outcome to the trial.
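A minimal sketch of the per-trial calculation described above, assuming hypothetical timing figures and an assumed hourly staff rate (neither taken from ORINOCO): it computes each trial's primary:secondary time ratio and a notional data collection cost, then the median ratio across trials.

```python
from statistics import median

# Hypothetical (primary_hours, secondary_hours) pairs for a few trials;
# these numbers are illustrative, not ORINOCO data.
timings = [(56.0, 170.0), (10.0, 31.0), (120.0, 264.0)]
HOURLY_RATE_GBP = 30.0  # assumed staff cost per hour, not from the study

ratios = []
for primary_h, secondary_h in timings:
    ratio = secondary_h / primary_h  # hours on secondaries per hour on primaries
    cost = (primary_h + secondary_h) * HOURLY_RATE_GBP  # notional collection cost
    ratios.append(ratio)
    print(f"ratio 1:{ratio:.1f}, notional cost £{cost:,.0f}")

print(f"median ratio 1:{median(ratios):.1f}")
```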