Background: In systematic reviews and meta-analyses, time-to-event outcomes are most appropriately analysed using hazard ratios (HRs). In the absence of individual patient data (IPD), methods are available to obtain HRs and/or associated statistics by carefully manipulating published or other summary data. Awareness and adoption of these methods are somewhat limited, perhaps because they are published in the statistical literature using statistical notation.
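The kind of manipulation described above can be sketched in code. The function below is a minimal illustration in the spirit of indirect-estimation methods for time-to-event summary data, not the authors' own implementation: it recovers an approximate HR and 95% CI from a reported two-sided logrank P value, the total number of events, and the randomisation fractions. The function name and the normal approximation to the logrank statistic are assumptions.

```python
import math
from statistics import NormalDist

def hr_from_logrank(p_two_sided, events_total, frac_research, frac_control,
                    favours_research=True):
    """Approximate a hazard ratio from a two-sided logrank P value and
    total event count (hypothetical sketch; assumes the logrank statistic
    is approximately normal)."""
    # z statistic implied by the two-sided P value
    z = NormalDist().inv_cdf(1 - p_two_sided / 2)
    # approximate variance of the logrank observed-minus-expected (O - E)
    var = events_total * frac_research * frac_control
    # O - E, signed by the reported direction of effect
    o_minus_e = -z * math.sqrt(var) if favours_research else z * math.sqrt(var)
    log_hr = o_minus_e / var          # log hazard ratio estimate
    se = 1 / math.sqrt(var)           # standard error of the log HR
    hr = math.exp(log_hr)
    ci = (math.exp(log_hr - 1.96 * se), math.exp(log_hr + 1.96 * se))
    return hr, ci
```

For example, a balanced trial reporting P = 0.05 with 100 events would yield an HR a little below 0.7 in favour of the research arm under these assumptions.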
IMPORTANCE Systematic reviews and meta-analyses of individual participant data (IPD) aim to collect, check, and reanalyze individual-level data from all studies addressing a particular research question and are therefore considered a gold-standard approach to evidence synthesis. They are likely to be used with increasing frequency as current initiatives to share clinical trial data gain momentum, and may be particularly important in reviewing controversial therapeutic areas.
OBJECTIVE To develop PRISMA-IPD as a stand-alone extension to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Statement, tailored to the specific requirements of reporting systematic reviews and meta-analyses of IPD. Although developed primarily for reviews of randomized trials, many items will apply in other contexts, including reviews of diagnosis and prognosis.
DESIGN Development of PRISMA-IPD followed the EQUATOR Network framework guidance and used the existing standard PRISMA Statement as a starting point to draft additional relevant material. A web-based survey informed discussion at an international workshop that included researchers, clinicians, methodologists experienced in conducting systematic reviews and meta-analyses of IPD, and journal editors. The statement was drafted and iteratively refined by the project, advisory, and development groups. The PRISMA-IPD Development Group reached agreement on the PRISMA-IPD checklist and flow diagram by consensus.
FINDINGS Compared with standard PRISMA, the PRISMA-IPD checklist includes 3 new items that address (1) methods of checking the integrity of the IPD (such as pattern of randomization, data consistency, baseline imbalance, and missing data), (2) reporting any important issues that emerge, and (3) exploring variation (such as whether certain types of individual benefit more from the intervention than others). A further item was created by reorganizing standard PRISMA items relating to the interpretation of results, and wording was modified in 23 items to reflect the IPD approach.
CONCLUSIONS AND RELEVANCE PRISMA-IPD provides guidelines for reporting systematic reviews and meta-analyses of IPD.
Although IPD meta-analyses have many advantages in assessing the effects of health care, several aspects could be further developed to make fuller use of the potential of these time-consuming projects. In particular, IPD could be used to more fully investigate the influence of covariates on heterogeneity of treatment effects, both within and between trials. The impact of heterogeneity, or the use of random effects, is seldom discussed. There is thus considerable scope for enhancing the methods of analysis and presentation of IPD meta-analysis.
Background: Clinical researchers have often preferred to use a fixed-effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well-known Q and I² statistics, along with the random-effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed that are based on a 'generalised' Q statistic.
Methods: We reviewed 18 IPD meta-analyses of RCTs of treatments for cancer, in order to quantify the amount of heterogeneity present and to discuss practical methods for explaining heterogeneity.
Results: Differing results were obtained when the standard Q and I² statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random-effects model was not deemed appropriate. Compared with the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses.
Conclusions: Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random-effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim.
Background: After a 1999 National Cancer Institute (NCI) clinical alert was issued, chemoradiotherapy became widely used in treating women with cervical cancer. Two subsequent systematic reviews found that interpretation of the benefits was complicated, and some important clinical questions were unanswered.
Patients and Methods: We initiated a meta-analysis seeking updated individual patient data from all randomized trials to assess the effect of chemoradiotherapy on all outcomes. We prespecified analyses to investigate whether the effect of chemoradiotherapy differed by trial or patient characteristics.
Results: On the basis of 13 trials that compared chemoradiotherapy versus the same radiotherapy, there was a 6% improvement in 5-year survival with chemoradiotherapy (hazard ratio [HR] = 0.81, P < .001). A larger survival benefit was seen in the two trials in which chemotherapy was administered after chemoradiotherapy. There was a significant survival benefit both for the group of trials that used platinum-based (HR = 0.83, P = .017) and for those that used non–platinum-based (HR = 0.77, P = .009) chemoradiotherapy, but no evidence of a difference in the size of the benefit by radiotherapy or chemotherapy dose or scheduling. Chemoradiotherapy also reduced local and distant recurrence and progression, and improved disease-free survival. There was a suggestion of a difference in the size of the survival benefit with tumor stage, but not across other patient subgroups. Acute hematologic and GI toxicity was increased with chemoradiotherapy, but data were too sparse for an analysis of late toxicity.
Conclusion: These results endorse the recommendations of the NCI alert, but also demonstrate their applicability to all women and a benefit of non–platinum-based chemoradiotherapy. Furthermore, although these results suggest an additional benefit from adjuvant chemotherapy, this requires testing in randomized trials.
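Translating a pooled HR into an absolute benefit, as in the 6% figure above, is commonly done via S_research(t) = S_control(t)^HR under proportional hazards. The sketch below assumes an illustrative 60% baseline 5-year survival; that baseline value is an assumption for demonstration, not data from the review.

```python
import math

def abs_survival_gain(control_survival, hr):
    """Absolute survival difference implied by a hazard ratio, assuming
    proportional hazards: S_research(t) = S_control(t) ** HR.
    Inputs in the usage below are illustrative, not trial data."""
    return control_survival ** hr - control_survival

# assumed 60% control-arm 5-year survival, pooled HR = 0.81
gain = abs_survival_gain(0.60, 0.81)
```

With these assumed inputs the gain comes out at roughly 6 percentage points, consistent in size with the benefit reported above.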
Background: Loss to follow-up from randomised trials can introduce bias and reduce study power, affecting the generalisability, validity and reliability of results. Many strategies are used to reduce loss to follow-up and improve retention, but few have been formally evaluated.
Objectives: To quantify the effect of strategies to improve retention on the proportion of participants retained in randomised trials, and to investigate whether the effect varied by trial strategy and trial setting.
Search methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, PreMEDLINE, EMBASE, PsycINFO, DARE, CINAHL, Campbell Collaboration's Social, Psychological, Educational and Criminological Trials Register, and ERIC. We handsearched conference proceedings and publication reference lists for eligible retention trials. We also surveyed all UK Clinical Trials Units to identify further studies.
Selection criteria: We included eligible retention trials of randomised or quasi-randomised evaluations of strategies to increase retention that were embedded in 'host' randomised trials from all disease areas and healthcare settings. We excluded studies aiming to increase treatment compliance.
Data collection and analysis: We contacted authors to supplement or confirm data that we had extracted. For retention trials, we recorded data on the method of randomisation, type of strategy evaluated, comparator, primary outcome, planned sample size, numbers randomised and numbers retained. We used risk ratios (RR) to evaluate the effectiveness of the addition of strategies to improve retention. We assessed heterogeneity between trials using the Chi² and I² statistics. For main trials that hosted retention trials, we extracted data on disease area, intervention, population, healthcare setting, sequence generation and allocation concealment.
Main results: We identified 38 eligible retention trials. Included trials evaluated six broad types of strategies to improve retention.
These were incentives, communication strategies, new questionnaire format, participant case management, behavioural and methodological interventions. For 34 of the included trials, retention was response to postal and electronic questionnaires with or without medical test kits. For four trials, retention was the number of participants remaining in the trial. Included trials were conducted across a spectrum of disease areas, countries, healthcare and community settings. Strategies that improved trial retention were addition of monetary incentives compared with no incentive for return of trial-related postal questionnaires (RR 1.18; 95% CI 1.09 to 1.28, P value < 0.0001), addition of an offer of monetary incentive compared with no offer for return of electronic questionnaires (RR 1.25; 95% CI 1.14 to 1.38, P value < 0.00001) and an offer of a GBP20 voucher compared with GBP10 for return of postal questionnaires and biomedical test kits (RR 1.12; 95% CI 1.04 to 1.22, P value < 0.005). The evidence that shorter questionnaires are better than longer questionnaires was unclear...
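The RR comparisons above have the usual two-arm form. The sketch below computes a risk ratio and its 95% CI from retained/randomised counts via the standard log-RR variance; the counts in the usage line are hypothetical, chosen only to mirror the scale of the incentive result, not taken from the review.

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio and 95% CI for a two-arm comparison (e.g. retained
    with incentive vs without), using the standard log-RR variance."""
    rr = (events_a / n_a) / (events_b / n_b)
    # variance of log RR for binomial counts
    se_log = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# hypothetical counts: 118/200 retained with incentive vs 100/200 without
rr, ci = risk_ratio(118, 200, 100, 200)
```
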
Identifying which individuals benefit most from particular treatments or other interventions underpins so-called personalised or stratified medicine. However, single trials are typically underpowered for exploring whether participant characteristics, such as age or disease severity, determine an individual’s response to treatment. A meta-analysis of multiple trials, particularly one where individual participant data (IPD) are available, provides greater power to investigate interactions between participant characteristics (covariates) and treatment effects. We use a published IPD meta-analysis to illustrate three broad approaches used for testing such interactions. Based on another systematic review of recently published IPD meta-analyses, we also show that all three approaches can be applied to aggregate data as well as IPD. We also summarise which methods of analysing and presenting interactions are in current use, and describe their advantages and disadvantages. We recommend that testing for interactions using within-trials information alone (the deft approach) becomes standard practice, alongside graphical presentation that directly visualises this.
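The within-trials (deft) idea described above can be sketched simply: estimate the treatment-covariate interaction separately within each trial, then pool only those within-trial estimates, so that no between-trial (potentially confounded) information enters. The function and inputs below are hypothetical illustrations, not the authors' software.

```python
import numpy as np

def pool_within_trial_interactions(interactions, ses):
    """Fixed-effect inverse-variance pooling of per-trial
    treatment-covariate interaction coefficients (the within-trials,
    'deft', approach). Each element comes from a model fitted to one
    trial's IPD; inputs here are hypothetical."""
    b = np.asarray(interactions, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # inverse-variance weights
    pooled = np.sum(w * b) / np.sum(w)            # pooled interaction
    se = 1.0 / np.sqrt(np.sum(w))                 # SE of pooled estimate
    z = pooled / se                               # Wald z statistic
    return pooled, se, z
```

Because each coefficient is estimated entirely within one trial, the pooled value cannot be distorted by across-trial differences in case mix, unlike approaches that regress trial-level effects on trial-level covariate summaries.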