Background
Missing data may seriously compromise inferences from randomised clinical trials, especially if missing data are not handled appropriately. The potential bias due to missing data depends on the mechanism causing the data to be missing and on the analytical methods applied to amend the missingness. Therefore, the analysis of trial data with missing values requires careful planning and attention.

Methods
The authors had several meetings and discussions considering optimal ways of handling missing data to minimise the bias potential. We also searched PubMed (key words: missing data; randomi*; statistical analysis) and reference lists of known studies for papers (theoretical papers, empirical studies, simulation studies, etc.) on how to deal with missing data when analysing randomised clinical trials.

Results
Handling missing data is an important yet difficult and complex task when analysing results of randomised clinical trials. We consider how to optimise the handling of missing data during the planning stage of a randomised clinical trial and recommend analytical approaches which may prevent bias caused by unavoidable missing data. We consider the strengths and limitations of using best-worst and worst-best sensitivity analyses, multiple imputation, and full information maximum likelihood. We also present practical flowcharts on how to deal with missing data and an overview of the steps that always need to be considered during the analysis stage of a trial.

Conclusions
We present a practical guide and flowcharts describing when and how multiple imputation should be used to handle missing data in randomised clinical trials.

Electronic supplementary material
The online version of this article (10.1186/s12874-017-0442-1) contains supplementary material, which is available to authorized users.
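The best-worst and worst-best sensitivity analyses mentioned above can be sketched for a binary outcome. All counts and group labels below are hypothetical, chosen only to illustrate the bounding logic: each analysis imputes the missing outcomes under an extreme assumption and recomputes the effect estimate, giving a plausible range for the impact of the missing data.

```python
# A minimal sketch of best-worst / worst-best sensitivity analysis for a
# binary outcome. All numbers are illustrative, not from any real trial.

def risk_difference(events_i, total_i, events_c, total_c):
    """Risk in the intervention group minus risk in the control group."""
    return events_i / total_i - events_c / total_c

# Observed events / observed N, plus participants with missing outcomes.
obs_events_i, obs_n_i, missing_i = 30, 90, 10   # intervention arm
obs_events_c, obs_n_c, missing_c = 45, 85, 15   # control arm

# Best-worst case: assume all missing participants had a good outcome
# (no event) in the intervention group and a bad outcome (event) in control.
best_worst = risk_difference(obs_events_i, obs_n_i + missing_i,
                             obs_events_c + missing_c, obs_n_c + missing_c)

# Worst-best case: the reverse assumption.
worst_best = risk_difference(obs_events_i + missing_i, obs_n_i + missing_i,
                             obs_events_c, obs_n_c + missing_c)

print(f"best-worst risk difference: {best_worst:.3f}")
print(f"worst-best risk difference: {worst_best:.3f}")
```

If conclusions are stable across both extremes, missing data are unlikely to have biased the result; if they flip, the trial's inference is fragile to the missingness.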
Background
Thresholds for statistical significance when assessing meta-analysis results are insufficiently demonstrated by traditional 95% confidence intervals and P-values. Assessment of intervention effects in systematic reviews with meta-analysis deserves greater rigour.

Methods
Methodologies for assessing statistical and clinical significance of intervention effects in systematic reviews were considered. Balancing simplicity and comprehensiveness, an operational procedure was developed, based mainly on The Cochrane Collaboration methodology and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) guidelines.

Results
We propose an eight-step procedure for better validation of meta-analytic results in systematic reviews: (1) Obtain the 95% confidence intervals and the P-values from both fixed-effect and random-effects meta-analyses and report the most conservative results as the main results. (2) Explore the reasons behind substantial statistical heterogeneity using subgroup and sensitivity analyses (see step 6). (3) To take account of problems with multiplicity, adjust the thresholds for significance according to the number of primary outcomes. (4) Calculate required information sizes (≈ the a priori required number of participants for a meta-analysis to be conclusive) for all outcomes and analyse each outcome with trial sequential analysis. Report whether the trial sequential monitoring boundaries for benefit, harm, or futility are crossed. (5) Calculate Bayes factors for all primary outcomes. (6) Use subgroup analyses and sensitivity analyses to assess the potential impact of bias on the review results. (7) Assess the risk of publication bias. (8) Assess the clinical significance of the statistically significant review results.

Conclusions
If followed, the proposed eight-step procedure will increase the validity of assessments of intervention effects in systematic reviews of randomised clinical trials.

Electronic supplementary material
The online version of this article (doi:10.1186/1471-2288-14-120) contains supplementary material, which is available to authorized users.
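Step 1 of the procedure, comparing fixed-effect and random-effects meta-analyses and reporting the more conservative result, can be sketched in Python. The effect sizes below are invented for illustration, and the DerSimonian-Laird estimator is assumed for the between-trial variance; this is a sketch, not the reviews' actual analysis code.

```python
import numpy as np
from scipy import stats

# Illustrative effect sizes (log risk ratios) and standard errors from five
# hypothetical trials; all numbers are made up for this sketch.
yi = np.array([-0.30, -0.10, 0.05, -0.25, -0.15])
sei = np.array([0.12, 0.20, 0.15, 0.10, 0.18])
vi = sei ** 2

def pooled(yi, vi):
    """Inverse-variance pooled estimate, its SE, 95% CI, and two-sided P."""
    w = 1.0 / vi
    est = np.sum(w * yi) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    p = 2 * stats.norm.sf(abs(est / se))
    ci = (est - 1.96 * se, est + 1.96 * se)
    return est, se, ci, p

# Fixed-effect model.
fe_est, fe_se, fe_ci, fe_p = pooled(yi, vi)

# DerSimonian-Laird estimate of the between-trial variance tau^2.
w = 1.0 / vi
q = np.sum(w * (yi - np.sum(w * yi) / np.sum(w)) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(yi) - 1)) / c)

# Random-effects model: same pooling with tau^2 added to each variance.
re_est, re_se, re_ci, re_p = pooled(yi, vi + tau2)

# Report the more conservative (larger P) result as the main result.
main = "random-effects" if re_p > fe_p else "fixed-effect"
print(f"fixed-effect: {fe_est:.3f} (P={fe_p:.4f}); "
      f"random-effects: {re_est:.3f} (P={re_p:.4f}); main: {main}")
```

Because the random-effects model adds tau² to every trial's variance, its pooled standard error is never smaller than the fixed-effect one, which is why the two models can disagree on significance.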
Background
Most meta-analyses in systematic reviews, including Cochrane reviews, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of a meta-analysis should relate the total number of randomised participants to the estimated required meta-analytic information size, accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors).

Methods
We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached.

Results
The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis, as well as the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of a meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that Trial Sequential Analysis provides better control of type I and type II errors than traditional naïve meta-analysis.

Conclusions
Trial Sequential Analysis represents analysis of meta-analytic data with transparent assumptions and better control of type I and type II errors than traditional meta-analysis using naïve unadjusted confidence intervals.
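The diversity-adjusted required information size described above can be sketched for a continuous outcome. The design values (alpha, power, standard deviation, minimal relevant difference, and D²) are all assumed for illustration; the Trial Sequential Analysis software performs more elaborate calculations, so this sketch only illustrates the core idea of inflating a single-trial sample size by 1/(1 − D²).

```python
from scipy import stats

def diversity_adjusted_ris(alpha, beta, sigma, delta, diversity):
    """Diversity-adjusted required information size for a continuous outcome.

    Starts from the total sample size of a single adequately powered two-arm
    trial (two-sided alpha, power 1 - beta, anticipated mean difference delta,
    standard deviation sigma), then inflates it for the between-trial
    heterogeneity measured by diversity (D^2).
    """
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(1 - beta)
    n_total = 4 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2
    return n_total / (1 - diversity)

# Illustrative values: alpha 5%, power 90%, SD 8, minimal relevant
# difference 3 points, diversity D^2 = 0.25 (all assumed).
ris = diversity_adjusted_ris(alpha=0.05, beta=0.10, sigma=8.0,
                             delta=3.0, diversity=0.25)
print(f"diversity-adjusted required information size: {ris:.0f} participants")
```

If the participants accrued so far fall short of this number, the meta-analysis is best read as an interim look, and the Lan-DeMets boundaries supply the correspondingly stricter significance thresholds.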
Background
The evidence on selective serotonin reuptake inhibitors (SSRIs) for major depressive disorder is unclear.

Methods
Our objective was to conduct a systematic review assessing the effects of SSRIs versus placebo, ‘active’ placebo, or no intervention in adult participants with major depressive disorder. We searched for eligible randomised clinical trials in The Cochrane Library’s CENTRAL, PubMed, EMBASE, PsycLIT, PsycINFO, Science Citation Index Expanded, clinical trial registers of Europe and the USA, websites of pharmaceutical companies, the U.S. Food and Drug Administration (FDA), and the European Medicines Agency until January 2016. All data were extracted by at least two independent investigators. We used Cochrane systematic review methodology, Trial Sequential Analysis, and calculation of Bayes factors. An eight-step procedure was followed to assess whether thresholds for statistical and clinical significance were crossed. Primary outcomes were reduction of depressive symptoms, remission, and adverse events. Secondary outcomes were suicides, suicide attempts, suicidal ideation, and quality of life.

Results
A total of 131 randomised placebo-controlled trials enrolling a total of 27,422 participants were included. None of the trials used ‘active’ placebo or no intervention as the control intervention. All trials had high risk of bias. SSRIs significantly reduced the Hamilton Depression Rating Scale (HDRS) score at end of treatment (mean difference −1.94 HDRS points; 95% CI −2.50 to −1.37; P < 0.00001; 49 trials; Trial Sequential Analysis-adjusted CI −2.70 to −1.18); the Bayes factor (2.01 × 10⁻²³) was below the predefined threshold. The effect estimate, however, was below our predefined threshold for clinical significance of 3 HDRS points. SSRIs significantly decreased the risk of no remission (RR 0.88; 95% CI 0.84 to 0.91; P < 0.00001; 34 trials; Trial Sequential Analysis-adjusted CI 0.83 to 0.92); the Bayes factor (1426.81) did not confirm the effect. SSRIs significantly increased the risks of serious adverse events (OR 1.37; 95% CI 1.08 to 1.75; P = 0.009; 44 trials; Trial Sequential Analysis-adjusted CI 1.03 to 1.89). This corresponds to 31/1000 SSRI participants experiencing a serious adverse event compared with 22/1000 control participants. SSRIs also significantly increased the number of non-serious adverse events. There were almost no data on suicidal behaviour, quality of life, and long-term effects.

Conclusions
SSRIs might have statistically significant effects on depressive symptoms, but all trials were at high risk of bias and the clinical significance seems questionable. SSRIs significantly increase the risk of both serious and non-serious adverse events. The potential small beneficial effects seem to be outweighed by harmful effects.

Systematic review registration
PROSPERO CRD42013004420.

Electronic supplementary material
The online version of this article (doi:10.1186/s12888-016-1173-2) contains supplementary material, which is available to authorized users.
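The conversion from the pooled odds ratio to absolute risks per 1000 participants can be sketched as follows. The control-group risk of 22/1000 is taken from the abstract above; this naive conversion gives roughly 30/1000 rather than the reported 31/1000, a difference plausibly explained by the review's exact baseline-risk estimate and rounding, so treat the sketch as illustrative only.

```python
def risk_from_odds_ratio(odds_ratio, control_risk):
    """Intervention-group risk implied by an odds ratio and an assumed
    control-group risk (convert risk -> odds, scale, convert back)."""
    control_odds = control_risk / (1 - control_risk)
    treated_odds = odds_ratio * control_odds
    return treated_odds / (1 + treated_odds)

# Pooled OR of 1.37 for serious adverse events, assumed control-group
# risk of 22 per 1000, both taken from the abstract above.
p1 = risk_from_odds_ratio(1.37, 22 / 1000)
print(f"intervention-group risk: {p1 * 1000:.0f} per 1000")
```

Working via odds rather than multiplying the risk directly matters more as the baseline risk grows; at 22/1000 the two approaches nearly coincide.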
Background
Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid.

Methods
Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed.

Results
For a more valid assessment of results from a randomised clinical trial, we propose the following five steps: (1) report the confidence intervals and the exact P-values; (2) report the Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a ‘null’ effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to the number of outcome comparisons; and (5) assess the clinical significance of the trial results.

Conclusions
If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials.
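The Bayes factor of step 2, the probability of the observed result under a ‘null’ effect divided by its probability under the effect hypothesised in the sample size calculation, can be sketched with a normal approximation for the observed estimate. The numbers below are invented for illustration; a small Bayes factor favours the hypothesised effect over the null.

```python
import math

def bayes_factor(estimate, se, mu_alt):
    """Ratio of the likelihood of the observed estimate under a 'null'
    effect of zero to its likelihood under the hypothesised effect mu_alt,
    assuming the estimate is approximately normally distributed."""
    like_null = math.exp(-0.5 * (estimate / se) ** 2)
    like_alt = math.exp(-0.5 * ((estimate - mu_alt) / se) ** 2)
    return like_null / like_alt

# Illustrative numbers (assumed, not from any trial): observed mean
# difference -1.9 with SE 0.3, against a hypothesised effect of -3 points.
bf = bayes_factor(estimate=-1.9, se=0.3, mu_alt=-3.0)
print(f"Bayes factor: {bf:.3g}")
```

Here the observed estimate sits much closer to the hypothesised effect than to zero, so the ratio is far below 1; an estimate near zero would instead drive the Bayes factor well above 1, speaking against the hypothesised effect.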
The evidence for our main outcomes of interest comes from short-term trials, and we are unable to determine the effect of long-term treatment with direct-acting antivirals (DAAs). The rates of hepatitis C morbidity and mortality observed in the trials are relatively low, and we are uncertain how DAAs affect this outcome. Overall, there is very low quality evidence that DAAs on the market or under development do not influence serious adverse events. There is insufficient evidence to judge if DAAs have beneficial or harmful effects on other clinical outcomes for chronic HCV. Simeprevir may have beneficial effects on the risk of serious adverse events. In all remaining analyses, we could neither confirm nor reject that DAAs had any clinical effects. DAAs may reduce the number of people with detectable virus in their blood, but we do not have sufficient evidence from randomised trials to understand how sustained virological response (SVR) affects long-term clinical outcomes. SVR is still an outcome that needs proper validation in randomised clinical trials.