Abstract: Background: A non-inferiority (NI) trial is intended to show that the effect of a new treatment is not worse than that of the comparator. We conducted a review to identify how NI trials were conducted and reported, and whether the standard requirements from the guidelines were followed. Methodology and Principal Findings: From 300 randomly selected articles on NI trials registered in PubMed on 5 February 2009, we included 227 NI articles that referred to 232 trials. We excluded studies on bioequivalence, trials on healthy …
“…This is in line with the range of margins that were defined by expert opinions in previous reviews (25 to 75%) [9, 11, 22, 23]. Our findings also highlight the issue of not providing enough details on the method that was used to define the margin.…”
Background: There is no consensus on the preferred method for defining the non-inferiority margin in non-inferiority trials, and previous studies have shown that the rationale for its choice is often not reported. This study investigated how the non-inferiority margin is defined in the published literature, and whether its reporting has changed over time.

Methods: A systematic PubMed search was conducted for all published randomized, double-blind, non-inferiority trials from January 1, 1966, to February 6, 2015. The primary outcome was the number of margins that were defined by methods other than the historical evidence of the active comparator; this was evaluated for a time trend. We also assessed the under-reporting of the methods of defining the margin as a secondary outcome, and whether this changed over time. Both outcomes were analyzed using a Poisson log-linear model. Predictors of better reporting of the methods, and of the use of the fixed-margin method (one of the historical-evidence methods), were analyzed using logistic regression.

Results: Two hundred seventy-three articles were included, accounting for 273 non-inferiority margins. There was no statistically significant difference in the number of margins defined by other methods compared to those defined based on historical evidence (ratio 2.17, 95% CI 0.86 to 5.82, p = 0.11), and this did not change over time. The number of margins for which methods were unreported was similar to the number with reported methods (ratio 1.35, 95% CI 0.76 to 2.43, p = 0.31), with no change over time. The method of defining the margin was less often reported in journals with low impact factors than in journals with high impact factors (OR 0.20; 95% CI 0.10 to 0.37, p < 0.0001). The publication of the FDA draft guidance in 2010 was associated with increased reporting of the fixed-margin method (after versus before 2010: OR 3.54; 95% CI 1.12 to 13.35, p = 0.04).

Conclusions: Non-inferiority margins are not commonly defined based on the historical evidence of the active comparator, and they are poorly reported. Authors, reviewers, and editors need to pay attention to reporting this critical information to allow better judgment of non-inferiority trials.

Electronic supplementary material: The online version of this article (doi:10.1186/s13063-017-1859-x) contains supplementary material, which is available to authorized users.
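The fixed-margin method discussed above can be sketched in two steps, following the general (95-95) approach of the FDA draft guidance: take the conservative 95% CI limit of the comparator's historical effect versus placebo (M1), then require the new treatment to preserve a chosen fraction of it (M2). A minimal illustration, with all numbers hypothetical and a 50% preservation fraction assumed only for the example:

```python
def fixed_margin(hist_effect, hist_se, fraction_preserved=0.5, z=1.96):
    """Sketch of the fixed-margin (95-95) method for a difference scale.

    M1: the limit of the 95% CI for the historical comparator-vs-placebo
        effect that is closest to no effect.
    M2: the non-inferiority margin, requiring the new treatment to
        preserve `fraction_preserved` of M1.
    Inputs are illustrative, not taken from any reviewed trial.
    """
    m1 = hist_effect - z * hist_se          # conservative 95% CI bound
    m2 = (1 - fraction_preserved) * m1      # allow losing at most half of M1
    return m1, m2

# Hypothetical historical data: comparator beats placebo by 10 percentage
# points with a standard error of 2.0.
m1, m2 = fixed_margin(10.0, 2.0)            # m1 = 6.08, m2 = 3.04
```

With these illustrative inputs the trial would need to rule out losing more than 3.04 percentage points relative to the comparator.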
“…In the evaluation of non-inferiority trials by Wangge et al. [14], one third of trials were also reported as open label. In their review they pointed out that this was not consistent with the guidelines, which recommend blinding of any randomised trial whenever possible [4].…”
Section: Discussion
“…Therefore, a number of topics assessed in our review do not apply, or are not directly comparable with the situation in bioequivalence trials. Furthermore, other similar studies [10,13,14] also excluded bioequivalence trials, and it was one of the aims of our work to compare our results with others.…”
Background: Non-inferiority and equivalence trials require tailored methodology, and adequate conduct and reporting is therefore an ambitious task. The aim of our review was to assess whether the criteria recommended by the CONSORT extension were followed.

Methods: We searched the Medline database and the Cochrane Central Register for reports of randomised non-inferiority and equivalence trials published in English. We excluded reports on bioequivalence studies, reports targeting other than the main results of a trial, and articles for which the full-text version was not available. In total, we identified 209 reports (167 non-inferiority, 42 equivalence trials) and assessed the reporting and methodological quality using abstracted items of the CONSORT extension.

Results: Half of the articles did not report on the method of randomisation, and only a third of the trials were reported to use blinding. The non-inferiority or equivalence margin was defined in most reports (94%), but was justified for only a quarter of the trials. A sample size calculation was reported for 90% of the trials, but the margin was taken into account in only 78% of those reported. Both intention-to-treat and per-protocol analyses were presented in less than half of the reports. When reporting the results, a confidence interval was given for 85% of trials. A proportion of 21% of the reports presented a conclusion that was wrong or incomprehensible. Overall, we found a substantial lack of quality in reporting and conduct. The need for improvement also applied to aspects generally recommended for randomised trials. Quality was partly better in high-impact journals than in others.

Conclusions: There are still important deficiencies in the reporting of the methodological approach as well as of results and interpretation, even in high-impact journals. It seems to take more than guidelines to improve the conduct and reporting of non-inferiority and equivalence trials.
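The confidence-interval check that underlies a non-inferiority conclusion (the item reported for 85% of trials above) can be sketched for a binary outcome: non-inferiority is concluded when the lower CI bound of the treatment difference stays above the negated margin. A minimal sketch using a simple Wald interval purely for illustration; the function name and all numbers are hypothetical:

```python
import math

def ni_check(p_new, n_new, p_ctrl, n_ctrl, margin, z=1.96):
    """Illustrative non-inferiority check on a risk difference.

    With higher proportions meaning better outcomes, non-inferiority
    holds when the lower 95% CI bound of (p_new - p_ctrl) exceeds
    -margin. A Wald interval is used only for simplicity.
    """
    diff = p_new - p_ctrl
    se = math.sqrt(p_new * (1 - p_new) / n_new
                   + p_ctrl * (1 - p_ctrl) / n_ctrl)
    lower = diff - z * se
    return lower, lower > -margin

# Hypothetical trial: 82% vs 85% success in 300 patients per arm,
# with a pre-specified margin of 10 percentage points.
lower, non_inferior = ni_check(0.82, 300, 0.85, 300, margin=0.10)
```

Here the lower bound (about -0.089) lies above -0.10, so non-inferiority would be concluded even though the point estimate favours the control; this is exactly the kind of judgment the margin justification is meant to support.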
“…In our previous review, we found that most non-inferiority trials were financed by the pharmaceutical industry (73.7%) [12]. In this study, we identified questions on non-inferiority trials that were posed by applicants for scientific advice in Europe in 2008 and 2009, and the responses given by the EMA. Analyzing these scientific advice dialogues could identify complex issues in the regulation of non-inferiority trials that may benefit from more explicit regulatory guidance.…”
The active-controlled trial with a non-inferiority design has gained popularity in recent years. However, non-inferiority trials present some methodological challenges, especially in determining the non-inferiority margin. Regulatory guidelines provide some general statements on how a non-inferiority trial should be conducted. Moreover, in a scientific advice procedure, regulators give companies the opportunity to discuss critical trial issues prior to the start of the trial. The aim of this study was to identify potential issues that may benefit from more explicit guidance by regulators. To achieve this, we collected and analyzed questions about non-inferiority trials posed by applicants for scientific advice in Europe in 2008 and 2009, as well as the responses given by the European Medicines Agency (EMA). Our analysis included 156 final letters of advice from 2008 and 2009, addressed to 94 different applicants (manufacturers). It yielded two major findings: (1) applicants frequently asked both whether and how to conduct a non-inferiority trial (26% and 74% of questions, respectively), and (2) the EMA regulators appear mainly concerned about the choice of the non-inferiority margin (36% of total regulatory answers). In 40% of the answers, the EMA recommended using a stricter margin, and in 10% of the answers regarding non-inferiority margins, the EMA questioned the justification of the proposed margin. We conclude that there are still difficulties in selecting the appropriate methodology for non-inferiority trials. Straightforward and harmonized guidance regarding non-inferiority trials is required, for example on whether it is necessary to conduct such a trial and how the non-inferiority margin should be determined. It is unlikely that regulatory guidelines can cover all therapeutic areas; therefore, in some cases regulatory scientific advice may be used as an opportunity for tailored advice.
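The practical weight of a regulator recommending a stricter margin shows up directly in the required sample size: under the standard normal-approximation formula for a continuous outcome, halving the margin roughly quadruples the trial size. A minimal sketch under the usual assumptions (no true difference between arms, one-sided alpha 0.025, 90% power); all numbers are illustrative only:

```python
import math
from statistics import NormalDist

def ni_sample_size(sd, margin, alpha=0.025, power=0.90):
    """Per-arm sample size for a non-inferiority trial on a continuous
    outcome, assuming zero true difference between arms.

    Standard formula: n = 2 * (z_alpha + z_beta)^2 * sd^2 / margin^2.
    """
    z_a = NormalDist().inv_cdf(1 - alpha)   # one-sided alpha
    z_b = NormalDist().inv_cdf(power)       # power
    n = 2 * (z_a + z_b) ** 2 * sd ** 2 / margin ** 2
    return math.ceil(n)

# Hypothetical outcome with SD 1.0: compare a sponsor's proposed margin
# of 0.5 with a stricter margin of 0.25.
n_wide = ni_sample_size(sd=1.0, margin=0.5)     # 85 per arm
n_tight = ni_sample_size(sd=1.0, margin=0.25)   # 337 per arm
```

This quadrupling is one reason margin choice dominates the scientific advice dialogues described above: a stricter margin can make a trial substantially more expensive or infeasible.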