2009
DOI: 10.1136/bmj.b1147
Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews

Abstract: Objective To investigate basic assumptions and other methodological problems in the application of indirect comparison in systematic reviews of competing healthcare interventions. Design Survey of published systematic reviews. Inclusion criteria Systematic reviews published between 2000 and 2007 in which an indirect approach had been explicitly used. Identified reviews were assessed for comprehensiveness of the literature search, method for indirect comparison, and whether assumptions about similarity and consis…

Cited by 345 publications (336 citation statements)
References 20 publications
“…[60][61][62][63][64] First, we used adjusted indirect frequentist comparisons for individual drugs compared with placebo. 62 This analysis provided pairwise triangular comparisons for drugs compared with placebo rather than network meta-analysis.…”
Section: Methods (mentioning, confidence: 99%)
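The adjusted indirect frequentist comparison quoted above is the standard common-comparator approach (often called the Bucher method): with placebo C shared by both trials, the A-versus-B log odds ratio is the difference of the two drug-versus-placebo log odds ratios, and their variances add. A minimal sketch; the trial effect sizes below are illustrative, not taken from the paper:

```python
import math

def bucher_indirect(log_or_ac, se_ac, log_or_bc, se_bc):
    """Indirect A vs B estimate from A vs C and B vs C trials.

    C is the common comparator (e.g. placebo); inputs are log odds
    ratios and their standard errors. Returns (OR, 95% CI).
    """
    log_or_ab = log_or_ac - log_or_bc          # difference of log ORs
    se_ab = math.sqrt(se_ac**2 + se_bc**2)     # variances add
    lo = math.exp(log_or_ab - 1.96 * se_ab)
    hi = math.exp(log_or_ab + 1.96 * se_ab)
    return math.exp(log_or_ab), (lo, hi)

# Illustrative numbers: drug A vs placebo OR 0.70 (SE 0.15 on log scale),
# drug B vs placebo OR 0.85 (SE 0.20).
or_ab, ci = bucher_indirect(math.log(0.70), 0.15, math.log(0.85), 0.20)
```

Note how the indirect standard error (here 0.25) exceeds either direct one, which is why indirect estimates are typically less precise than head-to-head trials.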
“…Second, to address the problems with inevitable differences across studies, we used mixed (or multiple) treatment comparison (MTC) Bayesian network meta-analysis. [62][63][64] We calculated Bayesian odds ratios 43,51 with 2.5% to 97.5% credible intervals and Bayesian network random-effects meta-analysis assuming heterogeneous variances across treatments (online Appendix Table 3). 65 We synthesized evidence from drug classes in network meta-analysis when individual drugs from the same class demonstrated no significant differences in outcomes.…”
Section: Methods (mentioning, confidence: 99%)
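The 2.5% to 97.5% credible interval mentioned above is read directly off a posterior distribution rather than computed from a sampling distribution. A full MTC network meta-analysis jointly models all treatments (typically via MCMC in WinBUGS, JAGS, or Stan); the grid-approximation sketch below shows only the credible-interval idea for a single log odds ratio, with a vague normal prior and made-up data:

```python
import math

def posterior_credible_interval(obs_log_or, se, prior_sd=10.0, n=20001):
    """2.5%-97.5% credible interval for a log OR via grid approximation.

    Vague Normal(0, prior_sd^2) prior times Normal likelihood; returns
    the interval back on the odds-ratio scale.
    """
    grid = [-5 + 10 * i / (n - 1) for i in range(n)]
    # unnormalised posterior at each grid point
    w = [math.exp(-0.5 * (g / prior_sd) ** 2
                  - 0.5 * ((obs_log_or - g) / se) ** 2) for g in grid]
    total = sum(w)
    cdf, lo, hi = 0.0, None, None
    for g, wi in zip(grid, w):
        cdf += wi / total
        if lo is None and cdf >= 0.025:
            lo = g                      # 2.5% posterior quantile
        if hi is None and cdf >= 0.975:
            hi = g                      # 97.5% posterior quantile
    return math.exp(lo), math.exp(hi)

# Illustrative: observed OR 0.75 with SE 0.2 on the log scale.
lo, hi = posterior_credible_interval(math.log(0.75), 0.2)
```

With a vague prior the interval nearly matches the frequentist 95% CI; informative priors, and the shared random-effects structure of a real network model, are where the Bayesian machinery earns its keep.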
“…While this method yields robust data about the absolute performance of each treatment, it is not methodologically valid simply to collect the aggregate effect sizes of each block and determine the best treatment by choosing the biggest number, because this approach would ignore any data we have from direct comparisons. 5 How then can we determine best practice? The difficulty of performing an omnibus multi-armed RCT to find the single best option becomes obvious if, for the sake of discussion, we accept that there are (at least) seven different anatomic approaches to the nerves innervating the hip, three different technical methods of nerve location, catheter vs single-shot options, and several different local anesthetics that can be administered in any number of concentrations and doses, with many different additives, and for a variety of durations.…”
Section: Introduction (mentioning, confidence: 99%)
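The combinatorial explosion the authors describe is easy to make concrete: the number of arms in an omnibus trial is the product of the option counts. The anesthetic count below is an illustrative placeholder, since the text says only "several":

```python
# Back-of-envelope arm count for the hypothetical omnibus RCT described
# in the text. The first three counts are stated there; the anesthetic
# count is a placeholder for "several".
anatomic_approaches = 7      # stated: at least seven approaches
nerve_location_methods = 3   # stated: three methods
delivery_modes = 2           # catheter vs single-shot
anesthetics = 3              # "several" -- illustrative placeholder

arms = (anatomic_approaches * nerve_location_methods
        * delivery_modes * anesthetics)
# Over a hundred arms before even varying concentration, dose,
# additives, or duration -- hence the appeal of indirect comparisons.
```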
“…This is particularly important, as although obviously fundamentally flawed analyses are now far less commonly seen in the literature than was the case five years ago 14, more subtle errors that may slip past the unwary reader are still prevalent.…”
(mentioning, confidence: 99%)
“…It is beyond the scope of this editorial to tease apart the strengths and weaknesses of the different approaches to indirect comparisons; for those who are interested, there are a number of useful publications that can help with this 6,7,9,13,14. What is important to grasp is that each indirect evidence set has its own individual challenges, with several potential approaches to meaningful analysis being available.…”
(mentioning, confidence: 99%)