Network meta-analysis (NMA), combining direct and indirect comparisons, is increasingly being used to examine the comparative effectiveness of medical interventions. Minimal guidance exists on how to rate the quality of evidence supporting treatment effect estimates obtained from NMA. We present a four-step approach to rate the quality of evidence in each of the direct, indirect, and NMA estimates based on methods developed by the GRADE working group. Using an example of a published NMA, we show that the quality of evidence supporting NMA estimates varies from high to very low across comparisons, and that quality ratings given to a whole network are uninformative and likely to mislead.

Network meta-analysis (NMA) that simultaneously addresses the comparative effectiveness and/or safety of multiple interventions through combining direct and indirect estimates of effect is rapidly gaining popularity and influence. 1-6 Although NMA approaches appear attractive, 6-8 application of their results requires understanding the quality of the evidence. By quality of evidence, we mean the degree of confidence or certainty one can place in estimates of treatment effects. NMA is sufficiently new that terminology differs between authors and continues to evolve. Box 1 presents a glossary of terms used in this article.

Rationale for an approach to rate the quality of evidence from NMA

Recently, several articles have provided guidance regarding identification of the evidence for an NMA, 9 statistical aspects of conducting NMA, 10-17 and critical appraisal and interpretation of published NMAs. 18 19 Few of these, however, provide explicit guidance on how to rate the quality of the evidence. Reports of NMAs often describe the risk of bias of trials included in an NMA (such as method of randomisation, concealment of random allocation, masking, etc).
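To make concrete what "combining direct and indirect estimates" involves, the sketch below shows a Bucher-style adjusted indirect comparison: an estimate of treatment A versus B derived from two direct comparisons that share a common comparator C, where the log odds ratios subtract and the variances add. The numerical inputs are purely hypothetical and are not taken from the NMA discussed in this article.

```python
import math

def indirect_estimate(log_or_ac, se_ac, log_or_bc, se_bc):
    """Adjusted indirect comparison of A vs B via common comparator C.

    log OR(A vs B) = log OR(A vs C) - log OR(B vs C); the variances add
    because the two direct estimates come from independent sets of trials.
    """
    log_or_ab = log_or_ac - log_or_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    return log_or_ab, se_ab

# Hypothetical direct estimates: A vs C (log OR -0.40, SE 0.15)
# and B vs C (log OR -0.60, SE 0.20).
log_or, se = indirect_estimate(-0.40, 0.15, -0.60, 0.20)
lo, hi = log_or - 1.96 * se, log_or + 1.96 * se
print(f"indirect OR = {math.exp(log_or):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
# prints: indirect OR = 1.22 (95% CI 0.75 to 1.99)
```

Note how the indirect confidence interval (SE 0.25) is wider than either direct one; this loss of precision is one reason the quality of indirect evidence is often rated lower than that of direct evidence.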
22-24 For example, a recent NMA compared the effects of coronary artery bypass grafting, various stents, and medical treatment on mortality, myocardial infarction, and the need for revascularisation among patients with stable coronary artery disease. The authors stated that appropriate methods of concealment of random allocation were reported for 71 trials (71%). 25 Fifty-six trials (56%) reported blind adjudication of clinical outcomes, and for 69 trials (69%) data from intention to treat analyses were available. Although such an assessment of risk of bias describes the entire body of evidence (that is, all trials contributing evidence to the NMA), it does not acknowledge that the risk of bias is likely to differ across the comparisons of the network. 1 For example, the risk of bias of studies comparing sirolimus eluting stents versus medical treatment may be considerably less than that of studies comparing coronary artery bypass grafting with medical treatment. In addition, risk of bias is only one determinant of quality of evidence. Our confidence in effect estimates will, for instance, also decrease if there are large differences in results from study to study (for exampl...