Twitter: @JBC13Mar1967

This editorial is about making statistics more useful by reframing them as the credibility of explanations for the results of trials. The approach is illustrated with a trial that reported postoperative organ dysfunction in 56/147 (38%) intervention participants and 75/145 (52%) control participants [1]. The rates in these samples are not the same, but so what? The "so what?" is: how much credibility should we give to competing explanations for the relative rate in the intervention sample of 0.73 (73%)? An explanation that 38% of the intervention population develop postoperative organ dysfunction is obviously consistent with the sample rate of 56/147 (38%). But other, less credible explanations are also consistent with that sample rate, for instance population rates of 27%, 51%, or 34%. Similarly, an explanation that 52% of the control population develop organ dysfunction is consistent with the sample rate of 75/145 (52%), as are many other less credible explanations.
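One way to quantify how the credibility of these competing explanations differs is the binomial likelihood: the probability of the observed count if the population rate took each candidate value. The sketch below, a minimal illustration rather than the editorial's own method, evaluates the rates mentioned above (38%, 34%, 51%, 27%) against the observed 56/147 in the intervention arm.

```python
from math import comb

def binomial_likelihood(k: int, n: int, p: float) -> float:
    """Probability of observing exactly k events in n participants
    if the true population rate is p (binomial likelihood)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Intervention arm: 56/147 with postoperative organ dysfunction.
# Candidate population rates taken from the text.
for rate in (0.38, 0.34, 0.51, 0.27):
    lik = binomial_likelihood(56, 147, rate)
    print(f"population rate {rate:.0%}: likelihood {lik:.5f}")
```

All four candidate rates are "consistent" with the data in the sense of giving a nonzero likelihood, but the likelihood is highest near the sample rate of 38% and falls off for rates further away, which is the sense in which 27% or 51% are less credible explanations.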