While our primary end point showed a statistically significant reduction, drug-related hospitalizations did not. We interpret this as a matter of statistical power: the hazard ratio point estimate for the extended vs the basic intervention was 0.77 for the main end point, compared with 0.80 and 0.65 for drug-related hospitalization within 180 and 30 days, respectively. These estimates are of similar magnitude, but because there were fewer drug-related outcomes, the latter did not reach statistical significance.

Two counterexamples are mentioned: Gillespie et al 2 and Pellegrin et al. 3 The trial by Gillespie et al 2 was considerably smaller than ours, and the apparent strong benefit for drug-related admissions (relative risk, 0.20) was offset by other admissions, so that overall readmission rates were identical in the 2 groups. The study by Pellegrin et al 3 is not randomized, but is instead an aggregate-level analysis using an interrupted time series design.

As we pointed out in our discussion, 1 it is conceivable that our intervention could have an effect on non-drug-related admissions as well as on drug-related admissions. For example, a patient who is hospitalized because of nonadherence would present as someone hospitalized because of a disease exacerbation and not necessarily because of a drug problem. If our intervention improved adherence, such a hospitalization could possibly be prevented.

We fully agree that it would have been desirable to present data on adherence, and we had planned to do so. Unfortunately, our adherence data were not of sufficient quality to allow for it.

Van der Linden et al correctly point out that we powered our study according to a perceived risk of drug-related admissions, not general admissions, which was our main outcome. We do not believe, however, that this has much bearing on the interpretation of the results.
Given that the estimates and their confidence intervals are now known, little, if anything, is added to the interpretation by considering the presumed power at the planning stage. 4

Finally, it is suggested that we should have adjusted for multiple comparisons in our exploratory analyses of subgroup effects. It is, however, not customary to adjust for multiple comparisons under such circumstances, and doing so would in our opinion defeat its purpose. While such adjustments do lower the occurrence of false positives, they also lower the occurrence of true positives (because the significance threshold becomes more stringent), essentially leaving researchers unable to identify the patterns they are looking for.
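The tradeoff described above can be illustrated with a short simulation. This is a generic sketch using hypothetical numbers (10 subgroup tests, one of which carries a genuine effect of 2.5 standard errors), not an analysis of the trial data: a Bonferroni adjustment shrinks the family-wise false positive rate, but the chance of detecting the one real effect shrinks along with it.

```python
import random

random.seed(1)

# Hypothetical setup: 10 two-sided subgroup tests per simulated study,
# 9 true nulls and 1 genuine effect. Numbers are illustrative only.
N_SIM = 2000
N_TESTS = 10
TRUE_EFFECT = 2.5      # mean z-statistic for the one real subgroup effect
Z_UNADJ = 1.96         # two-sided alpha = 0.05, unadjusted
Z_BONF = 2.807         # two-sided alpha = 0.05 / 10, Bonferroni-adjusted

false_unadj = false_bonf = 0   # studies with >=1 false-positive subgroup
true_unadj = true_bonf = 0     # studies detecting the real effect

for _ in range(N_SIM):
    null_z = [random.gauss(0, 1) for _ in range(N_TESTS - 1)]
    real_z = random.gauss(TRUE_EFFECT, 1)
    if any(abs(z) > Z_UNADJ for z in null_z):
        false_unadj += 1
    if any(abs(z) > Z_BONF for z in null_z):
        false_bonf += 1
    if abs(real_z) > Z_UNADJ:
        true_unadj += 1
    if abs(real_z) > Z_BONF:
        true_bonf += 1

print(f"False positive rate: {false_unadj/N_SIM:.2f} unadjusted, "
      f"{false_bonf/N_SIM:.2f} Bonferroni")
print(f"True positive rate:  {true_unadj/N_SIM:.2f} unadjusted, "
      f"{true_bonf/N_SIM:.2f} Bonferroni")
```

With these assumed inputs, the adjustment brings the family-wise false positive rate down from roughly one in three to about the nominal 5%, but the probability of flagging the one genuine subgroup effect falls substantially as well, which is the cost referred to above.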