We thank Dr Fay for his letter, in which he suggests an alternative approach to evaluating sample size using existing data [1]. Several methods exist for calculating the sample size needed to achieve the desired actual power for normal data while accounting for sampling variability in the parameters relevant to sample size estimation, such as the sample variance. These methods may involve an adjustment of the planned power [1,2], an adjustment of the sample standard deviation (SD) [3], or a Bayesian approach that uses prior information from the existing data [4-7].

In our article [3], we first show that using the sample SD, or the average SD from existing data, as an estimate of the true (population) SD can lead to underpowered clinical trials. Next, we suggest a variety of methods (Table 7) for choosing an appropriate SD, which can in turn be used in existing, standard sample size formulas. We then propose guidelines for choosing the SD so that the actual power equals or exceeds the planned power at least 80% of the time. Our recommendations are based on the results of simulations covering scenarios in which different numbers of preliminary studies are available.

Dr Fay et al. suggest a different approach. First, an 'over-all' power function is defined by averaging the 'fixed-n' power functions over the assumed distribution of the SD. Next, the 'over-all' power function is set to the desired actual power, and the Beta (planned power) in the 'fixed-n' functions is adjusted so that the 'over-all' power function equals the desired actual power [8].
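To make this style of adjustment concrete, the following Python sketch is our own simplification, not the exact procedure of Fay et al. [8]. It assumes a normal-approximation power and sample size formula for a two-sample comparison, averages the 'fixed-n' power by Monte Carlo over the chi-square sampling distribution of the variance from a single preliminary study of m observations, and raises the planned power by bisection until the 'over-all' power reaches the target; all function names and illustrative values are assumptions.

```python
import math
import random

Z_ALPHA = 1.959963984540054  # upper 0.975 normal quantile (two-sided alpha = 0.05)

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def inv_normal_cdf(p):
    """Standard normal quantile by bisection (adequate for illustration)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_size(delta, sd, z_beta):
    """Normal-approximation sample size per arm for a two-sample comparison."""
    return math.ceil(2.0 * ((Z_ALPHA + z_beta) * sd / delta) ** 2)

def fixed_n_power(n, delta, true_sd):
    """Power actually achieved with n per arm when the true SD is true_sd."""
    return normal_cdf(delta * math.sqrt(n / 2.0) / true_sd - Z_ALPHA)

def overall_power(planned_power, m, delta, true_sd, reps=20000, seed=1):
    """Average the 'fixed-n' power over the sampling distribution of the
    variance from a preliminary study of m observations, using the fact
    that (m - 1) * s^2 / sigma^2 is chi-square with m - 1 df."""
    rng = random.Random(seed)
    z_beta = inv_normal_cdf(planned_power)
    total = 0.0
    for _ in range(reps):
        chi2 = rng.gammavariate((m - 1) / 2.0, 2.0)  # chi-square draw
        s = true_sd * math.sqrt(chi2 / (m - 1))
        total += fixed_n_power(sample_size(delta, s, z_beta), delta, true_sd)
    return total / reps

def adjusted_planned_power(target, m, delta, true_sd):
    """Raise the planned power (by bisection) until the 'over-all' power
    reaches the desired actual power."""
    lo, hi = target, 0.999
    for _ in range(20):
        mid = 0.5 * (lo + hi)
        if overall_power(mid, m, delta, true_sd) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with m = 10 preliminary observations, delta = 1 and a true SD of 1, `overall_power(0.80, 10, 1.0, 1.0)` falls below 0.80, while `adjusted_planned_power(0.80, 10, 1.0, 1.0)` returns the larger planned power needed to restore an over-all power of 0.80.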
In his letter [1], Dr Fay extends this method to the case where multiple preliminary studies are available by substituting a weighted variance estimate into the 'fixed-n' power functions, averaging them over the distribution of the weighted estimate, and then adjusting the planned power parameter accordingly.

It is worth noting that our study focused only on the variability in the sample variance estimates, not on the variability in the effect size derived from pilot studies and expert recommendations. Since the sample effect size is a random variable, the effect size derived from a single pilot study is likely to underestimate or overestimate the population effect size. Furthermore, since the effect size often depends on the population variance (e.g., Cohen's d and the confidence interval for the population value of Cohen's d [9,10]), expert opinion regarding the difference in the outcome measures across two groups is also affected by the SD estimate. Therefore, research that addresses these two sources of uncertainty may result in fewer underpowered clinical trials.

In sum, it appears that both approaches [1,3] can successfully adjust for the bias resulting from the uncertainty inherent in the population variance estimate used in sample size calculations. The method described in Fay et al. [8] achieves this by adjusting the planned power parameter (Beta) ...
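The other family of adjustments, inflating the SD estimate itself, can be illustrated with a generic rule: replace the sample SD with an upper one-sided confidence limit for sigma, so that the inflated SD covers the true SD with the chosen frequency. Because the standard sample size formula increases with the assumed SD, a trial sized with such an inflated SD attains at least the planned power whenever the true SD is covered, i.e., at least 80% of the time for an 80% limit. The Python sketch below is a generic illustration of this idea, not the specific rules of Table 7 in our article; the chi-square quantile is estimated by Monte Carlo purely to keep the example dependency-free.

```python
import math
import random

def chi2_quantile(df, p, reps=200000, seed=2):
    """Lower p-quantile of the chi-square distribution, estimated by
    Monte Carlo (in practice an exact inverse CDF would be used)."""
    rng = random.Random(seed)
    draws = sorted(rng.gammavariate(df / 2.0, 2.0) for _ in range(reps))
    return draws[int(p * reps)]

def inflated_sd(s, m, coverage=0.80):
    """Upper one-sided confidence limit for sigma at the given coverage.
    Since (m - 1) s^2 / sigma^2 is chi-square with m - 1 df,
    P(sigma <= s * sqrt((m - 1) / q)) = coverage when q is the
    (1 - coverage) lower quantile of that distribution."""
    q = chi2_quantile(m - 1, 1.0 - coverage)
    return s * math.sqrt((m - 1) / q)

def coverage_check(m, sigma=1.0, reps=20000, seed=3):
    """Fraction of simulated preliminary studies whose inflated SD covers
    (is at least) the true SD; should be close to the nominal 0.80."""
    factor = inflated_sd(1.0, m)  # inflation factor for a unit sample SD
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        s = sigma * math.sqrt(rng.gammavariate((m - 1) / 2.0, 2.0) / (m - 1))
        if s * factor >= sigma:
            hits += 1
    return hits / reps
```

With m = 10, `inflated_sd(1.0, 10)` inflates a unit sample SD by roughly 30%, and `coverage_check(10)` confirms that the inflated SD covers the true SD close to 80% of the time.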