“…where the minimum is taken over all possible n-sample estimators p̂ : [k]ⁿ → Δₖ, and the maximum over all possible discrete distributions on [k]. The precise asymptotic minimax rates (including the leading constant) for fixed k as n increases are known for the ℓ₁ distance [12], the ℓ₂ distance [14, p. 349], scaled ℓ₂ losses and chi-squared-type distances [18,19,12], as well as the KL divergence [5]. Distribution-specific rates are also known for the ℓ₁ distance [11,6,7].…”
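To make the quantity inside the minimax concrete, the sketch below uses Monte Carlo to estimate the expected ℓ₁ loss E‖p̂ − p‖₁ of one particular candidate estimator, the empirical (plug-in) estimator, under the uniform distribution on [k]. This is only an illustration of the risk being minimaxed, not the optimal estimator from the cited works; the function name, sample sizes, and choice of p are all ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def l1_risk_empirical(p, n, trials=2000):
    """Monte Carlo estimate of E||p_hat - p||_1 for the empirical estimator,
    where p_hat is the vector of observed frequencies from n i.i.d. samples."""
    counts = rng.multinomial(n, p, size=trials)  # shape (trials, k)
    p_hat = counts / n                           # empirical distribution per trial
    return np.abs(p_hat - p).sum(axis=1).mean()  # average l1 loss over trials

# Uniform distribution on [k]; the l1 risk of the empirical estimator
# scales on the order of sqrt(k / n) in this regime.
k, n = 10, 1000
p_uniform = np.full(k, 1.0 / k)
risk = l1_risk_empirical(p_uniform, n)
print(risk)
```

Taking the worst case of this risk over all p on [k], and then the best case over all estimators p̂ (not just the plug-in one), yields the minimax value discussed above.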