We consider the problem of estimating functionals of discrete distributions,
and focus on a tight nonasymptotic analysis of the worst-case squared error risk
of widely used estimators. We apply concentration inequalities to analyze the
random fluctuation of these estimators around their expectations, and the
theory of approximation using positive linear operators to analyze the
deviation of their expectations from the true functional, namely their
\emph{bias}.
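Concretely, the analysis rests on the standard decomposition of the squared error risk of a generic estimator $\hat{F}$ of $F(P)$ into squared bias and variance,
\[
\mathbb{E}_P\bigl(\hat{F} - F(P)\bigr)^2 = \bigl(\mathbb{E}_P \hat{F} - F(P)\bigr)^2 + \mathbb{E}_P\bigl(\hat{F} - \mathbb{E}_P \hat{F}\bigr)^2,
\]
where the variance term captures the random fluctuation controlled by concentration inequalities, and the squared bias term is handled via approximation theory for positive linear operators.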
We characterize the worst-case squared error risk incurred by the Maximum
Likelihood Estimator (MLE) in estimating the Shannon entropy $H(P) = -\sum_{i=1}^S p_i \ln p_i$ and the power sum $F_\alpha(P) = \sum_{i=1}^S p_i^\alpha$, $\alpha>0$, up
to multiplicative constants, for any alphabet size $S\leq \infty$ and sample
size $n$ for which the risk may vanish. As a corollary, for Shannon entropy
estimation, we show that it is necessary and sufficient to have $n \gg S$
observations for the MLE to be consistent. In addition, we establish that it is
necessary and sufficient to consider $n \gg S^{1/\alpha}$ samples for the MLE
to consistently estimate $F_\alpha(P)$, $0<\alpha<1$. The minimax rate-optimal
estimators for these two problems require only $S/\ln S$ and $S^{1/\alpha}/\ln S$
samples, respectively, which implies that the MLE has a strictly sub-optimal sample
complexity. When $1<\alpha<3/2$, we show that the worst-case squared error rate
of convergence for the MLE is $n^{-2(\alpha-1)}$ for infinite alphabet size,
while the minimax squared error rate is $(n\ln n)^{-2(\alpha-1)}$. When
$\alpha\geq 3/2$, the MLE achieves the minimax optimal rate $n^{-1}$ regardless
of the alphabet size.
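As a concrete illustration (a minimal sketch under our own naming conventions, not code from the paper), the MLE studied here is the plug-in estimator that evaluates $H$ and $F_\alpha$ at the empirical distribution:

```python
import numpy as np

def empirical_distribution(samples, alphabet_size):
    """Empirical frequencies hat{p}_i = n_i / n from i.i.d. samples in {0, ..., S-1}."""
    counts = np.bincount(samples, minlength=alphabet_size)
    return counts / counts.sum()

def mle_entropy(samples, alphabet_size):
    """Plug-in (MLE) estimate of the Shannon entropy H(P) = -sum_i p_i ln p_i."""
    p_hat = empirical_distribution(samples, alphabet_size)
    p_hat = p_hat[p_hat > 0]  # use the convention 0 * ln 0 = 0
    return float(-np.sum(p_hat * np.log(p_hat)))

def mle_power_sum(samples, alphabet_size, alpha):
    """Plug-in (MLE) estimate of F_alpha(P) = sum_i p_i^alpha, alpha > 0."""
    p_hat = empirical_distribution(samples, alphabet_size)
    return float(np.sum(p_hat ** alpha))
```

In terms of the results above, this plug-in estimator requires $n \gg S$ samples to consistently estimate $H(P)$ and $n \gg S^{1/\alpha}$ samples for $F_\alpha(P)$, $0<\alpha<1$, while the minimax rate-optimal estimators need only $n \gg S/\ln S$ and $n \gg S^{1/\alpha}/\ln S$, respectively.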
As an application of the general theory, we analyze the Dirichlet prior
smoothing technique for Shannon entropy estimation. We show that no matter how
we tune the parameters in the Dirichlet prior, this technique cannot achieve
the minimax rates in entropy estimation.
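To make the object of this analysis concrete, one common variant of Dirichlet prior smoothing adds a pseudo-count $a>0$ to every symbol (the posterior mean under a symmetric Dirichlet prior) and plugs the smoothed frequencies into the entropy functional. The sketch below is ours and only illustrates the technique; the point of the result above is that no choice of $a$ makes it minimax rate-optimal.

```python
import numpy as np

def dirichlet_smoothed_entropy(samples, alphabet_size, a=1.0):
    """Entropy estimate from Dirichlet(a, ..., a)-smoothed frequencies.

    The smoothed probabilities are the posterior mean under a symmetric
    Dirichlet prior: p_i = (n_i + a) / (n + S * a).  Illustrative sketch of
    one variant of the Dirichlet prior smoothing technique, not the paper's code.
    """
    counts = np.bincount(samples, minlength=alphabet_size).astype(float)
    n = counts.sum()
    p_smoothed = (counts + a) / (n + alphabet_size * a)
    return float(-np.sum(p_smoothed * np.log(p_smoothed)))
```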