Nested-error regression models are widely used for analyzing clustered data, for example in two-stage sample surveys and in biology and econometrics. Prediction is usually the main goal of such analyses, and mean-squared prediction error is the principal measure of prediction performance. In this paper we suggest a new approach to estimating mean-squared prediction error. We introduce a matched-moment, double-bootstrap algorithm, enabling the notorious underestimation of the naive mean-squared error estimator to be substantially reduced. Our approach does not require specific assumptions about the distributions of errors, and it is simple and easy to apply. This is achieved by using Monte Carlo simulation to implicitly develop formulae which, in a more conventional approach, would be derived laboriously by mathematical arguments.

Our method has several attractive properties. First, it does not require specific distributional assumptions about the errors. Second, it produces positive, bias-corrected estimators of mean-squared prediction error. (See [2] and [5] for discussion of possible negativity.) Third, it is easy to apply. Although our emphasis is on small-area prediction, our methodology is equally useful in other applications, such as estimating subject- or cluster-specific random effects.

Standard mixed-effects prediction involves two steps. First, a best linear unbiased predictor, or BLUP, is derived under the assumption that the model parameters are known. Then the model parameters are replaced by estimators, producing an empirical version of the BLUP. This approach is popular because it is straightforward and, at this level, does not require distributional assumptions. However, estimation of mean-squared prediction error is significantly more challenging.
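The two-step construction above can be sketched for a nested-error model y_ij = x_ij'β + v_i + e_ij. In this illustrative sketch (the function name, the data layout, and the use of the Fuller–Battese transformation to compute the GLS step are choices of ours, not the paper's notation), step 1 is the BLUP with the variance components treated as known; step 2, the empirical BLUP, simply plugs estimated variance components into the same formulas.

```python
import numpy as np

def eblup_area_means(y, X, area, Xbar, sigma_v2, sigma_e2):
    """Predict area means under y_ij = x_ij' beta + v_i + e_ij.

    Step 1 (BLUP, variances known): GLS estimate of beta, then shrink
    each area-level residual by gamma_i = sv2 / (sv2 + se2 / n_i).
    Step 2 (empirical BLUP): in practice sigma_v2 and sigma_e2 are
    replaced by estimates and passed in here unchanged.

    Xbar holds the area-level covariate means at which prediction is wanted.
    """
    areas = np.unique(area)
    # GLS for beta via the Fuller-Battese transformation:
    # subtract alpha_i times the area mean from y and X, then run OLS.
    yt, Xt = y.astype(float).copy(), X.astype(float).copy()
    for a in areas:
        idx = area == a
        n_i = idx.sum()
        alpha = 1.0 - np.sqrt(sigma_e2 / (sigma_e2 + n_i * sigma_v2))
        yt[idx] -= alpha * y[idx].mean()
        Xt[idx] -= alpha * X[idx].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
    # Shrinkage prediction of each area mean.
    theta = np.empty(len(areas))
    for i, a in enumerate(areas):
        idx = area == a
        n_i = idx.sum()
        gamma = sigma_v2 / (sigma_v2 + sigma_e2 / n_i)
        resid = y[idx].mean() - X[idx].mean(axis=0) @ beta
        theta[i] = Xbar[i] @ beta + gamma * resid
    return beta, theta
```

Note that nothing in the construction requires a distributional assumption: only first and second moments of v_i and e_ij enter the formulas.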
The variability of parameter estimators can substantially influence mean-squared error, to a much greater extent than a conventional asymptotic analysis suggests. Moreover, the nature and extent of this influence is intimately connected to the values of the design variables and to properties of the two error distributions. In this paper we point out that, in terms of the biases of estimators of mean-squared prediction error, the two error distributions influence results predominantly through their second and fourth moments. This observation leads to a surprisingly simple, moment-matching, double-bootstrap algorithm for estimating, and correcting for, bias. We show that this approach substantially reduces the large degree of underestimation incurred by the naive approach.

Kackar and Harville [20] and Harville and Jeske [18] studied various approximations to the mean-squared prediction error of the empirical BLUP, assuming normality in both stages. Prasad and Rao [27] pointed out that if unknown model parameters are replaced by their estimators, then significant underestimation of true mean-squared prediction error c...
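To illustrate how second and fourth moments can drive a bootstrap bias correction, here is a minimal sketch of one level of a matched-moment bootstrap, for the simplest nested-error model y_ij = μ + v_i + e_ij with balanced areas. The three-point moment-matching distribution, the method-of-moments fit, and the clip at zero are simplifying assumptions of ours: the paper's algorithm adds a second resampling level and a construction that guarantees positivity, neither of which is reproduced here.

```python
import numpy as np

def draw_matched(rng, size, var, kurt):
    """Draw from a three-point distribution on {-a, 0, +a} whose second
    and fourth moments match var and kurt*var**2 (requires kurt >= 1):
    P(+a) = P(-a) = p with p = 1/(2*kurt) and a = sqrt(var*kurt)."""
    p = 1.0 / (2.0 * kurt)
    a = np.sqrt(var * kurt)
    u = rng.random(size)
    return np.where(u < p, a, np.where(u < 2 * p, -a, 0.0))

def fit(y):
    """Method-of-moments fit of y_ij = mu + v_i + e_ij for a balanced
    m x n array y; returns (mu, sigma_v2, sigma_e2)."""
    m, n = y.shape
    ybar = y.mean(axis=1)
    sigma_e2 = ((y - ybar[:, None]) ** 2).sum() / (m * (n - 1))
    sigma_v2 = max(ybar.var(ddof=1) - sigma_e2 / n, 1e-8)
    return ybar.mean(), sigma_v2, sigma_e2

def naive_mspe(sigma_v2, sigma_e2, n):
    """First-order (naive) MSPE of the EBLUP of mu + v_i."""
    gamma = sigma_v2 / (sigma_v2 + sigma_e2 / n)
    return gamma * sigma_e2 / n

def boot_corrected_mspe(y, B, kurt_v, kurt_e, rng):
    """One level of matched-moment bootstrap bias correction: resample
    errors matching the fitted second moments and the supplied kurtoses,
    then subtract the estimated bias of the naive MSPE estimator."""
    m, n = y.shape
    mu, sv2, se2 = fit(y)
    naive = naive_mspe(sv2, se2, n)
    true_se, naive_star = np.zeros(B), np.zeros(B)
    for b in range(B):
        v = draw_matched(rng, m, sv2, kurt_v)
        e = draw_matched(rng, (m, n), se2, kurt_e)
        ystar = mu + v[:, None] + e
        mu_s, sv2_s, se2_s = fit(ystar)
        gamma_s = sv2_s / (sv2_s + se2_s / n)
        pred = mu_s + gamma_s * (ystar.mean(axis=1) - mu_s)  # EBLUP
        true_se[b] = ((pred - (mu + v)) ** 2).mean()  # truth known here
        naive_star[b] = naive_mspe(sv2_s, se2_s, n)
    bias = naive_star.mean() - true_se.mean()
    return max(naive - bias, 0.0)  # crude positivity clip, not the paper's device
```

In the bootstrap world the true prediction errors are observable, so the bias of the naive estimator can be measured directly; only the second and fourth moments of the resampled errors matter for that bias estimate, which is what makes the simple three-point construction adequate. Estimation of the kurtoses from data is omitted here; `kurt_v` and `kurt_e` are passed in.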