Based on $X \sim N_d(\theta, \sigma^2_X I_d)$, we study the efficiency of predictive densities under $\alpha$-divergence loss $L_\alpha$ for estimating the density of $Y \sim N_d(\theta, \sigma^2_Y I_d)$. We identify a large number of cases where improvement on a plug-in density is obtainable by expanding the variance, thus extending earlier findings applicable to Kullback--Leibler loss. The results and proofs are unified with respect to the dimension $d$, the variances $\sigma^2_X$ and $\sigma^2_Y$, and the choice of loss $L_\alpha$, $\alpha \in (-1, 1)$. The findings also apply to a large number of plug-in densities, as well as to restricted parameter spaces with $\theta \in \Theta \subset \mathbb{R}^d$. The theoretical findings are accompanied by various observations, illustrations, and implications dealing, for instance, with robustness with respect to the model variances and with simultaneous dominance with respect to the loss.
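For concreteness, one common parametrization of the $\alpha$-divergence loss between a predictive density $\hat{q}$ and the true density $p_\theta$ of $Y$ is (this specific convention is an assumption, as the excerpt does not display the paper's definition):
\[
L_\alpha(\theta, \hat{q}) \;=\; \frac{4}{1-\alpha^2}\left(1 - \int_{\mathbb{R}^d} \hat{q}(y)^{\frac{1+\alpha}{2}}\, p_\theta(y)^{\frac{1-\alpha}{2}}\, dy\right), \qquad \alpha \in (-1, 1),
\]
with Kullback--Leibler loss $\int_{\mathbb{R}^d} p_\theta \log(p_\theta/\hat{q})\, dy$ recovered as the limiting case $\alpha \to -1$.

The following minimal sketch illustrates the variance-expansion phenomenon numerically under this parametrization: it compares the Monte Carlo risk of the plug-in density $N(x, \sigma^2_Y)$ with that of the variance-expanded density $N(x, \sigma^2_X + \sigma^2_Y)$. All parameter values and the helper `alpha_loss` are hypothetical choices made for the sketch, not the paper's code, and the extent of improvement in general depends on $(\alpha, d, \sigma^2_X, \sigma^2_Y)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Illustrative values only (not from the paper): d = 1, theta = 0,
# sigma_X^2 = sigma_Y^2 = 1, and the Hellinger-type case alpha = 0.
theta, var_x, var_y, alpha = 0.0, 1.0, 1.0, 0.0

def alpha_loss(q_mean, q_var):
    """L_alpha between the predictive density N(q_mean, q_var) and the
    true density N(theta, var_y), under the parametrization above."""
    a = (1.0 + alpha) / 2.0
    integrand = lambda y: (norm.pdf(y, q_mean, np.sqrt(q_var)) ** a
                           * norm.pdf(y, theta, np.sqrt(var_y)) ** (1.0 - a))
    affinity, _ = quad(integrand, -np.inf, np.inf)
    return 4.0 / (1.0 - alpha ** 2) * (1.0 - affinity)

# Monte Carlo risk: average the loss over draws X ~ N(theta, var_x),
# using the same draws for both candidates (paired comparison).
rng = np.random.default_rng(0)
xs = rng.normal(theta, np.sqrt(var_x), size=2000)
risk_plugin = np.mean([alpha_loss(x, var_y) for x in xs])            # plug-in N(x, var_y)
risk_expanded = np.mean([alpha_loss(x, var_x + var_y) for x in xs])  # expanded N(x, var_x + var_y)

print(f"plug-in risk  ~ {risk_plugin:.4f}")   # ~ 0.42 for these values
print(f"expanded risk ~ {risk_expanded:.4f}") # ~ 0.40, i.e., smaller
```

For these particular values the variance-expanded density shows a smaller estimated risk than the plug-in, consistent with the type of dominance result described in the abstract.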