We analyze the Dawid-Rissanen prequential maximum likelihood codes relative to one-parameter exponential family models $M$. If the data are i.i.d. according to an (essentially) arbitrary distribution $P$, then the redundancy grows at rate $\frac{1}{2} c \ln n$. We show that $c = \sigma_1^2/\sigma_2^2$, where $\sigma_1^2$ is the variance of $P$, and $\sigma_2^2$ is the variance of the distribution $M^* \in M$ that is closest to $P$ in KL divergence. Thus prequential codes behave quite differently from other important universal codes such as the two-part MDL, Shtarkov, and Bayes codes, for which $c = 1$. This behavior is undesirable in an MDL model selection setting.
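The constant $c$ can be computed for a concrete misspecified setup. The sketch below (not from the paper) takes the Poisson family as the one-parameter exponential family; the KL-closest Poisson to a distribution $P$ matches $P$'s mean (a standard exponential-family fact), so its variance $\sigma_2^2$ equals that mean, while $\sigma_1^2$ is the variance of $P$ itself. The particular choice of $P$ below is a hypothetical example.

```python
def mean_var(support, probs):
    """Mean and variance of a finite discrete distribution."""
    m = sum(x * p for x, p in zip(support, probs))
    v = sum((x - m) ** 2 * p for x, p in zip(support, probs))
    return m, v

# Hypothetical data source P: mass 1/2 on 0 and mass 1/2 on 4.
mu1, var1 = mean_var([0, 4], [0.5, 0.5])  # mean 2, variance sigma1^2 = 4

# KL-closest Poisson M* has mean mu1, hence variance sigma2^2 = mu1 = 2.
var2 = mu1

c = var1 / var2
print(c)  # 2.0: prequential redundancy grows at rate (c/2) ln n, not (1/2) ln n
```

So for this $P$ the prequential code incurs twice the asymptotic redundancy of the two-part, Shtarkov, or Bayes codes, even though $M^*$ is the best element of the model class.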