This study addresses continual learning (CL) with deep neural networks (DNNs). It compares several types of generators for performing replay, i.e., the generation of samples from previously seen tasks, in order to avoid catastrophic forgetting. The principal generator families are generative adversarial networks (GANs) and variational autoencoders (VAEs). We evaluate these generators in various flavors (conditional, Wasserstein, etc.) with respect to CL performance on a variety of CL tasks constructed from the MNIST benchmark. Concerning generators, we find that VAEs are generally more compatible with CL than GANs. More generally, we find that replay-based CL faces counterintuitive issues on seemingly simple problems: first, performance degrades more strongly the less new information a task adds, and second, performance degrades even when a task adds only already-known information.
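To make the replay mechanism concrete, below is a minimal sketch of VAE-based generative replay in the spirit of deep generative replay (Shin et al., 2017); the architecture, the names `VAE`, `vae_loss`, and `train_task`, and all hyperparameters are illustrative assumptions, not the exact setup evaluated in this study.

```python
# A minimal sketch of VAE-based generative replay, assuming PyTorch.
# "solver" is the classifier being trained continually; frozen copies of the
# previous generator and solver supply inputs/labels for past tasks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Small fully connected VAE for flattened 28x28 MNIST images
    (illustrative sizes, not the paper's exact architecture)."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.latent_dim = latent_dim
        self.enc = nn.Linear(784, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 784), nn.Sigmoid(),
        )

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard-normal prior.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

def train_task(solver, generator, loader, old_solver=None, old_generator=None):
    """Train solver and generator on one task, mixing each batch with
    samples replayed from frozen copies trained on previous tasks."""
    opt = torch.optim.Adam(
        list(solver.parameters()) + list(generator.parameters()), lr=1e-3)
    for x, y in loader:
        x = x.view(x.size(0), -1)  # flatten images to 784-d vectors in [0, 1]
        if old_generator is not None:
            with torch.no_grad():
                z = torch.randn(x.size(0), old_generator.latent_dim)
                x_old = old_generator.dec(z)         # sample "past" inputs
                y_old = old_solver(x_old).argmax(1)  # pseudo-label with old solver
            x, y = torch.cat([x, x_old]), torch.cat([y, y_old])
        recon, mu, logvar = generator(x)
        loss = F.cross_entropy(solver(x), y) + vae_loss(recon, x, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In this scheme, after each task the current generator and solver are frozen and serve as `old_generator` and `old_solver` for the next task, so real samples from past tasks never need to be stored.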