We explore the idea of overrelaxation for accelerating the expectation-maximization (EM) algorithm, focusing on preserving its simplicity and monotonic convergence properties. We show that in many cases a trivial modification of the M-step yields an algorithm that maintains the monotonic increase of the log-likelihood yet can converge appreciably faster, especially when EM is very slow. The method applies to more general fixed-point algorithms. Its simplicity and effectiveness are illustrated on several statistical problems, including probit regression, least absolute deviations regression, Poisson inverse problems, and finite mixtures. This paper has supplemental materials online.
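To make the general idea concrete, the following is a minimal sketch (not the paper's exact modification) of overrelaxing an EM fixed-point map with a monotonicity safeguard: the overrelaxed candidate is accepted only if it does not decrease the log-likelihood relative to the plain EM update, otherwise the plain EM step is taken. The model (a two-component Gaussian mixture with known unit variances and equal weights), the relaxation parameter `omega`, and all function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def em_step(theta, x):
    """One EM update for an illustrative model: a two-component Gaussian
    mixture with known unit variances and equal weights 0.5."""
    mu1, mu2 = theta
    # E-step: posterior responsibility of component 1 (weights cancel).
    d1 = np.exp(-0.5 * (x - mu1) ** 2)
    d2 = np.exp(-0.5 * (x - mu2) ** 2)
    r = d1 / (d1 + d2)
    # M-step: responsibility-weighted means.
    return np.array([np.sum(r * x) / np.sum(r),
                     np.sum((1 - r) * x) / np.sum(1 - r)])

def loglik(theta, x):
    """Observed-data log-likelihood (up to an additive constant)."""
    mu1, mu2 = theta
    return np.sum(np.log(0.5 * np.exp(-0.5 * (x - mu1) ** 2)
                         + 0.5 * np.exp(-0.5 * (x - mu2) ** 2)))

def overrelaxed_em(theta, x, omega=1.0, n_iter=500, tol=1e-10):
    """Overrelax the EM map and preserve monotone log-likelihood ascent
    by falling back to the plain EM step when the candidate fails."""
    ll = loglik(theta, x)
    for _ in range(n_iter):
        theta_em = em_step(theta, x)
        # Overrelaxed candidate: step past the EM update by a factor omega.
        candidate = theta_em + omega * (theta_em - theta)
        if loglik(candidate, x) >= loglik(theta_em, x):
            theta_new = candidate          # accelerated step accepted
        else:
            theta_new = theta_em           # safeguard: plain EM step
        ll_new = loglik(theta_new, x)
        if ll_new - ll < tol:              # negligible improvement: stop
            return theta_new
        theta, ll = theta_new, ll_new
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])
    print(overrelaxed_em(np.array([-0.5, 0.5]), x, omega=1.0))
```

Because EM itself never decreases the log-likelihood and the candidate is accepted only when it does at least as well as the EM update, every iteration of this sketch is monotone, while the overrelaxed steps can substantially reduce the number of iterations when EM contracts slowly.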