Neural networks can achieve extraordinary results on a wide variety of tasks. However, when they learn a number of tasks sequentially, they tend to acquire the new task while catastrophically forgetting previous ones. One solution to this problem is pseudo-rehearsal: learning the new task while rehearsing generated items representative of previous tasks. Our model combines pseudo-rehearsal with a deep generative model and a dual memory system, yielding a method whose storage requirements do not grow as the number of tasks increases. Our model iteratively learns three Atari 2600 games while retaining above-human-level performance on all three and performing as well as a set of networks trained individually on the tasks. This result is achieved without revisiting or storing raw data from past tasks. Furthermore, previous state-of-the-art solutions exhibit substantial forgetting compared to our model on these complex deep reinforcement learning tasks.
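The core mechanics of pseudo-rehearsal can be sketched as follows: generated inputs are labelled by a frozen copy of the old network, then mixed into each new-task training batch. This is a minimal, self-contained illustration only; the toy linear `old_model`, the random `generator`, and all batch sizes are illustrative assumptions, not the paper's actual networks or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's architecture):
# - old_model: frozen copy of the network before learning the new task
# - generator: deep generative model approximating past-task inputs
W_old = rng.normal(size=(4, 2))

def old_model(x):
    # frozen "teacher" that provides pseudo-targets for generated items
    return x @ W_old

def generator(n):
    # stands in for samples drawn from a trained generative model;
    # no raw data from past tasks is stored or revisited
    return rng.normal(size=(n, 4))

def make_rehearsal_batch(new_x, new_y, n_pseudo):
    """Mix real new-task data with pseudo-items whose targets come
    from the frozen old network (pseudo-rehearsal)."""
    pseudo_x = generator(n_pseudo)
    pseudo_y = old_model(pseudo_x)
    batch_x = np.concatenate([new_x, pseudo_x])
    batch_y = np.concatenate([new_y, pseudo_y])
    return batch_x, batch_y

new_x = rng.normal(size=(8, 4))
new_y = rng.normal(size=(8, 2))
bx, by = make_rehearsal_batch(new_x, new_y, n_pseudo=8)
print(bx.shape, by.shape)  # (16, 4) (16, 2)
```

Because the generator replaces a stored replay buffer, the memory footprint stays constant no matter how many tasks have been learned.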