The recent success of neural networks in machine translation and other fields has drawn renewed attention to learning sequence-to-sequence (seq2seq) tasks. While there exists a rich literature that studies classification and regression using solvable models of neural networks, learning seq2seq tasks is significantly less studied from this perspective. Here, we propose a simple model for a seq2seq task that gives us explicit control over the degree of memory, or non-Markovianity, in the sequences: the stochastic switching-Ornstein-Uhlenbeck (SSOU) model. We introduce a measure of non-Markovianity to quantify the amount of memory in the sequences. For a minimal autoregressive (AR) learning model trained on this task, we identify two learning regimes corresponding to distinct phases in the stationary state of the SSOU process. These phases emerge from the interplay between two different time scales that govern the sequence statistics. Moreover, we observe that while increasing the memory of the AR model always improves performance, increasing the non-Markovianity of the input sequences can improve or degrade performance. Finally, our experiments with recurrent and convolutional neural networks show that our observations carry over to more complicated neural network architectures.
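To make the setup concrete, below is a minimal sketch, not the authors' code, of the kind of data-generating process and learner the abstract describes. It assumes the SSOU process is an Ornstein-Uhlenbeck process whose mean switches between two values at Poisson-distributed times (a telegraph process), and that the minimal AR model is an order-p linear predictor fit by least squares; all parameter names (`theta`, `sigma`, `k`, `p`) are illustrative choices, not values from the paper.

```python
# Hedged sketch: SSOU-like sequence generation plus a minimal AR(p) learner.
# The exact SSOU definition and AR training procedure are assumptions here.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ssou(n_steps, dt=0.01, theta=1.0, sigma=0.5, k=0.2, mus=(-1.0, 1.0)):
    """Euler-Maruyama simulation of an OU process x(t) relaxing at rate theta
    toward a mean that switches between mus[0] and mus[1] at Poisson rate k.
    The ratio of the switching time 1/k to the OU relaxation time 1/theta
    controls how non-Markovian the observed sequence looks."""
    x = np.empty(n_steps)
    x[0] = 0.0
    state = 0
    for t in range(1, n_steps):
        if rng.random() < k * dt:                 # telegraph switch of the mean
            state = 1 - state
        drift = theta * (mus[state] - x[t - 1])
        x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

def fit_ar(x, p):
    """Least-squares AR(p) predictor: x[t] ~ w . (x[t-p], ..., x[t-1])."""
    X = np.stack([x[i : len(x) - p + i] for i in range(p)], axis=1)
    y = x[p:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, np.mean((X @ w - y) ** 2)           # weights, training MSE

x = simulate_ssou(50_000)
for p in (1, 2, 5, 10):
    _, mse = fit_ar(x, p)
    print(f"AR({p}) training MSE: {mse:.5f}")
```

Running the loop over increasing `p` illustrates the abstract's first observation: giving the AR model more memory (larger `p`) monotonically lowers its prediction error on these sequences, while varying `k` relative to `theta` changes the sequence memory itself with a less uniform effect.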