Learning a map from movement to neural data (the encoding problem) and vice versa (the decoding problem) is crucial to understanding motor control. A principled encoding model that captures the underlying neural dynamics can help solve the decoding problem better. Here, we develop a new deep-learning-based generative encoding model that autonomously captures neural dynamics. After training, the model can synthesize spike trains for any observed kinematics, guided by the learned neural dynamics. When neural data from other sessions or subjects are limited, the synthesized spike trains can improve the cross-session and cross-subject decoding performance of a brain-computer interface (BCI) decoder. In the cross-subject setting, even with ample data for both subjects, neural dynamics learned from a previous subject transfer useful knowledge that improves the best achievable decoding performance for the new subject. The approach is general and fully data-driven, and hence could apply to neuroscience encoding and decoding problems beyond motor control.¹

¹ Current deep generative models can only generate samples from the distribution on which they were trained. In machine vision, for example, a generative model trained only on images of cats and dogs cannot be expected to generalize to images of birds. Because each distinct kinematic condition here is a new category, analogous to birds, we do not expect the model's learned neural dynamics to generalize to novel kinematics out of the box. However, we show that after fine-tuning with a small amount of neural data from another session or subject, our encoding model can generalize to novel situations, improving cross-session and cross-subject decoding performance.
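To make the pipeline concrete, the sketch below shows one way a conditional generative encoder, the spike-train synthesis step, and the small-data fine-tuning loop could be wired together in PyTorch. This is a minimal sketch under assumed design choices, not the authors' implementation: the `GenerativeEncoder` class, the GRU dynamics module, the Poisson emission model, and all hyperparameters are illustrative assumptions.

```python
# Hedged sketch (assumed architecture, not the paper's): a generative encoder
# maps kinematics to Poisson firing rates through a recurrent latent-dynamics
# module; trained weights can then synthesize spikes for arbitrary kinematics
# and be fine-tuned on limited data from a new session or subject.
import torch
import torch.nn as nn

class GenerativeEncoder(nn.Module):
    """Maps a kinematics sequence (B, T, kin_dim) to firing rates (B, T, n_neurons)."""
    def __init__(self, kin_dim: int, n_neurons: int, hidden_dim: int = 64):
        super().__init__()
        self.dynamics = nn.GRU(kin_dim, hidden_dim, batch_first=True)  # learned neural dynamics
        self.readout = nn.Linear(hidden_dim, n_neurons)                # per-neuron log-rates

    def forward(self, kinematics: torch.Tensor) -> torch.Tensor:
        latent, _ = self.dynamics(kinematics)   # (B, T, H) latent trajectory
        return torch.exp(self.readout(latent))  # (B, T, N) nonnegative rates

def synthesize_spikes(model: GenerativeEncoder, kinematics: torch.Tensor) -> torch.Tensor:
    """Sample spike counts for any observed kinematics from the trained model."""
    with torch.no_grad():
        rates = model(kinematics)
    return torch.poisson(rates)  # synthetic spike trains for decoder training

def fine_tune(model: GenerativeEncoder, kin: torch.Tensor, spikes: torch.Tensor,
              steps: int = 200, lr: float = 1e-3) -> GenerativeEncoder:
    """Adapt a pretrained model with a small amount of new-session/subject data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.PoissonNLLLoss(log_input=False)  # Poisson likelihood of observed spikes
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(kin), spikes)
        loss.backward()
        opt.step()
    return model

# Example (shapes are illustrative): 8 trials, 100 time bins, 2-D hand velocity.
model = GenerativeEncoder(kin_dim=2, n_neurons=96)
kin = torch.randn(8, 100, 2)
fake_spikes = synthesize_spikes(model, kin)  # augments a BCI decoder's training set
```

In this sketch, `synthesize_spikes` plays the role of the data-augmentation step: spike trains sampled for arbitrary kinematics can be mixed with the limited real recordings when training a cross-session or cross-subject decoder, while `fine_tune` adapts the learned dynamics to the new session or subject with a small amount of its data.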