Summary

Temporal difference reinforcement learning (TDRL) accurately models associative learning in animals, in which they learn to predict the reward value of an unconditioned stimulus (US) from a conditioned stimulus (CS), as in classical conditioning. A key component of TDRL is the value function, which captures the expected temporally discounted reward from a given state; the value function can also be modified by the animal's knowledge of, and certainty about, its environment. Here we show that primary motor cortex (M1) neurodynamics not only reflect a TD learning process but also encode a value function in line with TDRL. M1 responds to the delivery of an unpredictable reward, and when reward is predictable, such as when a CS cues the upcoming reward, M1 shifts its value-related response earlier in the trial, becoming predictive of the expected reward. This is observed in tasks performed manually or observed passively, and in tasks with either an explicit CS predicting reward or simply a predictable temporal task structure, that is, a predictable environment. M1 also encodes the expected reward value associated with a CS in a CS-US task with multiple reward levels. The microstimulus TD model, previously reported to accurately capture RL-related dopaminergic activity, extends to account for M1 reward-related neural activity across a multitude of tasks.
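The value-shift described above can be illustrated with a minimal tabular TD(0) sketch. This is not the authors' microstimulus TD model, which represents trial time with decaying Gaussian temporal basis functions; here, for brevity, each time step is a discrete state (a "complete serial compound" code), and all parameter values (trial length, learning rate, discount factor) are illustrative assumptions.

```python
import numpy as np

# Illustrative TD(0) model of a CS-US trial: the CS occurs at t=0 and a
# reward (US) is delivered at the final time step. Over trials, value
# propagates backward from the US to the CS, so the response becomes
# predictive of reward rather than reactive to its delivery.
T = 10           # time steps per trial; CS at t=0, US at t=T-1 (assumed)
alpha = 0.1      # learning rate (assumed)
gamma = 0.98     # temporal discount factor (assumed)
w = np.zeros(T)  # value estimate for each within-trial state

def run_trial(w):
    """Run one trial, updating w in place; return per-step TD errors."""
    deltas = np.zeros(T)
    for t in range(T):
        r = 1.0 if t == T - 1 else 0.0           # reward only at trial end
        v_next = w[t + 1] if t + 1 < T else 0.0  # post-trial value is 0
        deltas[t] = r + gamma * v_next - w[t]    # TD prediction error
        w[t] += alpha * deltas[t]
    return deltas

early = run_trial(w)          # first trial: reward is unpredicted
for _ in range(500):
    late = run_trial(w)       # after learning: reward is predicted

print(early.argmax())         # TD error initially peaks at reward delivery
print(round(float(w[0]), 2))  # learned value at the CS ~ gamma**(T-1)
print(abs(late[-1]) < 0.01)   # TD error at reward time vanishes with learning
```

After training, the prediction error at reward delivery disappears and the value signal appears at the CS, mirroring the earlier, predictive reward-related response reported for M1.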