A great deal of research focuses on how humans and animals learn from trial-and-error interactions with the environment. This research has established the viability of reinforcement learning as a model of behavioral adaptation and neural reward valuation. Error-driven learning is inefficient and dangerous, however. Fortunately, humans also learn from nonexperiential sources of information. In the present study, we focused on one such form of information, instruction. We recorded event-related potentials as participants performed a probabilistic learning task. In one experimental condition, participants received feedback only about whether their responses were rewarded. In the other condition, they also received instruction about reward probabilities before performing the task. We found that instruction eliminated participants' reliance on feedback, as evidenced by their immediate asymptotic performance in the instruction condition. In striking contrast, the feedback-related negativity, an event-related potential component thought to reflect neural reward prediction error, continued to adapt with experience in both conditions. These results show that, whereas instruction may immediately control behavior, certain neural responses must be learned from experience.

Reinforcement learning (RL) formalizes the notion that humans and animals learn from trial-and-error interactions with the environment (1). According to many RL models, differences between actual and expected outcomes, or reward prediction errors, provide teaching signals. These signals convey information about the magnitude and valence of the difference between actual and expected rewards. By using reward prediction errors to revise expectations, RL models increasingly select advantageous actions. Behavioral studies furnished early support for RL in the form of the "law of effect" (2), which states that actions followed by rewards are more likely to be repeated. Single-cell recordings from animals provided further support by showing that the responses of midbrain dopamine neurons to outcomes scale with the difference between actual and expected rewards (3). Neuroimaging experiments have since extended this result to humans by demonstrating that blood-oxygen level-dependent (BOLD) responses in the striatum and prefrontal cortex also mirror reward prediction errors (4).

On the basis of these findings, RL has emerged as a prominent theory of behavioral adaptation and neural reward valuation. As it stands, however, RL is an incomplete theory. Individuals also learn from nonexperiential sources of information. For example, by using language to acquire knowledge about outcome likelihoods, humans can avoid costly mistakes. This raises the question: How does information provided by instruction mediate trial-and-error learning?

Several theories seek to explain how the brain uses instruction and experience to select actions (5-8). These theories agree that instruction engages the prefrontal cortex and medial temporal lobes (PFC/MTL), whereas experience engages ...