Reinforcement learning, a general framework for learning from experience, is widely recognized as a central concept for understanding and shaping adaptive behavior, both in ethology and in artificial intelligence. A key component of reinforcement learning is the reward function, which, according to an emerging consensus, should be intrinsic to the learning agent and a matter of appraisal rather than a simple reflection of external outcomes. We describe an approach to intrinsically motivated reinforcement learning that incorporates various aspects of happiness, operationalized as dynamic estimates of well-being. In four experiments, in which simulated agents learned to explore and forage in simulated environments, we show that agents whose reward function properly balances momentary (hedonic) and longer-term (eudaimonic) well-being outperform agents equipped with standard fitness-oriented reward functions. Our findings suggest that happiness-based features can be useful in developing robust, general-purpose reward mechanisms for intrinsically motivated autonomous agents.
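One way such a reward function might be realized is sketched below, as a minimal illustration only: the momentary (hedonic) component is the appraisal of the latest outcome, the longer-term (eudaimonic) component is an exponential moving average of past appraisals, and the two are blended by a weight. The class name, the moving-average choice, and all parameter values are assumptions for illustration, not the formulation used in the experiments.

```python
class HappinessReward:
    """Hypothetical sketch of a reward blending momentary (hedonic) and
    longer-term (eudaimonic) well-being estimates."""

    def __init__(self, alpha=0.1, w_hedonic=0.5):
        self.alpha = alpha          # smoothing rate of the long-term estimate (assumed)
        self.w_hedonic = w_hedonic  # weight on momentary well-being (assumed)
        self.eudaimonic = 0.0       # running estimate of longer-term well-being

    def __call__(self, outcome):
        hedonic = outcome  # momentary appraisal of the latest outcome
        # Update the long-term estimate as an exponential moving average.
        self.eudaimonic += self.alpha * (hedonic - self.eudaimonic)
        # Blend the two components into a single scalar reward.
        return self.w_hedonic * hedonic + (1.0 - self.w_hedonic) * self.eudaimonic
```

Under this sketch, a fitness-oriented baseline corresponds to `w_hedonic = 1` (pure momentary outcome), while intermediate weights trade off immediate payoff against a smoothed history of well-being.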