Experiments illustrating a novel methodology for reinforcement learning in embodied physical agents are described. A simulated legged robot is decomposed into structure-based modules following the authors' EMBER principles of local sensing, action and learning. The legs are trained individually to 'walk' in isolation and then re-attached to the robot; the resulting gait is sufficiently stable that learning can continue in situ. The experiments demonstrate the benefits of the modular decomposition: factorising the state space leads to faster learning, in this case to the extent that an otherwise intractable problem becomes learnable.
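
The scaling argument behind the factorisation claim can be made concrete with a back-of-the-envelope sketch. The numbers below (a six-legged robot, 32 local states and 4 actions per leg) are hypothetical and not taken from the paper; the point is only that a monolithic learner faces a state-action space that grows multiplicatively with the number of legs, while per-module learners face spaces that grow additively.

```python
def joint_table_size(n_legs: int, states_per_leg: int, actions_per_leg: int) -> int:
    """Size of a tabular value function for a monolithic learner.

    A single learner observing all legs at once must cover the Cartesian
    product of every leg's states and every leg's actions.
    """
    return (states_per_leg ** n_legs) * (actions_per_leg ** n_legs)


def modular_table_size(n_legs: int, states_per_leg: int, actions_per_leg: int) -> int:
    """Combined table size when each leg senses, acts and learns locally.

    Each module covers only its own state-action space, so the tables
    add rather than multiply.
    """
    return n_legs * states_per_leg * actions_per_leg


if __name__ == "__main__":
    # Hypothetical hexapod: 32 local states and 4 actions per leg.
    n, s, a = 6, 32, 4
    print(joint_table_size(n, s, a))    # 4398046511104 entries
    print(modular_table_size(n, s, a))  # 768 entries
```

Under these (assumed) numbers the joint table has over four trillion entries while the six modular tables together have 768, which is the sense in which an otherwise intractable problem can become learnable.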