Multiagent teams have been shown to be effective in domains that require finely tuned coordination among team members. However, learning useful agent policies depends on each agent receiving individualized feedback during training, which is difficult when performance measures depend on the actions of many other agents. Reward shaping partially addresses this challenge by deriving individualized rewards that more accurately reflect an agent's contribution to the system, but this approach still relies on agents randomly discovering suitable joint actions. In this work, we introduce Counterfactual Focused Learning (CFL), a method for injecting additional knowledge into an agent's reward signal by comparing the reward obtained in the actual system state against rewards derived from counterfactual states. We show that CFL enables agent teams to converge to useful policies more reliably. We also show that the counterfactual states in CFL lead to significant improvements in coordination and performance in tasks where many agents must perform complementary actions to complete an objective.
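Since the abstract only sketches the counterfactual comparison at a high level, the following is a minimal illustrative sketch, not the paper's method: it assumes a difference-reward-style shaping in which an agent's shaped reward is the system reward under the actual joint action minus the reward under a counterfactual state where that agent's action is replaced. All names here (global_reward, counterfactual_reward, the null action) are hypothetical.

```python
from typing import Callable, Optional, Sequence

Action = Optional[int]  # None stands for a hypothetical "null" counterfactual action

def counterfactual_reward(
    global_reward: Callable[[Sequence[Action]], float],
    joint_action: Sequence[Action],
    agent_idx: int,
    counterfactual_action: Action = None,
) -> float:
    """Shaped reward for one agent: the system reward under the actual
    joint action minus the reward under a counterfactual state in which
    this agent's action is replaced (here, by a null action)."""
    actual = global_reward(joint_action)
    cf = list(joint_action)
    cf[agent_idx] = counterfactual_action  # construct the counterfactual state
    return actual - global_reward(cf)

# Toy system reward: the team is paid for each distinct role covered,
# so redundant agents contribute nothing to the system.
def toy_global_reward(joint_action: Sequence[Action]) -> float:
    return float(len({a for a in joint_action if a is not None}))

joint = [0, 1, 1]  # agents 1 and 2 redundantly pick the same role
for i in range(len(joint)):
    print(f"agent {i}: shaped reward = {counterfactual_reward(toy_global_reward, joint, i)}")
```

On this toy task the counterfactual comparison isolates each agent's marginal contribution: the agent covering a unique role receives credit (1.0), while the two redundant agents receive zero, which is the kind of individualized feedback the abstract motivates.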