In mobile robot control, reinforcement learning methods often face the challenge of sparse rewards, which degrades control performance. This paper proposes an approach that leverages the Diversity Is All You Need (DIAYN) framework to dynamically generate reward functions and enhance the policy-network weights of a reinforcement learning algorithm. A comparative analysis is conducted against traditional reinforcement learning algorithms, namely Deep Deterministic Policy Gradient (DDPG) and Deep Q-Network (DQN), as well as a behavior-based robotics planner combined with proportional-integral-derivative control (BBR_PID). Results from CoppeliaSim simulation experiments demonstrate that, under identical training conditions, the DIAYN-based DDPG algorithm learns more effectively and converges faster to optimal actions, enabling the mobile robot to reach the target point more efficiently and with greater stability.
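The reward generation the abstract refers to follows DIAYN's standard construction: the sparse environment reward is replaced by a pseudo-reward r(s, z) = log q(z | s) − log p(z), where q is a learned skill discriminator and p(z) is a (typically uniform) skill prior. The sketch below illustrates this computation only; the discriminator here is a random linear-softmax stand-in, and all names and dimensions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_skills, state_dim = 4, 6  # illustrative sizes, not from the paper

# Stand-in for the learned skill discriminator q(z | s):
# a fixed random linear-softmax classifier over state features.
W = rng.normal(size=(state_dim, n_skills))

def diayn_reward(state, skill):
    """DIAYN pseudo-reward: log q(skill | state) - log p(skill)."""
    logits = state @ W
    # Numerically stable log-softmax over skills.
    shifted = logits - logits.max()
    log_q = shifted - np.log(np.exp(shifted).sum())
    log_p = -np.log(n_skills)  # uniform skill prior p(z) = 1/n_skills
    return log_q[skill] - log_p

state = rng.normal(size=state_dim)
r = diayn_reward(state, skill=2)
```

Because log q(z | s) is at most 0, the pseudo-reward is bounded above by log(n_skills); states that make the current skill easy to identify receive higher reward, which is what drives the skill diversity DIAYN exploits.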