A deep reinforcement Q-learning algorithm (DRQN) based on a radial basis function (RBF) neural network is proposed to achieve path planning and obstacle avoidance for mobile robots in complex ground environments containing different types of obstacles, both static and dynamic. First, the path planning problem is formulated as a partially observable Markov decision process. Steering angle, motion characteristics, and other elements are introduced into the state-action decision space, and the greedy factor is adjusted dynamically by a simulated annealing algorithm, which improves the mobile robot's environment exploration and action-selection accuracy. Second, the Q-learning algorithm is improved by replacing the Q-table with an RBF neural network to strengthen the algorithm's value-function approximation; the hidden-layer parameters and the hidden-to-output weights are trained with dynamic clustering and the least mean squares (LMS) method, respectively, which accelerates convergence and enhances the mobile robot's ability to handle large-scale computation. Finally, a double reward mechanism is designed to keep the mobile robot from searching blindly in unknown environments, strengthening its learning ability while improving the safety and flexibility of path planning. Simulation experiments in different types of scenarios verify the superiority of the DRQN algorithm. Taking a 30 × 30 complex scene as an example, path planning with DRQN reduces path length, turning angle, and planning time by 27.04%, 7.76%, and 28.05%, respectively, compared with the averages of the Q-learning, optimized Q-learning, deep Q-learning, and DDPG algorithms, effectively improving the path planning efficiency of mobile robots in complex environments.
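
The abstract does not spell out how the simulated annealing schedule couples to the greedy factor. Below is a minimal Python sketch of one common reading, in which a randomly drawn candidate action is accepted with the Metropolis probability exp((Q_candidate - Q_greedy)/T) and the temperature T is cooled exponentially over episodes; the function names, cooling constants, and the Metropolis coupling itself are illustrative assumptions, not the paper's stated design.

```python
import math
import random

import numpy as np

def metropolis_select(q_values: np.ndarray, temperature: float) -> int:
    """Greedy selection with a simulated-annealing flavor: a random
    candidate action is accepted with the Metropolis probability
    exp((Q_candidate - Q_greedy) / T); otherwise act greedily."""
    greedy = int(np.argmax(q_values))
    candidate = random.randrange(len(q_values))
    accept = math.exp((q_values[candidate] - q_values[greedy]) / max(temperature, 1e-8))
    return candidate if random.random() < accept else greedy

def temperature(episode: int, t0: float = 1.0, alpha: float = 0.98) -> float:
    """Exponential cooling schedule T_k = t0 * alpha**k (illustrative constants).
    A high T early on favors exploration; a low T later favors exploitation."""
    return max(0.01, t0 * alpha ** episode)

# Example: action choice for one state at episode 10 (toy Q-values).
q = np.array([0.2, 0.5, 0.1, 0.4])
action = metropolis_select(q, temperature(episode=10))
```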
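Likewise, the RBF substitution for the Q-table is only named, not specified. A minimal sketch of the idea follows, assuming Gaussian hidden units (whose centers would come from the dynamic clustering step; they are fixed here for brevity), a shared kernel width, and a linear output layer updated by the LMS rule toward a TD target such as r + γ max_a' Q(s', a'). All class and parameter names are hypothetical.

```python
import numpy as np

class RBFQNetwork:
    """Sketch of an RBF approximator for Q(s, a): Gaussian hidden units
    plus a linear output layer, one weight column per action."""

    def __init__(self, centers: np.ndarray, width: float, n_actions: int, lr: float = 0.05):
        self.centers = centers                           # (n_hidden, state_dim), e.g. from clustering
        self.width = width                               # shared Gaussian width (assumption)
        self.weights = np.zeros((len(centers), n_actions))
        self.lr = lr

    def _phi(self, state: np.ndarray) -> np.ndarray:
        """Hidden-layer activations: Gaussian kernels around each center."""
        d2 = np.sum((self.centers - state) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def q_values(self, state: np.ndarray) -> np.ndarray:
        """Q(s, .) as a linear readout of the RBF features."""
        return self._phi(state) @ self.weights

    def lms_update(self, state: np.ndarray, action: int, td_target: float) -> None:
        """One LMS step on the output weights of the action taken."""
        phi = self._phi(state)
        error = td_target - phi @ self.weights[:, action]
        self.weights[:, action] += self.lr * error * phi

# Example: 2-D states, 4 actions, centers from (hypothetical) clustering.
net = RBFQNetwork(centers=np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]]),
                  width=0.7, n_actions=4)
net.lms_update(state=np.array([0.9, 1.1]), action=2, td_target=0.8)
```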
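Finally, the double reward mechanism is described only by its purpose, discouraging blind search. One plausible two-part shaping, sketched below, pays for progress toward the goal and penalizes closing within a safety margin of the nearest obstacle; the weights, margin, and the decomposition itself are assumptions for illustration, not the paper's reward design.

```python
import numpy as np

def double_reward(pos, prev_pos, goal, obstacles,
                  safe_dist: float = 0.5, w_goal: float = 1.0, w_safe: float = 1.0) -> float:
    """Two-part reward: (1) progress made toward the goal this step,
    (2) a penalty that grows as the robot enters an obstacle's safety margin."""
    progress = (np.linalg.norm(np.subtract(prev_pos, goal))
                - np.linalg.norm(np.subtract(pos, goal)))
    d_obs = min((np.linalg.norm(np.subtract(pos, o)) for o in obstacles),
                default=float("inf"))
    safety = -max(0.0, safe_dist - d_obs)   # zero outside the safety margin
    return w_goal * progress + w_safe * safety

# Example: a step that advances toward the goal but grazes an obstacle.
r = double_reward(pos=(1.0, 1.0), prev_pos=(0.5, 1.0), goal=(3.0, 1.0),
                  obstacles=[(1.2, 1.1)])
```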