This paper introduces the D* Hindsight Deep Q-learning (DH-DQN) algorithm, which combines the D* algorithm and hindsight experience replay with deep Q-learning, and applies it to the port AGV path planning problem. To address the drawbacks of the D* algorithm, namely long decision times and large storage requirements, this paper proposes an improved DQN algorithm for the single-AGV path planning problem. Hindsight experience replay is used to alleviate the slow convergence or non-convergence caused by the sparse reward space. By selecting the actions planned by the D* algorithm with a certain probability, the DH-DQN algorithm achieves faster and more stable convergence, and its decision time is shorter than that of the classical D* algorithm. The DH-DQN algorithm is used to control an AGV performing different tasks under different layouts. Experimental results show that the DH-DQN algorithm not only avoids non-convergence in four different layouts but also converges faster than the classical DQN algorithm. Furthermore, a comparison of the decision times of the DH-DQN and DQN algorithms under the four layouts shows that DH-DQN saves 24.91\% of the time in the large-scale environment and 28.08\% in the small-scale environment. Therefore, the DH-DQN algorithm performs well in solving the port AGV path planning problem.
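
The core mechanism described above, deferring to the D*-planned action with a certain probability and otherwise acting epsilon-greedily on the learned Q-values, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the names p_dstar, dstar_action, and q_values, and the fixed probabilities, are hypothetical placeholders; the paper does not specify its probability schedule here.

import random
import numpy as np

def select_action(q_values, dstar_action, n_actions,
                  p_dstar=0.3, epsilon=0.1):
    """Choose an action for the current state (illustrative sketch).

    q_values     : 1-D array of Q(state, a) estimates from the DQN.
    dstar_action : action index proposed by the D* planner for this state.
    p_dstar      : probability of following the D* plan (assumed value;
                   in practice this would likely be decayed during training).
    epsilon      : exploration rate for the standard epsilon-greedy fallback.
    """
    if random.random() < p_dstar:
        # Follow the D* guidance to speed up and stabilize convergence.
        return dstar_action
    if random.random() < epsilon:
        # Standard random exploration.
        return random.randrange(n_actions)
    # Otherwise act greedily on the learned Q-values.
    return int(np.argmax(q_values))

# Example call with dummy values for a 4-action grid world:
# action = select_action(np.zeros(4), dstar_action=2, n_actions=4)

Early in training, when the Q-estimates are poor, the D*-guided branch supplies informative trajectories that, combined with hindsight relabeling of failed episodes, mitigates the sparse-reward problem; as learning progresses, the greedy branch increasingly dominates.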