Path planning is a key problem for coastal ships and a core foundation of intelligent ship development. To better realize ship path planning during navigation, this paper proposes a coastal ship path planning model based on an optimized deep Q-network (DQN) algorithm. The model consists of environment state information and the DQN algorithm. The environment state information provides the training space for the DQN algorithm and is quantified according to the actual navigation environment and the international rules for collision avoidance at sea. The DQN algorithm comprises four components: the ship state space, the action space, the action exploration strategy, and the reward function. The traditional DQN reward function may lead to low learning efficiency and slow convergence of the model. This paper optimizes the traditional reward function in three respects: (a) a potential-energy reward from the target point to the ship is set; (b) a reward area is added near the target point; and (c) a danger area is added near each obstacle. With these optimizations, the ship can avoid obstacles and reach the target point faster, and the convergence of the model is accelerated. The traditional DQN algorithm, the A* algorithm, the BUG2 algorithm, and the artificial potential field (APF) algorithm are selected for experimental comparison, and the experimental data are analyzed in terms of path length, planning time, and number of path corners. The experimental results show that the optimized DQN algorithm has better stability and convergence and greatly reduces computation time. It can plan an optimal path consistent with actual navigation rules and improves the safety, economy, and autonomous decision-making ability of ship navigation.
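The three reward-function optimizations described above can be sketched as a shaped reward for a planar ship agent. This is a minimal illustration, not the paper's exact formulation: the function name, the bonus/penalty magnitudes, and the radii of the reward and danger areas are all assumptions.

```python
import math

def shaped_reward(ship_pos, prev_pos, goal, obstacles,
                  reward_radius=2.0, danger_radius=1.5):
    """Hypothetical shaped reward combining the three optimizations."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # (a) Potential-energy reward: positive when the ship moves closer
    # to the target point, negative when it moves away.
    reward = dist(prev_pos, goal) - dist(ship_pos, goal)

    # (b) Reward area near the target point: bonus once inside the radius.
    if dist(ship_pos, goal) <= reward_radius:
        reward += 10.0

    # (c) Danger area near each obstacle: penalty when the ship gets
    # too close, steering learning away from risky states.
    for obs in obstacles:
        if dist(ship_pos, obs) <= danger_radius:
            reward -= 10.0

    return reward
```

Because the potential-energy term gives a dense signal at every step, the agent no longer has to stumble onto the sparse goal reward by chance, which is what accelerates convergence.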
In ship collision avoidance decision making, steering is the most frequently adopted avoidance maneuver. To obtain an effective and reasonable steering angle, this paper proposes a decision-making method for ship collision avoidance based on an improved cultural particle swarm algorithm. In the first stage, the direction of the ship's steering angle is determined: a Kalman filter is used to predict the ship's trajectory, the collision risk index is calculated from the predicted parameters, and the encounter situation with the most dangerous ship is judged. The steering direction is then determined in accordance with the Convention on the International Regulations for Preventing Collisions at Sea (COLREGs). In the second stage, the steering angle itself is calculated. Here, the cultural particle swarm optimization algorithm is improved by introducing an index of population premature-convergence degree that adaptively adjusts the inertia weight of the swarm, preventing the algorithm from falling into a premature-convergence state. The improved algorithm then searches for the optimal steering angle within the range fixed by the chosen steering direction. Compared with other evolutionary algorithms, the improved cultural particle swarm optimization algorithm has better global convergence, and its convergence speed and stability are also significantly improved. In the third stage, the steering-direction decision method and the steering-angle decision method are integrated into an electronic chart platform to verify the effectiveness of the proposed collision avoidance decision-making method. Results show that the proposed approach can automatically realize collision avoidance with respect to all other ships and has important practical application value.
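The adaptive inertia-weight rule in the second stage can be sketched as follows. This is a plausible reading under stated assumptions, not the paper's exact formula: the definition of the premature-convergence degree (here, the normalized gap between average and best fitness) and the weight bounds are assumptions.

```python
def adaptive_inertia_weight(fitness, w_max=0.9, w_min=0.4):
    """Hypothetical adaptive inertia weight driven by a
    premature-convergence degree (minimization problem assumed)."""
    f_best = min(fitness)                 # best (lowest) cost in the swarm
    f_avg = sum(fitness) / len(fitness)   # average cost of the swarm

    # Premature-convergence degree: near 0 when all particles cluster
    # around the best solution, larger when the swarm is still diverse.
    delta = abs(f_avg - f_best) / max(abs(f_avg), 1e-12)

    # Clustered swarm (delta -> 0) gets a large weight to restore
    # exploration and escape premature convergence; a diverse swarm
    # gets a small weight to refine the current search region.
    return w_max - (w_max - w_min) * min(delta, 1.0)
```

Tying the inertia weight to swarm diversity rather than to the iteration counter is what lets the algorithm react when it detects premature clustering, instead of decaying exploration on a fixed schedule.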
Deep Reinforcement Learning (DRL) is widely used in path planning owing to its powerful neural network fitting and learning ability. However, existing DRL-based methods use a discrete action space and ignore historical state information, so the algorithm cannot learn the optimal strategy, and the planned path contains arcs or too many corners that do not meet the actual sailing requirements of a ship. This paper proposes an optimized path planning method for coastal ships based on an improved Deep Deterministic Policy Gradient (DDPG) and the Douglas–Peucker (DP) algorithm. Firstly, Long Short-Term Memory (LSTM) is used to improve the network structure of DDPG: historical state information is used to approximate the current environmental state, making the predicted action more accurate. In addition, the traditional DDPG reward function may lead to low learning efficiency and slow convergence, so this paper improves the reward principle through a mainline reward function and an auxiliary reward function, which both helps plan a better path for the ship and improves the convergence speed of the model. Secondly, because too many turning points in the planned path may increase navigation risk, an improved DP algorithm is proposed to further optimize the planned path and make the final path safer and more economical. Finally, simulation experiments verify the proposed method in terms of path planning effect and convergence trend. Results show that the proposed method can plan safe and economical navigation paths and has good stability and convergence.
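The path-thinning step above rests on the classic Douglas–Peucker recursion, sketched below. This is the standard algorithm only; the paper's improved variant adds further optimizations (e.g. safety constraints) not reproduced here, and the tolerance value is an assumption.

```python
def douglas_peucker(points, tolerance):
    """Classic DP simplification: keep a point only if it lies farther
    than `tolerance` from the chord joining the segment endpoints."""
    if len(points) < 3:
        return list(points)

    def perp_dist(p, a, b):
        # Perpendicular distance from p to the line through a and b.
        dx, dy = b[0] - a[0], b[1] - a[1]
        if dx == 0 and dy == 0:
            return ((p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2) ** 0.5
        num = abs(dy * p[0] - dx * p[1] + b[0] * a[1] - b[1] * a[0])
        return num / (dx * dx + dy * dy) ** 0.5

    # Find the interior point farthest from the endpoint chord.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d

    if dmax <= tolerance:
        # All interior points are within tolerance: drop them.
        return [points[0], points[-1]]

    # Otherwise split at the farthest point, recurse, and merge.
    left = douglas_peucker(points[: idx + 1], tolerance)
    right = douglas_peucker(points[idx:], tolerance)
    return left[:-1] + right
```

Applied to a DRL-planned path, this removes the small zig-zag waypoints that contribute nothing to obstacle avoidance while keeping every genuine course change, which is exactly the "too many turning points" problem the abstract targets.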