To solve the problem of automatic recharging path planning for cleaning robots in complex industrial environments, this paper proposes two path-planning schemes: one for a designated charging location and one for multiple charging locations. First, the complex environment is modeled with an improved Maklink graph; next, the Dijkstra algorithm plans a global path, reducing the two-dimensional planning problem to one dimension; finally, an improved fruit fly optimization algorithm (IFOA) adjusts the path nodes to shorten the path. Simulation experiments show that this path-planning method enables a cleaning robot in a complex industrial environment to select a designated location or the nearest charging location for recharging when its remaining power is limited. The proposed improved algorithm features a small computational load, high precision, and fast convergence.
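The global-planning step above runs Dijkstra's algorithm over a graph of free-space nodes. A minimal sketch follows; the toy graph is a stand-in for the midpoints of Maklink link lines (the actual graph construction and the IFOA node adjustment are not reproduced here).

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a weighted graph given as {node: [(neighbor, cost), ...]}."""
    dist = {start: 0.0}
    prev = {}
    visited = set()
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk the predecessor chain back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy free-space graph: nodes stand in for Maklink link-line midpoints.
graph = {
    "S": [("A", 2.0), ("B", 5.0)],
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 1.0)],
    "C": [("G", 3.0)],
}
path, length = dijkstra(graph, "S", "G")
print(path, length)  # ['S', 'A', 'B', 'C', 'G'] 7.0
```

The one-dimensional path of nodes returned here is what a local optimizer such as IFOA would then refine by sliding each node along its link line.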
The path-planning approach largely determines how far a mobile robot can travel. To solve the path-planning problem of mobile robots in an unknown environment, a potential and dynamic Q-learning (PDQL) approach is proposed, which combines Q-learning with an artificial potential field and a dynamic reward function to generate a feasible path. The proposed algorithm significantly improves computing time and convergence speed over its classical counterpart. Experiments on simulated maps confirm that PDQL outperforms state-of-the-art algorithms on two metrics, path length and turning angle, when applied to mobile-robot path planning in unknown environments. The simulation results show the effectiveness and practicality of the proposal.
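One common way to combine Q-learning with an artificial potential field is to seed the Q-table with potential values instead of zeros, so the agent has a priori knowledge of the environment before learning starts. The sketch below illustrates that idea on a grid world; the Manhattan-distance potential and the grid layout are illustrative assumptions, not the paper's exact formulation.

```python
def potential(cell, goal):
    """Attractive potential: negative Manhattan distance to the goal
    (an assumed form; the paper's exact field is not specified here)."""
    return -(abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]))

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def init_q(width, height, goal):
    """Seed each Q-value with the potential of the successor state, so greedy
    action selection is biased toward the goal from the first episode."""
    q = {}
    for x in range(width):
        for y in range(height):
            for i, (dx, dy) in enumerate(ACTIONS):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    q[((x, y), i)] = float(potential((nx, ny), goal))
                else:
                    q[((x, y), i)] = float("-inf")  # action leaves the grid
    return q

goal = (4, 4)
q = init_q(5, 5, goal)
# Greedy action at (0, 0) already points toward the goal before any learning.
best = max(range(4), key=lambda i: q[((0, 0), i)])
print(ACTIONS[best])  # (1, 0)
```

Ordinary Q-learning updates then refine this table; the prior merely replaces the uninformative all-zero initialization that forces random exploration.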
Path planning is a major challenge for mobile robots, as the robot must reach the target position from the starting position while avoiding collisions with obstacles. This paper proposes a novel method, short and safe Q-learning (SSQL), to address the short and safe path-planning task of mobile robots. To remedy the slow convergence of Q-learning, an artificial potential field is used to avoid random exploration and to provide a priori knowledge of the environment for the mobile robot. Furthermore, to speed up the convergence of Q-learning and reduce computing time, a dynamic reward is proposed to guide the mobile robot toward the target point. The experiments are divided into two parts, short and safe path planning: the mobile robot reaches the target with the optimal path length in short path planning, and stays away from obstacles in safe path planning. Comparisons with state-of-the-art algorithms demonstrate the effectiveness and practicality of the proposed approach. In summary, relative to classical Q-learning, SSQL improves path length, computing time, and turning angle by 2.83%, 23.98%, and 7.98% in short path planning, and by 3.64%, 23.42%, and 12.61% in safe path planning. Furthermore, SSQL outperforms other optimization algorithms with shorter path lengths and smaller turning angles.
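A dynamic reward of the kind described above typically gives a dense signal that grows as the robot closes in on the target, instead of a sparse reward paid only at the goal. The following is a hypothetical shaping function for illustration, not the paper's exact formula; the bonus, penalty, and scaling constants are assumptions.

```python
import math

def dynamic_reward(state, next_state, goal, hit_obstacle, reached_goal):
    """Illustrative dynamic reward: a large bonus at the goal, a large penalty
    on collision, and otherwise a dense term proportional to the decrease in
    Euclidean distance to the target."""
    if reached_goal:
        return 100.0
    if hit_obstacle:
        return -100.0
    d_old = math.dist(state, goal)
    d_new = math.dist(next_state, goal)
    return 10.0 * (d_old - d_new) - 1.0  # step cost discourages wandering

# Moving toward the goal yields a higher reward than moving away from it.
toward = dynamic_reward((0, 0), (1, 0), (5, 0), False, False)
away = dynamic_reward((1, 0), (0, 0), (5, 0), False, False)
print(toward, away)  # 9.0 -11.0
```

Because every step is rewarded or penalized, value estimates propagate toward the start state far sooner than with a goal-only reward, which is the mechanism behind the faster convergence the abstract reports.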
The search algorithm plays an important role in the motion planning of a robot; it determines whether the mobile robot can complete its task. To solve search tasks in complex environments, a fusion algorithm based on the Flower Pollination Algorithm and Q-learning (FIQL) is proposed. To improve accuracy, an improved grid map is used in the environment-modeling stage, replacing the original static grid with a combination of static and dynamic grids. Second, the Flower Pollination Algorithm is combined with Q-learning to initialize the Q-table and accelerate the path search of the search-and-rescue robot. A combined static and dynamic reward function is proposed for the different situations the search-and-rescue robot encounters during the search, so that the robot receives appropriate feedback in each specific situation. The experiments are divided into two parts: path planning on typical and on improved grid maps. Experiments show that the improved grid map increases the success rate and that FIQL enables the search-and-rescue robot to accomplish its task in a complex environment. Compared with other algorithms, FIQL reduces the number of iterations, improves the robot's adaptability to complex environments, and offers short convergence time and small computational effort.
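The Flower Pollination Algorithm referenced above is a population-based optimizer that alternates between global pollination (a Lévy-flight-style move toward the current best solution) and local pollination (mixing two random solutions). The sketch below minimizes a simple test function; the heavy-tailed step, switch probability, and bounds are illustrative assumptions, and the paper's actual use is to seed a Q-table rather than minimize a benchmark.

```python
import math
import random

random.seed(0)

def heavy_step(scale=0.1):
    """Heavy-tailed step via a Cauchy draw; a simple stand-in (assumption) for
    the Lévy flight usually generated with Mantegna's algorithm."""
    return scale * math.tan(math.pi * (random.random() - 0.5))

def flower_pollination(objective, dim, n=15, iters=200, p=0.8, lo=-5.0, hi=5.0):
    """Minimal FPA: with probability p a flower moves toward the global best
    (global pollination); otherwise it mixes two random flowers (local)."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    best = min(pop, key=objective)[:]
    for _ in range(iters):
        for i in range(n):
            if random.random() < p:  # global pollination toward the best flower
                cand = [x + heavy_step() * (b - x) for x, b in zip(pop[i], best)]
            else:  # local pollination between two random flowers
                a, b = random.sample(pop, 2)
                eps = random.random()
                cand = [x + eps * (u - v) for x, u, v in zip(pop[i], a, b)]
            cand = [min(hi, max(lo, x)) for x in cand]  # keep within bounds
            if objective(cand) < objective(pop[i]):  # greedy acceptance
                pop[i] = cand
        best = min(pop + [best], key=objective)[:]
    return best

sphere = lambda v: sum(x * x for x in v)
best = flower_pollination(sphere, dim=2)
print(best, sphere(best))  # best should lie near the origin
```

In the fusion scheme the abstract describes, candidate solutions of this kind would encode paths whose quality is written back into the Q-table as its initial values, replacing a cold all-zero start.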