2023
DOI: 10.3390/app13127202
Autonomous Navigation of Robots: Optimization with DQN

Abstract: In the field of artificial intelligence, control systems for mobile robots have undergone significant advancements, particularly within the realm of autonomous learning. However, previous studies have primarily focused on predefined paths, neglecting real-time obstacle avoidance and trajectory reconfiguration. This research introduces a novel algorithm that integrates reinforcement learning with the Deep Q-Network (DQN) to empower an agent with the ability to execute actions, gather information from a simulate…

Cited by 9 publications (4 citation statements)
References 96 publications
“…However, while excelling in radioactive grid scenarios, it might lack adaptability across varied complex environments and might not prioritize real-time obstacle avoidance. In [ 33 ], a novel algorithm integrating reinforcement learning with the Deep Q-Network (DQN) aimed to empower agents for real-time decision making and trajectory planning in Gazebo simulations. However, this approach, while promoting exploration, might overlook real-time obstacle avoidance during trajectory planning, potentially limiting its effectiveness in dynamic environments.…”
Section: Related Work (mentioning; confidence: 99%)
“…The DQN model offers adaptive decision-making capabilities, learning from experience and adjusting its decision-making process based on real-world performance [42]. In our research, we implement the DQN to optimize the threshold time for releasing the DL model in the video surveillance system, as shown in Figure 3.…”
Section: DQN-Based Controlling Threshold Module (mentioning; confidence: 99%)
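The statement above describes DQN learning from experience by adjusting its decisions toward observed outcomes. At its core, DQN trains a network toward the bootstrapped target r + γ·max_a′ Q(s′, a′); a minimal tabular sketch of that update rule and the ε-greedy action choice (the state names, actions, rewards, and hyperparameters here are hypothetical, not taken from the cited system):

```python
import random
from collections import defaultdict

# Hypothetical action set for a threshold-control agent.
ACTIONS = ["wait", "release"]

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, done=False):
    """Move Q(s, a) toward the target r + gamma * max_a' Q(s', a').

    This is the tabular form of the regression target a DQN's network
    is trained on; alpha plays the role of the learning rate.
    """
    target = r if done else r + gamma * max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q[(s, a)]

def epsilon_greedy(Q, s, eps=0.1):
    """Explore with probability eps, otherwise act greedily on Q."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda b: Q[(s, b)])

Q = defaultdict(float)          # Q-values default to 0.0
q_update(Q, s="load_high", a="wait", r=1.0, s_next="load_low")
```

In a full DQN the table is replaced by a neural network trained on minibatches from a replay buffer, with a periodically frozen copy of the network supplying the target values.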
“…Traditional global path planning approaches, such as Dijkstra's algorithm [19], heavily rely on precomputed maps and face limitations in adapting to dynamic changes. In contrast, local path planning methods, like the Velocity Obstacle method and the Dynamic Window Approach, focus on real-time obstacle avoidance, considering the robot's immediate surroundings [20,21]. While suitable for reactive navigation, these methods may lack the ability to plan globally.…”
Section: Related Work (mentioning; confidence: 99%)
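The global-versus-local distinction drawn above can be made concrete with the global case: Dijkstra's algorithm computes shortest paths on a precomputed map and must replan if obstacles change. A minimal sketch on a hypothetical 4-connected occupancy grid (the grid and coordinates are illustrative, not from the cited work):

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest 4-connected path cost on an occupancy grid (1 = obstacle).

    Returns the number of steps from start to goal, or None if unreachable.
    Illustrates why the method depends on a precomputed map: any change
    to `grid` invalidates the result and forces a full replan.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]                      # min-heap of (cost, cell)
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                       # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None                            # goal unreachable

grid = [
    [0, 0, 0],
    [1, 1, 0],   # wall forces a detour through the right column
    [0, 0, 0],
]
print(dijkstra_grid(grid, (0, 0), (2, 0)))  # -> 6
```

Local methods like the Dynamic Window Approach instead score feasible velocity commands against nearby obstacles at each control step, which is why they react in real time but cannot guarantee a globally optimal route.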