2021 6th International Conference on Control and Robotics Engineering (ICCRE)
DOI: 10.1109/iccre51898.2021.9435666
Universal Artificial Pheromone Framework with Deep Reinforcement Learning for Robotic Systems

Cited by 7 publications (10 citation statements)
References 13 publications
“…Through the experiments, the suitability of the PhERS framework is also validated. Compared to our previous work [34], which first validated the PhERS framework, the experiments conducted in this study include more complex environments, the use of a more advanced DRL-based controller, and an additional metric for more elaborate analysis, so that the benefits of the proposed framework and the improved DRL-based controller can be investigated further.…”
Section: Methods
Mentioning confidence: 99%
“…Likewise, the communication network sends pheromone information from the agents to the main PhERS controller so that the released pheromone data is applied to the Phero-grids. A basic version of PhERS was tested in a simple environment scenario in [34].…”
Section: Artificial Pheromone Framework
Mentioning confidence: 99%
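The excerpt above describes a concrete data path: agents release pheromone, the release messages travel over a network to a central controller, and the controller writes them into shared Phero-grids that robots can later sense. The sketch below illustrates only that grid-update idea; the class, method names, grid resolution, and exponential-evaporation model are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

class PheroGrid:
    """Minimal sketch of a shared pheromone grid (hypothetical API).

    Agents report releases to a central controller, which deposits them
    into grid cells; each tick the whole grid evaporates exponentially.
    """

    def __init__(self, width, height, cell_size=0.1, evaporation=0.02):
        self.grid = np.zeros((height, width))  # pheromone strength per cell
        self.cell_size = cell_size             # meters per cell (assumed)
        self.evaporation = evaporation         # fraction lost per tick (assumed)

    def _cell(self, x, y):
        # Map a world coordinate (meters) to a grid index.
        return int(y / self.cell_size), int(x / self.cell_size)

    def deposit(self, x, y, amount):
        # Called when a release message arrives from an agent.
        row, col = self._cell(x, y)
        if 0 <= row < self.grid.shape[0] and 0 <= col < self.grid.shape[1]:
            self.grid[row, col] += amount

    def step(self):
        # Uniform exponential evaporation, applied once per control tick.
        self.grid *= 1.0 - self.evaporation

    def read(self, x, y):
        # Local pheromone value sensed by a robot at (x, y).
        row, col = self._cell(x, y)
        return float(self.grid[row, col])

# Example tick: two agents release near the same spot, the grid decays,
# and a third agent senses the accumulated, evaporated value.
grid = PheroGrid(width=100, height=100)
grid.deposit(1.0, 2.0, amount=1.0)
grid.deposit(1.05, 2.0, amount=0.5)
grid.step()
print(grid.read(1.0, 2.0))  # 1.5 deposited, minus one tick of evaporation
```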
“…Q-network: [23, 24, 39, 40, 59] (QMIX), [38] (DDQN), [93] (DDQN), [94] (DDQN), [95] (DQN), [96] (DQN); Policy Gradients: [46]–[50], [97]–[106]; DDPG: [22], [52]–[57]; PPO: [86], [88], [129]–[140]; Other: [141] (TRPO), [81] (TRPO), [142] (TD3), [143] (SAC), [144] (SAC)…”
Section: Q-network
Mentioning confidence: 99%
“…Ground Robots: [23, 24, 38, 46, 53, 56, 57, 68, 76, 83, 85, 91, 104, 106, 110, 120, 123, 134, 135, 138, 139, 141, 142, 161, 167, 171, 172, 186, 205, 207, 218–220, 230, 240, 269]; Manipulators: [22, 39, 49, 52, 54, 55, 59, 70, 71, 73–75, 77–79, …]…”
Section: Aerial Robots
Mentioning confidence: 99%
“…Although the feasibility and benefits of RL have been demonstrated and are expected to produce more promising results as the field develops, there is a critical issue in implementing RL for swarm robotic systems in real-world applications. In simulated environments, all data collected from the individuals are used for training on a central server [16]–[18]. This dependence on a central server also appears in Multi-Agent Reinforcement Learning (MARL) methods, a sub-domain of RL dealing with multi-agent problems.…”
Section: Introduction
Mentioning confidence: 99%
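The dependence this excerpt points out is easy to picture in code. Below is a hedged sketch, with entirely hypothetical names, of the centralized pattern being criticized: every robot streams its transitions to one server, and a single learner samples minibatches from the pooled buffer. Nothing here comes from the cited papers; it only illustrates why real-world swarms, which cannot reliably stream all experience to one machine, break this assumption.

```python
import random
from collections import deque

class CentralReplayServer:
    """Hypothetical central server pooling experience from a whole swarm.

    Illustrates the centralized-training dependence described above:
    all robots report transitions here, and one learner trains on them.
    """

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # shared replay buffer

    def report(self, agent_id, state, action, reward, next_state, done):
        # Every robot in the swarm must be able to reach this one server.
        self.buffer.append((agent_id, state, action, reward, next_state, done))

    def sample(self, batch_size):
        # The single central learner draws a minibatch across all agents.
        k = min(batch_size, len(self.buffer))
        return random.sample(list(self.buffer), k)

# The real-world failure mode: if a robot loses connectivity, its
# experience never reaches the learner, and training degrades.
server = CentralReplayServer()
server.report("robot_0", state=(0.0, 0.0), action=1, reward=0.1,
              next_state=(0.1, 0.0), done=False)
batch = server.sample(batch_size=32)
```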