2022
DOI: 10.1016/j.engappai.2022.105152
Online robot guidance and navigation in non-stationary environment with hybrid Hierarchical Reinforcement Learning

Cited by 8 publications (2 citation statements)
References 40 publications
“…On the other hand, DRL harnesses the efficacy of RL in addressing challenges in sequential decision-making [10]. This empowers the agent to gradually learn optimal decision strategies to maximize cumulative rewards or achieve specific goals through interactions with its environment [11,12].…”
Section: Introduction (mentioning)
Confidence: 99%
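The statement above summarizes the core RL loop: an agent improves its decision strategy by interacting with an environment and maximizing cumulative reward. As a rough illustration only, the sketch below runs tabular Q-learning on a hypothetical 4-state corridor task; the environment, reward, and hyperparameters are assumptions for illustration and are not drawn from the cited works or from the hybrid HRL method of the indexed paper.

```python
import numpy as np

# Illustrative sketch only: minimal tabular Q-learning showing how an agent
# learns a decision strategy that maximizes cumulative reward through repeated
# interaction with an environment. The 4-state corridor, its +1 goal reward,
# and all hyperparameters are assumptions, not taken from the cited papers.

n_states, n_actions = 4, 2               # states 0..3; actions: 0 = left, 1 = right
q_table = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

def step(state, action):
    """Move left/right in a 1-D corridor; reaching state 3 yields reward 1."""
    next_state = min(max(state + (1 if action == 1 else -1), 0), n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    done = next_state == n_states - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: bootstrap on the best estimated future return.
        td_target = reward + gamma * np.max(q_table[next_state]) * (not done)
        q_table[state, action] += alpha * (td_target - q_table[state, action])
        state = next_state

print(np.argmax(q_table, axis=1))        # learned greedy action for each state
```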
“…In the past two decades, numerous coverage optimization and/or control techniques have been developed for addressing a large class of area coverage problems using multiple autonomous agents deployed in two-dimensional spatial environments (Cortes et al. 2004; Wang and Hussein 2010; Leonard and Olshevsky 2013; Pimenta et al. 2013; …). In most cases, autonomous robots are deployed in dynamic or non-stationary environments (Zhou and Ho 2022). It is worth noting that most of the existing techniques for this large class of area coverage problems assume that communication/data-packet sharing among robots/agents is perfect, i.e., noise-free.…”
Section: Introduction (mentioning)
Confidence: 99%
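This second statement surveys multi-agent area coverage in two-dimensional environments. Purely as an illustration of that class of methods, the sketch below runs a discretized Lloyd-type (centroidal Voronoi) coverage iteration on a hypothetical unit-square domain with uniform density; the domain, robot count, grid resolution, and iteration count are assumptions, and the code does not reproduce any specific cited technique, nor does it model communication noise or non-stationarity.

```python
import numpy as np

# Illustrative sketch only: discretized Lloyd-type iteration for 2-D area
# coverage by multiple agents, in the spirit of coverage control methods such
# as Cortes et al. (2004). Domain, resolution, robot count, and iterations are
# assumptions for illustration.

rng = np.random.default_rng(1)
n_robots, grid_res, n_iters = 5, 50, 30

# Discretize the unit square into sample points with uniform density.
xs, ys = np.meshgrid(np.linspace(0, 1, grid_res), np.linspace(0, 1, grid_res))
points = np.column_stack([xs.ravel(), ys.ravel()])

robots = rng.random((n_robots, 2))       # random initial robot positions

for _ in range(n_iters):
    # Assign each sample point to its nearest robot (a discrete Voronoi cell).
    dists = np.linalg.norm(points[:, None, :] - robots[None, :, :], axis=2)
    owner = np.argmin(dists, axis=1)
    # Lloyd update: move each robot to the centroid of its assigned cell.
    for i in range(n_robots):
        cell = points[owner == i]
        if len(cell) > 0:
            robots[i] = cell.mean(axis=0)

print(robots)    # robots end up spread roughly evenly across the square
```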