2021
DOI: 10.48550/arxiv.2110.09018
Preprint

Reinforcement Learning-Based Coverage Path Planning with Implicit Cellular Decomposition

Javad Heydari,
Olimpiya Saha,
Viswanath Ganapathy

Abstract: Coverage path planning in a generic known environment is shown to be NP-hard. When the environment is unknown, it becomes more challenging as the robot is required to rely on its online map information built during coverage for planning its path. A significant research effort focuses on designing heuristic or approximate algorithms that achieve reasonable performance. Such algorithms have sub-optimal performance in terms of covering the area or the cost of coverage, e.g., coverage time or energy consumption. […]

Cited by 3 publications (4 citation statements)
References 49 publications
“…In [14], the authors develop an abstraction model for CPP scenarios to find a specific coverage path solution with DRL. In [17], the CPP problem is reformulated as an optimal time-stopping problem, and the authors demonstrate a solution to this problem with a deep Q-network approach for indoor environments. Kyaw et al. [15] use a classical cellular decomposition approach to re-pose the CPP problem as a traveling salesman problem, and then solve it with the REINFORCE algorithm.…”
Section: Related Work
confidence: 99%
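The optimal time-stopping view of coverage mentioned in the statement above can be illustrated with a toy sketch: an agent moves over a small grid, has an explicit "stop" action, and is rewarded for covered area minus a per-step cost, so learning when to stop becomes part of the policy. This is a tabular Q-learning toy for illustration only; the cited work [17] uses a deep Q-network in indoor environments, and every constant, reward value, and name below is an assumption made here, not taken from that paper.

import random
from collections import defaultdict

GRID = 3                                   # toy 3x3 grid (assumed size)
ACTIONS = ["up", "down", "left", "right", "stop"]
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(pos, covered, action):
    """One transition: small cost per move, coverage bonus when stopping."""
    if action == "stop":
        return pos, covered, len(covered) / (GRID * GRID), True
    dr, dc = MOVES[action]
    new_pos = (min(max(pos[0] + dr, 0), GRID - 1),
               min(max(pos[1] + dc, 0), GRID - 1))
    return new_pos, covered | {new_pos}, -0.01, False

Q = defaultdict(float)                     # Q[(state, action)], state = (position, covered cells)
alpha, gamma, eps = 0.5, 0.99, 0.1         # illustrative hyperparameters

for _ in range(5000):
    pos, covered, done = (0, 0), frozenset({(0, 0)}), False
    while not done:
        state = (pos, covered)
        action = (random.choice(ACTIONS) if random.random() < eps
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        pos, covered, reward, done = step(pos, covered, action)
        target = reward if done else reward + gamma * max(Q[((pos, covered), a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])

Because stopping early forfeits coverage reward while extra moves accrue cost, the learned value of "stop" encodes the trade-off that the time-stopping formulation makes explicit.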
“…This ablation study explores the impact of different action masks on learning performance. Multi3 agents (i.e., agents trained on three maps, see Table I) were trained using either no mask, the valid mask (14), the immediate mask (15), or the invariant mask (17). For the purpose of this comparison, agents that violated a constraint received a penalty of r_s = 5 (r_c = 0.01 and r_m = 0.02), and the episode was terminated.…”
Section: A. Action Masking Ablation
confidence: 99%
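The masking setup described in the quoted ablation can be sketched as follows: masked (disallowed) actions are assigned a value of negative infinity before greedy selection, while the unmasked baseline instead penalises a violation with r_s = 5 and terminates the episode. The function names and Q-values below are illustrative assumptions; how the valid, immediate, and invariant masks are actually constructed is given by the citing paper's equations (14), (15), and (17), which are not reproduced here.

import numpy as np

R_S = 5.0                                  # violation penalty from the quoted setup

def masked_greedy_action(q_values, mask):
    """Greedy action restricted to the entries the boolean mask allows."""
    return int(np.argmax(np.where(mask, q_values, -np.inf)))

def unmasked_outcome(step_reward, violated):
    """Without a mask, a constraint violation is penalised and ends the episode."""
    if violated:
        return step_reward - R_S, True     # (reward, terminate episode)
    return step_reward, False

# Hypothetical example: four movement actions, one of which would violate a constraint.
q = np.array([0.3, 1.2, -0.4, 0.8])
mask = np.array([True, False, True, True])
best = masked_greedy_action(q, mask)       # picks index 3, since action 1 is masked out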
“…We note that, for such sensor-based path planning and/or target detection problems, machine learning-based methods have recently been on the agenda. In particular, the reinforcement learning (RL) approach appears to be gaining popularity as an adaptive optimization methodology [8][9][10][11][12].…”
Section: Introduction
confidence: 99%
“…In the service industry, AI-driven robots are employed in customer interactions, from chatbots resolving inquiries to robots providing room service in hotels. Household robots, such as smart vacuum cleaners, leverage machine learning to map and clean spaces effectively [16][17][18].…”
Section: Introduction
confidence: 99%