2023
DOI: 10.1007/978-3-031-26564-8_2

Deep Reinforcement Learning Applied to Multi-agent Informative Path Planning in Environmental Missions

Cited by 4 publications (3 citation statements)
References: 26 publications
“…One may choose to use RL in such systems as it allows agents to learn coordinated policies that may be complicated and time-consuming to hard code. Yanes Luis et al (2022) use DQN with prioritised experience replay to develop a multi-agent framework for IPP. The proposed method is tested and applied to the Peralta et al (2020)…”
Section: Methodologies in Active Environmental Monitoring (mentioning)
confidence: 99%
“…Additionally, the partial observability of our proposed environment is mitigated by implementing a first phase of homogeneous patrolling, as the pollution distribution along the lake at the beginning of the episode is unknown. In the context of multi-agent patrolling of environmental missions, [6] and [28] proposed a scheme that employs Deep Q-Learning with a Convolutional Neural Network as a shared fleet policy and utilizes a global visual state. In [6], a decoupled final layer was proposed for N agents with |A| possible actions.…”
Section: Related Work (mentioning)
confidence: 99%
“…In fully cooperative environments that employ joint reward signals, agents face credit assignment challenges when determining the impact of their actions on team performance. To tackle the credit assignment problem, [28] proposed a decoupled reward in which each agent receives a reward only for their individual contributions, without any additional considerations. They demonstrated that their approach is effective in addressing the credit assignment problem, which motivates this work to employ it.…”
Section: Related Work (mentioning)
confidence: 99%