2020
DOI: 10.1109/access.2020.2997304
IADRL: Imitation Augmented Deep Reinforcement Learning Enabled UGV-UAV Coalition for Tasking in Complex Environments

Abstract: Recent developments in Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) have made them highly useful for various tasks. However, they both have their respective constraints that make them incapable of completing intricate tasks alone in many scenarios. For example, a UGV is unable to reach high places, while a UAV is limited by its power supply and payload capacity. In this paper, we propose an Imitation Augmented Deep Reinforcement Learning (IADRL) model that enables a UGV and UAV to form a…
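The excerpt above does not show the paper's actual loss formulation, but the core idea of imitation-augmented deep RL — adding an expert-imitation (behavior-cloning) term to a reinforcement learning objective — can be sketched generically. The linear-softmax policy, the `beta` weight, and the REINFORCE-style surrogate below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def imitation_augmented_loss(theta, state, expert_action, action, reward, beta=0.5):
    """Combine a REINFORCE-style RL term with a behavior-cloning term.

    theta:         (n_actions, state_dim) weights of a hypothetical
                   linear-softmax policy pi(a|s) = softmax(theta @ s)
    expert_action: action demonstrated by the expert (imitation target)
    action/reward: action the agent took and the reward it earned
    beta:          weight of the imitation term relative to the RL term
    """
    probs = softmax(theta @ state)
    rl_loss = -reward * np.log(probs[action])    # policy-gradient surrogate
    bc_loss = -np.log(probs[expert_action])      # behavior-cloning (imitation) term
    return rl_loss + beta * bc_loss

# With an untrained (uniform) policy over 3 actions, each term reduces to log(3).
theta = np.zeros((3, 4))
state = np.ones(4)
print(imitation_augmented_loss(theta, state, expert_action=1, action=2, reward=1.0))
```

Minimizing such a combined objective pulls the policy toward expert demonstrations early on while still allowing reward-driven improvement, which is the general motivation behind imitation-augmented RL.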

Cited by 19 publications
(14 citation statements)
References 25 publications
“…As shown in FIGURE 24(a), this method can automatically learn a navigation strategy, imitate expert data, and be applied to multi-robot systems. As shown in FIGURE 24(b), Zhang et al [123] proposed an imitation-augmented learning model (IADRL) that drives UAVs to cooperate with one another to complete more complex tasks. The model provides a strategy for multi-UAV systems that lets them complete tasks at minimum system cost.…”
Section: Figure 23 Imitation Learning In A 3D Simulation Environment
confidence: 99%
“…An agent interfaces with all levels of control. This methodology constrains a trained agent to the specific vehicle it was trained on, as it has learned not only how to avoid obstacles and adapt to a changing environment, but also that vehicle’s unique dynamics ( Cheng and Zhang, 2018 ; Codevilla et al, 2018 ; Zhang et al, 2020 ; Zhou et al, 2020 ). These implementations would likely struggle to generalize in any cross-vehicle or cross-domain setting without re-training of the agent.…”
Section: Related Work
confidence: 99%
“…There have been several works on RFID-integrated UAV-aided networks [11][12][13][14][15][16][17][18][19][20][21][22][23]. Buffi et al in [11] presented a drone-based UHF-RFID tag identification scheme that can be applied in indoor scenarios and VANET environments.…”
Section: Related Work 1.2.1 RFID-Integrated UAV-Aided Network
confidence: 99%
“…In addition, Won et al in [17] and Zhang et al in [18] introduced machine learning and deep reinforcement learning to RFID-integrated UAV-aided networks and investigated tag estimation schemes. Indoor positioning [19], resource allocation [20], tag localization [21], remote sensing [22], and electromagnetic modeling [23] have also been investigated.…”
Section: Related Work 1.2.1 RFID-Integrated UAV-Aided Network
confidence: 99%