2019
DOI: 10.3390/drones3030072

Deep Reinforcement Learning for Drone Delivery

Abstract: Drones are expected to be used extensively for delivery tasks in the future. In the absence of obstacles, satellite-based navigation from departure to a geo-located destination is a simple task. When obstacles are known to lie along the path, pilots must build a flight plan to avoid them. However, when the obstacles are unknown, too numerous, or not at fixed positions, building a safe flight plan becomes very challenging. Moreover, in a weak-satellite-signal environment, such as indoors…

Cited by 50 publications (25 citation statements)
References 22 publications (29 reference statements)
“…In addition to providing ground wireless connectivity, there is a plethora of areas where UAVs could be used efficiently, such as drone delivery. In this context, achieving drone delivery tasks through DRL was investigated in [90]. The authors used double DQN to propose a path planning algorithm for UAVs whose objective is to reach a destination in an obstacle-impaired environment.…”
Section: Update Rule
confidence: 99%
“…The proposed solution improves on the authors' previous work in [91], where three DRL algorithms were tested: DQN, double DQN, and dueling DQN. As double DQN gave the best results, the same algorithm was used in [90], with the depth information deduced from the image of the UAV's stereo-vision front camera as input. More futuristic ideas are discussed in the literature, such as using drones to serve food and drinks in restaurants [92].…”
Section: Update Rule
confidence: 99%
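The double DQN rule these excerpts refer to decouples action selection from action evaluation, which reduces the value overestimation of plain DQN. A minimal sketch with toy Q-value tables standing in for trained networks (the values and `gamma` here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def double_dqn_target(q_online, q_target, next_state, reward, done, gamma=0.99):
    """Double DQN target: the online network picks the greedy action,
    and the target network evaluates that action."""
    a_star = int(np.argmax(q_online(next_state)))    # action selection
    bootstrap = float(q_target(next_state)[a_star])  # action evaluation
    return reward + (0.0 if done else gamma * bootstrap)

# Toy stand-ins: fixed Q-value tables instead of trained networks.
q_online = lambda s: np.array([1.0, 3.0, 2.0])   # greedy action index = 1
q_target = lambda s: np.array([0.5, 1.5, 4.0])   # evaluates that action as 1.5
y = double_dqn_target(q_online, q_target, next_state=None, reward=1.0, done=False)
print(round(y, 3))  # 1.0 + 0.99 * 1.5 = 2.485
```

In the drone-delivery setting described above, `next_state` would be the depth map from the stereo-vision front camera rather than a placeholder.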
“…The work in [33] addressed this limitation by adopting the DDPG algorithm with a continuous action space for UAV navigation in 3D space, yet still in an unrealistic simulated environment. The work in [34] applies a DQN algorithm to a drone delivery task, where the UAV tries to reach a predefined goal while avoiding obstacles based on depth images. This approach's main drawback is that it uses a discrete action space to guide the UAV, and it was tested only in simulation.…”
Section: Related Work
confidence: 99%
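The discrete-versus-continuous distinction drawn in this excerpt can be made concrete. A hedged sketch, where the move set, `v_max`, and the actor's tanh squashing are illustrative assumptions rather than details from [33] or [34]:

```python
import numpy as np

# Discrete control (DQN-style, as in [34]): choose one of a fixed set of moves.
MOVES = ["forward", "left", "right", "up", "down"]

def dqn_act(q_values):
    """Greedy discrete action: the move with the largest Q-value."""
    return MOVES[int(np.argmax(q_values))]

# Continuous control (DDPG-style, as in [33]): the actor emits a velocity
# vector directly; tanh keeps each component inside the actuator limit.
def ddpg_act(actor_output, v_max=2.0):
    return v_max * np.tanh(np.asarray(actor_output, dtype=float))

print(dqn_act([0.1, 0.7, 0.2, 0.0, -0.3]))  # -> "left"
print(ddpg_act([0.5, -3.0, 0.0]))           # bounded 3D velocity command
```

The discrete variant is simpler to train but quantizes the drone's motion; the continuous variant allows smooth trajectories at the cost of a harder actor-critic optimization.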
“…In addition to robot control, RL is applied effectively in various fields. For convenience in daily life, RL has been applied to drone delivery, home energy system optimization, autonomous driving, and automatic parking systems [12][13][14][15]. In Internet of Things devices and networks, RL is mainly used to control traffic and congestion in complex situations.…”
Section: Learning From Demonstration
confidence: 99%
“…Moreover, this area of research is directly related to drone control problems, where RL has been applied to design drones with obstacle avoidance. The data obtained from the sensor module mounted on the drone are used to configure the environment and state of the RL model, and the drone is controlled by an algorithm designed to maximize the reward value obtained from operation [12,23]. RL is also used to design energy management systems that determine the balance between agents and optimal scheduling strategies.…”
Section: Learning From Demonstration
confidence: 99%
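The reward maximization described in this excerpt typically combines a progress term, a goal bonus, and a collision penalty computed from onboard sensor readings. A minimal illustrative shaping function; the constants and the exact terms are assumptions, not taken from the cited works:

```python
def shaped_reward(dist_prev, dist_now, collided, reached):
    """Illustrative navigation reward: penalize collisions, reward
    reaching the goal, and otherwise pay out the distance progress
    made toward the goal during this step."""
    if collided:
        return -10.0      # terminal penalty on crash
    if reached:
        return +10.0      # terminal bonus at the goal
    return dist_prev - dist_now  # positive when the drone moved closer

print(shaped_reward(5.0, 4.0, collided=False, reached=False))  # 1.0 (progress)
print(shaped_reward(0.3, 0.0, collided=False, reached=True))   # 10.0 at goal
print(shaped_reward(2.0, 2.0, collided=True, reached=False))   # -10.0 on crash
```

In practice `dist_prev` and `dist_now` would come from the drone's positioning or depth sensors at consecutive control steps, and the agent's policy is trained to maximize the discounted sum of these rewards.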