2020
DOI: 10.2991/ijcis.d.200615.002
Accuracy Improvement of Autonomous Straight Take-off, Flying Forward, and Landing of a Drone with Deep Reinforcement Learning

Abstract: Nowadays, drones are expected to be used in several engineering and safety applications both indoors and outdoors, e.g., exploration, rescue, sport, entertainment, and convenience. Among these applications, it is important to make a drone capable of flying autonomously to carry out an inspection patrol. In this paper, we present a novel method that uses ArUco markers as a reference to improve the accuracy of a drone on autonomous straight take-off, flying forward, and landing based on Deep Reinforcement Learning. …

Cited by 15 publications (17 citation statements). References 13 publications (20 reference statements).
“…At every altitude that the quadrotor rises to, the corresponding pixels are computed using the color segmentation technique. To segment the color, the RGB image is first converted to YCbCr format using Equation (11). The same radius color pads were used for each range sensor. The sensors are then orthogonalized to the center of the allocated color pads (refer to Figure 8).…”
Section: Camera ToF Calibration Implementation and Results
confidence: 99%
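The excerpt's Equation (11) is not reproduced here, so as a minimal sketch the conversion below assumes the standard full-range ITU-R BT.601 RGB-to-YCbCr matrix; the `rgb_to_ycbcr` helper and its naming are hypothetical, not the cited paper's implementation.

```python
import numpy as np

# Full-range BT.601 RGB -> YCbCr coefficients (assumed; the cited
# paper's Equation (11) may use a different variant).
_M = np.array([[ 0.299,     0.587,     0.114   ],
               [-0.168736, -0.331264,  0.5     ],
               [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 uint8 RGB image to YCbCr (same shape, uint8)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    ycbcr = rgb @ _M.T
    ycbcr[..., 1:] += 128.0  # offset the two chroma channels
    return np.clip(np.round(ycbcr), 0, 255).astype(np.uint8)
```

Segmenting a color pad would then reduce to thresholding the Cb/Cr channels, which are far less sensitive to brightness changes than raw RGB.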
See 1 more Smart Citation
“…At every altitude that the quadrotor rises to, the corresponding pixels are computed using the color segmentation technique. To segment the color, the RGB image is first converted to YCBCR format using Equation (11) The same radius color pads were used for each range sensor. The sensors are then orthogonalized to the center of the allocated color pads (refer to Figure 8).…”
Section: Camera Tof Calibration Implementation and Resultsmentioning
confidence: 99%
“…While that algorithm uses deep learning, reinforcement learning, which includes reward- and penalty-based policy making, has progressed at the same rate. Chang et al. (2020) utilized ArUco markers as a reference to augment the accuracy of autonomous drones performing straight take-off, flying forward, and landing, based on Deep Reinforcement Learning (DRL) [11]. Deep Q-Networks (DQNs) for navigation and Double Deep Q-Networks (DDQNs) have been investigated for an environment with noisy visual data, improving the system's ability to generalize efficiently in occluded areas [12].…”
Section: Literature Review
confidence: 99%
“…Namely, rule-based control strategies are unfit for time-varying traffic conditions, and they also have difficulty coping with unexpected traffic situations in a real-world environment. On the other hand, some control strategies rely on absolute positioning information from the Global Positioning System (GPS), which can cause precision and availability problems: in some scenarios, GPS may not be precise or available because of signal attenuation and multipath propagation [1][2][3]. Hence, instead of rule-based control strategies and absolute positioning information, we exploited DRL with a CNN to indirectly use relative positioning information to enhance the quality and safety of autonomous driving control.…”
Section: Introduction
confidence: 99%
“…This helped us improve the portability of our autonomous control strategy to a real vehicle in a real-world environment later. For instance, in [3,8,9], the vision information captured from the simulation software was utilized directly, which may reduce the level of confidence when realizing the designs in a real-world environment.…”
Section: Introduction
confidence: 99%
“…In a nutshell, dynamic programming provides the mathematical structure for reinforcement learning. Applications of dynamic programming can be found in robotics [4], autonomous vehicles [5], drones [6], and water networks [7], to name a few examples. In this work, we strive to combine these results with safety [8].…”
Section: Introduction
confidence: 99%
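The claim that dynamic programming underpins reinforcement learning can be made concrete with value iteration, the canonical DP algorithm behind the Bellman backup. The toy corridor MDP below (states, rewards, and discount are all hypothetical, chosen only for illustration) repeatedly applies the Bellman optimality update until the value function converges.

```python
import numpy as np

N_STATES, GAMMA = 4, 0.9  # toy 1-D corridor; state 3 is terminal

def step(s, a):
    """Deterministic transition: a=0 moves left, a=1 moves right."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    r = 1.0 if (s2 == N_STATES - 1 and s != N_STATES - 1) else 0.0
    return s2, r

def value_iteration(tol=1e-8):
    """Bellman backup V(s) <- max_a [r + gamma * V(s')] until convergence."""
    V = np.zeros(N_STATES)
    while True:
        V_new = np.zeros(N_STATES)
        for s in range(N_STATES - 1):  # terminal state keeps value 0
            V_new[s] = max(r + GAMMA * V[s2]
                           for s2, r in (step(s, a) for a in (0, 1)))
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

Reinforcement-learning methods such as Q-learning approximate this same backup from sampled transitions when the transition model `step` is unknown.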