2018
DOI: 10.1177/1756829318757470

Deep learning for vision-based micro aerial vehicle autonomous landing

Abstract: Vision-based techniques are widely used in micro aerial vehicle autonomous landing systems. Existing vision-based autonomous landing schemes tend to detect specific landing landmarks by identifying straightforward visual features such as shapes and colors. Though efficient to compute, these schemes only apply to landmarks with limited variability and require strict environmental conditions such as consistent lighting. To overcome these limitations, we propose an end-to-end landmark detection system based…
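The abstract describes an end-to-end landmark detector without giving implementation details, but the core idea of single-stage detection can be illustrated by the grid encoding such detectors regress: the landmark's box center is assigned to one grid cell and predicted as cell-relative offsets plus a normalized size. The sketch below is a hedged illustration of that encoding; the grid size, image size, and function names are assumptions, not the paper's actual design.

```python
# Illustrative YOLO-style target encoding for a single-stage landmark
# detector. Grid size and image size are assumed values, not taken from
# the paper under discussion.

def encode_landmark(cx, cy, w, h, img_size=448, grid=7):
    """Map a landmark box (center cx, cy and size w, h in pixels) to the
    responsible grid cell plus cell-relative offsets and normalized size."""
    cell = img_size / grid                # width/height of one grid cell
    col, row = int(cx // cell), int(cy // cell)
    tx = cx / cell - col                  # x offset within the cell, in [0, 1)
    ty = cy / cell - row                  # y offset within the cell, in [0, 1)
    tw, th = w / img_size, h / img_size   # box size relative to the image
    return row, col, (tx, ty, tw, th)

def decode_landmark(row, col, t, img_size=448, grid=7):
    """Invert the encoding back to pixel coordinates."""
    tx, ty, tw, th = t
    cell = img_size / grid
    return ((col + tx) * cell, (row + ty) * cell, tw * img_size, th * img_size)
```

Encoding a box centered at (224, 100) on a 448-pixel image with a 7-cell grid assigns it to cell (row 1, col 3) with offsets (0.5, 0.5625), and decoding recovers the original pixel coordinates exactly.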


Cited by 21 publications (13 citation statements) · References 17 publications
“…In addition, they introduced Profile Checker V2 to improve accuracy. As a result, their method could operate at a maximum range of 50 m. Similarly, Yu et al. [14] introduced a deep-learning-based method for MAV autonomous landing systems, adopting a variant of the YOLO detector to detect landmarks. The system achieved high marker-detection accuracy and exhibited robustness to varying conditions, such as changes in landmark appearance under different lighting and backgrounds.…”
Section: Related Work
confidence: 99%
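Detection-accuracy claims like the one quoted above are conventionally scored with intersection-over-union (IoU) between the predicted and ground-truth marker boxes. A minimal sketch follows; the (x_min, y_min, x_max, y_max) box format and the 0.5 threshold are common conventions assumed here, not details taken from the cited papers.

```python
# Minimal IoU check, the standard metric behind detection-accuracy figures.
# Boxes are (x_min, y_min, x_max, y_max); the 0.5 threshold is a common
# convention, not something specified by the cited work.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_correct_detection(pred, truth, threshold=0.5):
    """A prediction counts as correct when its IoU clears the threshold."""
    return iou(pred, truth) >= threshold
```

Two boxes overlapping by half their width score an IoU of 1/3, below the usual 0.5 bar; a near-perfect prediction shifted by one pixel easily clears it.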
“…However, its target was only logo detection, which differs from our research on marker detection with a drone camera. Although the methods in [ 13 , 14 , 21 ] achieved 99% accuracy for landmark or marker detection in field experiments, they assumed only slow drone movement or landing, which does not produce motion blur. In the actual case of drone movement or landing at normal speed, however, motion blur occurs frequently, as noted in [ 20 ].…”
Section: Related Work
confidence: 99%
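The motion-blur concern raised above can be illustrated with a toy model: camera translation during exposure acts roughly like a horizontal box blur, averaging neighboring pixels and softening the sharp black/white edges that marker detectors rely on. The sketch below is purely illustrative; the 1-D profile and kernel width are invented for the example.

```python
# Toy illustration of why motion blur hurts marker detection: a horizontal
# box blur (a crude model of camera translation during exposure) averages
# neighboring pixels, flattening the sharp edge of a marker. The kernel
# width and profile are arbitrary illustrative choices.

def motion_blur_1d(row, width):
    """Blur a 1-D pixel row with a box kernel of the given width."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - width // 2), min(n, i + width // 2 + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def edge_strength(row):
    """Largest absolute intensity difference between neighboring pixels."""
    return max(abs(b - a) for a, b in zip(row, row[1:]))

edge = [0.0] * 8 + [1.0] * 8        # sharp black-to-white marker edge
blurred = motion_blur_1d(edge, 5)   # simulated motion blur
```

The unblurred edge has a neighbor-to-neighbor contrast of 1.0; after the 5-pixel blur the strongest remaining step drops to 0.2, which is the kind of degradation that makes threshold- and shape-based marker detection fail at normal landing speeds.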
“…Their method can detect the accurate center and direction of a marker using Profile Checker v2, and the drone can be operated at distances of up to 50 m. Similarly, Yu et al. proposed a vision-guided autonomous landing system for MAVs based on a modified SqueezeNet and a you only look once (YOLO) model for detecting landmarks [17]. The system is robust to variations in landmarks under different lighting conditions and backgrounds.…”
Section: Related Work
confidence: 99%
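In a vision-guided landing pipeline like the ones summarized above, detection is typically followed by a guidance step that converts the marker's pixel offset from the image center into lateral velocity commands. The sketch below shows one common choice, a clamped proportional controller; the gains, image size, and clamping limits are invented for illustration and do not come from the cited systems.

```python
# Hypothetical guidance step following marker detection: a clamped
# proportional controller mapping pixel error to lateral velocity.
# Gain, image size, and velocity limit are illustrative assumptions.

def landing_velocity(marker_px, img_w=640, img_h=480, gain=0.005, v_max=1.0):
    """Return (vx, vy) in m/s steering the vehicle over the detected marker."""
    ex = marker_px[0] - img_w / 2          # horizontal pixel error
    ey = marker_px[1] - img_h / 2          # vertical pixel error
    clamp = lambda v: max(-v_max, min(v_max, v))
    return clamp(gain * ex), clamp(gain * ey)
```

A marker at the image center yields a zero command, a marker 100 pixels right commands 0.5 m/s, and larger errors saturate at the 1.0 m/s limit so noisy detections cannot demand unsafe speeds.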
“…Yu et al. [3] propose an end-to-end landmark detection system based on a deep convolutional neural network, together with an embedded implementation on a graphics processing unit, to perform vision-based autonomous landing.…”
confidence: 99%