2017
DOI: 10.1007/978-3-319-59147-6_51
Visual SLAM with a RGB-D Camera on a Quadrotor UAV Using on-Board Processing

Cited by 35 publications (11 citation statements)
References 29 publications
“…As a future work, we extrapolate video freeze detection to onboard applications as people detection [28] [29], navigation [30] [31] [32], obstacle avoidance [33] [34] [35], and mapping [36].…”
Section: Discussion
confidence: 99%
“…Representative applications include virtual reality and augmented reality fields, which are to render the virtual objects in the environment according to the map information and the current perspective information from SLAM, and the sense of reality of the virtual objects can be greatly enhanced. In the field of UAV, 126 SLAM can be used for map building, 127 autonomous obstacle avoidance, 128 and path planning. 129,130 In the unmanned vehicle field, SLAM technology provides visual function of odometer for mixing with other location techniques.…”
Section: Simultaneous Localization and Mapping
confidence: 99%
“…These techniques use measurements from sensors like Kinect [6], Light Detection and Ranging (LIDAR), Sound Navigation and Ranging (SONAR), optical flow, stereo and monocular cameras for computation. SLAM algorithms utilize measurements from a single sensor [7] or a combination of sensors [8] to build or update a map of the environment surrounding the UAV while simultaneously using the same to estimate the UAV's position. The SfM approaches use measurements from sensors like optical flow [9] and/or a moving monocular camera [10] to determine depth map and the 3D structure.…”
Section: Introduction
confidence: 99%
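The statement above captures the core SLAM loop: measurements of landmarks are used both to correct the vehicle's pose estimate and to extend the map. A minimal toy sketch of that loop is given below; it is not the cited paper's RGB-D method, and the function and variable names are illustrative assumptions. Real systems (EKF-SLAM, pose-graph SLAM) weight measurements by uncertainty rather than averaging them.

```python
def slam_step(pose, landmark_map, observations):
    """One simplified SLAM iteration (illustrative only).

    pose:         (x, y) current position estimate
    landmark_map: {id: (x, y)} landmarks already in the world map
    observations: {id: (dx, dy)} sensor measurements relative to the vehicle
    """
    # Localization: each re-observed landmark implies a pose estimate
    # (landmark position minus its relative measurement); average them.
    known = [lid for lid in observations if lid in landmark_map]
    if known:
        xs = [landmark_map[l][0] - observations[l][0] for l in known]
        ys = [landmark_map[l][1] - observations[l][1] for l in known]
        pose = (sum(xs) / len(xs), sum(ys) / len(ys))

    # Mapping: anchor newly seen landmarks using the corrected pose,
    # so the same measurements update both the map and the position.
    for lid, (dx, dy) in observations.items():
        if lid not in landmark_map:
            landmark_map[lid] = (pose[0] + dx, pose[1] + dy)
    return pose, landmark_map


# Example: one known landmark corrects the pose; one new landmark is mapped.
pose, world = slam_step(
    pose=(0.0, 0.0),
    landmark_map={0: (2.0, 0.0)},
    observations={0: (1.0, 0.0), 1: (0.0, 3.0)},
)
# pose is corrected to (1.0, 0.0); landmark 1 is placed at (1.0, 3.0)
```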