2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012)
DOI: 10.1109/iros.2012.6386146

Low-power parallel algorithms for single image based obstacle avoidance in aerial robots

Abstract: For an aerial robot, perceiving and avoiding obstacles are necessary skills to function autonomously in a cluttered, unknown environment. In this work, we use a single image captured from the onboard camera as input, produce obstacle classifications, and use them to select an evasive maneuver. We present a Markov Random Field based approach that models the obstacles as a function of visual features and non-local dependencies in neighboring regions of the image. We perform efficient inference using new …
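The abstract describes per-region obstacle classification with a Markov Random Field over visual features and neighborhood dependencies. As a rough illustration only (the paper's actual features, graph structure, and low-power parallel inference are not reproduced here), the sketch below labels grid cells with a simple Potts-style pairwise MRF solved by iterated conditional modes; `icm_obstacle_labels`, the smoothness weight, and the random unary costs are all hypothetical stand-ins.

```python
# Hypothetical sketch: per-cell obstacle labeling on an image grid with a
# pairwise MRF. Unary costs stand in for a feature-based classifier;
# inference uses iterated conditional modes (ICM). The paper's actual
# features, neighborhood structure, and parallel inference differ.
import numpy as np

def icm_obstacle_labels(unary, smoothness=0.5, n_iters=10):
    """unary: (H, W, 2) array of costs for labels {0: free, 1: obstacle}."""
    h, w, _ = unary.shape
    labels = unary.argmin(axis=2)          # initialize from unary costs alone
    for _ in range(n_iters):
        for y in range(h):
            for x in range(w):
                costs = unary[y, x].copy()
                # Potts pairwise term over the 4-neighborhood: penalize
                # disagreeing with neighboring cells' current labels.
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        for lab in (0, 1):
                            if lab != labels[ny, nx]:
                                costs[lab] += smoothness
                labels[y, x] = costs.argmin()
    return labels

# Toy usage: random unary costs standing in for a learned feature classifier.
rng = np.random.default_rng(0)
unary = rng.random((24, 32, 2))
print(icm_obstacle_labels(unary).sum(), "cells labeled as obstacle")
```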

Cited by 20 publications (8 citation statements) · References: 36 publications
“…Other approaches focus on doing faster avoidance by simplifying the data coming from depth sensors: such as filtering depth camera data into planes [11], or converting dense stereo into digital elevation maps [12], or even single-image depth from training data [13]. But again, all of these approaches demonstrate a frame rate of 20-30 Hz and require substantial off-board processing, while our conversion of disparity images to U-maps requires much less computing power.…”
Section: Related Work
confidence: 99%
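The excerpt above contrasts the cited single-image method with the citing authors' disparity-to-U-map conversion. A U-map is a column-wise histogram of disparities, so nearby obstacles appear as compact high-count bands that are cheap to threshold. The sketch below illustrates that general idea only; `disparity_to_umap`, `max_disp`, and the toy disparity image are illustrative assumptions, not details from either paper.

```python
# Hedged sketch of the U-map idea: a column-wise histogram of disparities,
# so nearby obstacles show up as bright horizontal streaks that are cheap
# to threshold. Names and parameters here are illustrative only.
import numpy as np

def disparity_to_umap(disparity, max_disp=64):
    """disparity: (H, W) integer disparity image; returns a (max_disp, W) U-map."""
    h, w = disparity.shape
    umap = np.zeros((max_disp, w), dtype=np.int32)
    for col in range(w):
        d = disparity[:, col]
        d = d[(d > 0) & (d < max_disp)]     # ignore invalid / out-of-range pixels
        np.add.at(umap[:, col], d, 1)       # histogram of disparities in this column
    return umap

# Toy usage: a synthetic disparity image with one near obstacle on the right.
disp = np.zeros((120, 160), dtype=np.int32)
disp[30:90, 100:140] = 40                   # obstacle at disparity 40
umap = disparity_to_umap(disp)
print(umap[40, 100:140].min())              # 60 obstacle pixels per column
```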
“…We use the version of the ellipses rotated into the current yaw angle of the robot to simplify calculations, as these are already available from Eq. (12,13).…”
Section: B. Waypoint Planning
confidence: 99%
“…A vision-based autonomous flight with a quadrotor-type UAV is proposed and tested in the Google Earth virtual environment [13]. Lenz et al. [21] use a single image captured from the onboard camera as input, produce obstacle classifications, and use them to select an evasive maneuver. Mejias and Campoy [23] present a collision avoidance approach based on omnidirectional cameras that does not require estimating the range between two platforms to resolve a collision encounter.…”
Section: Related Work
confidence: 99%
“…Vision-based navigation is widely used in the autonomous control of robots and automobiles (Lenz et al., 2012; Bills et al., 2011). In these applications, images provide position information that lets the device locate itself in the environment for path planning.…”
Section: Introduction
confidence: 99%