2006 IEEE/AIAA 25th Digital Avionics Systems Conference
DOI: 10.1109/dasc.2006.313713
Color Optic Flow: A Computer Vision Approach for Object Detection on UAVs

Cited by 9 publications (11 citation statements)
References 10 publications
“…18 Work at the University of Illinois using a 6.5-ft wingspan aerobatic unmanned aircraft involved risk mitigation in the form of failure mode analysis of subsystems and components, 19 particularly fuel management, 20 system fault detection, 21 and object and terrain detection using optical flow. 22,23 The aircraft used by the University of Illinois was also used by Boeing to test distributed communication networks. 24 The University of Kansas has performed extensive testing with their large, 10-ft wingspan, aerobatic unmanned aircraft: researchers have evaluated a COTS autopilot, 25,26 tested a new flight control system, 27,28 performed aircraft and avionics system identification, 29–31 used it as the base aircraft from which a test pilot transitioned to a new, unfamiliar airframe, 32 evaluated flight loads using strain gauge measurements, 33 and compared moment of inertia estimation methods with experimental measurements.…”
Section: A Literature Review Of Aerobatic Unmanned Aircraft Used Formentioning
confidence: 99%
“…With the advances in powerful new processing units, cameras can be used as passive sensors to detect obstacles around a UAV. Many efforts already use cameras in CAS systems, such as the research found in (Matthies et al, 1998;Oh, 2004;Boon Kiat et al, 2004;Muratet et al, 2005;Mehra et al, 2005;De Wagter & Mulder, 2005;Zhihai et al, 2006;Ortiz & Neogi, 2006;Prazenica et al, 2006;Frew et al, 2006;Subong et al, 2008;Moore et al, 2009;Zufferey et al, 2010). Video cameras are light and inexpensive and thereby fit UAV requirements, especially for small UAVs.…”
Section: Non-cooperative Monitoringmentioning
confidence: 99%
“…• Chapter 4: Proposes an obstacle detection processing pipeline which uses stereoscopic vision to detect and map in 3D potential obstacles at or above the horizon line • Chapter 5: Provides conclusions regarding the concepts presented in the thesis and suggests recommendations for future research.…”
Section: Thesis Outlinementioning
confidence: 99%
“…In the case where a stereoscopic pair of cameras is not feasible, such as for MAVs, monocular cameras have been used both for obstacle detection [14] and for computing depth from motion [14], [13], [16]. Depth from motion computes depth by tracking image feature points between successive image frames and inferring the 3D motion of the features based on their movement across an image.…”
Section: Obstacle Detectionmentioning
confidence: 99%
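The depth-from-motion idea quoted above can be sketched under strong simplifying assumptions: pure forward camera translation of known magnitude, normalized image coordinates, and a focus of expansion at the image center. The function name and interface here are hypothetical, not taken from any of the cited works:

```python
import numpy as np

def depth_from_motion(pts_prev, pts_curr, tz):
    """Estimate per-feature depth (at the earlier frame) from tracked
    feature points across two frames, assuming pure forward translation
    tz along the optical axis and normalized image coordinates.

    Under forward motion, features expand radially from the focus of
    expansion: r' = r * Z / (Z - tz), which rearranges to
    Z = tz * r' / (r' - r).
    """
    r_prev = np.linalg.norm(pts_prev, axis=1)  # radial distance, frame t
    r_curr = np.linalg.norm(pts_curr, axis=1)  # radial distance, frame t+1
    dr = r_curr - r_prev                       # radial expansion per feature
    return tz * r_curr / dr

# A point at depth Z = 10 is seen at (0.1, 0.2); after the camera
# advances tz = 1, the same point projects to (1/9, 2/9).
z = depth_from_motion(np.array([[0.1, 0.2]]),
                      np.array([[1/9, 2/9]]),
                      tz=1.0)
```

In a real pipeline, `pts_prev`/`pts_curr` would come from a feature tracker (e.g. pyramidal Lucas–Kanade optical flow), and features near the focus of expansion would need to be rejected, since `dr` approaches zero there and the depth estimate becomes unstable.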