2011
DOI: 10.1117/12.883102

Detection and classification of moving objects from UAVs with optical sensors

Abstract: Small and medium-sized UAVs such as the German LUNA have long endurance and, in combination with sophisticated image exploitation algorithms, constitute a very cost-efficient platform for surveillance. At Fraunhofer IOSB, we have developed the video exploitation system ABUL with the goal of meeting the demands of small and medium-sized UAVs. Several image exploitation algorithms such as multi-resolution, super-resolution, image stabilization, geocoded mosaicking and stereo-images/3D-models have been implemented and are used …

Cited by 12 publications (10 citation statements); references 15 publications.
“…This is used as an empirical test case to explore the optimization of an observation model for the environment when distributed over multiple sensors, platforms and resulting target detection signatures. With the ever-increasing use of remotely deployed autonomous systems, the problem of reviewing, processing and effectively reporting the sensor information they gather is one of growing significance, with notable parallels in ground-based sensor networks [9,10]. Prior work in the field is generally limited to the case of automated ground surveillance (the camera-network case [9,11–14]) or isolated to UAV-based detection [5,7,15–18], with only very limited wide-scale consideration of multi-platform dual aerial and ground platforms within the same sensing scenario [19,20]. Notably, prior work does not address autonomy within this context [19,20], and work considering multi-modal detection is in its infancy [7,17,18].…”
Section: Introduction
confidence: 99%
“…surveillance system on an embedded system [20,21,22]. These approaches are based on the assumption that the background can be approximated by a plane.…”
Section: Redbee: a Visual-inertial Drone System For Real-time Moving …
confidence: 99%
“…If the drone performs the detection during flight, camera motion compensation is necessary for the background modeling. Michael et al. [20] developed the video exploitation drone system ABUL for the detection and classification of moving objects. Their system relies on fast and reliable video data transmission between the drone and the ground station.…”
Section: Related Work
confidence: 99%
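The camera-motion-compensation step referenced in the statement above can be sketched in a few lines. This is a minimal, pure-Python illustration on invented data: the global shift is assumed to be already known (in a real system it would be estimated, e.g. from feature matches or inertial data), and the frame contents, sizes and threshold are all hypothetical.

```python
def shift_frame(frame, dx, dy, fill=0):
    """Translate a 2-D frame by (dx, dy); uncovered pixels get `fill`."""
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

def compensated_difference(prev, curr, cam_dx, cam_dy, thresh=10):
    """Warp the previous frame by the estimated camera motion, then
    difference: pixels above `thresh` are flagged as moving foreground."""
    warped = shift_frame(prev, cam_dx, cam_dy)
    h, w = len(curr), len(curr[0])
    return [[1 if abs(curr[y][x] - warped[y][x]) > thresh else 0
             for x in range(w)] for y in range(h)]

# Invented static background texture as seen from the drone.
bg = [[(13 * x + 5 * y) % 97 for x in range(8)] for y in range(6)]
prev = [row[:] for row in bg]
# Camera pans right by 1 px, so the scene appears shifted 1 px left...
curr = shift_frame(bg, -1, 0)
curr[2][5] = 255  # ...and one independently moving object appears.
mask = compensated_difference(prev, curr, -1, 0)
```

Because the warped previous frame aligns with the current one, the differencing flags only the genuinely moving pixel; without compensation, the entire translated texture would be flagged as motion.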
“…Leitloff et al. proposed a method to detect cars by adapting boosting in combination with Haar-like features [8], and Schmidt et al. proposed a method broadly similar to [8] to detect people based on Haar-like features [13]. Teutsch et al. extracted appearance features, such as moments and local binary patterns (LBP), with a 9-NN classifier [14]. In these methods the image resolution of the object area is large enough to obtain appearance information, so objects were detected successfully.…”
Section: Previous Work
confidence: 99%
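The LBP-plus-nearest-neighbour pipeline mentioned in the statement above can be illustrated with a short sketch. This is not the cited implementation: the toy textures, the 1-NN simplification (the cited work uses 9-NN), and the L1 histogram distance are all illustrative assumptions.

```python
def lbp_code(img, y, x):
    """Classic 8-neighbour local binary pattern code at pixel (y, x)."""
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    return sum((1 << i) for i, v in enumerate(nbrs) if v >= c)

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes over interior pixels."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(img, y, x)] += 1
    n = float(sum(hist)) or 1.0
    return [v / n for v in hist]

def nearest_neighbour(query, labelled):
    """1-NN by L1 distance between histograms (simplified from 9-NN)."""
    def d(a, b):
        return sum(abs(u - v) for u, v in zip(a, b))
    return min(labelled, key=lambda item: d(query, item[0]))[1]

# Toy 'textures' standing in for object patches (invented data).
stripes = [[255 if x % 2 else 0 for x in range(6)] for _ in range(6)]
flat = [[128] * 6 for _ in range(6)]
train = [(lbp_histogram(stripes), "vehicle"),
         (lbp_histogram(flat), "background")]
query = [[255 if x % 2 else 0 for x in range(6)] for _ in range(6)]
label = nearest_neighbour(lbp_histogram(query), train)
```

The point of the histogram representation is translation invariance within the patch: the classifier compares texture statistics rather than raw pixels, which is what makes it workable at the small object resolutions discussed in the citing paper.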