Images of vehicles moving in traffic scenes recorded by a stationary camera are detected and tracked without operator intervention. The resulting vehicle trajectories are projected from the image plane onto the street plane. A suitable system-internal representation of about ninety German motion verbs is then exploited to automatically characterize trajectory segments in terms of natural-language concepts. A multiresolution approach to feature matching has been developed that is robust enough to track vehicle images across hundreds of frames despite considerable variations in size and projected velocity. Results from various experiments with image sequences of real-world traffic scenes are presented.
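The multiresolution matching idea can be illustrated with a small sketch. This is not the paper's implementation; it assumes a simple image pyramid built by block averaging and a sum-of-squared-differences (SSD) template match, run coarse-to-fine so that only the coarsest level needs an exhaustive search:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def match_ssd(img, tmpl):
    """Exhaustive SSD template match; returns (row, col) of the best match."""
    th, tw = tmpl.shape
    best, pos = np.inf, (0, 0)
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            d = np.sum((img[r:r + th, c:c + tw] - tmpl) ** 2)
            if d < best:
                best, pos = d, (r, c)
    return pos

def pyramid_match(img, tmpl, levels=2, radius=2):
    """Coarse-to-fine matching: full search at the coarsest pyramid level,
    then refinement within a small window at each finer level."""
    imgs, tmpls = [img], [tmpl]
    for _ in range(levels):
        imgs.append(downsample(imgs[-1]))
        tmpls.append(downsample(tmpls[-1]))
    r, c = match_ssd(imgs[-1], tmpls[-1])
    for lvl in range(levels - 1, -1, -1):
        r, c = 2 * r, 2 * c  # scale the coarse estimate up to the finer level
        th, tw = tmpls[lvl].shape
        r0, c0 = max(r - radius, 0), max(c - radius, 0)
        r1 = min(r + radius + th, imgs[lvl].shape[0])
        c1 = min(c + radius + tw, imgs[lvl].shape[1])
        dr, dc = match_ssd(imgs[lvl][r0:r1, c0:c1], tmpls[lvl])
        r, c = r0 + dr, c0 + dc
    return r, c
```

The coarse-to-fine structure is what makes the search robust to large displacements while keeping cost low: at full resolution only a (2·radius+1)² neighbourhood is examined.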
Small UAVs are of growing importance for surveillance and reconnaissance tasks. These UAVs have an endurance of several hours but a small payload of only a few kilograms. As a consequence, lightweight sensors and cameras must be used, without a mechanically stabilized high-precision sensor platform, which would exceed the payload and cost limits. An example of such a system is the German UAV LUNA with optical and IR sensors on board. For such platforms we have developed image exploitation algorithms comprising mosaicking, stabilization, image enhancement, video-based moving target indication, and stereo-image generation. Further products are large geocoded image mosaics, stereo mosaics, and 3-D models. For testing and assessing these algorithms, the experimental system ABUL has been developed, into which the algorithms are integrated. The ABUL system is used for tests and assessment by military photo interpreters (PIs).
Small and medium-sized UAVs like the German LUNA have long endurance and, in combination with sophisticated image exploitation algorithms, constitute a very cost-efficient platform for surveillance. At Fraunhofer IOSB, we have developed the video exploitation system ABUL with the goal of meeting the demands of small and medium-sized UAVs. Several image exploitation algorithms such as multi-resolution analysis, super-resolution, image stabilization, geocoded mosaicking, and stereo-image/3-D-model generation have been implemented and are used with several UAV systems. Among these algorithms is moving target detection with compensation of sensor motion. Moving objects are of major interest during surveillance missions, but because of the sensor motion on the UAV and the small object size in the images, developing reliable detection algorithms under real-time constraints on limited hardware resources is challenging. Based on compensation of sensor motion by fast and robust estimation of geometric transformations between images, independent motion is detected relative to the static background. From independent-motion cues, regions of interest (bounding boxes) are generated and used as initial object hypotheses. A novel classification module is introduced to perform an appearance-based analysis of the hypotheses. Various texture features are extracted and evaluated automatically to achieve a good feature selection for successfully classifying vehicles and people.
The miniature SAR system MiSAR has been developed by EADS Germany for lightweight UAVs like the LUNA system. MiSAR adds to these tactical UAV systems the all-weather reconnaissance capability that has been missing until now. Unlike other SAR sensors, which produce large strip maps at update rates of several seconds, MiSAR generates sequences of SAR images at a frame rate of approximately 1 Hz. Photo interpreters (PIs) of tactical drones, so far mainly experienced in visual interpretation, are not used to SAR images, and especially not to the characteristics of SAR image sequences. They should therefore be supported to improve their ability to carry out their task with a new, demanding sensor system. We have accordingly analyzed, and discussed with military PIs, in which tasks MiSAR can be used and how the PIs can be supported by special algorithms, and we have developed image processing and exploitation algorithms for such SAR image sequences. A main component is the generation of image sequence mosaics to provide more oversight. This mosaicking has the advantage that non-straight/non-linear flight paths and varying squint angles can also be processed. Another component is a screening component for man-made objects that marks regions of interest in the image sequences. We use a classification-based approach which can easily be adapted to new sensors and scenes. These algorithms are integrated into an image exploitation system to give image interpreters better oversight and orientation and to help them detect relevant objects, especially in long-endurance reconnaissance missions.
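The mosaicking step can be illustrated with a minimal sketch. This is an assumption-laden simplification of what such a system does: it takes per-frame offsets (here a translation-only model relative to the first frame; the real system handles non-linear paths and varying squint angles with full geometric transformations) and composes the frames onto one canvas, averaging in the overlap regions:

```python
import numpy as np

def mosaic(frames, offsets):
    """Compose equally sized frames onto one canvas given per-frame
    (dy, dx) offsets relative to the first frame (translation-only model)."""
    h, w = frames[0].shape
    ys = [o[0] for o in offsets]
    xs = [o[1] for o in offsets]
    y0, x0 = min(ys), min(xs)
    H = max(ys) - y0 + h
    W = max(xs) - x0 + w
    canvas = np.zeros((H, W))
    weight = np.zeros((H, W))
    for f, (dy, dx) in zip(frames, offsets):
        r, c = dy - y0, dx - x0
        canvas[r:r + h, c:c + w] += f   # accumulate pixel values
        weight[r:r + h, c:c + w] += 1   # count contributing frames
    return canvas / np.maximum(weight, 1)  # average where frames overlap
```

Averaging overlaps is the simplest blending choice; production mosaicking systems typically use seam selection or multi-band blending to avoid visible transitions.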
Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be suitably integrated into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception with those of the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work in which we showed the benefits for the human observer of a user interface that uses the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye-tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer-mouse selection. The system design also builds on our prior work on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented in which we investigated how participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze-plus-key-press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object tracking function.
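The selection step of such a gaze-plus-key-press interaction can be sketched as follows. The function and its tolerance radius are illustrative assumptions, not the paper's implementation: on the key press, the current gaze point is matched to the nearest candidate bounding box (e.g. from the automated detector), which then initializes the tracker:

```python
def select_target(gaze, boxes, max_dist=50.0):
    """On key press, pick the index of the candidate box whose centre is
    nearest the gaze point (x, y), if within max_dist pixels; else None.
    boxes are (x0, y0, x1, y1) tuples; max_dist absorbs gaze jitter."""
    best, best_d = None, max_dist
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        d = ((gaze[0] - cx) ** 2 + (gaze[1] - cy) ** 2) ** 0.5
        if d < best_d:
            best, best_d = i, d
    return best
```

Snapping to the nearest detected box is what makes gaze selection precise despite eye-tracker noise: the user need only look near the moving target, not fixate it exactly.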