ABSTRACT: Urban terrain reconstruction has many applications in civil engineering, urban planning, surveillance, and defense research. The need to cover ad-hoc demand and to perform close-range urban terrain reconstruction with miniaturized, relatively inexpensive sensor platforms is therefore growing constantly. Using (miniaturized) unmanned aerial vehicles, (M)UAVs, is one of the most attractive alternatives to conventional large-scale aerial imagery. In this paper, we cover a four-step procedure for obtaining georeferenced 3D urban models from video sequences. The four steps of the procedure (orientation, dense reconstruction, urban terrain modeling, and geo-referencing) are robust, straightforward, and nearly fully automatic. The last two steps, namely urban terrain modeling from almost-nadir videos and co-registration of models, represent the main contribution of this work and are therefore covered in more detail. The essential substeps of the third step include digital terrain model (DTM) extraction, segregation of buildings from vegetation, and instantiation of building and tree models. The last step is subdivided into quasi-intrasensorial registration of Euclidean reconstructions and intersensorial registration with a geo-referenced orthophoto. Finally, we present reconstruction results from a real data set and outline ideas for future work.
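The DTM-extraction substep of the third stage can be illustrated with a standard approach: grayscale morphological opening of a digital surface model (DSM) using a window larger than typical building footprints, then thresholding the normalized DSM to segregate above-ground objects. This is a minimal sketch of that general idea under stated assumptions, not the authors' implementation; the function names, window size, and the 2 m height threshold are illustrative.

```python
import numpy as np
from scipy.ndimage import grey_opening

def extract_dtm(dsm, window=21):
    """Approximate the DTM by grayscale morphological opening of the DSM.

    Opening with a structuring window larger than the building footprints
    removes above-ground objects while preserving the terrain surface.
    """
    return grey_opening(dsm, size=(window, window))

def above_ground_mask(dsm, dtm, min_height=2.0):
    """Normalized-DSM thresholding: pixels at least `min_height` metres
    above the extracted terrain are candidate buildings or trees."""
    return (dsm - dtm) >= min_height

# Toy DSM: flat terrain at 100 m elevation with one 10 m tall "building"
dsm = np.full((40, 40), 100.0)
dsm[10:18, 10:18] += 10.0

dtm = extract_dtm(dsm)
mask = above_ground_mask(dsm, dtm)
print(int(mask.sum()))  # 8x8 building footprint -> 64 above-ground pixels
```

In practice the threshold and window would be tuned to the ground sampling distance, and vegetation would then be separated from buildings in a further classification step, as the abstract indicates.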
A well-known problem in computer vision and photogrammetry is the precise online mapping of the surrounding scenery. Due to the nature of single projective sensor configurations with an inherent 7-DoF ambiguity, error accumulation and scale drift remain a problem for vision-based systems, especially for difficult motion trajectories. However, it is desirable to use cheap, small form-factor systems, e.g., small UAVs with a single-camera setup. We propose a simple and efficient appearance-based method for using LiDAR data in a monocular vision mapping system via pose graph optimization. Provided laser scans are available, our system allows for robust metric mapping and localization with a single electro-optical sensor. We use large sets of synthetically generated 2-D LiDAR intensity views to globally register camera images. In particular, we provide insights into generating the synthetic intensity images and extracting features from such data. This enables global appearance-based 2-D/3-D registration of 2-D camera images to metric 3-D point cloud data. As a result, we are able to correct camera trajectories and estimate geo-referenced, metric structure from monocular camera images. Possible applications are numerous and include autonomous navigation, real-time map updating/extension, and vision-based indoor mapping.
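The trajectory-correction idea (fusing drifting visual odometry with absolute pose fixes obtained from the appearance-based 2-D/3-D LiDAR registration) can be sketched as a pose-graph least-squares problem. The 1-D toy example below is illustrative only: the real system optimizes full 6-DoF camera poses, and all function names, weights, and values here are assumptions.

```python
import numpy as np

def optimize_pose_graph(n, odometry, global_fixes, w_odo=1.0, w_glob=10.0):
    """Solve a 1-D pose graph by weighted linear least squares.

    odometry:     list of (i, j, delta) relative constraints x_j - x_i = delta
                  (drifting visual odometry)
    global_fixes: list of (i, z) absolute constraints x_i = z
                  (e.g. from appearance-based 2-D/3-D LiDAR registration)
    """
    rows, rhs = [], []
    for i, j, delta in odometry:          # one weighted row per edge
        r = np.zeros(n); r[j] = w_odo; r[i] = -w_odo
        rows.append(r); rhs.append(w_odo * delta)
    for i, z in global_fixes:             # strong absolute anchors
        r = np.zeros(n); r[i] = w_glob
        rows.append(r); rhs.append(w_glob * z)
    A, b = np.vstack(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Scale-drifting odometry reports each step as 1.1 m, but LiDAR registration
# pins the first and last pose to the metric ground truth (0 m and 5 m).
odo = [(i, i + 1, 1.1) for i in range(5)]
fixes = [(0, 0.0), (5, 5.0)]
x = optimize_pose_graph(6, odo, fixes)
```

The strong absolute constraints pull the drifting chain back onto the metric scale, which is exactly the role the synthetic LiDAR intensity views play in the described system.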
Situation awareness in complex urban environments is an important component of successful task fulfillment in both military and civil areas of application. In the military area, the fields of deployment of the members of the North Atlantic Alliance have changed over the past two decades, from the originally assigned task of acting as national and allied defense forces within the partners' own borders to out-of-area missions under conditions of asymmetric conflict. Because of its complicated structure, urban terrain presents a particular difficulty for military missions such as patrolling. In the civil field of application, police and rescue forces are also often strongly dependent on local visibility and accessibility analysis. However, the process of decision-making within a short time and under enormous pressure can be extensively trained in an environment that is tailored to the concrete situation. The contribution of this work consists of context-based modeling of urban terrain that can then be integrated into simulation software, for example, Virtual Battlespace 2 (VBS2). The input to our procedure is airborne sensor data, collected by either an active or a passive sensor. The latter is particularly important if the application is time-critical or the area to be explored is small. After a description of our procedure for urban terrain modeling, with a detailed focus on recent innovations, the main steps of model integration into simulation software are presented, and two examples of missions for military and civil applications that can easily be created with VBS2 are given.
Virtual simulations have been on the rise together with the fast progress of rendering engines and graphics hardware. Especially in military applications, offensive actions in modern peace-keeping missions have to be quick, firm, and precise, particularly under the conditions of asymmetric warfare, non-cooperative urban terrain, and rapidly developing situations. Going through a mission in simulation can prepare the minds of soldiers and leaders, increase self-confidence and tactical awareness, and ultimately save lives. This work illustrates the potential and limitations of integrating semantic urban terrain models into a simulation. Our system of choice is Virtual Battlespace 2, a simulation system created by Bohemia Interactive Simulations. The topographic object types that we are able to export into this simulation engine are either results of the sensor data evaluation (buildings, trees, grass, and ground), which is obtained fully automatically, or entities obtained from publicly available sources (streets and water areas), which can be converted into the system's proper format with a few mouse clicks. The focus of this work lies in integrating information about building façades into the simulation. We are inspired by state-of-the-art methods that allow for automatic extraction of doors and windows from laser point clouds captured from building walls and thus increase the level of detail of building models. As a consequence, it is important to simulate these animatable entities; doing so, we are able to make some of the buildings in the simulation accessible.
Virtual representation of urban terrain and simulation of real-world scenarios are becoming increasingly important in military and civil areas, for example, for operations research, mission rehearsal, or debriefing. This work demonstrates the possibility of automated import of the semantic models resulting from our urban terrain reconstruction process, which is based on simple video data taken from UAV flights, into various simulation systems. We justify our choice of three simulation development tools (Virtual Battlespace, TerraTools, and FZK Viewer) and describe the main advantages and drawbacks of automatic data exchange with these systems.
Protecting critical infrastructure against intrusion, sabotage, or vandalism is a task that requires a comprehensive situation picture. Modern security systems should provide a total solution, including sensors, software, hardware, and a "control unit", to ensure complete security. Incorporating unmanned mobile sensors can significantly help to close information gaps and gain an ad hoc picture of areas where no pre-installed supervision infrastructure is available, or where it has been damaged after an incident. Fraunhofer IOSB has developed the generic ground control station AMFIS, which is capable of managing sensor data acquisition with all kinds of unattended stationary sensors, mobile ad hoc sensor networks, and mobile sensor platforms. The system is highly mobile and able to control various mobile platforms such as small UAVs (Unmanned Aerial Vehicles) and UGVs (Unmanned Ground Vehicles). In order to establish a real-time situation picture, an image exploitation process is also used. In this process, video frames from different sources (mainly from small UAVs) are georeferenced by means of a system of image registration methods. Relevant information can be obtained by a motion detection module. Thus, the image exploitation process can accelerate situation assessment significantly.
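The motion detection module can be illustrated with the simplest possible scheme: frame differencing on co-registered frames. This is a minimal sketch under stated assumptions, not the AMFIS implementation; `detect_motion` and its threshold are hypothetical names and values, and a real pipeline would first compensate camera motion via the registration step described above.

```python
import numpy as np

def detect_motion(prev_frame, frame, threshold=25):
    """Flag pixels whose absolute intensity change exceeds `threshold`.

    Assumes the two frames are already co-registered (georeferenced),
    so that platform motion does not trigger spurious detections.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Two synthetic 8-bit grayscale frames: a bright object appears in the second
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:50, 60:80] = 200          # 10 x 20 pixel moving object

mask = detect_motion(prev, curr)
print(int(mask.sum()))  # 200 changed pixels
```

Detections like this mask would then be overlaid on the georeferenced situation picture so that operators only need to inspect regions with activity.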