A new approach is presented for health monitoring of structures using terrestrial laser scanning (TLS). Coordinates of a target structure acquired using TLS can have maximum errors of about 10 mm, which is insufficient for structural health monitoring. A displacement measurement model is presented to improve the measurement accuracy. The model is tested experimentally on a simply supported steel beam. Measurements were made using three different techniques: (1) linear variable displacement transducers (LVDTs), (2) electric strain gages, and (3) a long-gage fiber optic sensor. The maximum deflections estimated by the TLS model are less than 1 mm and within 1.6% of those measured directly by LVDT. Whereas GPS methods allow measurement of displacements only at the GPS receiver antenna location, the proposed TLS method allows measurement of the entire deformed shape of a building or bridge, and thus offers a realistic solution for monitoring structures at both the structure and member level. Furthermore, it can be used to automatically create a 3D finite element model of a structural member, or of the entire structure, at any instant in time. Through periodic measurement of the deformations of a structure or structural member, and inverse structural analyses of the measured 3D displacements, the health of the structure can be monitored continuously.
Commission I, WG I/V KEY WORDS: UAV, Multi-sensor, Rapid Mapping, Real-time Georeferencing ABSTRACT: As the occurrence and scale of disasters and accidents have increased due to global warming, terrorist attacks, and many other causes, the demand for rapid responses to such emergencies has also been ever-increasing. These emergency responses must be tailored to each individual site for more effective management of the situation. Such requirements can be satisfied by decisions based on spatial changes in the target area, which should be detected immediately or in real time. Aerial monitoring without human operators is an appropriate means because emergency areas are usually inaccessible. A UAV is therefore a strong candidate platform for aerial monitoring. In addition, the sensory data from a UAV system usually have higher resolution than those from other systems because the UAV can operate at a lower altitude. If the transmission and processing of the data can be performed in real time, the spatial changes of the target area can be detected with high spatial and temporal resolution by UAV rapid mapping systems. Accordingly, we aim to develop a rapid aerial mapping system based on a UAV, whose key features are effective acquisition of sensory data and real-time transmission and processing of the data. In this paper, we introduce the general concept of our system, including its main features and intermediate results, and explain our real-time sensory data georeferencing algorithm, which is the core of prompt generation of spatial information from the sensory data.
Commission I, ICWG I/Vb KEY WORDS: maritime monitoring, UAV, UAS, remote sensing, sensors ABSTRACT: In the last few years, Unmanned Aircraft Systems (UAS) have become more important, and their use for different applications is increasingly appreciated. At the beginning, UAS were used for military purposes. These successful applications sparked interest among researchers in civilian uses of UAS, as they are an alternative to both manned and satellite systems for acquiring high-resolution remote sensing data at lower cost and with long flight duration. As UAS are built from many components, such as the unmanned aerial vehicle (UAV), sensing payloads, communication systems, ground control stations, recovery and launch equipment, and supporting equipment, knowledge of their functionality and characteristics is crucial for missions. Therefore, finding the appropriate configuration of all elements to fulfill the requirements of a mission is a difficult yet important task. UAS may be used in various maritime applications such as ship detection, red tide detection and monitoring, border patrol, tracking of pollution at sea, and hurricane monitoring, to mention just a few. One of the greatest advantages of UAVs is their ability to fly over dangerous and hazardous areas where sending a manned aircraft could be risky for a crew. In this article, a brief description of unmanned aerial system components is given. First, the characteristics of unmanned aerial vehicles are presented, followed by introductions to inertial navigation systems, communication systems, sensing payloads, ground control stations, and launch and recovery equipment. The next part introduces some examples of UAS for maritime applications. This is followed by suggested key indicators that should be taken into consideration when choosing a UAS. The last part discusses configuration schemes of UAVs and sensor payloads suggested for some maritime applications.
<p><strong>Abstract.</strong> Vehicle localization is an essential component of stable autonomous car operation. There are many algorithms for vehicle localization; however, they still need much improvement in terms of accuracy and cost. In this paper, a sensor-fusion-based localization algorithm is used to address this problem. Our sensor system is composed of in-vehicle sensors, GPS, and vision sensors. The localization algorithm is based on an extended Kalman filter and has a time update step and a measurement update step. In the time update step, in-vehicle sensors such as the yaw-rate and speed sensors are used, while GPS and vision sensor information are used to update the vehicle position in the measurement update step. We use a visual odometry library to process the vision sensor data and generate the moving distance and direction of the car. In particular, when performing visual odometry we use a georeferenced image database to reduce error accumulation. Through experiments, the proposed localization algorithm is verified and evaluated. The RMS error of the estimate from the proposed algorithm is about 4.3<span class="thinspace"></span>m. This result shows about a 40<span class="thinspace"></span>% improvement in accuracy compared with the result from the GPS-only method, which shows the feasibility of the proposed localization algorithm. However, it is still necessary to improve the accuracy before applying this algorithm to an autonomous car. Therefore, we plan to use multiple cameras (rear cameras or AVM cameras) and additional information such as a high-definition map or V2X communication. The filter and error modelling also need to be changed for better results.</p>
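The two-step filter cycle described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the state vector [x, y, heading], the motion model, and all numeric values (noise covariances, speed, yaw rate, the position fix) are assumptions chosen for the example.

```python
import numpy as np

# Minimal EKF sketch: state = [x, y, heading]; all values are illustrative.
def time_update(x, P, speed, yaw_rate, dt, Q):
    """Predict with in-vehicle sensors (speed and yaw rate from the CAN bus)."""
    px, py, th = x
    x_pred = np.array([px + speed * dt * np.cos(th),
                       py + speed * dt * np.sin(th),
                       th + yaw_rate * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1, 0, -speed * dt * np.sin(th)],
                  [0, 1,  speed * dt * np.cos(th)],
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q

def measurement_update(x, P, z, R):
    """Correct with a position fix (e.g. from GPS or visual odometry)."""
    H = np.array([[1, 0, 0], [0, 1, 0]])  # we observe x and y only
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

x, P = np.zeros(3), np.eye(3)
Q, R = np.eye(3) * 0.01, np.eye(2) * 4.0
x, P = time_update(x, P, speed=10.0, yaw_rate=0.0, dt=0.1, Q=Q)
x, P = measurement_update(x, P, z=np.array([1.1, 0.05]), R=R)
print(np.round(x, 3))
```

The position uncertainty in `P` shrinks after the measurement update, which is the mechanism by which the georeferenced visual odometry fixes bound the error accumulation mentioned above.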
Detecting unregistered buildings from aerial images is an important task for urban management, such as inspection of illegal buildings in green belts or updating a GIS database. Moreover, the data acquisition platform of photogrammetry is evolving from manned aircraft to UAVs (Unmanned Aerial Vehicles). However, it is very costly and time-consuming to detect unregistered buildings from UAV images, since the interpretation of aerial images still relies on manual effort. To overcome this problem, we propose a system that automatically detects unregistered buildings from UAV images based on deep learning methods. Specifically, we train a deconvolutional network with publicly available geospatial data, semantically segment a given UAV image into a building probability map, and compare the resulting building map with existing GIS data. Through this procedure, we can detect unregistered buildings from UAV images automatically and efficiently. We expect that the proposed system can be applied to various urban management tasks such as monitoring illegal buildings or illegal land-use change.
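The final comparison step, matching the segmented building probability map against the GIS layer, can be sketched with toy arrays. The grids, threshold, and building positions below are hypothetical; a real system would operate on georeferenced rasters.

```python
import numpy as np

# Hypothetical toy inputs: a 6x6 building-probability map from the
# segmentation network and a binary mask of registered buildings from GIS.
prob_map = np.zeros((6, 6))
prob_map[1:3, 1:3] = 0.9   # a registered building, detected by the network
prob_map[4:6, 4:6] = 0.8   # a building absent from the GIS layer

gis_mask = np.zeros((6, 6), dtype=bool)
gis_mask[1:3, 1:3] = True  # only the first building is registered

pred_mask = prob_map > 0.5            # binarize the probability map
unregistered = pred_mask & ~gis_mask  # predicted as building, but not in GIS

print(int(unregistered.sum()))  # number of flagged pixels
```

Pixels that are predicted as building but fall outside every registered footprint are exactly the candidates for unregistered buildings described above.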
Abstract. The underwater environment holds substantial value for research such as marine archaeology and the monitoring of coral reefs and shipwrecks. SfM, a major step of photogrammetry, has been widely used in this field. For high-quality 3D reconstruction, images must be captured under clear visual conditions and with known image orientations. However, underwater images suffer various types of visual disturbance, and GPS/INS, commonly used on the ground, is unavailable. Finding more feature points or using more images for SfM are possible solutions to these problems; however, these methods incur high computational costs. An alternative is to provide the known orientations of the images. To do so, the method presented in this study uses visual SLAM, which performs localization of a vehicle system and mapping of its surroundings. The experiment aims to verify whether SLAM improves the quality of underwater 3D reconstruction and the computational efficiency of SfM. We examine two Aqualoc datasets and report the number of cloud points, the SfM processing time, the ratio of matched images to total images, and the mean reprojection errors. The outcome shows that SLAM-determined orientations improved the quality of 3D reconstruction and the computational efficiency of SfM, with an increased number of point clouds and decreased processing time.
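One of the reported quality metrics, mean reprojection error, can be illustrated with a synthetic pinhole camera whose orientation is known in advance (as a SLAM-provided pose would be). The camera intrinsics, pose, points, and noise level below are all invented for the sketch.

```python
import numpy as np

# Sketch of the evaluation metric: mean reprojection error of 3D points
# through a pinhole camera with a known (e.g. SLAM-provided) orientation.
def project(points, R, t, f=500.0, c=(320.0, 240.0)):
    """Project world points into pixel coordinates (simple pinhole model)."""
    cam = points @ R.T + t             # world -> camera frame
    uv = f * cam[:, :2] / cam[:, 2:3]  # perspective division
    return uv + np.array(c)            # shift by the principal point

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, (50, 3)) + np.array([0, 0, 5])  # points in front
R, t = np.eye(3), np.zeros(3)                            # known orientation

# Simulated feature detections = true projections plus pixel noise.
obs = project(pts, R, t) + rng.normal(0, 0.5, (50, 2))
err = np.linalg.norm(project(pts, R, t) - obs, axis=1).mean()
print(round(float(err), 2))
```

A lower mean reprojection error for the same image set indicates a more consistent reconstruction, which is the sense in which the SLAM-determined orientations improve the SfM result.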
Cars today employ various built-in sensors, such as a speedometer, odometer, accelerometer, and angular rate sensor, for safety and maintenance. These sensory data can be provided in real time through a CAN (Controller Area Network) bus. In addition, image sequences can be provided from various cameras mounted on the car, such as built-in front and around-view monitoring cameras. We thus propose an image-based car navigation framework to determine car position and attitude using the built-in sensory data, such as speed and angular rate, and images from a front view camera. First, we determine the two-dimensional position and attitude of the car using the velocity and angular rate provided in real time through the CAN bus. We then estimate the three-dimensional position and attitude by conducting sequential bundle block adjustment using the two-dimensional position and attitude and tie points between image sequences. The sequential bundle adjustment can produce, in real time, results comparably accurate to those from conventional simultaneous bundle adjustment. As input, this process needs reliable tie points between adjacent images, acquired from a real-time image matching process. Hence, we develop an image matching process based on an enhanced KLT algorithm that uses preliminary exterior orientation parameters. We also construct a test system that acquires and stores the built-in sensory data and front camera images simultaneously, and conduct experiments with real data acquired by the system. The experimental results show that the proposed image matching process can generate accurate tie points in about 0.2 seconds on average at each epoch. It thus successfully meets the requirements of real-time bundle adjustment for image-based car navigation.
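The core idea of guided matching, using a predicted tie-point location to restrict the search, can be sketched with a simple SSD template search. This is not the authors' enhanced KLT implementation: the images are synthetic, the shift is known, and the window sizes are arbitrary.

```python
import numpy as np

# Guided matching sketch: a predicted tie-point position (as preliminary
# exterior orientation would supply) restricts the search window.
rng = np.random.default_rng(0)
img_prev = rng.random((50, 50))
img_next = np.roll(img_prev, shift=(2, 3), axis=(0, 1))  # known 2/3-pixel shift

def match_guided(tpl_center, predicted, img0, img1, tpl=5, search=4):
    """Find the best match for a template near a predicted position (SSD)."""
    r, c = tpl_center
    template = img0[r - tpl:r + tpl + 1, c - tpl:c + tpl + 1]
    best, best_pos = -np.inf, None
    pr, pc = predicted
    for dr in range(-search, search + 1):      # small window around the
        for dc in range(-search, search + 1):  # predicted location only
            rr, cc = pr + dr, pc + dc
            patch = img1[rr - tpl:rr + tpl + 1, cc - tpl:cc + tpl + 1]
            score = -np.sum((patch - template) ** 2)  # negative SSD
            if score > best:
                best, best_pos = score, (rr, cc)
    return best_pos

# Predict roughly where the point moved, then refine within the window.
print(match_guided((20, 20), (22, 23), img_prev, img_next))
```

Shrinking the search to a small window around the predicted position is what makes per-epoch matching fast enough for the real-time budget reported above.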
ABSTRACT: The generation of highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map provides centimetre-level precision, unlike existing commercial maps with metre-level precision. Such maps are important for understanding road environments and making driving decisions, since robust localization is one of the critical challenges for autonomous cars. One key data source is Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a vehicle-mounted Lidar and classify objects on the road for a highly precise map. In particular, we propose the combination of a feature descriptor and a classification algorithm from machine learning. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the approach will be utilized to generate a highly precise road map.
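The surface-normal feature mentioned above can be sketched by PCA on a point neighborhood: the eigenvector with the smallest eigenvalue of the covariance matrix approximates the local normal. The synthetic patches below stand in for real Lidar neighborhoods, and a simple verticality threshold stands in for the SVM classifier of the full pipeline.

```python
import numpy as np

# Estimate a surface normal for a point neighborhood via PCA; in the
# described pipeline such normal-based features would feed an SVM.
def surface_normal(points):
    """Normal = eigenvector of the smallest covariance eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues sorted ascending
    return eigvecs[:, 0]

rng = np.random.default_rng(1)
xy = rng.random((100, 2))
# Synthetic neighborhoods: a near-horizontal patch (road-like) and a
# near-vertical patch (wall- or sign-like).
ground = np.c_[xy, 0.01 * rng.random(100)]
wall = np.c_[xy[:, :1], 0.01 * rng.random(100), xy[:, 1:]]

for name, patch in [("ground", ground), ("wall", wall)]:
    nz = abs(surface_normal(patch)[2])  # verticality of the normal
    label = "ground-like" if nz > 0.5 else "wall-like"
    print(name, round(float(nz), 2), label)
```

In the full method, features like this verticality value (computed per point) would form the descriptor vector that the Support Vector Machine is trained on.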