Recent advances in structure-from-motion (SfM) and dense matching algorithms enable surface reconstruction from unmanned aerial vehicle (UAV) images with high spatial resolution, allowing new insights into earth surface processes. However, accuracy issues are inherent in parallel-axes UAV image configurations. In this study, the quality of digital elevation models (DEMs) is assessed using images from a simulated UAV flight, comparing five different SfM tools and three different cameras. If ground control points (GCPs) are not integrated into the adjustment process with parallel-axes image configurations, significant systematic dome-effect errors are observed; these can be reduced using calibration parameters retrieved from a test field captured with convergent images immediately before or after the UAV flight. A comparison between DEMs of a soil surface generated from UAV images and terrestrial laser scanning data shows that natural surfaces can be reconstructed very accurately from UAV images, even when GCPs are missing and simple geometric camera models are used.
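The dome effect mentioned above appears as a broad, radially symmetric deformation of the reconstructed surface. As a rough diagnostic (not the calibration procedure used in the study), one can fit a radial quadratic to the residuals between a UAV-derived DEM and a reference surface; all function names and values below are illustrative assumptions:

```python
import numpy as np

def fit_dome(x, y, dz):
    """Fit a radially symmetric quadratic 'dome' dz ~ a*r^2 + b to DEM
    residuals (dz = DEM minus reference) by linear least squares.
    A simple diagnostic for the systematic error, not a calibration.
    """
    r2 = np.asarray(x, float) ** 2 + np.asarray(y, float) ** 2
    A = np.column_stack([r2, np.ones_like(r2)])   # design matrix [r^2, 1]
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(dz, float), rcond=None)
    return a, b

# Synthetic residuals with a known dome: dz = -0.002 * r^2 + 0.05 (metres)
rng = np.random.default_rng(0)
x = rng.uniform(-50, 50, 200)
y = rng.uniform(-50, 50, 200)
dz = -0.002 * (x ** 2 + y ** 2) + 0.05
a, b = fit_dome(x, y, dz)
```

With noise-free synthetic residuals the fit recovers the generating coefficients, which makes the sketch easy to sanity-check before applying it to real DEM differences.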
In many close-range applications it is essential to obtain information about the geometry of the target surface as well as its chemical composition. In this study, close-range hyperspectral imaging was integrated with terrestrial laser scanning to provide mineral and chemical information for geological field studies. The spectral data was collected with the HySpex SWIR-320m sensor, which operates in the infrared spectrum between the wavelengths of 1.3 and 2.5 μm. This sensor permits surfaces to be imaged with high spectral resolution, allowing detailed classification and analysis to be carried out. Photogrammetric processing of the hyperspectral imagery was achieved using an existing geometric model for rotating linear-array-based panoramic cameras. Bundle block adjustment of multiple images resulted in the registration of the spectral images in the lidar coordinate system, with a precision of around one image pixel. Although the image and control point network was not optimised for photogrammetric processing, it was possible to recover the exterior camera orientations, as well as additional camera calibration parameters. With the known image orientations, 3D lidar models could be textured with hyperspectral classifications, and the quality of the registration determined. The integration of the hyperspectral image products with the terrestrial lidar data enabled data interpretation and evaluation in a real-world coordinate system, and provided a reliable means of linking material and geometric information.
Digital panoramic cameras represent a powerful tool for generating high-resolution images of scenes. They produce images of up to 100 000 × 10 000 pixels and are especially suited for 360° recording of objects such as indoor scenes or city squares. The paper describes the development of a strict geometric model for rotating linear array panoramic cameras and the extension of the model by additional parameters adapting the camera model to the physical reality. The camera model has been implemented in a spatial resection and a bundle solution; the bundle solution also allows for the combined handling of panoramic and central perspective images. In several practical tests, a potential accuracy of around 1/4 pixel was demonstrated.
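An idealised version of such a cylindrical camera model, without the additional calibration parameters the paper introduces, can be sketched as follows: the image column encodes the azimuth of the rotating sensor line, while the row is a central-perspective projection along the linear array. All parameter values here are illustrative assumptions, not calibrated ones:

```python
import math

def panoramic_project(point, f=60.0, dphi=1e-4, v0=5000.0, u0=0.0):
    """Project a 3D point (given in the camera frame, rotation axis = Z)
    into a rotating linear-array panoramic image.

    f    : principal distance along the CCD line (pixel units, assumed)
    dphi : angular resolution per column in radians (assumed)
    v0,u0: principal point offsets (assumed)
    """
    X, Y, Z = point
    rho = math.hypot(X, Y)        # horizontal distance to rotation axis
    phi = math.atan2(Y, X)        # azimuth of the point
    col = u0 + phi / dphi         # rotation angle maps to image column
    row = v0 - f * Z / rho        # central perspective along the line
    return col, row

c, r = panoramic_project((10.0, 0.0, 2.0))
```

The additional parameters mentioned in the abstract (e.g. eccentricity of the projection centre, non-uniform rotation) would enter as corrections to `col` and `row` in this model.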
Historical photographs contain a high density of information and are of great importance as sources in humanities research. In addition to the semantic indexing of historical images based on metadata, it is also possible to reconstruct geometric information about the depicted objects or the camera position at the time of the recording by employing photogrammetric methods. The approach presented here is intended to investigate (semi-)automated photogrammetric reconstruction methods for heterogeneous collections of historical (city) photographs and photographic documentation for use in the humanities, urban research and historical sciences. From a photogrammetric point of view, these images are mostly digitized photographs. For a photogrammetric evaluation, therefore, the characteristics of scanned analog images with mostly unknown camera geometry, missing or minimal object information and low radiometric and geometric resolution have to be considered. In addition, these photographs have not been created specifically for documentation purposes, so the focus of these images is often not on the object to be evaluated. The image repositories must therefore be subjected to a preprocessing analysis of their photogrammetric usability. Investigations are carried out on the basis of a repository containing historical images of the Kronentor ('crown gate') of the Dresden Zwinger. The initial step was to assess the quality and condition of the available images, determining their appropriateness for generating three-dimensional point clouds from historical photos using a structure-from-motion (SfM) evaluation. Then, the generated point clouds were assessed by comparing them with current measurement data of the same object.
Rapid technological progress has made mobile devices increasingly valuable for scientific research. This paper outlines a versatile camera-based water gauging method, implemented on smartphones, which is usable almost anywhere if 3D data is available for the targeted river section. After the water line is detected in the smartphone images, the image data is transferred into object space. Using the exterior orientation acquired by smartphone sensor fusion, a synthetic image representing the local situation is rendered from the 3D data. Performing image-to-geometry registration between the true smartphone camera image and the rendered synthetic image, the image parameters are refined by space resection. The water line is then transferred into object space by means of the underlying 3D information. The algorithm is implemented in the smartphone application "Open Water Levels", which can be used on both high-end and low-cost devices. In a comprehensive investigation, the methodology is evaluated, demonstrating both its potential and remaining issues.
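Transferring the detected water line into object space amounts to intersecting viewing rays through the water-line pixels with the 3D data. A minimal sketch, using a single plane as a stand-in for the local geometry (a real pipeline would intersect the full 3D model); the camera pose and plane values are hypothetical:

```python
import numpy as np

def ray_plane_intersect(origin, direction, plane_point, plane_normal):
    """Intersect a viewing ray with a plane approximating the local
    3D geometry; returns None for parallel rays or hits behind camera.
    """
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    n = np.asarray(plane_normal, float)
    denom = direction.dot(n)
    if abs(denom) < 1e-12:
        return None                       # ray parallel to plane
    t = (np.asarray(plane_point, float) - origin).dot(n) / denom
    if t < 0:
        return None                       # intersection behind camera
    return origin + t * direction

# Hypothetical camera 3 m above a horizontal water surface at z = 0,
# looking forward and down at 45 degrees.
p = ray_plane_intersect([0, 0, 3], [0, 1, -1], [0, 0, 0], [0, 0, 1])
```

Repeating this for every water-line pixel yields a 3D polyline whose height is the sought water level.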
Intensity values, which are registered by a terrestrial laser scanner (TLS) system for each point of a 3D point cloud in addition to its coordinates, are affected by the characteristics of the measured object and the parameters of the environment. The strength of the backscattered electromagnetic signal is influenced by the reflectivity of the scanned object surface, the incidence angle, the distance between laser scanner and object, and the atmospheric as well as system-specific settings of the TLS measurement. The entirety of all influences on the signal can be summarized in the laser range equation of Jelalian. For the investigations of this study, the named influences were divided into two groups: group 1 includes the surface-specific influences, while the second group contains all other influences. Correcting the intensity values for the effects of group 2 theoretically allows similar materials to be identified by similar intensity values in laser scanner point clouds. In this paper, the dependency between laser scanner intensity values and range is investigated on the basis of laser scanner data recorded with a Riegl LMS-Z420i. The results are compared with data from the phase-difference laser scanner Zoller+Fröhlich Imager 5006i.
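In Jelalian's laser range equation, the received power of an extended target falls off with the square of the range, which suggests a simple range normalisation of raw intensities. The sketch below deliberately ignores incidence-angle and atmospheric terms (the group-1 effects), and the intensity and range values are purely illustrative:

```python
import numpy as np

def range_correct_intensity(intensity, ranges, ref_range=10.0):
    """Normalise raw TLS intensities to a common reference range,
    assuming the 1/R^2 range dependence of the laser range equation.
    Incidence angle and atmospheric attenuation are not corrected here.
    """
    intensity = np.asarray(intensity, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    return intensity * (ranges / ref_range) ** 2

# Two returns from the same (hypothetical) material at different ranges
# should map to similar corrected values.
raw = np.array([1000.0, 250.0])   # raw intensities (arbitrary units)
rng = np.array([10.0, 20.0])      # measured ranges in metres
corrected = range_correct_intensity(raw, rng, ref_range=10.0)
```

Note that real scanners, in particular phase-difference instruments such as the Imager 5006i, apply internal signal processing, so the effective range dependence has to be determined empirically rather than assumed as exactly quadratic.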