The paper presents the results of testing a proposed image-based point-cloud measuring method for determining the geometric parameters of a railway track. The study was based on a configuration of digital images and a reference control network. A DSLR (Digital Single-Lens Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway track. The dense point clouds and the 3D mesh model were generated with two software systems, RealityCapture and PhotoScan, which implement different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. The final 3D mesh models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds produced by dense image matching were compared with the results of direct geodetic measurements. The RMS difference in both the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition for measurement and inspection of rail tracks (error m < 1 mm) specified in the Polish branch railway instruction Id-14 (D-75) and the European technical standard EN 13848-4:2011.
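The RMS difference reported above follows the standard formula RMSΔ = √(Σ Δᵢ² / n) over the per-cross-section differences between the photogrammetric and the direct geodetic values. A minimal sketch of that comparison, using hypothetical gauge differences in millimetres (the values below are illustrative, not from the paper):

```python
import math

def rms_difference(deltas):
    """Root-mean-square of the differences between photogrammetric
    and direct geodetic measurements (here in millimetres)."""
    return math.sqrt(sum(d * d for d in deltas) / len(deltas))

# Hypothetical gauge differences (mm) at five defined cross-sections
gauge_deltas = [0.3, -0.2, 0.4, -0.5, 0.1]
rms = rms_difference(gauge_deltas)
print(round(rms, 3))  # → 0.332

# The Id-14 (D-75) / EN 13848-4:2011 condition cited in the abstract
assert rms < 1.0
```

The same function applies unchanged to the cant (vertical) differences; only the input series changes.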
The main focus of the presented study is a multi-variant accuracy assessment of photogrammetric 2D and 3D data collection, whose accuracy meets the appropriate technical requirements, based on a block of 858 digital images (4.6 cm ground sample distance) acquired by a Trimble® UX5 unmanned aircraft system equipped with a Sony NEX-5T compact system camera. All 1418 well-defined ground control and check points were measured a posteriori with Global Navigation Satellite Systems (GNSS) using the real-time network method. High accuracy of the photogrammetric products was obtained through computations performed according to the proposed methodology, which assumes multi-variant image processing and extended error analysis. Blurred images were detected in preprocessing by applying the Laplacian operator and the Fourier transform, implemented in Python using the Open Source Computer Vision (OpenCV) library. The data processing was performed in the Pix4Dmapper suite, supported by additional software: in the bundle block adjustment (results verified with the RealityCapture and PhotoScan applications), for the digital surface model (CloudCompare), and for the georeferenced orthomosaic in GeoTIFF format (AutoCAD Civil 3D). The study proved the high accuracy and significant statistical reliability of unmanned aerial vehicle (UAV) 2D and 3D imaging surveys. The accuracy fulfills Polish and US technical requirements for planimetric and vertical accuracy (root mean square errors less than or equal to 0.10 m and 0.05 m, respectively).
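The Laplacian-based blur check mentioned above is commonly done by scoring each image with the variance of its Laplacian response: sharp edges produce strong positive/negative responses and hence high variance, while blurred images score low. A minimal dependency-free sketch of the idea (in practice the OpenCV one-liner `cv2.Laplacian(gray, cv2.CV_64F).var()` is used; the images and any threshold below are illustrative, not the paper's):

```python
def laplacian_variance(image):
    """Variance of the 4-neighbour Laplacian response of a grayscale
    image given as a list of rows of pixel intensities.
    Low variance suggests a blurred image."""
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Discrete Laplacian: sum of 4 neighbours minus 4x centre
            lap = (image[y - 1][x] + image[y + 1][x] +
                   image[y][x - 1] + image[y][x + 1] - 4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A hard edge scores far higher than a smooth gradient of equal range
sharp = [[0, 0, 255, 255]] * 4
smooth = [[0, 85, 170, 255]] * 4
print(laplacian_variance(sharp) > laplacian_variance(smooth))  # → True
```

In a preprocessing pipeline, images whose score falls below a chosen threshold would be flagged as blurred and excluded from the block before bundle adjustment.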
Abstract. Image-based point clouds generated from multiple, differently oriented photos enable 3D object reconstruction in a wide spectrum of close-range applications. The paper presents the results of testing the accuracy of image-based point clouds generated under disadvantageous conditions of digital photogrammetric data processing. The subject of the study was an elongated object, i.e. a horizontal and rectilinear section of railway track. A DSLR Nikon D5100 camera (16 MP), equipped with a zoom lens (f = 18–55 mm), was used to acquire a block of terrestrial convergent and highly oblique photos at different scales, with full longitudinal overlap. The point clouds generated from the digital images, the automatic determination of the interior orientation parameters, the spatial orientation of the photos and the 3D distribution of discrete points were obtained using the successively tested software: RealityCapture, PhotoScan, VisualSFM+SURE and iWitness+SURE. The dense point clouds of the test object generated with the RealityCapture and PhotoScan applications were filtered using the MeshLab application. The geometric parameters of the test object were determined by means of CloudCompare software. Even under disadvantageous conditions of photogrammetric digital data processing, image-based dense point clouds make it possible to determine the geometric parameters of a close-range elongated object with high accuracy (mXYZ < 1 mm).
<p><strong>Abstract.</strong> The continuous development of sensors, methods and technologies in modern digital photogrammetry requires testing the quality and accuracy of software, processing workflows and products. The paper presents a new test field for performance analysis of software processing and accuracy assessment of photogrammetric 2D and 3D data collection, mapping, 3D object reconstruction and modeling based on low-altitude imagery, with particular regard to unmanned aerial vehicle imagery. The first experiment was carried out using images captured by a Phase One iXU-RS 1000 medium-format aerial digital camera and a Light Detection and Ranging (LiDAR) point cloud acquired by a RIEGL LMS-Q680i airborne laser scanner. The complex digital processing was performed in the Agisoft Metashape package. A subblock of 169 images and 16 signalized ground points, measured by Global Navigation Satellite Systems in the WGS 84 coordinate system using the Real-Time Network method, was adopted in the preliminary investigations. The root mean square error RMSE<sub>XYZ</sub> on check points in the bundle block adjustment was equal to 0.032&thinsp;m. Vertical deviations between the digital elevation model and the LiDAR point cloud fall within the range from &minus;0.020&thinsp;m to 0.020&thinsp;m, which is consistent with the RIEGL LMS-Q680i accuracy and precision. A georeferenced orthomosaic was generated with a ground sampling distance (GSD) of 0.020&thinsp;m, the same as the GSD of the input images. The high accuracy of the obtained processing results is related to the accuracy of the initial data, and it proves the usefulness of the Kortowo test field.</p>
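The check-point RMSE<sub>XYZ</sub> quoted above is conventionally the root mean square of the 3D (Euclidean) residuals between the adjusted coordinates and the independently surveyed GNSS coordinates. A minimal sketch of that computation (the coordinate values below are hypothetical, not the paper's check points):

```python
import math

def rmse_xyz(measured, reference):
    """3D root-mean-square error over check points: RMS of the Euclidean
    distances between adjusted and GNSS-surveyed coordinates (metres)."""
    sq = [(mx - rx) ** 2 + (my - ry) ** 2 + (mz - rz) ** 2
          for (mx, my, mz), (rx, ry, rz) in zip(measured, reference)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical check-point coordinates (metres)
adjusted = [(100.012, 200.005, 50.020), (150.000, 250.018, 55.008)]
gnss = [(100.000, 200.000, 50.000), (150.010, 250.000, 55.000)]
print(round(rmse_xyz(adjusted, gnss), 3))  # → 0.023
```

The per-axis RMSEs used for planimetric and vertical accuracy checks are computed the same way, restricting the squared residual to the relevant coordinate components.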