<p><strong>Abstract.</strong> In recent years, the application of artificial intelligence (Machine Learning and Deep Learning methods) to the classification of 3D point clouds has become an important task in modern 3D documentation and modelling applications. The identification of proper geometric and radiometric features is fundamental to classify 2D/3D data correctly. While many studies have been conducted in the geospatial field, the cultural heritage sector is still partly unexplored. In this paper we analyse the efficacy of geometric covariance features as a support for the classification of Cultural Heritage point clouds. To analyse the impact of the different features, calculated on spherical neighbourhoods at various radius sizes, we present results obtained on four different heritage case studies using different feature configurations.</p>
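Covariance features of this kind are commonly derived from the eigenvalues of the 3D covariance matrix of a spherical neighbourhood. The exact feature set used in the paper is not listed in the abstract, so the following is only a minimal illustrative sketch (function name and the choice of linearity/planarity/sphericity are ours):

```python
import numpy as np

def covariance_features(points, center, radius):
    """Eigenvalue-based covariance features of a spherical neighbourhood.

    points: (N, 3) array; center: (3,) query point; radius: sphere radius.
    Returns linearity, planarity and sphericity computed from the sorted
    eigenvalues l1 >= l2 >= l3 of the local 3x3 covariance matrix.
    """
    d = np.linalg.norm(points - center, axis=1)
    nbrs = points[d <= radius]                 # points inside the sphere
    cov = np.cov(nbrs.T)                       # 3x3 covariance matrix
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity
```

On a perfectly linear neighbourhood linearity approaches 1, while on a planar patch planarity dominates; varying the radius changes which of these local behaviours prevails.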
Augmented Reality (AR) is already transforming many fields, from medical applications to industry, entertainment and heritage. In its most common form, AR expands reality with virtual 3D elements, providing users with an enhanced and enriched experience of their surroundings. Until now, most research has focused on techniques based on markers or on GNSS/INS positioning. These approaches require either the preparation of the scene or a strong satellite signal to work properly. In this paper, we investigate the use of visual-based methods, i.e., methods that exploit distinctive features of the scene estimated with Visual Simultaneous Localization and Mapping (V-SLAM) algorithms, to determine and track the user's position and attitude. The detected features, which encode the visual appearance of the scene, can be saved and later used to track the user in successive AR sessions. Existing AR frameworks such as Google ARCore, Apple ARKit and Unity AR Foundation have recently introduced visual-based localization, but they mainly target small scenarios. We propose a new Mobile Augmented Reality (MAR) methodology that exploits OpenVSLAM to extend the application range of Unity AR Foundation and better handle large-scale environments. The proposed methodology is successfully tested in both controlled and real-case large heritage scenarios. Results are also available in this video: https://youtu.be/Q7VybmiWIuI.
Figure 1. The three large-scale scenarios used in the paper and the AR results based on the markerless smartphone solution: (a) historical photographs of the city of Trento, (b) the remains of the underground Roman city in Trento and (c) the pile-dwelling site of Fiavè.
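A full V-SLAM relocalization pipeline is far beyond a short snippet, but the core idea of saving scene descriptors and re-matching them in a later session can be sketched as nearest-neighbour matching with a ratio test. This is a strong simplification of what a V-SLAM system does internally; names and the descriptor format are illustrative:

```python
import numpy as np

def match_descriptors(saved, query, ratio=0.8):
    """Nearest-neighbour matching with a ratio test.

    saved: (M, D) descriptors stored from a previous session's map.
    query: (N, D) descriptors extracted in the current session.
    Returns a list of (query_idx, map_idx) putative correspondences,
    keeping only matches whose best distance is clearly smaller than
    the second-best (i.e. unambiguous matches).
    """
    matches = []
    for i, q in enumerate(query):
        dists = np.linalg.norm(saved - q, axis=1)
        j, k = np.argsort(dists)[:2]          # best and second-best map entry
        if dists[j] < ratio * dists[k]:       # accept unambiguous matches only
            matches.append((i, j))
    return matches
```

In a real pipeline the resulting 2D-3D correspondences would then feed a pose solver (e.g. PnP with RANSAC) to recover the user's position and attitude.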
<p><strong>Abstract.</strong> In recent years we have witnessed an increasing quality (and quantity) of video streams and a growing capability of SLAM-based methods to derive 3D data from video. Video sequences can be easily acquired by non-expert surveyors and possibly used for 3D documentation purposes. The aim of the paper is to evaluate the possibility of performing 3D reconstructions of heritage scenarios using videos ("videogrammetry"), e.g. acquired with smartphones. Video frames are extracted from the sequence using a fixed-time interval and two advanced methods. Frames are then processed applying automated image orientation / Structure from Motion (SfM) and dense image matching / Multi-View Stereo (MVS) methods. The obtained 3D dense point clouds are then visually validated as well as compared with photogrammetric ground truth acquired with a reflex camera, or assessed by analysing the noise of the 3D data on flat surfaces.</p>
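The two advanced frame-selection methods are not detailed in the abstract; the fixed-time-interval baseline, however, is simple to sketch (hypothetical helper, names are ours):

```python
def fixed_interval_frames(n_frames, fps, interval_s):
    """Indices of frames sampled every `interval_s` seconds from a video.

    n_frames: total number of frames in the sequence.
    fps:      frames per second of the video.
    Returns the list of selected frame indices, starting from frame 0.
    """
    step = max(1, round(fps * interval_s))   # frames between two extractions
    return list(range(0, n_frames, step))
```

For a 30 fps video sampled once per second, this keeps every 30th frame; the extracted frames are then passed to the SfM/MVS pipeline like ordinary photographs.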
The paper presents an efficient photogrammetric workflow to improve the 3D reconstruction of scenes surveyed by integrating terrestrial and Unmanned Aerial Vehicle (UAV) images. In recent years, the integration of these kinds of images has shown clear advantages for the complete and detailed 3D representation of large and complex scenarios. Nevertheless, their photogrammetric integration often raises several issues in the image orientation and dense 3D reconstruction processes. Noisy and erroneous 3D reconstructions are the typical result of inaccurate orientation. In this work, we propose an automatic filtering procedure which works at the sparse point cloud level and takes advantage of photogrammetric quality features. The filtering step removes low-quality 3D tie points before refining the image orientation in a new adjustment and generating the final dense point cloud. Our method generalizes to many datasets, as it employs statistical analyses of quality feature distributions to identify suitable filtering thresholds. Reported results show the effectiveness and reliability of the method, verified using both internal and external quality checks, as well as qualitative visual comparisons. We made the filtering tool publicly available on GitHub.
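The abstract does not list the exact quality features or statistics used. As an illustration of data-driven thresholding, a robust median + MAD rule on per-point reprojection error, combined with a minimum ray count, might look like this (feature choice and parameters are our assumptions, not the paper's):

```python
import numpy as np

def filter_tie_points(reproj_err, multiplicity, k=3.0, min_rays=3):
    """Flag low-quality 3D tie points from quality-feature distributions.

    reproj_err:   (N,) mean reprojection error per tie point (pixels).
    multiplicity: (N,) number of images observing each tie point.
    The error threshold is derived from the data itself (median + k * MAD),
    so no dataset-specific tuning is needed. Returns a boolean keep-mask.
    """
    err = np.asarray(reproj_err, dtype=float)
    med = np.median(err)
    mad = np.median(np.abs(err - med))        # robust spread estimate
    threshold = med + k * mad
    return (err <= threshold) & (np.asarray(multiplicity) >= min_rays)
```

Because the threshold adapts to each dataset's error distribution, the same rule can be reused across surveys with very different image quality.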
<p><strong>Abstract.</strong> The paper presents an innovative approach for improving the orientation results when terrestrial and UAV images are jointly processed. With existing approaches, the processing of images coming from different platforms and sensors often leads to noisy and inaccurate 3D reconstructions, due to the different nature and properties of the acquired images. In this work, a photogrammetric pipeline is proposed to filter and remove badly computed tie points according to some quality feature indicators. A completely automatic procedure has been developed to filter the sparse point cloud, in order to improve the orientation results before computing the dense point cloud. We report tests and results on a dataset of about 140 images (Modena cathedral, Italy). The effectiveness of the filtering procedure was verified using internal quality indicators, external checks (ground truth data) and qualitative visual analyses.</p>
<p><strong>Abstract.</strong> This work presents an extended photogrammetric pipeline aimed at improving 3D reconstruction results. Standard photogrammetric pipelines can produce noisy 3D data, especially when images are acquired with various sensors featuring different properties. In this paper, we propose an automatic filtering procedure based on geometric features computed on the sparse point cloud created within the bundle adjustment phase. Bad 3D tie points and outliers are detected and removed through micro- and macro-cluster analyses. Clusters are built according to the prevalent dimensionality class (1D, 2D, 3D) assigned to low-entropy points, corresponding to the main linear, planar or scattered local behaviour of the point cloud. While the macro-cluster analysis removes small-sized clusters and high-entropy points, the micro-cluster investigation uses covariance features to verify the inner coherence of each point with its assigned class. Results on heritage scenarios are presented and discussed.</p>
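The dimensionality classes and entropy mentioned above are standard eigenvalue-based measures. A minimal sketch of how a point could be assigned a 1D/2D/3D label together with an ambiguity (entropy) score follows; the formulation is assumed from common practice, not taken from the paper:

```python
import numpy as np

def dimensionality_and_entropy(eigvals):
    """Dimensionality class (1D/2D/3D) and Shannon entropy of a point.

    eigvals: the three covariance eigenvalues l1 >= l2 >= l3 > 0 of a
    point's local neighbourhood. The class follows the largest of the
    linearity / planarity / sphericity measures; a low entropy means the
    class assignment is unambiguous.
    """
    l1, l2, l3 = eigvals
    a1d = (l1 - l2) / l1        # linear behaviour
    a2d = (l2 - l3) / l1        # planar behaviour
    a3d = l3 / l1               # scattered behaviour
    probs = np.array([a1d, a2d, a3d])                   # sums to 1
    cls = int(np.argmax(probs)) + 1                     # 1, 2 or 3
    entropy = -np.sum(probs * np.log(probs + 1e-12))    # Shannon entropy
    return cls, entropy
```

Points with high entropy (no clearly dominant behaviour) are the ones a macro-cluster analysis of this kind would discard.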
Among existing Cultural Heritage settings, Underground Built Heritage (UBH) represents a peculiar case. The scarcity or lack of knowledge and documentation of these spaces frequently limits their proper management, exploitation and valorization. When mapping these environments for documentation purposes, the primary need is to achieve a complete, reliable and adequate representation of the built spaces and their geometry. Terrestrial laser scanners have been widely employed for this task, although the procedure is generally time-consuming and often lacks color information. Mobile Mapping Systems (MMSs) are nowadays fascinating and promising technologies for mapping underground structures, speeding up acquisition times. In this paper, mapping experiences (with two commercial tools and an in-house prototype) in UBH settings are presented, testing different handheld mobile solutions to guarantee an accurate and reliable 3D digitization. Tests were performed in the selected case study of the Camerano Caves (Italy), characterized by volumetric complexity, poor lighting conditions and difficult accessibility. The aim of this research activity is not only to show the differences among the technological instruments used for 3D surveying, but rather to discuss the pros and cons of the systems, providing the community with best practices and rules for 3D data collection with handheld mobile systems. The experiments deliver promising results when compared with terrestrial laser scanning (TLS) data.
Mobile and handheld mapping systems are nowadays widely used as fast and cost-effective data acquisition systems for 3D reconstruction purposes. While most research and commercial systems are based on active sensors, solutions employing only cameras and photogrammetry are attracting more and more interest due to their significantly lower cost, size and power consumption. In this work we propose an ARM-based, low-cost and lightweight stereo vision mobile mapping system based on a Visual Simultaneous Localization And Mapping (V-SLAM) algorithm. The prototype system, named GuPho (Guided Photogrammetric System), also integrates an in-house guidance system which enables optimized image acquisition, robust management of the cameras, and feedback on positioning and acquisition speed. The presented results show the effectiveness of the developed prototype in mapping large scenarios, preventing motion blur, robustly controlling camera exposure and achieving accurate 3D results.
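The guidance logic of GuPho is not detailed in the abstract. As a hypothetical illustration of how speed feedback can prevent motion blur, one can bound the platform speed by the camera exposure time and the ground sampling distance:

```python
def max_speed_no_blur(gsd_m, exposure_s, max_blur_px=1.0):
    """Maximum platform speed (m/s) that keeps motion blur acceptable.

    gsd_m:       ground sampling distance, metres per pixel.
    exposure_s:  camera exposure time in seconds.
    max_blur_px: tolerated blur, in pixels, during one exposure.
    During an exposure the camera moves v * exposure_s metres, i.e.
    v * exposure_s / gsd_m pixels; bounding that by max_blur_px gives v.
    """
    return max_blur_px * gsd_m / exposure_s
```

With a 2 mm/pixel GSD and a 1/500 s exposure, the operator would need to stay below 1 m/s to keep blur within one pixel; a guidance system can warn the operator, or adapt the exposure, when this limit is exceeded.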