<p><strong>Abstract.</strong> In recent years, the application of artificial intelligence (Machine Learning and Deep Learning methods) to the classification of 3D point clouds has become an important task in modern 3D documentation and modelling applications. The identification of proper geometric and radiometric features is fundamental to classify 2D/3D data correctly. While many studies have been conducted in the geospatial field, the cultural heritage sector is still partly unexplored. In this paper we analyse the efficacy of geometric covariance features as a support for the classification of Cultural Heritage point clouds. To analyse the impact of the different features calculated on spherical neighbourhoods at various radius sizes, we present results obtained on four different heritage case studies using different feature configurations.</p>
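The covariance features mentioned above are typically eigenvalue-based descriptors (e.g. linearity, planarity, sphericity) derived from the 3×3 covariance matrix of each point's spherical neighbourhood. The following is a minimal NumPy/SciPy sketch of that computation; the function name and the exact feature set are illustrative assumptions, not necessarily the configuration evaluated in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def covariance_features(points, radius):
    """Eigenvalue-based covariance features (linearity, planarity,
    sphericity) per point, over a spherical neighbourhood of given radius.

    points: (N, 3) array of XYZ coordinates.
    Returns an (N, 3) array; rows with too few neighbours are NaN.
    """
    tree = cKDTree(points)
    feats = np.full((len(points), 3), np.nan)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 3:
            continue  # not enough neighbours for a stable covariance
        cov = np.cov(points[idx].T)            # 3x3 covariance matrix
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
        if l1 <= 0:
            continue
        feats[i] = [(l1 - l2) / l1,            # linearity
                    (l2 - l3) / l1,            # planarity
                    l3 / l1]                   # sphericity
    return feats
```

Calling this function with several radius values yields the kind of multi-scale feature configurations whose impact the paper compares across case studies.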
<p><strong>Abstract.</strong> Augmented Reality (AR) is already transforming many fields, from medical applications to industry, entertainment and heritage. In its most common form, AR expands reality with virtual 3D elements, providing users with an enhanced and enriched experience of their surroundings. Until now, most research has focused on techniques based on markers or on GNSS/INS positioning. These approaches require either the preparation of the scene or a strong satellite signal to work properly. In this paper, we investigate the use of visual-based methods, i.e., methods that exploit distinctive features of the scene estimated with Visual Simultaneous Localization and Mapping (V-SLAM) algorithms, to determine and track the user's position and attitude. The detected features, which encode the visual appearance of the scene, can be saved and later used to track the user in subsequent AR sessions. Existing AR frameworks such as Google ARCore, Apple ARKit and Unity AR Foundation have recently introduced visual-based localization, but they mainly target small scenarios. We propose a new Mobile Augmented Reality (MAR) methodology that exploits OpenVSLAM to extend the application range of Unity AR Foundation and better handle large-scale environments. The proposed methodology is successfully tested in both controlled and real-case large heritage scenarios. Results are also available in this video: https://youtu.be/Q7VybmiWIuI.</p>
Figure 1. The three large-scale scenarios used in the paper and the AR results based on the markerless smartphone solution: (a) historical photographs of the city of Trento, (b) the remains of the underground Roman city in Trento and (c) the pile-dwelling site of Fiavè.
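Conceptually, reusing saved features to track a user in a later session amounts to matching the current frame's local descriptors against the stored map and solving a Perspective-n-Point problem. The OpenCV sketch below illustrates this idea under our own simplifying assumptions (ORB descriptors, brute-force matching); it is not the OpenVSLAM or AR Foundation code path, and all names are ours.

```python
import cv2
import numpy as np

def relocalize(frame_gray, map_descriptors, map_points_3d, K):
    """Estimate the camera pose with respect to a previously built map.

    frame_gray:      current camera frame (grayscale).
    map_descriptors: ORB descriptors of mapped landmarks, (N, 32) uint8.
    map_points_3d:   corresponding 3D landmark coordinates, (N, 3).
    K:               3x3 camera intrinsic matrix.
    Returns (rvec, tvec) on success, None otherwise.
    """
    orb = cv2.ORB_create(2000)
    kps, descs = orb.detectAndCompute(frame_gray, None)
    if descs is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descs, map_descriptors)
    if len(matches) < 10:
        return None  # too few 2D-3D correspondences for a reliable pose
    pts_2d = np.float32([kps[m.queryIdx].pt for m in matches])
    pts_3d = np.float32([map_points_3d[m.trainIdx] for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts_3d, pts_2d, K, None)
    return (rvec, tvec) if ok else None
```

In a full V-SLAM pipeline this pose estimate would then be refined and tracked frame to frame; the sketch only shows the initial relocalization against the stored map.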
<p><strong>Abstract.</strong> In recent years we have witnessed an increasing quality (and quantity) of video streams and a growing capability of SLAM-based methods to derive 3D data from video. Video sequences can be easily acquired by non-expert surveyors and possibly used for 3D documentation purposes. The aim of the paper is to evaluate the possibility of performing 3D reconstructions of heritage scenarios using videos ("videogrammetry"), e.g. acquired with smartphones. Video frames are extracted from the sequence using a fixed-time interval and two advanced methods. Frames are then processed applying automated image orientation / Structure from Motion (SfM) and dense image matching / Multi-View Stereo (MVS) methods. The obtained dense 3D point clouds are then visually validated as well as compared with photogrammetric ground truth acquired with a reflex camera, or assessed by analysing the 3D data's noise on flat surfaces.</p>
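As a concrete illustration of the frame-extraction step, the sketch below pulls one frame per fixed-time interval with OpenCV and discards blurred frames via the variance of the Laplacian; the sharpness test is a simple stand-in of our own, not one of the two advanced selection methods evaluated in the paper.

```python
import cv2

def extract_frames(video_path, interval_s=1.0, blur_threshold=100.0):
    """Extract one frame every `interval_s` seconds, skipping blurry ones.

    A frame is kept only if the variance of its Laplacian (a common
    sharpness measure) exceeds `blur_threshold`.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is missing
    step = max(1, int(round(fps * interval_s)))
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if cv2.Laplacian(gray, cv2.CV_64F).var() >= blur_threshold:
                frames.append(frame)
        i += 1
    cap.release()
    return frames
```

The retained frames can then be fed directly to a standard SfM/MVS pipeline to produce the dense point clouds evaluated in the paper.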