<p><strong>Abstract.</strong> In this paper, we propose a workflow for recreating places of cultural heritage in Virtual Reality (VR) using structure from motion (SfM) photogrammetry. The unique texture of heritage places makes them ideal for full photogrammetric capture. An optimized model is created from the photogrammetric data so that it is small enough to render in a real-time environment. The optimized model, combined with mesh maps (texture maps, normal maps, etc.), closely resembles the original high-detail model. Capturing a whole space makes it possible to create a VR experience with six degrees of freedom (6DoF) that allows the user to explore the historic place. Such experiences can bring cultural heritage to people when a site is endangered or too remote to visit. The workflow described in this paper is demonstrated with the case study of Myin-pya-gu, an 11th century temple in Bagan, Myanmar.</p>
<p><strong>Abstract.</strong> Accessibility plays a central role among the aspects that contribute to the conservation of Cultural Heritage sites. Seismic instability, fragility of the artefacts, conflicts, deterioration, natural disasters, climate change and visitors’ impact are only some of the possible causes that might render a heritage site inaccessible to both researchers and visitors.</p><p>The increasing potential of Information and Communication Technologies (ICT) in the conservation field has resulted in the development of Augmented and Virtual Reality (AR and VR) experiences. These experiences can be very effective at conveying the visual character of a site, but they can also improve the understanding of a site and even become analytic research tools.</p><p>This paper presents an inaccessible Buddhist temple in the Myanmar city of Bagan as a case study for the realization of a VR experience that aims to provide access to knowledge and thereby a better understanding of the site's cultural value. In order to evaluate the effectiveness of VR for this purpose, a user study was conducted and its results are reported.</p>
<p><strong>Abstract.</strong> Digital tools have brought new techniques for recording and fabrication, allowing for the augmentation of traditional processes in repairs and restorations. Traditional mechanical and chemical techniques require physical contact with the artefacts of interest, while LiDAR scanning, photogrammetry and structured light scanning provide non-invasive solutions. Analog recording technologies have always informed fabrication processes, but contemporary digital recording can produce complete geometry for fabrication. In this paper, we discuss recording and fabrication technologies and how they have been applied for heritage conservation.</p>
<p><strong>Abstract.</strong> There are multiple conservation challenges related to decorated surfaces, most of which are intimately linked to their documentation. This paper draws on wall paintings as representative of decorated surfaces, arguing the importance of considering their four-dimensionality &ndash; space and time &ndash; in their conservation and documentation. To that end, we propose the use of a Building Information Model (BIM) as a platform to consolidate this approach together with the various documentation techniques used for the conservation and management of wall paintings. This paper exemplifies this method with a case study of Myin-pya-gu Temple in Old Bagan (Myanmar): firstly, reviewing the different techniques used to document the temple and wall paintings (photography, photogrammetry, laser scanning, reflectance transformation imaging (RTI)); and secondly, discussing the data integration within a BIM environment. This position proposes a transition from a two-dimensional to a four-dimensional approach in wall painting conservation, potentially opening up possibilities for documentation, monitoring, simulation, and dissemination. Ultimately, the case study of Myin-pya-gu has the objective of introducing the use of HBIM as a platform for consolidating the documentation of decorated surfaces.</p>
The use of digital documentation techniques has led to an increase in opportunities for using documentation data for valorization purposes, in addition to technical purposes. Likewise, building information models (BIMs) made from these data sets hold valuable information that can be as effective for public education as it is for rehabilitation. A BIM can reveal the elements of a building, as well as the different stages of a building over time. Valorizing this information increases the possibility for public engagement and interest in a heritage place. Digital data sets were leveraged by the Carleton Immersive Media Studio (CIMS) for parts of a virtual tour of the Senate of Canada. For the tour, workflows involving four different programs were explored to determine an efficient and effective way to leverage the existing documentation data to create informative and visually enticing animations for public dissemination: Autodesk Revit, Enscape, Autodesk 3ds Max, and Bentley Pointools. The explored workflows involve animations of point clouds, BIMs, and a combination of the two.
This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals, the datasets can be quite large and time-consuming to process. This paper proposes a method to calculate the image overlap and filter out images in order to reduce large block sizes and speed up photogrammetric processing. The Python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Using user input, the algorithm filters out images that are unneeded for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
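The overlap calculation that the abstract describes can be approximated as follows. This is a minimal sketch, assuming a nadir-pointing camera and hypothetical camera parameters; the paper's actual algorithm reads altitude, focal length, and sensor dimensions from each image's metadata.

```python
import math

def ground_footprint(altitude_m, sensor_dim_mm, focal_length_mm):
    """Ground coverage (in metres) along one sensor axis for a nadir photo,
    from the pinhole-camera similar-triangles relation."""
    return altitude_m * sensor_dim_mm / focal_length_mm

def overlap_fraction(footprint_m, spacing_m):
    """Overlap between consecutive footprints, given the centre-to-centre
    spacing between exposures (end overlap) or flight lines (side overlap)."""
    return max(0.0, 1.0 - spacing_m / footprint_m)

# Hypothetical grid flight: 60 m altitude, 13.2 mm sensor width, 8.8 mm lens.
footprint = ground_footprint(60, 13.2, 8.8)   # 90 m across-track coverage
end_overlap = overlap_fraction(footprint, 18)  # 18 m between exposures -> 0.8
```

With the overlap per image pair known, filtering reduces to keeping only every n-th image such that the remaining pairs still meet a user-specified minimum overlap (commonly around 80% end and 60% side overlap for SfM).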
<p><strong>Abstract.</strong> Colour fidelity is vital when documenting painted surfaces. The 2.5D nature of many painted surfaces makes orthophotos and digital surface models (DSMs) common products of the documentation process. This paper presents a workflow that combines photographic and photogrammetric methods to produce aligned colour and depth data (orthophotos and DSMs). First, two photogrammetric software packages (Agisoft PhotoScan and Capturing Reality's RealityCapture) were tested to determine whether they adjusted the colour data during the processing stages. It was found that PhotoScan can produce 16-bit orthophotos without manipulating the data; however, RealityCapture is currently limited to 8-bit results. When capturing a surface using photogrammetry, it is common to use the same data for colour and depth. The presented workflow, however, argues that better colour accuracy can be achieved by capturing the two datasets separately and combining them in photogrammetric software. The workflow is demonstrated through the documentation of an unnamed religious painting from the 17th century.</p>