<p><strong>Abstract.</strong> In this paper, we propose a workflow for recreating places of cultural heritage in Virtual Reality (VR) using structure from motion (SfM) photogrammetry. The unique texture of heritage places makes them ideal for full photogrammetric capture. An optimized model is created from the photogrammetric data so that it is small enough to render in a real-time environment. The optimized model, combined with mesh maps (texture maps, normal maps, etc.), closely resembles the original high-detail model. Capturing a whole space makes it possible to create a VR experience with six degrees of freedom (6DoF) that allows the user to explore the historic place. Such experiences can bring cultural heritage to people when a site is endangered or too remote to access. The workflow described in this paper is demonstrated with the case study of Myin-pya-gu, an 11th-century temple in Bagan, Myanmar.</p>
<p><strong>Abstract.</strong> Accessibility plays a central role among the aspects that contribute to the conservation of Cultural Heritage sites. Seismic instability, fragility of the artefacts, conflicts, deterioration, natural disasters, climate change and visitors’ impact are only some of the possible causes that might render a heritage site inaccessible to both researchers and visitors.</p><p>The growing potential of Information and Communication Technologies (ICT) in the conservation field has led to the development of Augmented and Virtual Reality (AR and VR) experiences. These can be very effective in conveying the visual experience, but they can also improve the understanding of a site and even become analytic research tools.</p><p>This paper presents an inaccessible Buddhist temple in the Myanmar city of Bagan as a case study for the realization of a VR experience that aims at providing access to knowledge and therefore a better understanding of the site’s cultural value. To evaluate the effectiveness of VR for this purpose, a user study was conducted, and its results are reported.</p>
Convolutional Neural Networks (CNNs) are complex systems. They are trained so that they can adapt their internal connections to recognize images, texts and more. It is both interesting and helpful to visualize the dynamics within such deep artificial neural networks so that people can understand how these networks learn and make predictions. In the field of scientific simulations, visualization tools like Paraview have long been utilized to provide insights and understanding. We present in situ TensorView to visualize the training and functioning of CNNs as if they were systems of scientific simulations. In situ TensorView is a loosely coupled, open, in situ visualization framework that provides multiple viewers to help users visualize and understand their networks. It leverages the co-processing capability of Paraview to provide real-time visualization during the training and predicting phases. This avoids heavy I/O overhead when visualizing large dynamic systems. Only a small number of lines of code are injected into the TensorFlow framework. The visualization can provide guidance for adjusting the architecture of networks or compressing pre-trained networks. We showcase visualizing the training of LeNet-5 and VGG16 using in situ TensorView. Index Terms: in situ visualization, convolutional neural networks, Paraview.
The purposes of this chapter are three-fold: to (a) review the research on 3D immersive and interactive technology (or virtual reality, VR) conducted for educational purposes, both in the earlier years of the technology and in more recent years, (b) discuss a few VR technology tools available today, and (c) describe three scenarios in science, mathematics, and language learning that demonstrate how current VR technology can be designed for education. In addition, the primary challenges of using 3D immersive and interactive technology in education are discussed along with future research directions. The intent of this chapter is to provide ideas and insights for researchers and designers who are interested in applying VR technology in education.