Abstract-Widespread adoption of smartphones and tablets has enabled people to multiplex their physical reality, in which they engage in face-to-face social interaction, with Web-based social networks and apps, whilst emerging 3D Web technologies hold promise for the emergence of networks of parallel 3D virtual environments. Although current technologies allow this multiplexing of physical reality and the 2D Web, a situation termed PolySocial Reality, the same cannot yet be achieved with 3D content. Cross Reality was proposed to address this issue; however, to date it has focused on fixed links between physical and virtual environments in closed lab settings, limiting investigation of the explorative and social aspects. This paper presents an architecture and implementation that address these shortcomings, using a tablet computer and the Pangolin virtual world viewer to provide a mobile interface to a corresponding 3D virtual environment. Motivation for this project stemmed from a desire to enable students to interact with existing virtual reconstructions of cultural heritage sites in tandem with exploration of the corresponding real locations, avoiding the adverse temporal separation that arises when the virtual content is experienced only in the classroom. The accuracy of GPS tracking emerged as a constraint on this style of interaction.
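As an illustration of the kind of coordinate mapping such a mobile interface relies on, the sketch below converts a GPS fix into local metres relative to a chosen origin so an avatar can shadow the tablet's real-world position. The function name, reference coordinates and equirectangular approximation are illustrative assumptions rather than details taken from the paper, but they make clear why GPS error translates directly into in-world positioning error.

```python
import math

# Illustrative sketch (not taken from the paper): map a GPS fix to local
# virtual-world coordinates so an avatar in the corresponding 3D environment
# can shadow the tablet's real-world position. The equirectangular
# approximation and the reference origin assume a small area such as a
# single heritage site.

EARTH_RADIUS_M = 6371000.0

def gps_to_region(lat, lon, origin_lat, origin_lon):
    """Convert a WGS84 fix to metres east/north of a chosen region origin."""
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    north = d_lat * EARTH_RADIUS_M
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(origin_lat))
    return east, north

# Hypothetical fix near a heritage site: any GPS error propagates
# one-to-one into the avatar's in-world position, which is why tracking
# accuracy constrains this style of interaction.
east, north = gps_to_region(56.3407, -2.7967, 56.3400, -2.7980)
print(f"avatar offset: east={east:.1f} m, north={north:.1f} m")
```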
Abstract-This paper reports experience in developing a parallel reality system which allows its user to observe and move around their real environment whilst wearing a stereoscopic 3D head-mounted display with video see-through capabilities. The user's position and gaze are tracked by an indoor positioning system and head tracker, allowing them to alternately view their real environment and an immersive virtual reality environment from the equivalent vantage point. In so doing, the challenge of the vacancy problem is addressed by lightening the cognitive load needed to switch between realities and to navigate the virtual environment. Evaluation of the usability, system performance and value of the system is undertaken in the context of a cultural heritage application; users are able to compare a reconstruction of an important 15th-century chapel with its present-day instantiation.
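A minimal sketch of the core mechanism described above, assuming the tracked pose is available each frame: the same pose drives the virtual camera, and a single toggle selects whether the rendered reconstruction or the see-through video is displayed, so both realities are always viewed from the equivalent vantage point. Class and method names are hypothetical, not taken from the system itself.

```python
from dataclasses import dataclass

# Hypothetical sketch of the idea behind a parallel reality display: the
# tracked head pose drives a virtual camera, and one toggle switches which
# image is shown, so real and reconstructed views share a vantage point.

@dataclass
class Pose:
    x: float          # position from the indoor positioning system
    y: float
    z: float
    yaw: float        # orientation from the head tracker
    pitch: float
    roll: float

class ParallelRealityView:
    def __init__(self, renderer, camera_feed):
        self.renderer = renderer        # renders the virtual reconstruction
        self.camera_feed = camera_feed  # video see-through frames from the HMD
        self.show_virtual = False

    def toggle_reality(self):
        """Switch between the real (video) and virtual views without moving."""
        self.show_virtual = not self.show_virtual

    def frame(self, pose: Pose):
        # Keep the virtual camera locked to the tracked head pose so the
        # user never has to re-navigate when they switch realities.
        self.renderer.set_camera(pose.x, pose.y, pose.z,
                                 pose.yaw, pose.pitch, pose.roll)
        return self.renderer.render() if self.show_virtual else self.camera_feed.latest()
```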
Abstract-Continuing advances and reduced costs in computational power, graphics processors and network bandwidth have led to 3D immersive multi-user virtual worlds becoming increasingly accessible while offering an improved and engaging Quality of Experience. At the same time the functionality of the World Wide Web continues to expand alongside the computing infrastructure it runs on, and pages can now routinely accommodate many forms of interactive multimedia components as standard features, streaming video for example. Inevitably there is an emerging expectation that the Web will expand further to incorporate immersive 3D environments. This is exciting because humans are well adapted to operating in 3D environments, and it is challenging because existing software and skill sets are focused around competencies in 2D Web applications. Open Simulator (OpenSim) is a freely available open-source toolkit that empowers users to create and deploy their own 3D environments in the same way that anyone can create and deploy a Web site. Its characteristics can be seen as a set of references for how the 3D Web could be instantiated. This paper describes experiments carried out with OpenSim to better understand network and system issues, and presents experience in using OpenSim to develop and deliver applications for education and cultural heritage. Evaluation is based upon observations of these applications in use and measurements of systems both in the lab and in the wild.
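By way of illustration of what a simple network measurement against an OpenSim region might look like, the probe below times repeated TCP connections to a region's HTTP port (9000 by default). The host, port and sample count are assumptions for a local test region; this is a generic latency probe, not the measurement harness used in the paper.

```python
import socket
import statistics
import time

# Illustrative probe: time repeated TCP connections to an OpenSim region's
# HTTP listener to get a rough view of connection latency between a viewer
# host and the simulator. Host, port and sample count are assumptions.

HOST, PORT, SAMPLES = "127.0.0.1", 9000, 20

def connect_time_ms(host, port, timeout=2.0):
    """Return the time taken to open (and close) one TCP connection, in ms."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

samples = sorted(connect_time_ms(HOST, PORT) for _ in range(SAMPLES))
print(f"median connect latency: {statistics.median(samples):.2f} ms")
print(f"95th percentile:        {samples[int(0.95 * SAMPLES) - 1]:.2f} ms")
```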
We present the cross reality system Mirrorshades, which enables a user to be present and aware of both a virtual reality environment and the real world at the same time. In so doing, the challenge of the vacancy problem is addressed by lightening the cognitive load needed to switch between realities and to navigate the virtual environment. We present a case study in the context of a cultural heritage application wherein users are able to compare a reconstruction of an important 15th-century chapel with its present-day instantiation whilst walking through them.
This paper discusses how a digital reconstruction of the Scottish capital of Edinburgh around the year 1544 was created and communicated to the public. It explores the development and reception of the Virtual Time Binoculars platform, a system for delivering virtual reality heritage apps suitable for use on most smartphones. The Virtual Time Binoculars system is placed in the context of earlier research into mobile heritage experiences, including Situated Simulations (G. Liestøl, 2009) and the Mirrorshades Project (C. Davies et al., 2014). The eventual virtual reality app is compared with other means of viewing the historic reconstruction, including online videos and an interactive museum and educational exhibit. The paper outlines the historical and technical challenges of modelling Edinburgh's sixteenth-century cityscape, and of distributing the eventual reconstruction in an immersive fashion that works safely and effectively on smartphones on the streets of the modern city. Finally, it considers the implications of this project for future developments in mobile exploration of historic scenes.
St Andrews is a town with a rich history. It was the religious centre of Scotland for close to a millennium. The Cathedral was strongly associated with the Wars of Independence and Robert the Bruce. The castle was the scene of a pivotal revolt leading to the Reformation and hosted the first Scottish Protestant congregation. St Salvator's Chapel was the religious centre of Scotland's first university. This paper presents work which explores using mobile technologies to support investigation, learning and appreciation of the past. It builds on a tradition of world-class scholarship on the history of this important town and makes it available to school students, researchers and tourists using mobile technologies. From text-based quests, through mobile apps, to location-aware stereoscopic 3D experiences, the gamut of available commodity hardware is used to enable the past to be explored in new ways.