This paper addresses issues in the preservation of cultural heritage using Virtual Reality (VR) and Augmented Reality (AR) technologies in a cultural context. While VR/AR technologies are discussed, the focus is on 3D visualisation and 3D interaction modalities, illustrated through three demonstrators: two VR demonstrators (immersive and semi-immersive) and an AR demonstrator incorporating tangible user interfaces. To show the benefits of VR and AR technologies for studying and preserving cultural heritage, we investigated visualisation of and interaction with reconstructed underwater archaeological sites. The basic idea behind using VR and AR techniques is to offer archaeologists and the general public new insights into the reconstructed sites: archaeologists can study directly from within the virtual site, while the general public can immersively explore a realistic reconstruction. Both activities are based on the same VR engine but differ drastically in how they present information and exploit interaction modalities. The visualisation and interaction techniques developed for these demonstrators are the result of an ongoing dialogue between archaeological requirements and the technological solutions developed.
In many current teleoperation architectures, remote tasks are performed indirectly by a Human Operator (HO) by means of a virtual environment consisting of a virtual or symbolic representation of the remote site. To achieve virtual tasks, the interaction between the HO and the virtual representation is monitored; monitoring results are then translated into a sequence of instructions sent to the remote robot for actual execution. This paper focuses on different strategies designed to allow user-friendly operator interaction with the virtual representation in order to achieve complex remote tasks via the Internet. The use of active virtual guides to assist the HO in performing simple or complex tasks, with enhanced performance (speed, precision and safety), is also discussed. Techniques such as Virtual Reality (VR) and Augmented Reality (AR), combined with Internet-based programming facilities, are investigated as part of the proposed teleoperation system named ARITI (an acronym for Augmented Reality Interface for Telerobotic applications via Internet).
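The abstract above does not specify how ARITI's virtual guides are implemented, but the general idea of an active virtual guide can be illustrated by a common virtual-fixture scheme: the operator's raw command is attracted toward a guide path before being forwarded to the remote robot. The following is a minimal sketch under that assumption; the function names, the straight-segment guide, and the stiffness gain are all illustrative, not the actual ARITI design.

```python
# Sketch of an "active virtual guide" as a virtual fixture:
# the operator's commanded 2D position is pulled toward a straight
# guide segment before being sent to the remote robot.
# (Illustrative only; not the published ARITI implementation.)

def project_on_segment(p, a, b):
    """Orthogonal projection of point p onto segment [a, b] (2D tuples)."""
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
    t = max(0.0, min(1.0, t))  # clamp so the target stays on the segment
    return (ax + t * abx, ay + t * aby)

def apply_guide(p, a, b, stiffness=0.8):
    """Blend the raw command toward the guide; stiffness in [0, 1].

    stiffness = 0 leaves the command untouched, 1 snaps it onto the guide.
    """
    gx, gy = project_on_segment(p, a, b)
    return (p[0] + stiffness * (gx - p[0]),
            p[1] + stiffness * (gy - p[1]))
```

With the guide running from (0, 0) to (4, 0), a noisy command at (2.0, 1.0) is corrected to roughly (2.0, 0.2): the operator keeps authority along the path while deviations from it are damped, which is what yields the gains in speed, precision and safety mentioned above.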
Abstract—The objective of this work is to introduce Mixed Reality technologies into aquatic leisure activities. We have proposed a new device which is autonomous and easily transportable by one person. It can be easily installed, is equipped with GPS and wireless communication, and has positive buoyancy. The device will be used at the surface as well as underwater with a snorkel. Moreover, it is equipped with one camera (upgradable to more) pointing downwards. Augmented Reality, combining actual underwater images with 3D animated entities, is one of the preferred ways to use the device.
In this article, we propose an approach to introducing tailorability into the design of groupware, since existing approaches remain ambiguous about how to put it forward in CSCW systems. We first present a brief overview of approaches that deal with tailorability in this field. Then we draw on concepts and notions from each of them to integrate into an innovative, value-added, tailorable architecture. We discuss the purpose of integrating Internet technologies with software agents, putting this forward in the context of tailorable groupware design.
The visionary objective of this work is to "open to people connected to the Internet an access to the ocean depths, anytime, anywhere." Today these people can only perceive the changing surface of the sea from the shore, and know almost nothing of what is hidden beneath. If they could explore the seabed and become knowledgeable, they might eventually get involved in finding alternative solutions to our vital terrestrial problems: pollution, climate change, destruction of biodiversity and exhaustion of Earth's resources. The introduction of Mixed Reality and the Internet into aquatic activities constitutes a technological breakthrough compared with existing related technologies. Through the Internet, anyone, anywhere, at any moment will naturally be able to dive in real time using a Remotely Operated Vehicle (ROV) in the most remarkable sites around the world. The heart of this work is Mixed Reality. The main challenge is to achieve real-time display of a digital video stream to web users by mixing 3D entities (objects or pre-processed underwater terrain surfaces) with live 2D video collected in real time by a teleoperated ROV.
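The core mixing step described above (3D entities composited over live 2D video) comes down to alpha blending a rendered RGBA overlay onto each video frame. A real pipeline would do this per frame on the GPU; the per-pixel sketch below only illustrates the compositing rule itself, and all names are illustrative assumptions.

```python
# Sketch of the Mixed Reality compositing step: alpha-blend a rendered
# 3D overlay (RGBA, alpha in [0, 1]) onto one RGB frame of live video.
# (Illustrative per-pixel version; a real system does this on the GPU.)

def composite(frame, overlay):
    """frame: rows of (r, g, b) tuples; overlay: rows of (r, g, b, a) tuples.

    Returns a new image where each output pixel is
    a * overlay + (1 - a) * frame, the standard "over" operator
    for a pre-flattened opaque background.
    """
    out = []
    for frame_row, overlay_row in zip(frame, overlay):
        row = []
        for (r, g, b), (orr, og, ob, a) in zip(frame_row, overlay_row):
            row.append((round(a * orr + (1 - a) * r),
                        round(a * og + (1 - a) * g),
                        round(a * ob + (1 - a) * b)))
        out.append(row)
    return out
```

For example, a half-transparent red overlay pixel (200, 0, 0, 0.5) over a grey video pixel (100, 100, 100) yields (150, 50, 50), while fully transparent overlay pixels (a = 0) leave the live video untouched.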
Serious games are a promising approach to improving gait rehabilitation for people with gait disorders. Combined with a wearable augmented reality headset, serious games for gait rehabilitation in a clinical setting can be envisaged, allowing patients to move in a real environment while providing fun and feedback to enhance their motivation. This requires a method to obtain accurate information on the spatiotemporal gait parameters of the playing patient. To this end, we propose a new algorithm called HoloStep that computes spatiotemporal gait parameters using only the head pose provided by an augmented reality headset (HoloLens). It is based on the detection of peaks associated with the initial contact event, and uses a combination of locking distance, locking time and peak amplitude detection with custom thresholds for children with cerebral palsy (CP). The performance of HoloStep was compared, during a walking session at comfortable speed, to Zeni's reference algorithm, which is based on kinematics and a full 3D motion capture system. Our study included 62 children with CP, classified according to the Gross Motor Function Classification System (GMFCS) between levels I and III, and 13 healthy participants (HP). Metrics such as sensitivity, specificity, accuracy and precision for step detection with HoloStep were above 96%. The intraclass correlation coefficient between step lengths calculated with HoloStep and the reference was 0.92 (GMFCS I), 0.86 (GMFCS II/III) and 0.78 (HP). HoloStep demonstrated good performance when applied to a wide range of gait patterns, including children with CP using walking aids. These findings provide important insights for future gait interventions using augmented reality games for children with CP.
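The abstract names the ingredients of HoloStep (peak detection on the head trajectory, a locking time, a locking distance, and an amplitude threshold) without giving the algorithm itself. The sketch below shows how those ingredients typically combine in a step detector driven only by head pose; the function name, the exact rules and all threshold values are assumptions for illustration, not the published HoloStep parameters.

```python
# Sketch of peak-based step detection from headset head pose,
# in the spirit of HoloStep. Thresholds and rules are illustrative
# assumptions, not the published implementation.

def detect_steps(height, times, x, z,
                 min_amplitude=0.01,  # assumed peak amplitude threshold (m)
                 lock_time=0.25,      # assumed locking time (s)
                 lock_dist=0.15):     # assumed locking distance (m)
    """Return sample indices flagged as initial-contact peaks.

    height : vertical head position per frame (m)
    times  : timestamps (s)
    x, z   : horizontal head position (m), used for the locking distance
    """
    steps = []
    last_t = -1e9
    last_x = last_z = None
    valley = height[0]  # lowest head height since the last detected step
    for i in range(1, len(height) - 1):
        valley = min(valley, height[i])
        # candidate: local maximum of the vertical head trajectory
        if not (height[i - 1] < height[i] >= height[i + 1]):
            continue
        # amplitude check: peak must rise enough above the last valley
        if height[i] - valley < min_amplitude:
            continue
        # locking time: reject peaks too soon after the previous step
        if times[i] - last_t < lock_time:
            continue
        # locking distance: require forward progression since the last step
        if last_x is not None:
            d = ((x[i] - last_x) ** 2 + (z[i] - last_z) ** 2) ** 0.5
            if d < lock_dist:
                continue
        steps.append(i)
        last_t, last_x, last_z = times[i], x[i], z[i]
        valley = height[i]
    return steps
```

Run on a synthetic head trajectory (a small vertical bob at step frequency while walking forward), the detector returns one index per bob; the locking rules are what suppress the spurious double peaks that raw local-maximum detection would produce on real, noisier head motion.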