The present study aims (a) to translate and adapt the Igroup Presence Questionnaire (IPQ) to the Portuguese context (semantic equivalence, conceptual and content validity) and (b) to examine its psychometric properties (reliability and factorial validity). The sample consisted of 478 subjects (285 males and 193 females). The reliability of the factors ranged between 0.53 and 0.83. The confirmatory factor analysis (CFA), conducted on a three-variable model, produced a 14-item version of the IPQ-PT, allowing covariance between the residual errors of some items, as the best structural representation of the data analyzed. The fit indices obtained were χ²/df = 2.647, GFI = .948, CFI = .941, RMSEA = .059, and AIC = 254. These values indicate that the proposed Portuguese translation of the IPQ maintains the validity of the original, demonstrating it to be a robust questionnaire for measuring the sense of presence in virtual reality studies. It is therefore recommended for use in presence research with Portuguese samples.
The emergence of gestural interaction devices has prompted various studies on multimodal human-computer interaction aimed at improving user experience. However, there is a knowledge gap regarding the use of these devices to enhance learning. We present an exploratory study that analysed user experience with a multimodal immersive videogame prototype based on a Portuguese historical/cultural episode. Evaluation tests took place in high school environments and at public videogaming events. Two users were present simultaneously in the same virtual reality environment: one as the helmsman aboard Vasco da Gama's 15th-century Portuguese ship, the other as the mythical Adamastor stone giant at the Cape of Good Hope. The helmsman player wore a virtual reality headset to explore the environment, whereas the giant player used body motion to control the giant and observed the results on a screen, with no headset. This allowed a preliminary characterization of user experience, identifying challenges and the potential of these devices in multi-user virtual learning contexts. We also discuss the combined use of such devices towards the future development of similar systems, and its implications for improving learning through multimodal human-computer interaction.
The information is taken from normal hips and may not be directly applicable to the deformed hip. Nevertheless, understanding the normal anatomy and using those boundaries is a prerequisite for a surgeon to prevent mistakes during intra-articular joint-preserving hip surgical procedures.
The current proliferation of technology has given rise to new paradigms in the production and consumption of multimedia content. This paper proposes a multisensory 360° video editor that allows producers to edit such content with high levels of customization. This authoring tool supports the editing and visualization of 360° video, with the novelty of complementing the video with multiple stimuli such as audio, haptics, and olfactory cues. In addition to this multisensory feature, the authoring tool allows each stimulus to be customized individually to provide an optimal multisensory user experience. A usability evaluation revealed the pertinence of the editor: an effectiveness rate of 100% was verified, only one help request was made among the 10 participants, and efficiency was positive. Satisfaction-wise, results likewise revealed a high level of satisfaction, with an average score of 8.3 out of 10.
Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user's location and the time of day, with the relative rotational differences estimated from a gyroscope, compass, and accelerometer. The results show that our method can generate visually credible AR scenes with consistent shadows rendered from the recovered illumination.
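The Sun-position step described in this abstract can be sketched with a standard astronomical approximation. The following is a minimal illustration of the general technique, not the paper's actual implementation: the function name, the simple cosine declination formula, and the use of local solar time are all assumptions made for the sake of the example.

```python
import math

def solar_position(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth in degrees.

    lat_deg: observer latitude (positive north).
    day_of_year: 1..365.
    solar_hour: local solar time in hours (12.0 = solar noon).
    Uses a simple declination approximation; accuracy ~1 degree.
    """
    lat = math.radians(lat_deg)
    # Approximate solar declination for the given day of year.
    decl = math.radians(
        -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    )
    # Hour angle: the Sun moves 15 degrees per hour from solar noon.
    ha = math.radians(15.0 * (solar_hour - 12.0))
    # Elevation above the horizon.
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(ha))
    elev = math.asin(max(-1.0, min(1.0, sin_elev)))
    # Azimuth measured clockwise from north.
    cos_az = ((math.sin(decl) - math.sin(elev) * math.sin(lat))
              / (math.cos(elev) * math.cos(lat)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if ha > 0:  # afternoon: the Sun has crossed to the western sky
        az = 2 * math.pi - az
    return math.degrees(elev), math.degrees(az)
```

In an AR pipeline of this kind, the resulting elevation/azimuth pair would feed a directional light in the renderer, while the device gyroscope, compass, and accelerometer supply the rotational offset between the world frame and the camera frame.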