Research on virtual reality (VR) has studied users' experience of immersion, presence, simulator sickness, and learning effects. However, the momentary experience of exiting VR and transitioning back to the real world is not well understood. Do users become self-conscious of their actions upon exit? Are users nervous about their surroundings? Using explicitation interviews, we explore the moment of exit from VR across four applications. Analysis of the interviews reveals five components of experience: space, control, sociality, time, and sensory adaptation. Participants described spatial disorientation, for example, regardless of the complexity of the VR scene. Participants also described a transitional window across which they exit VR, for example mentally first and then physically. We present six designs, derived from participants' descriptions, for easing or heightening the exit experience. Based on these findings, we further discuss the 'moment of exit' as an opportunity for designing engaging and enhanced VR experiences.
The notion of interaction is essential to human-computer interaction, yet rarely studied. We use quantitative and qualitative methods to investigate how this notion has been used across 35 years of proceedings from the ACM Conference on Human Factors in Computing Systems (CHI). Using natural language processing, we extract 53,568 occurrences of the word “interaction” across 4,604 papers. In these occurrences, we categorize 2,668 unique words that modify how “interaction” is used in a sentence. We show that the use of “interaction” is both increasing and diversifying, suggesting the importance of the notion, but also the difficulty of developing theory about interaction. Our findings show that styles of interaction are closely associated with changes in technology and that modalities and characteristics of interaction are becoming more of a topic than specific devices or widgets. Interaction qualities, relating to structure, feel, effectiveness, and efficiency, are consistently prominent, and the quality of novelty is increasingly frequent. From this analysis, we identify open questions about interaction, including how to build knowledge across changing technologies, how to work toward a model of quality for interaction, and what the core of a science of interaction could be.
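The extraction step described above can be illustrated with a minimal sketch. This is not the paper's actual pipeline: it assumes plain-text paper bodies, tokenizes with a regex, and treats the token immediately preceding "interaction" as its modifier, which is a strong simplification of real NLP dependency analysis.

```python
import re
from collections import Counter

def interaction_modifiers(texts):
    """Count occurrences of 'interaction(s)' and the words that
    immediately precede them across a corpus of plain-text papers.

    A toy stand-in for a full NLP pipeline: regex tokenization,
    preceding token taken as the modifier.
    """
    occurrences = 0
    modifiers = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z]+", text.lower())
        for i, tok in enumerate(tokens):
            if tok in ("interaction", "interactions"):
                occurrences += 1
                if i > 0:
                    modifiers[tokens[i - 1]] += 1
    return occurrences, modifiers

# Hypothetical two-sentence corpus for illustration.
corpus = [
    "Gestural interaction complements touch interaction on large displays.",
    "We study fluid interaction techniques for pen input.",
]
count, mods = interaction_modifiers(corpus)
```

On the toy corpus above, `count` is 3 and `mods` records "gestural", "touch", and "fluid" as modifiers, mirroring the paper's categorization of words that modify how "interaction" is used in a sentence.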
Electric muscle stimulation (EMS) can enable mobile force feedback, support pedestrian navigation, or confer object affordances. To date, however, EMS is limited by two interlinked problems: (1) EMS is low resolution, achieving only coarse movements and constraining opportunities for exploration; (2) EMS requires time-consuming, expert calibration, confining these interaction techniques to the lab. EMS arrays have been shown to increase stimulation resolution, but because calibration complexity grows exponentially with the number of electrodes, heuristics or automated procedures are required for successful calibration. We explore the feasibility of using electromyography (EMG) to auto-calibrate high-density EMS arrays. We determine regions of muscle activity during human-performed gestures to inform stimulation patterns for EMS-performed gestures. We report on a study showing that auto-calibration of a 60-electrode array is feasible, achieving 52% accuracy across six gestures and 82% accuracy across our best three gestures. By highlighting the electrode-array calibration problem and presenting a first exploration of a potential solution, this work lays the foundations for high-resolution, wearable, and, perhaps one day, ubiquitous EMS beyond the lab.
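The core idea, mapping regions of EMG activity during human-performed gestures to stimulation patterns, can be sketched as a simple top-k electrode selection. This is an illustrative toy, not the paper's actual procedure: the activity values, electrode count, and selection rule are all assumptions made for the example.

```python
def calibrate_stimulation_patterns(emg_activity, k=3):
    """Select the k most active electrodes per gesture.

    emg_activity maps a gesture name to a list of mean EMG
    amplitudes, one per electrode; the result maps each gesture
    to the (sorted) electrode indices to stimulate. A toy sketch
    of the auto-calibration idea only.
    """
    patterns = {}
    for gesture, levels in emg_activity.items():
        # Rank electrode indices by recorded activity, strongest first.
        ranked = sorted(range(len(levels)), key=lambda i: levels[i], reverse=True)
        patterns[gesture] = sorted(ranked[:k])
    return patterns

# Hypothetical activity maps for an 8-electrode array (values illustrative).
activity = {
    "wrist_flex":   [0.9, 0.8, 0.1, 0.0, 0.2, 0.1, 0.7, 0.0],
    "wrist_extend": [0.0, 0.1, 0.8, 0.9, 0.1, 0.6, 0.0, 0.2],
}
patterns = calibrate_stimulation_patterns(activity, k=3)
```

In this sketch, the electrodes most active while a person performs a gesture become the electrodes stimulated to reproduce it, which is the intuition behind using EMG to sidestep manual per-electrode calibration.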
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.