Over the past decade, augmented reality (AR) developers have explored a variety of approaches to let users interact with the information displayed on smart glasses and head-mounted displays (HMDs). Current interaction modalities such as mid-air gestures, voice commands, and hand-held controllers provide a limited range of interactions with virtual content, and can also be exhausting, uncomfortable, obtrusive, and socially awkward. There is a need for comfortable interaction techniques for smart glasses and HMDs that do not require visual attention. This paper presents StretchAR, wearable straps that exploit touch and stretch as input modalities for interacting with the virtual content displayed on smart glasses. StretchAR straps are thin, lightweight, and can be attached to existing garments to enhance users' interactions in AR. The straps withstand strains up to 190% while remaining sensitive to touch inputs, and they allow these inputs to be combined effectively to interact with content displayed through AR widgets, maps, menus, social media, and Internet of Things (IoT) devices. Furthermore, we conducted a user study with 15 participants to determine the implications of using StretchAR as an input modality at four different body locations (head, chest, forearm, and wrist). The study shows that StretchAR can serve as an efficient and convenient input modality for smart glasses, with 96% accuracy. Additionally, we present a collection of 28 interactions enabled by StretchAR's simultaneous touch-stretch capabilities. Finally, we offer guidelines for the design, fabrication, placement, and possible applications of StretchAR as an interaction modality for AR content displayed on smart glasses.
Domain users (DUs) with expertise in specialized fields are frequently excluded from authoring Virtual Reality (VR)-based applications in those fields, largely because of the VR programming expertise required to author such applications. To address this concern, we developed VRFromX, a system workflow designed to make the virtual content creation process accessible to DUs regardless of their programming skills and experience. VRFromX provides an in-situ content creation process in VR that (a) allows users to select regions of interest in scanned point clouds or sketch in mid-air with a brush tool to retrieve virtual models, and (b) then attach behavioral properties to those objects. Using a welding use case, we performed a usability evaluation of VRFromX with 20 DUs, of whom 12 were novices in VR programming. Study results indicated positive user ratings for the system features, with no significant differences between users with and without VR programming expertise. Based on the qualitative feedback, we also implemented two additional use cases to demonstrate potential applications. We envision that this solution can facilitate the adoption of immersive technology to create meaningful virtual environments.