Research has explored miniature radar as a promising sensing technique for recognizing gestures, objects, and users' presence and activity. However, within Human-Computer Interaction (HCI), its use remains underexplored, in particular for Tangible User Interfaces (TUIs). In this paper, we explore two research questions with radar as a platform for sensing tangible interaction: counting, ordering and identifying objects, and tracking the orientation, movement and distance of these objects. We detail the design space and practical use cases for such interaction, which allows us to identify a series of design patterns that move beyond static interaction to continuous and dynamic interaction. With a focus on planar objects, we report on a series of studies which demonstrate the suitability of this approach. This exploration is grounded both in a characterization of the radar sensing and in rigorous experiments which show that such sensing is accurate with minimal training. With these techniques, we envision both realistic and future applications and scenarios. The motivation for what we refer to as Solinteraction is to demonstrate the potential for radar-based interaction with objects in HCI and TUI.
CCS Concepts: • Human-centered computing → Interaction techniques; Interface design prototyping; Ubiquitous and mobile computing design and evaluation methods.
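To make the counting idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation): it assumes each additional stacked token measurably shifts a per-frame radar feature vector, and uses a k-nearest-neighbours classifier over synthetic placeholder features to recover the stack count.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical premise: each stacked token changes the reflected radar
    # signature, so a classifier over per-frame feature vectors can recover
    # the stack count. Features below are random placeholders.
    rng = np.random.default_rng(1)
    counts = np.repeat(np.arange(1, 6), 20)              # stacks of 1..5 tokens, 20 frames each
    X = rng.normal(loc=counts[:, None], size=(100, 32))  # assumed 32-dim feature vectors

    knn = KNeighborsClassifier(n_neighbors=5).fit(X, counts)
    print(knn.predict(rng.normal(loc=3.0, size=(1, 32))))  # expect a count near 3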
In RadarCat we present a small, versatile radar-based system for material and object classification which enables new forms of everyday proximate interaction with digital devices. We demonstrate that we can train on different types of materials and objects and then recognize them in real time. Based on established research designs, we report on the results of three studies: first with 26 materials (including complex composite objects), next with 16 transparent materials (with different thicknesses and varying dyes) and finally with 10 body parts from 6 participants. Both leave-one-out and 10-fold cross-validation demonstrate that our approach of classifying radar signals using a random forest classifier is robust and accurate. We further demonstrate four working examples built on RadarCat, including a physical object dictionary, a painting and photo editing application, body shortcuts and automatic refill. We conclude with a discussion of our results and limitations, and outline future directions.
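As an illustration of the classification pipeline described above, the following sketch trains a random forest with 10-fold cross-validation, matching the study design; the feature vectors are random placeholders standing in for real radar signal features, and the dataset shape (26 classes, 10 snapshots each, 64 features) is an assumption for illustration only.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score, StratifiedKFold

    # X: one feature vector per radar snapshot; y: material labels.
    # Placeholder random data stands in for real radar signal features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(260, 64))        # hypothetical: 26 materials x 10 snapshots, 64 features
    y = np.repeat(np.arange(26), 10)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"10-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")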
The popularity of mobile devices with large screens is making single-handed interaction difficult. We propose and evaluate a novel design point around a tilt-based text entry technique which supports single-handed usage. Our technique is based on the gesture keyboard (shape writing). However, instead of drawing gestures with a finger or stylus, users articulate a gesture by tilting the device. This can be especially useful when the user's other hand is otherwise encumbered or unavailable. We show that novice users achieve an entry rate of 15 words per minute (wpm) after minimal practice. A pilot longitudinal study reveals that a single participant achieved an entry rate of 32 wpm after approximately 90 minutes of practice. Our data indicate that tilt-based gesture keyboard entry enables walk-up use, provides a suitable text entry rate for occasional use, and can act as a promising alternative to single-handed typing in certain situations.
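A minimal sketch of the core mapping, under assumed parameters (a usable tilt range of ±30 degrees and a 10×3 key grid; neither value is taken from the paper): device pitch and roll are mapped to a cursor position over the keyboard, and the accumulated positions form the gesture trace that a shape-writing decoder would match against word templates.

    # Hypothetical mapping: device pitch/roll (degrees) -> cursor position on a
    # QWERTY-like grid; accumulating positions yields the gesture trace that a
    # shape-writing decoder would match against word templates.
    KEY_COLS, KEY_ROWS = 10, 3
    MAX_TILT = 30.0  # assumed usable tilt range in degrees

    def tilt_to_cursor(pitch: float, roll: float) -> tuple[float, float]:
        """Map tilt angles to grid coordinates, clamped to the keyboard."""
        x = min(max((roll + MAX_TILT) / (2 * MAX_TILT), 0.0), 1.0)
        y = min(max((pitch + MAX_TILT) / (2 * MAX_TILT), 0.0), 1.0)
        return x * (KEY_COLS - 1), y * (KEY_ROWS - 1)

    # Accumulate a trace from a stream of (pitch, roll) samples.
    trace = [tilt_to_cursor(p, r) for p, r in [(-20, -25), (-5, 0), (10, 18)]]
    print(trace)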
The exploration of novel sensing to facilitate new interaction modalities remains an active research topic in Human-Computer Interaction. Across the breadth of HCI conferences, we can see the development of new forms of interaction underpinned by the appropriation or adaptation of sensing techniques based on the measurement of sound, light, electric fields, radio waves, biosignals, etc. Commercially, we see extensive industrial development of radar sensing in vehicular/automotive and military settings. At very long range, radar technology has been used for many decades in weather and aircraft tracking. At long, mid and short range, radar has been used for adaptive cruise control (ACC), emergency brake assist (EBA), security scanners, pedestrian detection and blind-spot detection. Radar is often considered a long-range sensing technology which works in all weather, offers 3D position information, operates at all times as it does not require lighting, and can penetrate surfaces and objects. At very short range, radar has been employed in disbond detection, corrosion detection and foam insulation flaw identification. In addition, radar technology has been explored by the research community for various purposes, such as presence sensing and indoor user tracking [5], vital signs monitoring [6] and emotion recognition. At this range, radar is touted as addressing the problems of privacy, occlusion, lighting and limited field-of-view suffered by vision-based approaches, or as an option in medical conditions where traditional approaches such as capacitive and galvanic skin response sensing do not work well.
This paper presents ongoing work on a design exploration that combines microgestures with other types of gestures within the greater lexicon of gestures for computer interaction. We describe three prototype applications that show various facets of this multi-dimensional design space. These applications portray various tasks on a HoloLens Augmented Reality display, using different combinations of wearable sensors.
Figure 1: RotoSwype ring-based word-gesture typing for AR: candidate hand postures with rotation ranges.
Figure 1 (Optimization & Real-time Adaptation): Given a graphical user interface (left), AdaM automatically decides which UI elements should be displayed on each device in real time. Our optimization is designed for multi-user scenarios and considers user roles and preferences, device access restrictions and device characteristics.
ABSTRACT: Developing cross-device multi-user interfaces (UIs) is a challenging problem. There are numerous ways in which content and interactivity can be distributed. However, good solutions must consider multiple users, their roles, their preferences and access rights, as well as device capabilities. Manual and rule-based solutions are tedious to create; they do not scale to larger problems, nor do they adapt to dynamic changes such as users leaving or joining an activity. In this paper, we cast the problem of UI distribution as an assignment problem and propose to solve it using combinatorial optimization. We present a mixed integer programming formulation which allows real-time applications in dynamically changing collaborative settings. It optimizes the allocation of UI elements based on device capabilities, user roles, preferences, and access rights. We present a proof-of-concept designer-in-the-loop tool, allowing for quick solution exploration. Finally, we compare our approach to traditional paper prototyping in a lab study.
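The assignment formulation can be sketched directly as a small mixed integer program. The instance below is hypothetical (element names, utility scores, sizes and capacities are all invented for illustration, and a simple per-device capacity constraint stands in for the richer role, preference and access-right terms in the paper); it uses the PuLP modelling library.

    from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

    # Hypothetical instance: assign UI elements to devices, maximizing a
    # utility score (standing in for user preferences/roles) under a simple
    # per-device capacity constraint. All names and numbers are illustrative.
    elements = ["map", "chat", "controls"]
    devices = ["phone", "tablet"]
    utility = {("map", "phone"): 1, ("map", "tablet"): 5,
               ("chat", "phone"): 4, ("chat", "tablet"): 2,
               ("controls", "phone"): 3, ("controls", "tablet"): 3}
    size = {"map": 3, "chat": 1, "controls": 2}
    capacity = {"phone": 3, "tablet": 5}

    prob = LpProblem("ui_assignment", LpMaximize)
    x = LpVariable.dicts("assign", list(utility), cat=LpBinary)
    prob += lpSum(utility[k] * x[k] for k in utility)  # maximize total utility
    for d in devices:                                  # respect device capacity
        prob += lpSum(size[e] * x[(e, d)] for e in elements) <= capacity[d]
    for e in elements:                                 # place each element exactly once
        prob += lpSum(x[(e, d)] for d in devices) == 1
    prob.solve()
    print([k for k in utility if x[k].value() == 1])

Because the model is re-solved rather than hand-tuned, adding or removing a device only changes the input data, which is what makes the real-time, dynamically changing setting tractable.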