Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems
DOI: 10.1145/3027063.3053161
WaveTrace

Cited by 17 publications (10 citation statements)
References 13 publications
“…This is exemplified in Figure 1, where a user interacts with Wattom by tracking a moving blue light with its hands. This user input is captured using most forms of wrist-mounted IMUs (e.g., smart watch, fitness tracker), and represented as Euler angles (yaw and pitch) -an approach first introduced in [30]. Visually, our system is inspired by [31], where interface elements are orbited by one or more moving targets.…”
Section: Interaction
confidence: 99%
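The passage above describes capturing wrist-mounted IMU input and representing it as yaw and pitch Euler angles. A minimal sketch of that conversion, assuming the IMU exposes its orientation as a unit quaternion (the function name and axis conventions are illustrative, not taken from the paper):

```python
import math

def quaternion_to_yaw_pitch(w, x, y, z):
    """Convert a unit orientation quaternion to (yaw, pitch) in radians."""
    # Yaw: rotation about the vertical axis
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    # Pitch: rotation about the lateral axis, clamped for numeric safety
    sinp = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(sinp)
    return yaw, pitch

# Identity quaternion corresponds to no rotation
print(quaternion_to_yaw_pitch(1.0, 0.0, 0.0, 0.0))  # (0.0, 0.0)
```

On a smart watch or fitness tracker, the quaternion would typically come from the platform's sensor-fusion API rather than raw accelerometer/gyroscope readings.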
“…Target selection works as follows: a bespoke Android Wear application runs in both the user's smart watch and smart phone. As in [30], the interaction starts after a user performs a flick of the wrist (the only gesture that needs to be learned and memorized) that triggers the start of the target movement in all local plugs. This movement, i.e., LED state (0 to 23) and direction (0 or 1), is conveyed to users' mobile apps through the Wattom server.…”
Section: Interaction
confidence: 99%
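The selection protocol above conveys the target movement as an LED state (0 to 23) plus a direction bit. A hedged sketch of how a client app might reconstruct the moving target's angular position from that message; the direction convention and the speed parameter are assumptions for illustration, not details from the Wattom paper:

```python
import math

NUM_LEDS = 24  # LED states 0-23, as described in the quoted passage

def target_angle(start_led, direction, speed_leds_per_s, t):
    """Predicted angular position (radians) of the moving LED target at time t.

    direction: 1 = counter-clockwise, 0 = clockwise (an assumed convention).
    speed_leds_per_s: how many LED positions the target advances per second
    (an assumed parameter; the real system may fix this server-side).
    """
    step = speed_leds_per_s * t
    if direction == 1:
        led = (start_led + step) % NUM_LEDS
    else:
        led = (start_led - step) % NUM_LEDS
    return 2.0 * math.pi * led / NUM_LEDS
```

With a shared clock, both the plug's LED ring and the phone app can evaluate this function independently, so only the initial state and direction need to be sent through the server.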
“…But due to inherent limitations of computer vision, such as being restricted by their FOV (interaction space), being susceptible to changing-light conditions and occlusion, and introducing privacy concerns when used in the context of smart homes [7], recent work looks at other forms of input sensing for motion matching. Examples include passive magnets that capture the user's thumb movement [39], or inertial measurement units (IMUs) that capture users' head (AR headset [19]), arm (smartwatch [50]), or phone-based [4] rotations when following a moving target.…”
Section: Motion Matching
confidence: 99%
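Motion matching, as surveyed above, selects a target by comparing the user's continuous input against each moving target's trajectory. One common realization (a generic sketch, not the specific algorithm of any cited system) correlates a sliding window of user angles with each target's angles using Pearson's r:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb) if sa and sb else 0.0

def match_target(user_angles, target_trajectories, threshold=0.9):
    """Return the index of the target whose trajectory best correlates
    with the user's motion, or None if nothing exceeds the threshold."""
    best_i, best_r = None, threshold
    for i, traj in enumerate(target_trajectories):
        r = pearson(user_angles, traj)
        if r > best_r:
            best_i, best_r = i, r
    return best_i
```

The threshold of 0.9 is an illustrative value; real systems tune the window length and threshold to trade off selection speed against false activations.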
“…Compared to other touchless embodied systems, motion matching is seen as an interesting alternative for interaction with public displays and a growing number of smart appliances at home. For example, it works with unmodified off-the-shelf hardware such as web-cams [11] or smart watches [50]; does not require any gesture data training sets; and does not require gesture (or speech) discovery and memorization -making it an ideal candidate for spontaneous interaction [52]. On the other hand, previous work in this domain has focused primarily on seminal performance studies and technical developments.…”
Section: Introduction
confidence: 99%