Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3290605.3300245
BeamBand

Abstract: BeamBand is a wrist-worn system that uses ultrasonic beamforming for hand gesture sensing. Using an array of small transducers, arranged on the wrist, we can ensemble acoustic wavefronts to project acoustic energy at specified angles and focal lengths. This allows us to interrogate the surface geometry of the hand with inaudible sound in a raster-scan-like manner, from multiple viewpoints. We use the resulting, characteristic reflections to recognize hand pose at 8 FPS. In our user study, we found that BeamBan…
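The beamforming the abstract describes (projecting acoustic energy "at specified angles and focal lengths" from an array of transducers) is conventionally achieved with delay-and-sum focusing: each transducer fires with a small time offset so that all wavefronts arrive at the target point in phase. The sketch below illustrates that principle only; the array geometry (8 elements, 5 mm pitch) and the in-air speed of sound are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def focusing_delays(n_elements=8, pitch=0.005, angle_deg=20.0,
                    focal_len=0.05, c=343.0):
    """Per-transducer firing delays (seconds) for a linear ultrasonic
    array, steering the beam to `angle_deg` at range `focal_len`.

    Delay-and-sum focusing: the farthest element from the focal point
    fires first (delay 0); nearer elements fire later, so that every
    wavefront reaches the focal point at the same instant.
    Parameters are illustrative, not BeamBand's actual hardware.
    """
    # Element x-positions along the wrist, centered on the array.
    x = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch
    # Focal point in the plane of the array (x across, z outward).
    theta = np.radians(angle_deg)
    fx, fz = focal_len * np.sin(theta), focal_len * np.cos(theta)
    # Straight-line distance from each element to the focal point.
    dist = np.hypot(fx - x, fz)
    # Convert the path-length differences into firing delays.
    return (dist.max() - dist) / c

delays = focusing_delays()
print(delays * 1e6)  # delays in microseconds, one per transducer
```

Sweeping `angle_deg` and `focal_len` over a grid of values would yield the "raster-scan-like" interrogation of the hand surface that the abstract mentions, with the echoes at each focal setting forming one feature of the pose classifier.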

Cited by 36 publications (4 citation statements) · References 45 publications
“…The HCI community has placed significant emphasis on advancing gestural input for various technology applications by deploying machine-learning-backed solutions for differing sensing modalities such as computer vision (Kinect, Meta Quest, etc.), inertial measurement units (Wen, Ramos Rojas, and Dey 2016), sEMG (Mendez et al 2017; Saponas et al 2008), bio-acoustic signals (Laput, Xiao, and Harrison 2016), electrical impedance tomography (Zhang and Harrison 2015), electromagnetic signals (Laput et al 2015), and ultrasonic beamforming (Iravantchi, Goel, and Harrison 2019). The most direct antecedent of the work presented here is the now-discontinued sEMG Myo armband (worn on the forearm) by Thalmic Labs, as used for gesture detection by, e.g., Mendez et al (2017), who built single-user models to classify more intuitively separated gestures such as wrist extension, open hand, etc.…”
Section: Related Work in HCI and BCI
confidence: 99%
“…Therefore, various sensor-based approaches for detecting hand gestures have been explored, as follows: inductive tracking [9, 15, 16, 28, ?], touch sensing using the body as an electric waveguide [36], electromyography (EMG) [32, 33], bio-acoustic sensing [14, 18], and inertial measurement of users' hands [13, 21]. However, these approaches cannot detect subtle text-input actions such as finger typing; therefore, long and exaggerated gestures, which cause fatigue and slow down input speed [12], are required.…”
Section: Related Work Wearable Keyboards
confidence: 99%
“…Approaches to sign language recognition can be classified as vision-based [9], [10], sensor-based [11], [12], [13], [14], [15], [16], [17], and a blend of both [18]. Vision-based gesture recognition techniques use a camera to capture the visual data of a gesture, and then process the visual data to complete the recognition.…”
Section: Introduction
confidence: 99%
“…Wireless sensing-based methods realize gesture recognition without requiring users to wear a device and without regard to environmental factors. Prior studies have employed various wireless sensing technologies for human activity recognition, such as Wi-Fi [15], [19], RFID [20], radar [21], [22], [23], and ultrasound [16], [17]. Wi-Fi-based recognition technology has the advantages of low cost and easy expansion, but for fine-grained actions, such as interactive gestures [24], its recognition performance is poor.…”
Section: Introduction
confidence: 99%