Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3411764.3445205

EarRumble: Discreet Hands- and Eyes-Free Input by Voluntary Tensor Tympani Muscle Contraction

Abstract: Figure 1: We present EarRumble, a technique that uses "ear rumbling" for interaction. (a) The tensor tympani muscle can be contracted voluntarily, which displaces the eardrum and induces a pressure change within the sealed ear canal; (b) Custom-built earables detect ear rumbling using an in-ear pressure sensor; (c) Eyes- and hands-free discreet input can be provided by performing different rumbling gestures by voluntarily contracting the tensor tympani muscle.
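The figure caption above describes the sensing principle: a voluntary tensor tympani contraction displaces the eardrum and changes the pressure in the sealed ear canal, which an in-ear pressure sensor picks up. As a rough illustration of how such a pressure signal might be turned into discrete gestures, here is a minimal sketch in Python; the sample rate, thresholds, event logic, and gesture names are all assumptions for exposition, not the authors' pipeline.

```python
# Illustrative sketch only: sample rate, thresholds, and the gesture
# logic below are assumptions for exposition, not the EarRumble
# authors' implementation.

SAMPLE_RATE_HZ = 100        # assumed in-ear pressure sampling rate
BASELINE_ALPHA = 0.02       # smoothing factor for the drifting baseline
PRESSURE_THRESHOLD = 15.0   # assumed deviation (sensor units) marking a rumble
MIN_EVENT_SAMPLES = 10      # ignore blips shorter than 100 ms
DOUBLE_RUMBLE_GAP_S = 0.6   # max gap between the two rumbles of one gesture


def detect_rumbles(samples):
    """Return (start_s, end_s) spans where pressure deviates from a
    slowly adapting baseline, i.e. candidate rumble events."""
    events, baseline, start = [], samples[0], None
    for i, p in enumerate(samples):
        if abs(p - baseline) > PRESSURE_THRESHOLD:
            if start is None:
                start = i
        else:
            # Adapt the baseline only outside events so a sustained
            # rumble is not absorbed into the baseline itself.
            baseline += BASELINE_ALPHA * (p - baseline)
            if start is not None:
                if i - start >= MIN_EVENT_SAMPLES:
                    events.append((start / SAMPLE_RATE_HZ, i / SAMPLE_RATE_HZ))
                start = None
    return events


def classify_gesture(events):
    """Map detected events to simple gestures, e.g. one long rumble
    versus two short rumbles in quick succession."""
    if len(events) >= 2 and events[1][0] - events[0][1] <= DOUBLE_RUMBLE_GAP_S:
        return "double-rumble"
    if len(events) == 1:
        start_s, end_s = events[0]
        return "long-rumble" if end_s - start_s > 0.5 else "short-rumble"
    return "none"


if __name__ == "__main__":
    # Synthetic trace: flat baseline with two brief pressure dips,
    # mimicking two short tensor tympani contractions.
    trace = [0.0] * 100 + [-30.0] * 20 + [0.0] * 30 + [-30.0] * 20 + [0.0] * 100
    events = detect_rumbles(trace)
    print(events, "->", classify_gesture(events))
```

Adapting the baseline only between events is one simple way to tolerate slow pressure drift (e.g., from earpiece fit) without swallowing a sustained contraction; the paper's actual detection method may differ.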

Cited by 12 publications (4 citation statements)
References 50 publications
“…Bedri et al. [5] proposed a method that recognizes jaw movement using an infrared distance sensor attached to the tip of a canal-type earphone. An input method has also been proposed that uses canal-type earphones to acquire sound produced by a small muscle in the middle ear called the tensor tympani [10]. There is also an ear-worn device that lets users use their ears as a hand-held input controller [13].…”
Section: Gesture Recognition Methods Using Ear Accessories (confidence: 99%)
“…As the shape and wearing of ear accessories are socially acceptable, devices that use them as controllers for hands-free input are likely to be accepted by society. Gesture input methods based on intentional face movements, using sensors (e.g., barometers, microphones, infrared distance sensors, and electrodes) mounted on canal-type earphones, have been proposed in previous studies [4][5][6][7][8][9][10]. Such gesture input methods based on facial movements using ear accessories are effective as simple input methods in scenarios where the set of possible inputs is small, and they can be used even in situations where hands-free input via voice recognition is difficult (e.g., scenes with a lot of ambient noise, or where speaking carries a high psychological load).…”
Section: Introduction (confidence: 99%)
“…As another promising direction, research has proposed various hands-free body-centric interaction techniques that leverage the degrees of freedom of other body parts, such as the head [35,108] and the eyes [63,75]. Further examples include the ear [84], the mouth [26,86], and the face [66], as well as interpreting blow gestures [10,82] or combinations of multiple body parts [24,64]. As the most closely related area of such hands-free body-centric interfaces, foot-based interaction techniques have a long tradition in the operation of industrial machinery [3,4,15,50,77].…”
Section: Body-centric Interaction (confidence: 99%)
“…Mouth-related interfaces (Whoosh [38], TieLent [20]), such as tongue interfaces [11,24] and teeth interfaces (TeethTap [43], Bitey [2], EarSense [37]), also offer users a novel way to provide hands-free input. Furthermore, researchers have explored ear-based interaction (EarRumble [39]), waist gestures (HulaMove [48]), and foot gestures (FootUI [17], FEETICHE [26]). Compared with existing work, head gestures present another way to provide hands-free input, such as navigation in 3D space [5].…”
Section: Hands-free Gesture Input (confidence: 99%)