Proceedings of the Conference on Human Factors in Computing Systems - CHI '03 2003
DOI: 10.1145/642693.642694
Multimodal 'eyes-free' interaction techniques for wearable devices

Abstract: Mobile and wearable computers present input/output problems due to limited screen space and interaction techniques. When mobile, users typically focus their visual attention on navigating their environment, making visually demanding interface designs hard to operate. This paper presents two multimodal interaction techniques designed to overcome these problems and allow truly mobile, 'eyes-free' device use. The first is a 3D audio radial pie menu that uses head gestures for selecting items. An evaluation of a r…

Cited by 57 publications (66 citation statements)
References 4 publications (7 reference statements)
“…Innovation in these areas has explored usage of sensory modalities other than vision - for example, speech recognition 77, non-speech auditory feedback 17, haptic (touch-based) feedback 18, and multimodal input 105,76 (which combines different sensory modalities) - to reduce dependence on visual interaction 19,107,21. Recent advances in the likes of vibrotactile, text-to-speech (TTS), and gestural recognition systems have consequently opened up scope for increased accessibility to devices for persons with visual impairment.…”
Section: Mobile Devices Made Accessible For Visually-impaired Users
mentioning
confidence: 99%
“…Brewster et al 19 proposed two novel solutions for eyes-free, mobile device use. The first presented information items to users via a 3D radial pie menu.…”
Section: Mobile Devices Made Accessible For Visually-impaired Users
mentioning
confidence: 99%
“…Non-visual interfaces, particularly audio display interfaces, have been shown to be effective in improving interaction and integration within existing physical contexts. For example, Brewster and Pirhonen [21,22] have explored the combination of gesture and audio display that allows for complicated interaction with mobile devices while people are in motion. The Audio Aura project [23] explores how to better connect human activity in the physical world with virtual information through use of audio display.…”
mentioning
confidence: 99%
“…Switching between the stereo channels created localization: we used the left channel audio for the left, right channel audio for the right, and both channels for the center. It is an egocentric [22] spatial structure that allowed the three prefaces to be distinguishable and an underlying content categorization structure to exist. The spatialization was mapped to the tangible interface for selection.…”
mentioning
confidence: 99%
“…Switching between the stereo channels created localization: we used the left channel audio for the left, right channel audio for the right, and both channels for the center. It is a simple egocentric (Brewster et al, 2003) spatial structure that allows the three prefaces to be distinguishable and an underlying content categorization structure to exist. The spatialization was mapped to the tangible user interface for selection.…”
mentioning
confidence: 99%
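The stereo-channel scheme the two excerpts above describe (left channel only for left items, right channel only for right items, both channels for center) can be sketched in a few lines. This is a minimal illustrative sketch, not code from the cited papers; the function names, the three position labels, and the unit gain values are all assumptions made for the example.

```python
def stereo_gains(position):
    """Return (left_gain, right_gain) for an egocentric item position.

    Assumed mapping per the excerpts: left-only, right-only, or
    both channels for a centered percept.
    """
    mapping = {
        "left":   (1.0, 0.0),  # left channel audio only
        "right":  (0.0, 1.0),  # right channel audio only
        "center": (1.0, 1.0),  # both channels -> heard in the middle
    }
    if position not in mapping:
        raise ValueError(f"unknown position: {position!r}")
    return mapping[position]


def spatialize(samples, position):
    """Pan a mono sample sequence into (left, right) stereo frames."""
    left_gain, right_gain = stereo_gains(position)
    return [(s * left_gain, s * right_gain) for s in samples]
```

A caller would route each of the three "prefaces" through `spatialize` with its assigned position, giving the simple egocentric left/center/right structure the excerpts attribute to Brewster et al. (2003).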