Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2013
DOI: 10.1145/2470654.2481386
ContextType

Abstract: The challenge of mobile text entry is exacerbated as mobile devices are used in a number of situations and with a number of hand postures. We introduce ContextType, an adaptive text entry system that leverages information about a user's hand posture (using two thumbs, the left thumb, the right thumb, or the index finger) to improve mobile touch screen text entry. ContextType switches between various keyboard models based on hand posture inference while typing. ContextType combines the user's posture-specific t…
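The abstract describes combining a posture-specific touch model with typed-language information. A common way to realize this is a noisy-channel decoder: pick the key maximizing P(touch | key, posture) × P(key | context). The sketch below is a minimal illustration of that idea, not the paper's implementation; the key coordinates, posture offsets, and variances are invented for the example.

```python
import math

# Hypothetical key centers for a three-key slice of a QWERTY row (x, y in pixels).
KEY_CENTERS = {"q": (20.0, 50.0), "w": (60.0, 50.0), "e": (100.0, 50.0)}

# Illustrative posture-specific touch models (offsets and variance are assumptions,
# not values from the paper): e.g. right-thumb touches drift left and are noisier.
POSTURE_MODELS = {
    "two_thumbs":  {"dx": 0.0,  "dy": 0.0, "var": 80.0},
    "right_thumb": {"dx": -6.0, "dy": 4.0, "var": 120.0},
}

def touch_likelihood(touch, key, posture):
    """Unnormalized Gaussian likelihood of a touch point given the intended key."""
    m = POSTURE_MODELS[posture]
    cx, cy = KEY_CENTERS[key]
    dx = touch[0] - (cx + m["dx"])
    dy = touch[1] - (cy + m["dy"])
    return math.exp(-(dx * dx + dy * dy) / (2.0 * m["var"]))

def decode(touch, posture, lm_prior):
    """Choose the key maximizing P(touch | key, posture) * P(key | context)."""
    scores = {k: touch_likelihood(touch, k, posture) * lm_prior.get(k, 0.0)
              for k in KEY_CENTERS}
    return max(scores, key=scores.get)
```

With a uniform language prior, the same ambiguous touch at (38, 50) decodes as "q" under the two-thumbs model but as "w" under the right-thumb model, since the latter shifts the expected touch points leftward. A prior that strongly favors "w" (say 0.9 vs. 0.1) flips the two-thumbs decision as well, showing how the spatial and language components trade off.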


Cited by 81 publications (13 citation statements)
References 11 publications (7 reference statements)
“…Numerous ITE methods have been presented in the literature and are implemented in commercial keyboards. Many aim at improving input accuracy, and thus speed, for example by correcting touch points [15,16], resizing key targets [14,15], creating personalized touch models [40,43], taking into account individual hand postures and finger usage [3,13,27,43], or by adapting to walking speed [27]. Statistical decoding to auto-correct users' typing has been demonstrated to be quite powerful, such as in the context of smart watch typing [39].…”
Section: Intelligent Text Entry Methods
confidence: 99%
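The "statistical decoding" mentioned in this excerpt is often framed as noisy-channel word correction: choose the dictionary word w maximizing P(w) × P(typed | w). A toy sketch under that framing follows; the vocabulary, word frequencies, and per-character error rate are illustrative assumptions, not data from any cited system.

```python
# Toy noisy-channel auto-corrector. All numbers here are made up for illustration.
VOCAB = {"hello": 0.6, "hells": 0.1, "jello": 0.3}  # assumed word priors P(w)
ERROR_RATE = 0.1  # assumed probability that any single character was mistyped

def channel_prob(typed, word):
    """P(typed | word) under an independent per-character substitution model."""
    if len(typed) != len(word):
        return 0.0  # this toy model only handles substitutions, not insertions
    p = 1.0
    for t, c in zip(typed, word):
        p *= (1.0 - ERROR_RATE) if t == c else ERROR_RATE
    return p

def autocorrect(typed):
    """Return the vocabulary word maximizing P(w) * P(typed | w)."""
    return max(VOCAB, key=lambda w: VOCAB[w] * channel_prob(typed, w))
```

For example, the typo "hellp" corrects to "hello" because the prior and the single-substitution channel probability jointly outweigh the alternatives, while a correctly typed "jello" is left unchanged. Production decoders extend this with character-level touch likelihoods and n-gram language models, but the scoring structure is the same.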
“…Several studies have shown that using index finger responses instead of thumbs can lead to different results: In particular, it has been consistently found that more accurate general input (mainly: typing) can be given using index fingers (Buschek, De Luca, & Alt, 2016;Lehmann & Kipp, 2018;Wang & Ren, 2009;Wobbrock, Myers, & Aung, 2008). However, results have been mixed regarding speed differences, which seems to depend on the particular input type and study design (Azenkot & Zhai, 2012;Goel, Jansen, Mandel, Patel, & Wobbrock, 2013;Lehmann & Kipp, 2018;Wobbrock et al, 2008). In any case, to our knowledge, no studies have explored the potential effect of this difference in a regular experimental RT task yet, let alone in the RT-CIT.…”
Section: Discussion
confidence: 99%
“…While not directly related to the main question of our study, we included an exploratory analysis in our first experiment on keypress and touch durations, as topics relevant to other smartphone-based studies as well (e.g., Buschek, De Luca, & Alt, 2015;Goel et al, 2013). We found shorter durations for probes (i.e., when participants saw their own names), and replicated this finding in the second experiment (though only when using index fingers for touchscreen taps, and not when using thumbs).…”
Section: Discussion
confidence: 99%
“…Others have demonstrated the use of passive grasp detection to adapt interface elements to a user's posture. For example, it is known that touch accuracy on a mobile keyboard varies with the user's posture [1], and that posture information can be used to improve the spatial model for text entry decoding [12,69]. Several of the sensing technologies described above were validated on their ability to discern several styles of grasp, with the intention of using this information for interface adaptation [18,47,67].…”
Section: Grasp Detection and Grasp Gestures
confidence: 99%