Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction (TEI 2011)
DOI: 10.1145/1935701.1935745
Grasp sensing for human-computer interaction

Abstract: The way we grasp an object depends on several factors, such as the intended goal or the anatomy of the hand. A grasp can therefore convey meaningful information about its context, and inferring these factors from a grasp allows us to enhance interaction with grasp-sensitive objects. This paper highlights grasp sensing as an important source of meaningful context for human-computer interaction and gives an overview of prior work from other disciplines. The paper offers a basis and framework for further research and discussion…

Cited by 39 publications (18 citation statements)
References 29 publications
“…The grasp serves here as starting point for searching executable gestures while grasping. Wimmer (2011) stated that there are many ways to grasp an object; and Feix et al (2009) provide a grasp taxonomy that allows for distinguishing between three grasp types. While there are many ways to hold a tablet, the form factor of other objects as well as tasks that are performed with those can determine one specific grasp (Table 3.1).…”
Section: Methods (mentioning)
confidence: 99%
“…The object influences the way to grasp it by its form factor as well as through the intended use (Wimmer 2011). Furthermore, the grasp affects the possibility to perform gestures while grasping as the grasp requires a certain hand pose and applied force per digit.…”
Section: Context (mentioning)
confidence: 99%
“…Wimmer's GRASP model encompasses both semantics (goal and relationship) and physicality (anatomy, setting and properties) but keeps the focus only on the way we grasp objects in the hand [25]. Wolf presented an even more narrowed taxonomy, which analyzes microgestures that can be performed while grasping objects [26].…”
Section: Gestures With Objects (mentioning)
confidence: 99%
“…It is worth noting that grasps are static gestures and can be compared to postures in free-hand gestures. Wimmer's GRASP model [25] offers several guidelines to design and recognize grasp postures, which should be followed by TGI designer for this type of gestures. The "Human Grasping Database" is another powerful tool to understand all the physical forms that grasp gestures can assume [41].…”
Section: Hold + Touch (mentioning)
confidence: 99%
“…In a grasp, we can gather rich information in relation to the object properties, the setting, the relationship, the goal, and the anatomy of the user [37]. According to the previous studies on grasping, the way we grasp an object may be predicted according to the opposition plans, and in a virtual environment with virtual objects, we can simplify the model even more by ignoring a part of the grasp properties like the relationship and the goal.…”
Section: Affordance Of Object Grasping (mentioning)
confidence: 99%
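The citation statements above refer to the five factors of Wimmer's GRASP model (Goal, Relationship, Anatomy, Setting, Properties), and the last one notes that for virtual objects the model can be simplified by ignoring relationship and goal. A minimal sketch of how such a context record might be represented follows; the class and function names are illustrative assumptions, not part of the paper's actual implementation.

```python
# Hypothetical sketch of the five GRASP factors as a context record.
# Names (GraspContext, simplify_for_vr) are illustrative, not from the paper.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class GraspContext:
    """One field per GRASP factor: Goal, Relationship, Anatomy, Setting, Properties."""
    goal: Optional[str] = None          # intended action, e.g. "lift"
    relationship: Optional[str] = None  # user-object relationship, e.g. "owner"
    anatomy: Optional[str] = None       # hand anatomy, e.g. handedness
    setting: Optional[str] = None       # physical situation, e.g. "seated"
    properties: Optional[str] = None    # object shape, weight, surface

def simplify_for_vr(ctx: GraspContext) -> GraspContext:
    # Drop relationship and goal, as the last citation suggests
    # for virtual environments with virtual objects.
    return GraspContext(anatomy=ctx.anatomy,
                        setting=ctx.setting,
                        properties=ctx.properties)

full = GraspContext(goal="lift", relationship="owner",
                    anatomy="right hand", setting="seated",
                    properties="cylindrical")
vr = simplify_for_vr(full)
print(asdict(vr))
```

This keeps the physicality factors (anatomy, setting, properties) while discarding the semantic ones, mirroring the simplification described in the citing work.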