The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.
DOI: 10.1109/ismar.2003.1240730
SenseShapes: using statistical geometry for object selection in a multimodal augmented reality

Cited by 53 publications (41 citation statements)
References 3 publications
“…SenseShapes [14] aimed to find spatial correlations between gestures and deictic terms such as "that", "here", and "there" in an object selection task. The user's hands were tracked using data gloves, and object selection was facilitated by a virtual cone projected out from the user's fingers.…”
Section: Hand Gesture and Speech Interfaces in AR
confidence: 99%
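The cone-based selection described above can be sketched as a simple geometric test: an object is a selection candidate when it falls inside a cone projected from the hand along the pointing direction. This is a minimal illustration, not the SenseShapes implementation; the function name, the half-angle parameter, and the point-in-cone formulation are assumptions for the sketch.

```python
import numpy as np

def in_cone(apex, direction, half_angle_deg, point):
    """Return True if `point` lies inside a cone with its tip at `apex`,
    opening along `direction` with the given half-angle (a simplified
    stand-in for a cone-shaped selection volume)."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)                      # unit pointing direction
    v = np.asarray(point, dtype=float) - np.asarray(apex, dtype=float)
    dist = np.linalg.norm(v)
    if dist == 0.0:
        return True                             # the apex itself is inside
    cos_angle = np.dot(v / dist, d)             # cosine of angle to the axis
    return cos_angle >= np.cos(np.radians(half_angle_deg))

# An object nearly on the pointing axis is inside a 15-degree cone,
# while one well off-axis is not.
print(in_cone([0, 0, 0], [0, 0, 1], 15, [0, 0.1, 1]))  # → True
print(in_cone([0, 0, 0], [0, 0, 1], 15, [0, 1.0, 1]))  # → False
```

Evaluating this test per frame against each tracked object gives the set of candidates that later statistics can rank.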
“…However, in our AR applications we wanted to support natural 3D object interaction. Previously, other researchers have used speech input for descriptive commands and hand tracking devices or DataGloves [11][12][13] to explore gesture input in 3D graphics environments. Alternatively, computer-vision-based hand tracking techniques have been used in systems such as "VisSpace" [14] to estimate where users were pointing.…”
Section: Related Work
confidence: 99%
“…One of the first multimodal AR interfaces, SenseShapes [11], used volumetric regions of interest attached to the user's gaze direction or hand to provide visual information about interaction with virtual objects. Object selection was performed with a data glove to detect the user's gestures and with trackers to monitor hand position for interaction with objects.…”
Section: Related Work
confidence: 99%
“…Tabletop interaction (Section 4.1) allows 2D navigation of all objects and their relations to the various layers of the site. Used in conjunction with the [20,25] in conjunction with our VirtualTray (Section 4.5).…”
Section: Interaction in VITA
confidence: 99%
“…Object selection is the most frequently used 3D interaction technique, and we provide several ways to accomplish it. The user can walk toward an object and grab it, or point at it in the distance using our SenseShapes selection tools [25]. Selection at a distance is prone to ambiguity, especially when many similar objects are near each other, and SenseShapes assist in their disambiguation.…”
Section: 3D Multimodal Interaction
confidence: 99%
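The disambiguation idea mentioned above — ranking nearby candidates by per-object statistics rather than a single instantaneous hit test — can be sketched as follows. This is an illustrative simplification under the assumption that each frame yields the set of objects currently inside the selection volume; the scoring (fraction of recent frames containing the object) is a stand-in for richer statistics such as time-in-volume or distance from the volume's center, and all names here are hypothetical.

```python
from collections import defaultdict

def rank_candidates(frames):
    """Rank object ids by the fraction of recent frames in which they
    appeared inside the selection volume; higher fractions first.
    `frames` is a sequence of sets of object ids, one set per frame."""
    counts = defaultdict(int)
    for inside_ids in frames:
        for obj in inside_ids:
            counts[obj] += 1
    n = len(frames)
    scores = {obj: c / n for obj, c in counts.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Object "a" stays inside the volume across all three frames, so it
# outranks the intermittently selected "b" and "c".
ranking = rank_candidates([{"a", "b"}, {"a"}, {"a", "c"}])
print(ranking)  # → [('a', 1.0), ('b', 0.333...), ('c', 0.333...)] up to tie order
```

Accumulating evidence over a short window like this lets the system prefer the object the user has been pointing at steadily, which is what makes selection at a distance robust when several similar objects sit close together.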