2000
DOI: 10.1111/1467-8659.00409

Haptic Cues for Image Disambiguation

Abstract: Haptic interfaces represent a revolution in human-computer interface technology, since they make it possible for users to touch and manipulate virtual objects. In this work we describe a cross-modal interaction experiment studying the effect of adding haptic cues to visual cues when vision alone is not enough to disambiguate an image. We relate the results both to findings in experimental psychology and to more recent studies on the subject.

Cited by 4 publications (3 citation statements)
References 4 publications (4 reference statements)
“…As Sjostrom (2001b) reports: "For a blind person, locating an object with a point probe can be as hard as finding a needle in a haystack" [Sjostrom 2001b]. An experiment by Faconti et al., examining how people interpret haptic versions of visual illusions, such as the Necker cube, found that the free interaction they had originally intended proved unfeasible [Faconti et al. 2000]. If users were placed at a starting point on the relevant object (they were unable to find it without help), they could explore and recognize nearby parts of it, but they were unable to "jump" from one object to another, thus missing considerable portions of the data.…”
Section: Accessing Data Structure Using Multimodal Cues
confidence: 99%
“…The fact that the user interacts with 3D models through their 2D representation might cause problems in correctly perceiving and disambiguating the objects represented (Faconti, 2000).…”
Section: Shape = F(time, Int/Ext Context)
confidence: 99%
“…This new interaction technology and modality supports users in perceiving physical properties of the modelled objects (it is soft, the surface is rough, etc.), and it would help users resolve ambiguous situations arising from the 2D representation of 3D models (Faconti, 2000). Besides, it is a first step towards letting users apply the natural skills they have when interacting with the physical world to their interaction with the digital world as well (Bordegoni, 1998).…”
Section: Shape = F(time, Int/Ext Context)
confidence: 99%