2010
DOI: 10.1111/j.1460-9568.2010.07204.x
Are surface properties integrated into visuohaptic object representations?

Abstract: Object recognition studies have almost exclusively involved vision, focusing on shape rather than surface properties such as color. Visual object representations are thought to integrate shape and color information because changing the color of studied objects impairs their subsequent recognition. However, little is known about integration of surface properties into visuo-haptic multisensory representations. Here, participants studied objects with distinct patterns of surface properties (color in Experiment 1, …

Cited by 24 publications (25 citation statements) | References 34 publications (104 reference statements)
“…The end result was a series of object quadruplets in which shape and texture were exchanged between pairs so that, in each quadruplet, pairs could be used for either shape or texture discrimination. We used difference matrices based on the number of differences in the position and orientation of component blocks to calculate the mean difference in object shape for each of the sets (Lacey et al, 2007, 2009a, 2010). Paired t-tests showed no significant differences between these sets (all p values > .05) and they were therefore considered equally discriminable.…”
Section: Methods | Citation type: mentioning | Confidence: 99%
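The discriminability check quoted above can be illustrated with a short sketch. The object encoding, the numeric values, and the function name below are hypothetical stand-ins, not the authors' actual stimuli or code; only the logic follows the quote: count block-level differences between objects, then compare matched stimulus sets with a paired t-test.

```python
import numpy as np
from scipy.stats import ttest_rel

def block_difference(obj_a, obj_b):
    """Count component blocks that differ in position or orientation."""
    return sum(1 for a, b in zip(obj_a, obj_b) if a != b)

# Two toy objects: each block is a (position, orientation) tuple (made up).
obj1 = [((0, 0), 0), ((1, 0), 90), ((2, 0), 0)]
obj2 = [((0, 0), 0), ((1, 1), 90), ((2, 0), 45)]
print(block_difference(obj1, obj2))  # 2 blocks differ

# Mean pairwise shape differences per quadruplet for two stimulus sets
# (invented values). A non-significant paired t-test (p > .05) is the
# quoted criterion for treating the sets as equally discriminable.
shape_set = np.array([4.0, 5.0, 3.5, 6.0, 4.5, 5.0])
texture_set = np.array([4.5, 4.0, 4.0, 5.5, 5.0, 4.5])
t_stat, p_value = ttest_rel(shape_set, texture_set)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```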
“…color and texture, in object representations (Kozhevnikov et al, 2005). We recently showed that texture information is integrated into both visual and haptic representations (Lacey et al, 2010) but did not examine individual differences in this respect. Here, we tested shape discrimination across changes in texture, and texture discrimination across changes in shape, in visual and haptic within-modal conditions.…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
“…However, it would be hasty to conclude from this that the visual capacities of the newly sighted are … [footnote: on the apparent viewpoint-independence of cross-modal object recognition in normal subjects, see Lacey, Peters and Sathian 2007; Lacey et al. 2009; and Lacey, Hall and Sathian 2010] [footnote 14: Held and colleagues do cite this study, but only to challenge an account of their subjects' rapid improvement in the TV task as the result of 'a rapid increase in the visual ability to create a three-dimensional representation, thus allowing for a more accurate mapping between haptic structures and visual ones' (2011: 552). This may be right, but it overlooks the more fundamental problem that these results raise for the interpretation of their data.]…”
Section: Further Directions | Citation type: mentioning | Confidence: 99%
“…Inspired by nature, where visual and haptic sensory feedback are known to be jointly exploited by the human brain [1], this paper presents a haptic robot-environment interaction system for self-supervised learning of vision skills for safe navigation. For this purpose, the robot is provided with a mechanism to learn a mapping between the volumetric appearance of obstacles, given sensory data provided by a depth sensor, and their bendability, as perceived by physically interacting with them via a small antenna (see Fig.…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
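The mapping this excerpt describes (visual appearance of obstacles labeled by physical probing) can be sketched as a small self-supervised loop. Everything below is hypothetical: depth_features and probe_bendability are placeholder stand-ins for the paper's depth sensor and antenna, and a logistic-regression classifier stands in for whatever model the authors actually use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def depth_features(n):
    # Stand-in for features summarizing an obstacle's volumetric
    # appearance from the depth sensor (e.g. voxel statistics).
    return rng.normal(size=(n, 8))

def probe_bendability(features):
    # Stand-in for antenna probing: a made-up rule labels obstacles
    # bendable (1) or rigid (0); in the real system this label comes
    # from physical contact, not from the features themselves.
    return (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

# Self-supervision: the robot labels its own visual data by touching obstacles...
X = depth_features(200)
y = probe_bendability(X)
model = LogisticRegression().fit(X, y)

# ...then predicts bendability of new obstacles from vision alone.
X_new = depth_features(5)
print(model.predict(X_new))
```

The design point is simply that the physical interaction supplies labels for free, so no human annotation is needed to train the visual classifier.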