2014
DOI: 10.3389/fpsyg.2014.00730
Visuo-haptic multisensory object recognition, categorization, and representation

Abstract: Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidat…

Cited by 80 publications (69 citation statements)
References 165 publications (293 reference statements)
“…In situations in which information from one modality or another is unavailable, it must be able to detect any changes in sensory input and share the accessible information between modalities efficiently. Much of the research conducted on visuohaptic processing has now established that the LOC is bimodal in terms of its representations of visual and haptic (familiar and novel) shape information (Amedi et al, 2001; James et al, 2002, 2005; James & Kim, 2010; Lacey et al, 2010, 2014; Peltier et al, 2007; Pietrini et al, 2004; Stilla & Sathian, 2008; Stoesz et al, 2003; Zhang et al, 2004), and the current data also support this notion. Moreover, a previous study has demonstrated neural convergence of visual and haptic inputs in the LOC through inverse effectiveness (Kim, Stevenson, & James, 2012).…”
Section: Discussion (supporting, confidence: 72%)
“…Moreover, a previous study has demonstrated neural convergence of visual and haptic inputs in the LOC through inverse effectiveness (Kim, Stevenson, & James, 2012). Based on these findings, it is plausible that during crossmodal matching, some of the population of neurons within this region would be reactivated at test (see Lacey & Sathian (2014) for a review of mental imagery), while others would be activated by the sensory percept. The combination of activated and re-activated neural populations would produce greater activation in crossmodal matching tasks, which require reactivation of the encoded stimulus as well as activation for the current sensory input, than in intramodal matching tasks, which do not.…”
Section: Discussion (mentioning, confidence: 99%)
“…The LOC thus houses an object representation that is flexibly accessible, both bottom-up and top-down, and which is modality- and possibly view-independent. (From Lacey et al, 2014). …”
Section: Figure (mentioning, confidence: 99%)
“…Here we focus on interactions between vision and touch in humans, including crossmodal interactions where tactile inputs evoke activity in neocortical regions traditionally considered visual, and multisensory integrative interactions. It is now established that cortical areas in both the ventral and dorsal pathways, previously identified as specialized for various aspects of visual processing, are also routinely recruited during the corresponding aspects of touch (for reviews see Amedi et al, 2005; Sathian & Lacey, 2007; Lacey & Sathian, 2011, 2014). When these regions are in classical visual cortex so that they would traditionally be regarded as unisensory, their engagement is referred to as crossmodal, whereas other regions lie in classically multisensory sectors of the association neocortex.…”
(mentioning, confidence: 99%)
“…Furthermore, these two sensory modalities perceive different aspects of an object, with vision more capable of measuring "macrogeometric features" such as object orientation, size and gross shape, and touch more involved in the perception of "microgeometric features" such as material differences (Woods and Newell 2004). Although visual and tactile information is processed in qualitatively different ways (Newell 2010), many behavioral and neurophysiological studies have demonstrated that crossmodal (visuo-tactile) interaction plays a vital role in normal perception, such as the recognition of objects and scenes, perception of material textures, and interaction with near-body space (Lederman et al 1986; Shimojo and Shams 2001; Woods and Newell 2004; Macaluso and Driver 2005; Stilla and Sathian 2008; James and Kim 2010; Macaluso and Maravita 2010; Magosso 2010; Newell 2010; Lacey and Sathian 2014). For example, imagining how a touched texture will look may invoke visual imagery, whereas imagining how a seen texture would feel could activate areas associated with haptic processing (Klatzky and Lederman 2010).…”
Section: Introduction (mentioning, confidence: 99%)