2013
DOI: 10.1523/jneurosci.0251-13.2013

Joint Representation of Depth from Motion Parallax and Binocular Disparity Cues in Macaque Area MT

Abstract: Perception of depth is based on a variety of cues, with binocular disparity and motion parallax generally providing more precise depth information than pictorial cues. Much is known about how neurons in visual cortex represent depth from binocular disparity or motion parallax, but little is known about the joint neural representation of these depth cues. We recently described neurons in the middle temporal (MT) area that signal depth sign (near vs far) from motion parallax; here, we examine whether and how the…

Cited by 34 publications (47 citation statements); references 53 publications.
“…For other neurons, such as the opposite cell illustrated in figure 7b, depth-tuning in the combined condition (orange) appears to be dominated by motion parallax tuning (black), and selectivity in the combined condition is slightly reduced by the incongruent disparity selectivity. These findings suggest that congruent cells might contribute to perceptual cue integration of depth cues, whereas opposite cells would not, but the study of Nadler et al [70] was not designed to test these issues directly. Additional studies, in which both psychophysical and neuronal performance is tested with both congruent and conflicting combinations of disparity and motion parallax cues, will be needed to evaluate whether activity of MT neurons can account for perceptual cue integration and cue weighting (as explored previously for multisensory neurons involved in heading perception, [71–73]).…”
Section: Integration of Binocular Disparity and Motion Parallax Cues (mentioning)
confidence: 98%
“…In animals, a few studies have examined how neurons signal three-dimensional surface orientation based on combinations of motion and disparity gradients [67] or perspective gradients and disparity gradients [68,69]. Recently, Nadler et al [70] measured the depth-sign selectivity of macaque MT neurons based on both binocular disparity and motion parallax cues. One might expect neurons to prefer the same depth-sign (near or far) for each cue.…”
Section: Integration of Binocular Disparity and Motion Parallax Cues (mentioning)
confidence: 99%
“…The current results have implications beyond stereopsis. There is theoretical and empirical evidence supporting the existence of neurons tuned to mismatches from studies of stereopsis (DeAngelis, Ohzawa, & Freeman, 1991; Prince, Cumming, & Parker, 2002; Tsao et al, 2003), binocular rivalry (Katyal et al, 2018; Kingdom et al, 2018; Said & Heeger, 2013), and integration of cues within (Kim, Angelaki, & DeAngelis, 2015; Nadler et al, 2013; Rideaux & Welchman, 2018) and between sensory modalities (Gu, Angelaki, & DeAngelis, 2008; Kim, Pitkow, Angelaki, & DeAngelis, 2016; Morgan, DeAngelis, & Angelaki, 2008).…”
Section: Discussion (mentioning)
confidence: 99%