2005
DOI: 10.1038/nn1600

Prior experience of rotation is not required for recognizing objects seen from different angles

Abstract: An object viewed from different angles can be recognized and distinguished from similar distractors after the viewer has had experience watching it rotate. It has been assumed that as an observer watches the rotation, separate representations of individual views become associated with one another. However, we show here that once monkeys learned to discriminate individual views of objects, they were able to recognize objects across rotations up to 60 degrees, even though there had been no opportunity to learn …

Cited by 43 publications (45 citation statements)
References 25 publications
“…Viewing sequence thus matters but most likely matters a good deal less than does 3-D form. This conclusion is in line with previous results suggesting that pure spatial continuity between images is enough to induce the kind of image binding described in temporal association experiments (Perry, Rolls, & Stringer, 2006) and also with the finding that previous experience with specific object rotations is not necessary for robust extrapolation across views (Wang, Obama, Yamashita, Sugihara, & Tanaka, 2005). In general, it may be the case that the sequence of views seen by the observer makes a small enough contribution to the processes governing canonicality, recognition, or generalization across views that removing it from the equation does not cause the visual system to break in a profound way.…”
Section: Does Sequence Order Influence Canonicality? (supporting)
confidence: 92%
“…The cross-modal nature of the task is crucial. In non-cross-modal tasks, matching can be done at the perceptual level: for example, matching different views of the same face by means of common features perceived through mental rotation (34) or recognizing the different vocalizations of an individual through the presence of unchanged physical features such as formants. On the contrary, in our task, monkeys had to rely on information in memory to match one stimulus, the voice, with the other, the face.…”
Section: Discussion (mentioning)
confidence: 99%
“…In addition, studies have investigated the tolerance to many other image transformations, such as tolerance for changes in size (Ito et al. 1995), illumination (Braje et al. 1998), in-plane orientation (Guyonneau et al. 2006; Knowlton et al. 2009), and orientation in depth (Logothetis et al. 1994; Wallis and Bulthoff 2001; Wang et al. 2005). In light of behavioral work from Zoccolan and colleagues (Alemi-Neissi et al. 2013; Zoccolan et al. 2009), we also expect some tolerance to these other transformations somewhere in the rat visual cortex, and on the basis of our present data we hypothesize to find this tolerance to be the strongest in and around area TO.…”
Section: Tolerance to Image Transformations (mentioning)
confidence: 99%