2021 · Preprint
DOI: 10.48550/arxiv.2110.05861

Convolutional Neural Networks Are Not Invariant to Translation, but They Can Learn to Be

Valerio Biscione,
Jeffrey S. Bowers

Abstract: When seeing a new object, humans can immediately recognize it across different retinal locations: the internal object representation is invariant to translation. It is commonly believed that Convolutional Neural Networks (CNNs) are architecturally invariant to translation thanks to the convolution and/or pooling operations they are endowed with. In fact, several studies have found that these networks systematically fail to recognise new objects on untrained locations. In this work, we test a wide variety of CN…
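The abstract's claim can be illustrated with a toy NumPy sketch (not from the paper; the naive `conv2d_valid` helper and the single-pixel input are illustrative assumptions): convolution is translation *equivariant* — the feature map shifts along with the input — but the flattened representation that a dense classification head sees is not invariant.

```python
import numpy as np

def conv2d_valid(img, k):
    """Naive 'valid'-mode cross-correlation of a 2-D image with kernel k."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

# An input with a single bright pixel, and a copy shifted down by 2 rows.
img = np.zeros((8, 8))
img[2, 2] = 1.0
shifted = np.zeros((8, 8))
shifted[4, 2] = 1.0
k = np.ones((3, 3))

a = conv2d_valid(img, k)
b = conv2d_valid(shifted, k)

# Equivariance: the feature map moves with the input...
assert np.array_equal(np.roll(a, 2, axis=0), b)
# ...but the flattened vector fed to a dense layer is a different vector,
# so the network's output need not be the same for the shifted object.
assert not np.array_equal(a.ravel(), b.ravel())
```

Border effects and dense read-out layers are exactly why invariance is not architectural and must instead be learned, e.g. by seeing objects at many locations during training.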

Cited by 1 publication (1 citation statement) · References 26 publications
“…However, CNNs are not architecturally invariant to translation, size, or illumination. In fact, several studies have found that these networks systematically fail to recognize new objects in untrained locations or orientations [10]. This is where data augmentation becomes essential.…”
Section: Dataset Augmentation
confidence: 99%
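The citing passage points to data augmentation as the remedy for CNNs' lack of built-in translation invariance. A minimal sketch of translation augmentation in plain NumPy (the `shift2d`/`random_translate` helpers and the shift range are illustrative assumptions, not code from either paper):

```python
import numpy as np

def shift2d(img, dy, dx):
    """Translate a 2-D image by (dy, dx) pixels, zero-padding the vacated
    cells (unlike np.roll, nothing wraps around the border)."""
    h, w = img.shape
    out = np.zeros_like(img)
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
    return out

def random_translate(img, max_shift, rng):
    """One augmentation step: draw a random offset and apply it."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return shift2d(img, int(dy), int(dx))

# Demonstration: shifting a 3x3 image down by one row.
img = np.arange(9, dtype=float).reshape(3, 3)
down = shift2d(img, 1, 0)
assert down[0].sum() == 0                  # vacated top row is zero-padded
assert np.array_equal(down[1:], img[:2])   # remaining rows moved down intact
```

Training on such randomly shifted copies exposes the network to objects at many locations, which is one way it can learn the invariance the abstract says it lacks architecturally.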