2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489295
Analysing rotation-invariance of a log-polar transformation in convolutional neural networks

Cited by 7 publications (8 citation statements). References 11 publications.
“…However, PTN only recognized global deformation. In [37], the results show that several angles are better fitted to CNNs with log-polar operations for all tested datasets. However, only rotation transformation is performed and experiments are carried out under different rotations.…”
Section: Introduction (mentioning)
confidence: 94%
“…If the target in Cartesian coordinates changes in proportion (i.e. is scaled), this is equivalent to the target in log-polar coordinates being displaced along the radial axis [37]. A rotation of the target in Cartesian coordinates is likewise equivalent to a displacement of the target along the angular axis of the log-polar coordinate space.…”
Section: Log-polar Transformation (mentioning)
confidence: 99%
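The equivalence quoted above is easy to check numerically. The sketch below is an illustrative NumPy implementation (not code from the cited paper): it resamples an image onto a log-polar grid with nearest-neighbour sampling, then verifies that an exact 90° rotation of the input shows up as a circular shift of a quarter of the angular axis.

```python
import numpy as np

def log_polar(img, n_theta=64, n_rho=32):
    """Nearest-neighbour log-polar resampling of a square image about its centre."""
    n = img.shape[0]
    c = (n - 1) / 2.0                                    # centre of an odd-sized image
    rho_max = c
    thetas = 2 * np.pi * np.arange(n_theta) / n_theta    # uniform angular samples
    rhos = rho_max ** (np.arange(1, n_rho + 1) / n_rho)  # logarithmically spaced radii
    out = np.empty((n_theta, n_rho))
    for i, t in enumerate(thetas):
        ys = np.round(c + rhos * np.sin(t)).astype(int)
        xs = np.round(c + rhos * np.cos(t)).astype(int)
        out[i] = img[ys, xs]
    return out

rng = np.random.default_rng(0)
img = rng.random((129, 129))

lp = log_polar(img)
lp_rot = log_polar(np.rot90(img))            # exact 90-degree rotation of the input

# Rotation in Cartesian space appears as a circular shift along the angle axis:
shifted = np.roll(lp, -len(lp) // 4, axis=0)
print(np.array_equal(lp_rot, shifted))       # True
```

Because a 90° rotation maps grid pixels exactly onto grid pixels, the two log-polar maps match exactly; for arbitrary angles the correspondence holds only up to interpolation error, which is one reason the paper evaluates several rotation angles empirically.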
“…(Esteves et al., 2017) extend spatial transformer networks (Jaderberg et al., 2015) to use the log-polar transform, allowing them to fixate the high-resolution region on objects of interest. (Amorim et al., 2018) evaluate the rotation invariance of log-polar image representations in conjunction with CNNs. (Kim et al., 2020) similarly evaluate scale and rotation invariance […] (Balasuriya, 2006), resampled to a uniform grid.…”
Section: Deep Learning on Foveated Images (mentioning)
confidence: 99%
“…In [147], Chen et al. proposed two different polar transformation modules, Full Polar Convolution (FPolarConv) and Local Polar […].

[Fig. 11 caption: Strategies for encoding scale invariance include using different filter sizes in each convolution layer (a), employing independent sub-networks for learning different scales and then aggregating their output predictions (b), using multi-scale filters with competitive pooling to select the best option (c), and exploiting image and/or feature pyramids (d).]

Another type of domain conversion is based on log-radial harmonics or the log-polar representation [65, 150-153], which renders a rotation of an image in the Cartesian coordinate system as a plane translation along one axis of the logarithmic-polar plane and a scale change as a translation along its other main axis. A key advantage of the log-polar transformation lies in the fact that both rotation and scale are encoded, as opposed to the rotation-only invariance of the polar transform approach.…”
Section: Rotation Invariance (mentioning)
confidence: 99%
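The scale-to-translation half of this encoding can be demonstrated with continuous sampling, where no pixel interpolation clouds the picture. The sketch below is illustrative NumPy code (not taken from any cited work): it samples a smooth test pattern on a log-polar grid and checks that scaling the pattern by a factor matching four radial grid steps shifts the samples along the log-radial axis.

```python
import numpy as np

def log_polar_samples(fn, n_theta=64, n_rho=32, rho_max=64.0):
    """Sample a continuous image fn(x, y) on a log-polar grid centred at the origin."""
    thetas = 2 * np.pi * np.arange(n_theta) / n_theta
    rhos = rho_max ** (np.arange(1, n_rho + 1) / n_rho)  # log-spaced radii
    T, R = np.meshgrid(thetas, rhos, indexing="ij")
    return fn(R * np.cos(T), R * np.sin(T))

def f(x, y):
    """An arbitrary smooth test pattern."""
    return np.sin(0.1 * x) * np.cos(0.07 * y)

s = 64.0 ** (4 / 32)             # scale factor corresponding to 4 radial grid steps
lp = log_polar_samples(f)
lp_scaled = log_polar_samples(lambda x, y: f(s * x, s * y))

# Scaling in Cartesian space is a shift along the (log-)radial axis:
err = np.abs(lp_scaled[:, :-4] - lp[:, 4:]).max()
print(err)                       # ~0, differences come only from floating-point rounding
```

Choosing the scale factor as an integer power of the radial grid's ratio makes the shift land exactly on grid positions; arbitrary scale factors correspond to fractional shifts, just as arbitrary rotation angles correspond to fractional shifts along the angular axis.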