2022
DOI: 10.48550/arxiv.2202.03990
Preprint

Equivariance versus Augmentation for Spherical Images

Abstract: We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images. We compare the performance of the group equivariant networks known as S2CNNs and standard non-equivariant CNNs trained with an increasing amount of data augmentation. The chosen architectures can be considered baseline references for the respective design paradigms. Our models are trained and evaluated on single or multiple items from the MNIST or FashionMNIST dataset projected onto the sphere. F…

Cited by 2 publications (2 citation statements)
References 16 publications
“…Group equivariant CNNs on S² were studied in Cohen et al (2018) by implementing efficient Fourier analysis on M = S² and G = SO(3). In Gerken et al (2022) the performance of group equivariant CNNs on S² was compared to standard non-equivariant CNNs trained with data augmentation. For the task of semantic segmentation it was demonstrated that the non-equivariant networks are consistently outperformed by the equivariant networks with considerably fewer parameters.…”
Section: Related Literature
confidence: 99%
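The augmentation baseline described above can be illustrated with a minimal sketch. The helper below is hypothetical (not from the paper): it applies random 90° planar rotations via `np.rot90`, standing in for the continuous SO(3) rotations an actual spherical-image pipeline would sample.

```python
import numpy as np

def augment_rotations(images, rng):
    # Hypothetical helper: random 90-degree planar rotations stand in for the
    # continuous SO(3) rotations used when augmenting spherical images.
    return np.stack([np.rot90(img, k=int(rng.integers(4))) for img in images])

rng = np.random.default_rng(0)
batch = rng.standard_normal((4, 28, 28))   # e.g. a batch of MNIST-sized images
augmented = augment_rotations(batch, rng)

# Each rotation only permutes pixels, so shapes and pixel values are preserved.
assert augmented.shape == batch.shape
```

In the augmentation paradigm, symmetry is not built into the architecture; the network must instead learn it from seeing transformed copies of the data.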
“…Group convolutional layers thus commute with group transformations. In certain applications where larger symmetries are important, these networks have been shown to further improve performance compared to networks exhibiting less symmetry [12,13]. From a physical perspective, the symmetries considered in CNNs and, more generally, in G-CNNs are analogous to global symmetries of lattice field theories, which has led to numerous applications of CNNs in high energy physics (see [14] for a review).…”
Section: Introduction
confidence: 99%
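The commuting property stated above can be checked numerically in the simplest case, G = discrete translations (the cited spherical works replace shifts with SO(3) rotations and use Fourier analysis instead). This is a minimal sketch, not any paper's implementation: a circular cross-correlation built from shifts commutes exactly with shifting the input.

```python
import numpy as np

def circ_conv2d(x, k):
    # 2-D circular cross-correlation assembled from shifts, so translation
    # equivariance holds exactly (no boundary effects).
    out = np.zeros_like(x)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * np.roll(x, shift=(-i, -j), axis=(0, 1))
    return out

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))

shift = (2, 5)
# Transform-then-convolve equals convolve-then-transform:
lhs = circ_conv2d(np.roll(x, shift, axis=(0, 1)), k)
rhs = np.roll(circ_conv2d(x, k), shift, axis=(0, 1))
assert np.allclose(lhs, rhs)
```

Group convolutional layers generalize this identity from translations to a larger group G, which is what makes the network's outputs transform predictably under G.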