2020
DOI: 10.48550/arxiv.2010.11661
Preprint

Efficient Generalized Spherical CNNs

Abstract: Many problems across computer vision and the natural sciences require the analysis of spherical data, for which representations may be learned efficiently by encoding equivariance to rotational symmetries. We present a generalized spherical CNN framework that encompasses various existing approaches and allows them to be leveraged alongside each other. The only existing non-linear spherical CNN layer that is strictly equivariant has complexity O(C²L⁵), where C is a measure of representational capacity and L…
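The O(C²L⁵) cost quoted above can be made concrete with a back-of-the-envelope operation count. The sketch below is illustrative only, not code from the paper: it assumes L denotes the spherical harmonic band-limit (the abstract is truncated at this point), treats C loosely as a channel count per degree, and models the strictly equivariant non-linearity as a full Clebsch-Gordan tensor product; the function name `cg_layer_ops` and the cost model are ours.

```python
# Back-of-the-envelope operation count for a full tensor-product
# (Clebsch-Gordan) activation up to band-limit L. Assumed cost model:
# the degree-l3 fragment of a product of degree-l1 and degree-l2 inputs
# has 2*l3 + 1 coefficients, each a sum over the valid (m1, m2 = m3 - m1)
# index pairs, of which there are at most 2*min(l1, l2) + 1.

def cg_layer_ops(L: int, C: int = 1) -> int:
    """Multiply-adds for all admissible (l1, l2) -> l3 couplings, C channels."""
    ops = 0
    for l1 in range(L + 1):
        for l2 in range(L + 1):
            # Clebsch-Gordan selection rule, truncated to the band-limit:
            for l3 in range(abs(l1 - l2), min(l1 + l2, L) + 1):
                ops += (2 * l3 + 1) * (2 * min(l1, l2) + 1)
    return C * C * ops  # every pair of channels is coupled: the C^2 factor

for L in (8, 16, 32, 64):
    # If the O(C^2 L^5) scaling holds, ops / L^5 should level off as L grows.
    print(f"L={L:3d}  ops={cg_layer_ops(L):>14,}  ops/L^5={cg_layer_ops(L) / L**5:.3f}")
```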

Cited by 3 publications (12 citation statements: 0 supporting, 12 mentioning, 0 contrasting). References 18 publications.

Citation statements:
“…A.4.3) and evaluate them similarly to the MNIST dataset and compute the Shrec17 retrieval metrics via the latent space linear classifier's predictions (Table 1). H-AE achieves the best classification and retrieval results for autoencoder-based models (Lohit & Trivedi, 2020), and is competitive with supervised models (Esteves et al., 2020; Cobb et al., 2021) despite the lower grid bandwidth and the small latent space. Using KNN classification instead of a linear classifier further improves performance (Table A.2).…”
Section: Methods (mentioning)
confidence: 99%
“…Specifically, pairs of features with any degree pair (ℓ₁, ℓ₂) may be used to produce a feature of degree ℓ₃ as long as |ℓ₁ − ℓ₂| ≤ ℓ₃ ≤ ℓ₁ + ℓ₂. Features of the same degree are then concatenated to produce the final equivariant (steerable) output tensor. Since each produced feature (often referred to as a “fragment” in the literature (Kondor et al., 2018; Cobb et al., 2021)) is independently equivariant, computing only a subset of them still results in an equivariant output, albeit with lower representational power. Reducing the number of computed fragments is desirable since their computation cannot be easily parallelized.…”
Section: Expanded Background on SO(3)-Equivariance (mentioning)
confidence: 99%
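To make the selection rule and the notion of a fragment concrete, here is a minimal single-channel sketch. It is not the implementation of Kondor et al. (2018) or Cobb et al. (2021); the helper names `valid_output_degrees` and `fragment` are ours, and SymPy's exact Clebsch-Gordan coefficients stand in for a precomputed table. Because each fragment transforms under the degree-ℓ₃ Wigner matrices on its own, dropping any subset of fragments preserves equivariance.

```python
import numpy as np
from sympy.physics.wigner import clebsch_gordan

def valid_output_degrees(l1: int, l2: int, L: int) -> range:
    """Degrees l3 that a (l1, l2) pair may couple into:
    |l1 - l2| <= l3 <= l1 + l2, truncated to the band-limit L."""
    return range(abs(l1 - l2), min(l1 + l2, L) + 1)

def fragment(f1: np.ndarray, l1: int, f2: np.ndarray, l2: int, l3: int) -> np.ndarray:
    """Couple degree-l1 and degree-l2 feature vectors (length 2l+1, indexed
    by m = -l..l) into a single degree-l3 fragment."""
    out = np.zeros(2 * l3 + 1, dtype=complex)
    for i1, m1 in enumerate(range(-l1, l1 + 1)):
        for i2, m2 in enumerate(range(-l2, l2 + 1)):
            m3 = m1 + m2
            if abs(m3) <= l3:
                # Clebsch-Gordan coefficient <l1 m1; l2 m2 | l3 m3>
                c = float(clebsch_gordan(l1, l2, l3, m1, m2, m3))
                out[m3 + l3] += c * f1[i1] * f2[i2]
    return out

# Example: degree-1 and degree-2 features can couple into degrees 1, 2 and 3.
f1, f2 = np.random.randn(3) + 0j, np.random.randn(5) + 0j
for l3 in valid_output_degrees(1, 2, L=3):
    print(l3, fragment(f1, 1, f2, 2, l3).shape)
```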
“…They yield a feature map transforming in the tensor product representation, which is then decomposed into irreducible representations. To eschew the large tensors in this process, [24, 30] introduce various refinements of this basic idea.…”
Section: Equivariant Deep Network Architectures for Machine Learning (mentioning)
confidence: 99%
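For context, the size of those intermediate tensors follows from a standard representation-theoretic identity (not quoted from the paper): the tensor product of degree-ℓ₁ and degree-ℓ₂ irreducibles decomposes as a direct sum over the admissible output degrees, so the dimensions on both sides must match,

$$(2\ell_1 + 1)(2\ell_2 + 1) \;=\; \sum_{\ell_3 = |\ell_1 - \ell_2|}^{\ell_1 + \ell_2} (2\ell_3 + 1),$$

which makes explicit how the product grows before any refinement truncates the decomposition.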
“…The Clebsch-Gordan nets introduced in [23] have a similar structure but use tensor products in the Fourier domain as nonlinearities, instead of point-wise nonlinearities in the spatial domain. Several modifications of this approach led to a more efficient implementation in [24]. The constructions mentioned so far involve convolutions which map spherical features to features defined on SO(3).…”
Section: Introduction (mentioning)
confidence: 99%
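As a single-channel cartoon of such a Fourier-domain nonlinearity (reusing `valid_output_degrees` and `fragment` from the sketch above; this is an assumption-laden toy, not the architecture of [23] or [24]): every pair of input degrees is coupled and the resulting fragments are stacked per output degree, without ever leaving the Fourier domain.

```python
def cg_nonlinearity(features: dict, L: int) -> dict:
    """Toy Clebsch-Gordan nonlinearity: couple every pair of input degrees
    and stack the resulting fragments per output degree. No point-wise
    spatial nonlinearity is involved, so equivariance is exact."""
    out = {l: [] for l in range(L + 1)}
    for l1, f1 in features.items():
        for l2, f2 in features.items():
            for l3 in valid_output_degrees(l1, l2, L):
                out[l3].append(fragment(f1, l1, f2, l2, l3))
    # Stack fragments of equal degree into a (num_fragments, 2*l + 1) block.
    return {l: np.stack(v) for l, v in out.items() if v}

x = {0: np.random.randn(1) + 0j, 1: np.random.randn(3) + 0j}
y = cg_nonlinearity(x, L=2)
print({l: v.shape for l, v in y.items()})  # {0: (2, 1), 1: (3, 3), 2: (1, 5)}
```

A strictly equivariant layer would then mix the stacked fragments within each degree block by learned linear maps, which is where the fragment count directly drives the layer's cost.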