2020
DOI: 10.1145/3386569.3392437

CNNs on surfaces using rotation-equivariant features

Abstract: This paper is concerned with a fundamental problem in geometric deep learning that arises in the construction of convolutional neural networks on surfaces. Due to curvature, the transport of filter kernels on surfaces results in a rotational ambiguity, which prevents a uniform alignment of these kernels on the surface. We propose a network architecture for surfaces that consists of vector-valued, rotation-equivariant features. The equivariance property makes it possible to locally align features, which were co…
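The rotational ambiguity described in the abstract can be illustrated with a minimal sketch (not the paper's code; the function name and order convention are assumptions): a feature of rotation order m, stored as a complex number in a local tangent frame, picks up a phase e^{i·m·θ} when the frame is rotated by θ, and this equivariance is what lets features from differently-oriented frames be re-aligned before they are combined.

```python
import numpy as np

# Illustrative sketch, assuming complex-valued features of rotation order m:
# rotating the local tangent frame by theta multiplies an order-m feature
# by the phase e^{i*m*theta}.

def align_feature(f, theta, m=1):
    """Express a complex order-m feature f in a frame rotated by theta."""
    return f * np.exp(1j * m * theta)

# Order-0 (scalar) features are frame-independent:
s = 2.0 + 0.0j
assert np.isclose(align_feature(s, 0.7, m=0), s)

# Order-1 features rotate with the frame; a full turn is the identity:
v = 1.0 + 1.0j
assert np.isclose(align_feature(v, 2 * np.pi, m=1), v)
```

Because the phase factor is deterministic, two neighboring frames that disagree by a known angle can exchange features without loss, which is the alignment property the paper exploits.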

Cited by 53 publications (74 citation statements)
References 43 publications
“…Our model differs from SURFMNet in two ways: (i) it accepts geometric data derived from rigidly-aligned polygon models as input as opposed to precalculated spectral shape descriptors, and (ii) it uses a Harmonic Surface Network (HSN) as a feature extractor in place of a fully connected residual network. We choose HSN as our feature extractor due to its ability to produce rotation-invariant features from polygon model geometry, an essential property for our descriptors (see Methods and Materials for additional details) [38]. Our network's FM layer estimates the forward and backward correspondences, C12 and C21, between a source shape S1 and a target shape S2. These functional maps are easily converted to dense P2P correspondences, T12 and T21.…”
Section: Descriptor Learning
confidence: 99%
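The functional-map-to-point-to-point conversion mentioned in this citation statement can be sketched with the standard nearest-neighbor recovery in the spectral basis (the names `Phi1`, `Phi2`, `C12` are illustrative, not the cited network's API):

```python
import numpy as np

# Hedged sketch of the standard functional-map -> point-to-point conversion.
# Phi1 (n1 x k), Phi2 (n2 x k): truncated Laplace-Beltrami eigenbases;
# C12 (k x k): functional map taking spectral coefficients on S1 to S2.

def fmap_to_p2p(C12, Phi1, Phi2):
    """For each vertex of S2, find its match on S1 by nearest-neighbor
    search between the rows of Phi2 @ C12 and the rows of Phi1."""
    emb2 = Phi2 @ C12                                   # n2 x k
    # pairwise squared distances, then argmin over S1 vertices
    d = ((emb2[:, None, :] - Phi1[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)                             # maps S2 -> S1

# Toy check: with identical bases and the identity map, every vertex
# matches itself.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((6, 3))
T = fmap_to_p2p(np.eye(3), Phi, Phi)
assert (T == np.arange(6)).all()
```

The brute-force distance matrix is only for illustration; in practice a k-d tree or approximate nearest-neighbor search is used on larger meshes.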
“…Existing methods either (i) suffer from poor expressivity, (ii) are too sensitive to differences in polygon model connectivity, or (iii) do not produce rotation-invariant features in a manner that is conducive to learning spectral descriptors. In this study, we craft spectral descriptors in a similar point-cloud-based fashion, but we use a Harmonic Surface Network (HSN) as a feature extractor [38]. HSN-based feature extractors produce highly expressive intrinsic features that are strongly locally-aligned.…”
Section: Learning Intrinsic Features From Surfaces
confidence: 99%
“…Two-stream architectures Architectures with two streams and vector-valued features are also used in rotation-equivariant approaches for images [77] and surface meshes [16,76,12,57]. These networks constrain kernels to output complex-valued and rotation- or gauge-equivariant features, which are separated into orders of equivariance.…”
Section: Related Work
confidence: 99%
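Keeping complex-valued streams equivariant requires nonlinearities that act only on magnitudes and leave phases untouched. A minimal sketch of such a phase-preserving nonlinearity (an assumed, generic formulation, not taken from the cited papers):

```python
import numpy as np

# Hedged sketch: a magnitude-only nonlinearity for complex equivariant
# features. It rescales |z| (with a learnable-style bias b) and preserves
# the phase, so rotating the input frame commutes with the nonlinearity.

def magnitude_relu(z, b=-0.1):
    """Apply ReLU to the magnitude of z, keep its phase."""
    mag = np.abs(z)
    return np.maximum(mag + b, 0.0) * z / np.maximum(mag, 1e-12)

# Equivariance check: a global phase (frame rotation) passes through.
z = np.array([1 + 2j, -0.5 + 0.25j])
rot = np.exp(1j * 0.9)
assert np.allclose(magnitude_relu(z * rot), magnitude_relu(z) * rot)
```

Separating features into orders of equivariance, as the quoted passage describes, then amounts to applying such magnitude-based operations independently per order so each stream keeps its own phase behavior.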
“…This means that a network with DeltaConv is able to learn from directional information, while being agnostic to the choice of basis vectors in tangent spaces. This is an alternative to building reference frame fields to compare results of filters along a surface [3,53,32], as well as to methods relying on equivariant filters to achieve such independence [16,76].…”
Section: Properties Of Our Network
confidence: 99%