Parallel Transport Convolution: Deformable Convolutional Networks on Manifold-Structured Data
2022, DOI: 10.1137/21m1407616

Cited by 6 publications (9 citation statements)
References 11 publications
“…One of the key challenges to applying wavelets and similar constructions to node-based graph signals is that graphs lack a natural translation operator, which prevents the construction of convolutional operators and traditional Littlewood-Paley theory [19,25,44]. This challenge is also present for general κ-dimensional simplices.…”
Section: Related Work (mentioning, confidence: 99%)
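As a point of reference for the obstruction this excerpt describes, classical convolution is written entirely in terms of translations; the formula below is standard Euclidean material included only for orientation, not something quoted from the report or the cited works.

```latex
% Classical Euclidean convolution: the kernel k is shifted by the translation
% x - y before being paired with f.  Graphs, simplicial complexes, and general
% manifolds have no canonical analogue of this shift, which is the obstruction
% the excerpt above points to.
\[
  (f * k)(x) \;=\; \int_{\mathbb{R}^n} f(y)\, k(x - y)\, \mathrm{d}y .
\]
```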
“…Unfortunately, these definitions do not generalize to non-homogeneous domains due to the lack of appropriate translation and dilation operators [44]. Instead, several methods have been proposed to generate similar bases and overcomplete dictionaries that apply to more abstract domains such as graphs and discretized manifolds [17,45,40].…”
Section: Orthonormal κ-Haar (mentioning, confidence: 99%)
“…In addition, defining localized convolution kernels is necessary for this representation; strategies include localizing spectral filters [28] and utilizing classic geometric approaches (e.g., B-splines [29], wavelets [30], and extrinsic Euclidean convolution [31]). Methods with local parameterization also have the drawback that only a small receptive field is used to aggregate information from the surface mesh, so the performance of these approaches depends on the mesh resolution of the 3D objects.…”
Section: Geometric Deep Learning On Meshes (mentioning, confidence: 99%)
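The "localizing spectral filters" strategy mentioned in this excerpt is commonly realized with low-order polynomial (e.g., Chebyshev) filters of the graph Laplacian, which are spatially localized because a degree-K polynomial mixes information over at most K hops. The sketch below is a minimal NumPy illustration of that general idea; the function names, the toy path graph, and the filter coefficients are assumptions made for this example and do not come from any of the cited works.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def chebyshev_filter(L, x, theta, lmax=2.0):
    """Apply sum_k theta[k] * T_k(L_tilde) @ x with L_tilde = 2 L / lmax - I.
    Assumes at least two coefficients.  A degree-K polynomial in L mixes
    values over at most K hops, which is what localizes the spectral filter."""
    L_tilde = 2.0 * L / lmax - np.eye(L.shape[0])
    t_prev, t_curr = x, L_tilde @ x               # T_0 x and T_1 x
    out = theta[0] * t_prev + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_next = 2.0 * L_tilde @ t_curr - t_prev  # Chebyshev recurrence
        out += theta[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return out

# Toy usage: a 4-node path graph and a random node signal.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.random.randn(4)
y = chebyshev_filter(normalized_laplacian(A), x, theta=[0.5, 0.3, 0.2])
print(y)  # filtered signal, one value per node
```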
“…In this work, we are interested in rotationally invariant features, thus we take a path closer to Schonsheck et al. [17] and Masci et al. [7]: we lift functions to functions on the bundle of tangent space rotations of our manifolds, a two-dimensional manifold, as opposed to Cohen et al. [18], where the lifting results in functions on SO(3), a three-dimensional manifold. Then, we add one or more extra local group convolution layers before summarising the data and eliminating path dependency.…”
Section: Introduction (mentioning, confidence: 99%)
“…[9-15] Global equivariance is often sought but has proved complicated or even elusive in many cases when the underlying geometry is nontrivial [16]. An elementary construction on a general manifold is proposed by Schonsheck et al. [17] via a fixed choice of geodesic paths used to transport filters between points on the manifold, ignoring the effects of path dependency (holonomy when the paths are geodesics). The removal of this dependency can be obtained by summarizing local responses over local orientations, which is what was done by Masci et al. [7]. On the other hand, Cohen et al. [18] lifted spherical functions to the 3D rotation group SO(3) and used a generalization of the Fourier transform on it to perform convolution.…”
Section: Introduction (mentioning, confidence: 99%)
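The last two excerpts both describe the transport-based construction at the heart of the cited paper: a filter is defined once in a reference tangent plane and carried to other points along chosen geodesics. The expression below is only a schematic sketch of such an operation; the reference point x0, the exponential map, and the transport operator P are notational assumptions introduced here, not the paper's exact definition.

```latex
% Schematic parallel-transport convolution on a manifold M (a sketch under the
% assumptions stated above, not the paper's verbatim definition): a kernel k
% lives on the tangent plane at a reference point x0, and at each point x the
% logarithm exp_x^{-1}(y) is carried back to x0 by parallel transport
% P_{x -> x0} along a chosen geodesic before being paired with f.
\[
  (f \star k)(x)
  \;=\;
  \int_{\mathcal{M}} f(y)\,
  k\!\bigl( \mathcal{P}_{x \to x_0}\, \exp_x^{-1}(y) \bigr)\,
  \mathrm{d}\mu(y).
\]
% The chosen geodesic from x to x0 makes the operation path dependent
% (holonomy), which is the dependency the excerpts discuss; averaging responses
% over local orientations removes it.
```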