2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00097
SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels

Abstract: We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregularly structured and geometric input, e.g., graphs or meshes. Our main contribution is a novel convolution operator based on B-splines, which makes the computation time independent of the kernel size due to the local support property of the B-spline basis functions. As a result, we obtain a generalization of the traditional CNN convolution operator by using continuous kernel functions parametrized by …
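The locality property claimed in the abstract can be made concrete with a small sketch. The code below is my own illustration, not the authors' implementation: it assumes degree-1 (linear) open B-splines, a kernel with k ≥ 2 control points per pseudo-coordinate dimension, and the hypothetical array shapes named in the comments. Because only (degree + 1)^d basis products are nonzero per edge, the work per edge does not grow with the kernel size k^d.

```python
# Minimal sketch of a B-spline-weighted graph convolution step (illustrative
# only; names and shapes are assumptions, not the paper's reference code).
#   x_neighbors : list of in_channels feature vectors, one per neighbor j
#   u_neighbors : list of d-dim pseudo-coordinates in [0, 1]^d, one per edge
#   weight      : (k ** d, in_channels, out_channels) trainable weights,
#                 one matrix per control point of a k x ... x k grid
import numpy as np

def linear_bspline_basis(u, k):
    """Nonzero degree-1 open B-spline basis values for one coordinate u in
    [0, 1] with k >= 2 control points; returns (grid index, value) pairs."""
    p = u * (k - 1)
    i = min(int(np.floor(p)), k - 2)    # left control-point index
    f = p - i                           # fractional offset inside the cell
    return [(i, 1.0 - f), (i + 1, f)]   # only 2 of the k functions are nonzero

def spline_conv_node(x_neighbors, u_neighbors, weight, k, d):
    """Output feature of one node: B-spline-weighted sum over its neighbors."""
    out = np.zeros(weight.shape[2])
    for x_j, u_j in zip(x_neighbors, u_neighbors):
        per_dim = [linear_bspline_basis(u_j[m], k) for m in range(d)]
        for combo in np.ndindex(*([2] * d)):       # 2**d nonzero tensor products
            idx, b = 0, 1.0
            for m, c in enumerate(combo):
                i_m, b_m = per_dim[m][c]
                idx = idx * k + i_m                # flattened grid index
                b *= b_m
            out += b * (x_j @ weight[idx])         # kernel-weighted transform
    return out / max(len(x_neighbors), 1)
```

The per-neighbor mean at the end is one common normalization choice; the exact normalization, spline degree, and boundary handling in SplineCNN may differ in detail.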

Cited by 589 publications (613 citation statements) · References 17 publications · Citing publications span 2019–2024
“…In contrast to previous approaches [14,28,34], which aggregate neighboring node features based on trainable weight functions, our method encodes node features under an explicitly defined spiral sequence, and a fully connected layer then encodes the input features together with the ordering information. It is a simple yet efficient approach.…”
Section: Main Concept
confidence: 99%
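For contrast with the B-spline operator sketched above, here is a minimal sketch of the spiral-sequence idea described in this citation. It is my own illustration under assumed names and shapes, not the cited paper's code: neighbor features are gathered in a fixed, explicitly defined order and passed through a single fully connected layer.

```python
# Sketch of spiral-sequence feature encoding (illustrative; names and shapes
# are assumptions): gather features along a precomputed spiral ordering and
# apply one fully connected layer, so the ordering itself carries structure.
import numpy as np

def spiral_encode_node(x, spiral_indices, W, b):
    """x: (num_nodes, in_channels) node features.
    spiral_indices: fixed-length list of node ids forming one node's spiral.
    W: (len(spiral_indices) * in_channels, out_channels), b: (out_channels,)."""
    gathered = x[spiral_indices].reshape(-1)   # concatenation preserves ordering
    return gathered @ W + b                    # single fully connected layer
```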
“…However, this operation requires identical graph input and handles the whole graph simultaneously, so it is not suitable for the variable and large graphs constructed from NVS. On the other hand, spatial convolution [20,38] aggregates a new feature vector for each vertex, using its neighborhood information weighted by a trainable kernel function. Because of this property, we consider the spatial convolution operation a better choice when dealing with graphs from NVS.…”
Section: Spatial Feature Learning Module
confidence: 99%
“…The content of U determines how the features are aggregated, and the content of f^l(j) defines what is aggregated. As such, several spatial convolution operations [20,38,40] on graphs have been proposed using different choices of kernel functions. Among them, SplineCNN [20] achieves state-of-the-art results in several applications, so in our work we use the same kernel function as in SplineCNN.…”
Section: Spatial Feature Learning Module
confidence: 99%
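In schematic form (my notation, not quoted from the citing paper), the division of roles described above can be written as follows, where N(i) is the neighborhood of vertex i, u(i,j) are edge pseudo-coordinates, and g_Θ is a trainable kernel such as the B-spline kernel of SplineCNN:

f^{l+1}(i) = \frac{1}{|\mathcal{N}(i)|} \sum_{j \in \mathcal{N}(i)} f^{l}(j) \cdot g_{\Theta}\bigl(u(i,j)\bigr)

Here f^{l}(j) supplies what is aggregated, while the kernel evaluated at u(i,j) supplies how it is weighted.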
“…as applied to harmonic analysis on graphs (Kotzagiannidis and Dragotti, 2019). More recently, Fey et al. (2018) have proposed tensor B-splines defined over a Cartesian product basis for geometric convolutional neural networks. Kronecker sums have been proposed as precision matrices for weighting the quadratic regularizer in smoothed multivariate spline regression.…”
Section: Rationale for the Proposed Multiway Kronecker Sum Model
confidence: 99%
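The Cartesian-product construction referenced here is, in standard tensor-product B-spline notation (my own summary, assuming d pseudo-coordinate dimensions, per-dimension basis functions N_{p_q,m} of degree m, and a multi-index p = (p_1, …, p_d)):

B_{p}(u) = \prod_{q=1}^{d} N_{p_q, m}(u_q), \qquad u = (u_1, \ldots, u_d) \in [0,1]^d

Each trainable kernel weight is attached to one such product basis function, which is what the linear-case (m = 1) code sketch after the abstract enumerates.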