2018
DOI: 10.1145/3272127.3275110

Monte Carlo convolution for learning on non-uniformly sampled point clouds

Abstract: We furthermore propose an efficient implementation which significantly reduces the GPU memory required during training. By employing our method in hierarchical network architectures, we outperform most state-of-the-art networks on established point cloud segmentation, classification, and normal estimation benchmarks. Furthermore, in contrast to most existing approaches, we also demonstrate the robustness of our method with respect to sampling variations, even when training with uniformly s…
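The idea the abstract summarizes — evaluating a continuous convolution as a Monte Carlo estimate, with each neighbour's contribution normalized by its local sample density so that non-uniform sampling does not bias the result — can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the paper parameterizes the convolution kernel with an MLP, whereas `kernel_fn` here is a caller-supplied function, and `estimate_density`, `radius`, and `bandwidth` are simplifying assumptions.

```python
import numpy as np

def estimate_density(points, bandwidth=0.1):
    """Kernel density estimate at every sample point (Gaussian kernel)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(axis=1)

def monte_carlo_conv(points, features, kernel_fn, radius=0.2):
    """Monte Carlo estimate of a continuous convolution on a point cloud.

    Each neighbour's contribution is divided by its estimated sample
    density, so densely sampled regions do not dominate the estimate.
    """
    density = estimate_density(points)
    out = np.zeros_like(features, dtype=float)
    for i, x in enumerate(points):
        d2 = ((points - x) ** 2).sum(-1)
        nbrs = np.nonzero(d2 < radius ** 2)[0]  # receptive field of x
        # Monte Carlo weights: kernel value divided by sample density
        w = kernel_fn((points[nbrs] - x) / radius) / density[nbrs]
        out[i] = (w[:, None] * features[nbrs]).sum(axis=0) / len(nbrs)
    return out
```

Because the density term cancels the sampling bias, the estimate is approximately invariant to how densely each region of the surface was sampled — the property the robustness experiments in the paper test.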

Cited by 233 publications (201 citation statements)
References 34 publications
“…Beyond learning on unstructured point clouds, there have been some notable extension works, such as learning with hierarchical structures [28,14,35,36], learning with a self-organizing network [19], learning to map a 3D point cloud to a 2D grid [43,8], addressing large-scale point cloud segmentation [15], handling non-uniform point clouds [11], and employing spectral analysis [45]. Such ideas are orthogonal to our method, and adding them on top of our proposed convolution could be an interesting direction for future research.…”
Section: Related Work
confidence: 99%
“…More recently, some attempts have been made to design a convolution that operates directly on points [2,45,20,14,13]. These methods use the spatial localization property of Figure 1.…”
Section: Introduction
confidence: 99%
“…For now, employing different parameter settings for different datasets is the more practical option. Many recent deep neural network models for point cloud processing, such as MCCNN [14], likewise trained separate models for different datasets. A potential solution is domain adaptation [2], especially multi-source domain adaptation, which has proven beneficial for learning source data from different domains [12].…”
Section: Generalizability
confidence: 99%
“…• The first and foremost task is to update our backbone network to state-of-the-art deep learning models for processing point clouds. For example, Hermosilla et al. proposed MCCNN, which uses Monte Carlo up- and down-sampling to preserve the original sample density, making it more suitable for non-uniformly distributed point clouds [14]. We consider MCCNN an important future improvement for enhancing the effectiveness and robustness of LassoNet.…”
Section: Limitations and Future Work
confidence: 99%