2018
DOI: 10.1007/978-3-030-01225-0_37

Fully-Convolutional Point Networks for Large-Scale Point Clouds

Abstract: This work proposes a general-purpose, fully-convolutional network architecture for efficiently processing large-scale 3D data. One striking characteristic of our approach is its ability to process unorganized 3D representations such as point clouds as input, then transform them internally to ordered structures to be processed via 3D convolutions. In contrast to conventional approaches that maintain either unorganized or organized representations from input to output, our approach has the advantage of opera…
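To make the abstract's core idea concrete, here is a minimal sketch, assuming a PyTorch environment: an unordered point cloud is scattered into an ordered voxel grid (a plain occupancy grid standing in for the paper's learned internal representation) and then processed by a small fully-convolutional 3D stack. The helper name `points_to_grid`, the grid size, and the layer widths are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np
import torch
import torch.nn as nn

def points_to_grid(points, grid_size=32):
    """Scatter an unordered (N, 3) point cloud into an ordered occupancy grid.

    Simplified stand-in for the internal ordered representation described in
    the abstract; the actual network learns features rather than occupancy.
    """
    # Normalize points into [0, 1) and quantize to voxel indices.
    mins = points.min(axis=0)
    scale = (points.max(axis=0) - mins) + 1e-9
    idx = np.floor((points - mins) / scale * (grid_size - 1)).astype(int)

    grid = np.zeros((1, 1, grid_size, grid_size, grid_size), dtype=np.float32)
    grid[0, 0, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return torch.from_numpy(grid)

# A tiny fully-convolutional 3D stack: no fully-connected layers, so the
# spatial extent of the input grid (and hence the scene size) may vary.
conv_stack = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 8, kernel_size=1),  # per-voxel output features
)

points = np.random.rand(2048, 3).astype(np.float32)  # placeholder cloud
features = conv_stack(points_to_grid(points))
print(features.shape)  # torch.Size([1, 8, 32, 32, 32])
```

Because the stack contains only convolutions, the same weights apply to grids of different spatial extent, which is what allows a fully-convolutional design to scale to large scenes.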

Cited by 178 publications (92 citation statements)
References 19 publications
“…Intuitively, such multiscale skip-connections are useful for point-based deep learning as well. A few recent works have exploited the power of multiscale representation [12,24,28,37,49] and skip-connection [8,43] in 3D learning. In this paper, we focus on point cloud upsampling and propose intra-level and inter-level point-based skip-connections.…”
Section: Related Work
confidence: 99%
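As a rough illustration of the skip-connection idea the quote above refers to, the following sketch (assuming PyTorch) fuses encoder and decoder per-point features with a shared MLP. It is a generic pattern, not the intra-level/inter-level connections proposed in the citing paper; `SkipBlock` and its feature dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Illustrative skip-connection between two per-point feature tensors.

    Generic sketch of multiscale features plus a skip link, not the cited
    papers' actual intra-/inter-level connections.
    """
    def __init__(self, enc_dim, dec_dim, out_dim):
        super().__init__()
        # Shared MLP (1x1 convolution over points) fusing the concatenated
        # encoder and decoder features.
        self.fuse = nn.Sequential(
            nn.Conv1d(enc_dim + dec_dim, out_dim, kernel_size=1),
            nn.ReLU(),
        )

    def forward(self, enc_feat, dec_feat):
        # enc_feat: (B, enc_dim, N) features saved from the encoder level
        # dec_feat: (B, dec_dim, N) upsampled decoder features at the same points
        return self.fuse(torch.cat([enc_feat, dec_feat], dim=1))

block = SkipBlock(enc_dim=64, dec_dim=128, out_dim=128)
enc = torch.randn(4, 64, 1024)
dec = torch.randn(4, 128, 1024)
print(block(enc, dec).shape)  # torch.Size([4, 128, 1024])
```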
“…Part segmentation results (Cat. mIoU / Ins. mIoU):
PointNet [25]: 80.4% / 83.7%
PointNet++ [27]: 81.9% / 85.1%
FCPN [29]: - / 84.0%
SyncSpecCNN [51]: 82.0% / 84.7%
SSCN [10]: 83.3% / 86.0%
SPLATNet [36]: 83.7% / 85.4%
SpiderCNN [49]: 81.7% / 85.3%
SO-Net [19]: 81.0% / 84.9%
PCNN [2]: 81.8% / 85.1%
KCNet [34]: 82.2% / 83.7%
ShapeContextNet [47]: - / 84.6%
SpecGCN [41]: - / 85.4%
3DmFV [3]: 81.0% / 84.3%
RSNet [12]: 81.4% / 84.9%
PointCNN [20]: 84.6% / 86.1%
DGCNN [45]: 82.3% / 85.1%
SGPN [44]: 82.8% / 85.8%
PointConv [46]: 82.8% / 85.7%
Point2Seq [23]: - / 85.2%
InterpCNN (ours): 84.0% / 86.3%
…work in Figure 2(b). During training we randomly sample 2,048 points from each object and use the original point clouds for testing.…”
Section: Cat.
confidence: 99%
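The training protocol mentioned in the quote above (randomly sampling 2,048 points per object while testing on the full clouds) can be sketched as below, assuming NumPy; `sample_points` is a hypothetical helper, not code from the cited work.

```python
import numpy as np

def sample_points(points, n_samples=2048, rng=None):
    """Randomly sample a fixed-size subset of an object's points for training.

    Mirrors the protocol described in the quote (2,048 points per object);
    this helper is an assumption, not code from the cited paper.
    """
    rng = rng or np.random.default_rng()
    n = points.shape[0]
    # Sample with replacement only if the object has fewer points than needed.
    idx = rng.choice(n, size=n_samples, replace=n < n_samples)
    return points[idx]

cloud = np.random.rand(10000, 3)      # placeholder object point cloud
print(sample_points(cloud).shape)     # (2048, 3)
```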
“…Other works such as SLAM++ [24] or Fusion++ [12] operate on an object level and create semantic scene graphs for SLAM and loop closure. Non-incremental scene understanding methods, in contrast, process a 3D scan directly to obtain semantic, instance or part segmentation [19,20,21,5,10]. Independently of the approach, all these methods rely on the assumption that…”
Section: RGB-D Scene
confidence: 99%