2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00458
RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction

Abstract (Figure 1 caption): From an incomplete point cloud (N × 3) of a 3D scene (left), our method learns to jointly understand the 3D objects with semantic labels and poses (middle) and complete object meshes (right).

Cited by 44 publications (22 citation statements)
References 58 publications
“…Research in 3D scene understanding has been spurred forward with the introduction of larger-scale, annotated real-world RGB-D datasets [1,4,10]. This has enabled data-driven semantic understanding of 3D reconstructed environments, where we have now seen notable progress, such as for 3D semantic segmentation [7,11,17,31,32,36], object detection [29,30,44], instance segmentation [16,18,22,23,26,27,40,41], and recently panoptic segmentation [9]. Such 3D scene understanding tasks have been analogously defined to 2D image understanding, which considers RGB-only input without depth information.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…Consider the minimization with respect to θ. Given σ_I, we can see that maximizing the likelihood under a conditional Gaussian distribution for each point is equivalent to minimizing a mean-of-squares error function, given by L_I in (6). Applying F_θ to a point on a ray yields µ, ρ, σ.…”
Section: B. 3D Reconstruction With Neural Uncertainty
Citation type: mentioning (confidence: 99%)
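The Gaussian-to-least-squares equivalence referenced in that statement is the standard maximum-likelihood argument; a brief sketch follows, where x_i, y_i, N, and the fixed variance σ_I² are illustrative notation and not the cited paper's definitions from its equation (6).

```latex
% Conditional Gaussian model for each point, with fixed variance \sigma_I^2:
%   p(y_i \mid x_i, \theta) = \mathcal{N}\bigl(y_i \mid F_\theta(x_i), \sigma_I^2\bigr)
% Negative log-likelihood over N points:
\[
-\log \prod_{i=1}^{N} p(y_i \mid x_i, \theta)
  = \frac{1}{2\sigma_I^2}\sum_{i=1}^{N}\bigl(y_i - F_\theta(x_i)\bigr)^2
  + \frac{N}{2}\log\bigl(2\pi\sigma_I^2\bigr)
\]
% With \sigma_I held fixed, the log term is constant in \theta, so maximizing the
% likelihood is equivalent to minimizing the mean of squared residuals.
```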
“…Previous 3D representations for autonomous 3D reconstruction include point clouds [5], [6], volumes [7], [8], and surfaces [9], [10]. To plan views without global information about a scene, previous works resort to a greedy strategy: given the robot's current position and the reconstruction status, they quantify the quality of candidate viewpoints via information gain to plan the next best view (NBV).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
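As a rough illustration of the greedy NBV strategy described in that quote, here is a minimal Python sketch. The occupancy-grid encoding, the distance-based gain heuristic, and all function names are assumptions for illustration; they stand in for the ray-casting-based information gain used by the cited methods.

```python
import numpy as np

# Minimal sketch of a greedy next-best-view (NBV) loop.
# Grid encoding is an assumption: unknown = -1, free = 0, occupied = 1.

def information_gain(view_xyz, grid, voxel_centers, max_range=2.0):
    """Crude proxy for information gain: count still-unknown voxels within
    sensor range of the candidate viewpoint (a real system would ray-cast
    for visibility instead of using a plain distance test)."""
    dists = np.linalg.norm(voxel_centers - view_xyz, axis=1)
    return int(np.sum((grid.ravel() == -1) & (dists <= max_range)))

def next_best_view(candidates, grid, voxel_centers):
    """Greedy strategy: score every candidate viewpoint by its expected
    information gain and move to the highest-scoring one."""
    gains = [information_gain(c, grid, voxel_centers) for c in candidates]
    best = int(np.argmax(gains))
    return candidates[best], gains[best]

# Toy usage: a 10x10x10 grid, mostly unknown, with one scanned corner.
grid = -np.ones((10, 10, 10), dtype=int)
grid[:3, :3, :3] = 0  # already-reconstructed (free) region
voxel_centers = np.indices(grid.shape).reshape(3, -1).T.astype(float) + 0.5
candidates = np.array([[1.0, 1.0, 1.0], [8.0, 8.0, 8.0]])
view, gain = next_best_view(candidates, grid, voxel_centers)
print("next best view:", view, "expected newly observed voxels:", gain)
```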
“…Research in 3D scene understanding has recently been spurred forward with the introduction of larger-scale, real-world 3D scanned scene datasets [1,8,3,14]. We have seen notable progress in the development of methods for 3D semantic segmentation [41,42,48,49,9,26,32,53,28,56], object detection [46,47,39,40,38,58,35], and instance segmentation [23,55,54,31,24,13,18,29]. In particular, the introduction of sparse convolutional neural networks [15,6] has presented a computationally efficient paradigm producing state-of-the-art results in such tasks.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)