2019
DOI: 10.48550/arxiv.1901.08906
Preprint
Dense 3D Point Cloud Reconstruction Using a Deep Pyramid Network

Cited by 1 publication (2 citation statements)
References 0 publications
“…Jin et al. [28] extended [27] using weak supervision to generate depth images for improved reconstruction. Mandikal et al. [29] predicted a low-resolution point cloud from a 2D image and upsampled it to reconstruct a high-resolution point cloud. DeformNet [30] retrieved a point cloud shape template and fused it with image features.…”
Section: Related Work
confidence: 99%
“…However, they faced limitations in generating high-resolution point clouds, as achieving higher resolution necessitates increasing the decoder’s output neuron count. While Mandikal et al.’s [29] method can generate higher-resolution point clouds, it initially reconstructs a sparse point cloud and incrementally increases its resolution, potentially losing object details, similar to upscaling a compressed 2D image. 3D-PSRNet’s [45] reconstruction method incorporates object parts into network training, allowing the segmentation network to capture detailed object part features.…”
Section: Related Work
confidence: 99%
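The coarse-to-fine scheme the statements describe, reconstructing a sparse point cloud and then growing its resolution stage by stage, can be illustrated with a toy sketch. This is not the paper's network: in the actual pyramid models the per-point offsets are predicted by an MLP conditioned on learned local features, whereas here they are random perturbations, purely to show how each stage multiplies the point count.

```python
import numpy as np

def upsample_point_cloud(points, ratio=4, noise_scale=0.01, seed=0):
    """Toy coarse-to-fine upsampling stage: replicate each sparse
    point `ratio` times and perturb the copies with small local
    offsets, so an (N, 3) cloud becomes an (N * ratio, 3) cloud."""
    rng = np.random.default_rng(seed)
    dense = np.repeat(points, ratio, axis=0)            # (N * ratio, 3)
    dense = dense + rng.normal(0.0, noise_scale, dense.shape)
    return dense

# Stacking stages grows a 256-point cloud to 1024, then 4096 points.
sparse = np.random.default_rng(1).uniform(-1.0, 1.0, (256, 3))
stage1 = upsample_point_cloud(sparse)
stage2 = upsample_point_cloud(stage1)
print(stage1.shape, stage2.shape)  # (1024, 3) (4096, 3)
```

The detail-loss concern raised above follows directly from this structure: every dense point is a local refinement of a sparse parent, so geometry absent from the initial sparse reconstruction cannot be recovered in later stages.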