2018
DOI: 10.1007/978-3-030-01249-6_6
Recovering 3D Planes from a Single Image via Convolutional Neural Networks

Cited by 100 publications (83 citation statements)
References 29 publications
“…where Q i is the 3D point at pixel i inferred from ground truth depth map. Note that our approach to plane parameter estimation is different from previous methods [23,31]. Those methods first predict plane parameter and then associate each pixel with a particular plane parameter.…”
Section: Plane Parameter Estimation
confidence: 99%
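The citation statement above refers to inferring the 3D point Q_i at each pixel i from a ground-truth depth map and estimating plane parameters from those points. A minimal sketch of that pipeline is back-projection through the camera intrinsics, Q_i = d_i K^{-1} [u, v, 1]^T, followed by a least-squares fit of the plane parameter n with n^T Q = 1. The intrinsics K and the synthetic fronto-parallel depth map below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def backproject(depth, K):
    """Per-pixel 3D points Q_i = d_i * K^{-1} [u, v, 1]^T from a depth map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix                                       # 3 x N
    return (rays * depth.reshape(-1)).T                                 # N x 3

def fit_plane(Q):
    """Least-squares plane parameter n satisfying n^T Q_i = 1 for all i."""
    n, *_ = np.linalg.lstsq(Q, np.ones(len(Q)), rcond=None)
    return n

# Assumed toy intrinsics and a fronto-parallel plane at depth 2 m,
# whose exact parameter under n^T Q = 1 is n = (0, 0, 0.5).
K = np.array([[500.0, 0.0, 32.0],
              [0.0, 500.0, 24.0],
              [0.0,   0.0,  1.0]])
depth = np.full((48, 64), 2.0)
Q = backproject(depth, K)
n = fit_plane(Q)
print(np.round(n, 3))  # -> [0.  0.  0.5]
```

The per-pixel formulation quoted above differs from this global fit only in that the network regresses n (or a pixel-to-plane association) directly; the geometric relation n^T Q_i = 1 is the same.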
“…Liu et al [23] propose a deep neural network that learns to infer plane parameters and assign plane IDs (segmentation masks) to each pixel in a single image. Yang and Zhou [31] cast the problem as a depth prediction problem and propose a training scheme which does not require ground truth 3D planes. However, these approaches are limited to predicting a fixed number of planes, which could lead to a degraded performance in complex scenes.…”
Section: Single-view Planar Reconstruction
confidence: 99%
“…used in conventional 3D reconstruction systems such as structure from motion (SfM) and visual SLAM, high-level geometric features provide more salient and robust information about the global geometry of the scene. This line of research has drawn interests on the exploration of extracting structures such as lines and junctions (wireframes) [14], planes [34,20], surfaces [11], and room layouts [37]. Among all the high-level geometric features, straight lines and their junctions (together called a wireframe [14]) are probably the most fundamental elements that can be used to assemble the 3D structures of a scene.…”
Section: Introduction
confidence: 99%
“…end while
25: E ← P-L(E)
26: return (V, E)
27: end procedure
28: procedure P-L(E)
29: sort E w.r.t. confidence values in descending order
30: …”
Section: A Supplementary Materials
confidence: 99%
“…However, it remains challenging for vision algorithms to detect and utilize such global structures from local image features, until recent advances in deep learning which makes learning high-level features possible from labeled data. The examples include detecting planes [30,19], surfaces [10], 2D wireframes [13], room layouts [35], key points for mesh fitting [31,29], and sparse scene representations from multiple images [6].…”
Section: Introduction
confidence: 99%