2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2018.00219

LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image

Abstract: We propose an algorithm to predict room layout from a single image that generalizes across panoramas and perspective images, and across cuboid layouts and more general layouts (e.g., "L"-shaped rooms). Our method operates directly on the panoramic image, rather than decomposing it into perspective images as recent works do. Our network architecture is similar to that of RoomNet [16], but we show improvements due to aligning the image based on vanishing points and predicting multiple layout elements (corners, boundaries, size and …
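The abstract outlines a pipeline: align the panorama using vanishing points, then have a single network predict several layout elements (e.g., corner and boundary maps) that are later combined into a 3D layout. Below is a minimal PyTorch sketch of that idea; it is not the authors' released code, and all module names, layer sizes, and the input resolution are hypothetical assumptions for illustration.

```python
# Hypothetical sketch (not the LayoutNet authors' code): an encoder with two
# decoder heads that takes a vanishing-point-aligned equirectangular panorama
# and predicts per-pixel boundary and corner probability maps.
import torch
import torch.nn as nn

class LayoutNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder over the aligned panorama (3 x H x W), downsampling by 4.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Head 1: wall/ceiling/floor boundary probability map.
        self.boundary_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Head 2: layout corner probability map.
        self.corner_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, pano):
        feat = self.encoder(pano)
        return self.boundary_head(feat), self.corner_head(feat)

# Usage on a single 512x1024 panorama (resolution is an assumption):
boundary, corners = LayoutNetSketch()(torch.rand(1, 3, 512, 1024))
```

In the paper's actual method, a subsequent fitting step turns these predicted maps into 3D layout parameters; that step is omitted here.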

Cited by 255 publications (261 citation statements). References 33 publications.
“…Recently, there are several other works [22,9,24,21,18] related to room layouts, but they focus on a different problem, i.e., to reconstruct 3D room layouts from photos.…”
Section: Related Work
confidence: 99%
“…None of these methods predicts texture behind occlusion, which is subject of our approach. Other methods exploit more extended inputs to predict 3D scene representations, such as a panorama image [51], RGB-D [11] or a depth map [37,44].…”
Section: Related Work
confidence: 99%
“…On the other hand, having a fully completed 3D model of the scene is often an unnecessary complication, since most of the information present in such a model would never be used if the novel vantage points are either near the original one or small in number. It is worth noting that generating such completed 3D scenes typically comes with high computational and memory cost [51,11,37,44].…”
Section: Introduction
confidence: 99%
“…However, it remains challenging for vision algorithms to detect and utilize such global structures from local image features, until recent advances in deep learning which makes learning high-level features possible from labeled data. The examples include detecting planes [30,19], surfaces [10], 2D wireframes [13], room layouts [35], key points for mesh fitting [31,29], and sparse scene representations from multiple images [6].…”
Section: Introduction
confidence: 99%