2021
DOI: 10.48550/arxiv.2105.02047
Preprint

Cuboids Revisited: Learning Robust 3D Shape Fitting to Single RGB Images

Abstract: Humans perceive and construct the surrounding world as an arrangement of simple parametric models. In particular, man-made environments commonly consist of volumetric primitives such as cuboids or cylinders. Inferring these primitives is an important step to attain high-level, abstract scene descriptions. Previous approaches directly estimate shape parameters from a 2D or 3D input, and are only able to reproduce simple objects, yet unable to accurately parse more complex 3D scenes. In contrast, we propose a ro…

Cited by 1 publication (1 citation statement)
References 54 publications (126 reference statements)
“…Considering these problems, primitive-based models have been proposed (Yao et al. 2021; Kluger et al. 2021; Yavartanoo et al. 2021; Paschalidou et al. 2021; Han et al. 2021; He et al. 2021; Yao et al. 2022, 2023b). Most use rendered 3D models as input (Yao et al. 2021; Yavartanoo et al. 2021; Paschalidou et al. 2021; Han et al. 2021), but few use realistic photos (Kluger et al. 2021; He et al. 2021; Yao et al. 2022, 2023b), among which Yao et al. (2022, 2023b) are most similar to ours. However, they only focus on the case of one animal in one image, and the input images need to be carefully curated or pre-processed.…”
Section: Related Work (mentioning)
Confidence: 99%