2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00502
Hyperparameter-Free Losses for Model-Based Monocular Reconstruction

Abstract: This work proposes novel hyperparameter-free losses for single-view 3D reconstruction with morphable models (3DMM). We dispense with the hyperparameters used in other works by exploiting geometry, so that the shape of the object and the camera pose are jointly optimized in a single-term expression. This simplification reduces optimization time and complexity. Moreover, we propose a novel implicit regularization technique based on random virtual projections that does not require additional 2D or 3D annota…
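The central idea described in the abstract (shape coefficients and camera pose optimized jointly through a single reprojection term, with no balancing weights) can be sketched as follows. This is a minimal illustration consistent with the description, not the paper's actual formulation or code: the linear 3DMM parameterization, the Rodrigues pose representation, the pinhole projection, and every function and variable name are assumptions introduced here.

```python
# Hedged sketch: a single-term, hyperparameter-free objective in which the
# 3DMM shape coefficients and the camera pose are optimized jointly through
# the reprojection error alone. Names below are illustrative assumptions.
import numpy as np


def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)


def single_term_loss(params, mean_shape, shape_basis, landmarks_2d, focal):
    """Sum of squared reprojection errors: the only term in the objective.

    params = [alpha (n_basis), rvec (3), tvec (3)]  -- shape and pose together.
    mean_shape: (N, 3) 3DMM mean, shape_basis: (n_basis, N, 3),
    landmarks_2d: (N, 2) observed image points, focal: pinhole focal length.
    """
    n_basis = shape_basis.shape[0]
    alpha = params[:n_basis]
    rvec, tvec = params[n_basis:n_basis + 3], params[n_basis + 3:]

    # Morphable-model shape: mean plus a linear combination of basis shapes.
    shape_3d = mean_shape + np.tensordot(alpha, shape_basis, axes=1)  # (N, 3)

    # Rigid transform followed by pinhole projection.
    cam = shape_3d @ rodrigues(rvec).T + tvec                          # (N, 3)
    proj = focal * cam[:, :2] / cam[:, 2:3]                            # (N, 2)

    # A single term: no regularization weights to tune.
    return np.sum((proj - landmarks_2d) ** 2)
```

Because the objective is one sum of squared residuals over a single parameter vector [alpha, rvec, tvec], it can be handed directly to a generic optimizer such as scipy.optimize.minimize without tuning any regularization weight.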

Cited by 1 publication (3 citation statements) · References: 29 publications
“…This is often translated into appending multiple terms to the loss, together with hyperparameters that help to keep the norm of α_id small [19]. In order to avoid the use of hyperparameters, we make use of the loss proposed in [13], which allows the 3D shape and the camera pose to be learned simultaneously using a single-term expression.…”
Section: Single View Setup
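A plausible form of the single-term objective referenced in the statement above, assuming a linear 3DMM shape X_i(α), a pose (R, t) and a pinhole projection π (a reconstruction from the quoted description, not the paper's exact equation), is

```latex
\mathcal{L}(\alpha, R, t) = \sum_{i} \left\lVert \pi\!\left(R\, X_i(\alpha) + t\right) - x_i \right\rVert^2,
\qquad X_i(\alpha) = \mu_i + \sum_{k} \alpha_k\, B_{k,i},
```

where x_i are the observed 2D landmarks. Because shape and pose appear in the same residual, no weighting hyperparameter between a data term and a shape prior is required.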
“…In order to enforce global scene consistency in the predictions, we define an objective that uses all the camera poses and the predicted 3D shape within a single term, which does not include any hyperparameters and is easy to minimize, as proposed by [13]. Our loss is defined as the sum of the reprojection errors across all the input views, as in Bundle Adjustment [21], which is the Maximum Likelihood estimator when the image error is zero-mean.…”
Section: Multi-view Setup
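Under the same assumptions as in the sketch above, the multi-view objective described here would sum the reprojection error over all V input views, each with its own pose (R_v, t_v), in the style of Bundle Adjustment:

```latex
\mathcal{L}\left(\alpha, \{R_v, t_v\}_{v=1}^{V}\right) = \sum_{v=1}^{V} \sum_{i} \left\lVert \pi\!\left(R_v\, X_i(\alpha) + t_v\right) - x_{v,i} \right\rVert^2 .
```

Minimizing this sum of squared residuals coincides with the Maximum Likelihood estimate under zero-mean, i.i.d. Gaussian image noise, which is the standard Bundle Adjustment justification the quoted statement alludes to; sharing the shape α across views is what enforces the global scene consistency.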