2012 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2012.6247934

A constrained latent variable model

Abstract: Latent variable models provide valuable compact representations for learning and inference in many computer vision tasks. However, most existing models cannot directly encode prior knowledge about the specific problem at hand. In this paper, we introduce a constrained latent variable model whose generated output inherently accounts for such knowledge. To this end, we propose an approach that explicitly imposes equality and inequality constraints on the model's output during learning, thus avoiding the computat…
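The abstract is truncated above. As a rough illustrative sketch only (not the paper's actual formulation), one common way to make a latent variable model's output respect equality and inequality constraints during learning is to add penalty terms for constraint violations to the reconstruction objective. Every name below (the linear map W, latent codes Z, constraint matrix A_eq, penalty weight rho, and the particular constraints) is a hypothetical stand-in chosen for illustration.

# Hypothetical sketch: fit a toy linear latent variable model X ~ Z W^T while
# softly penalising violations of an assumed equality constraint (sum of
# outputs = 0) and an assumed inequality constraint (first output <= 1).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
D, d, N = 6, 2, 50                       # output dim, latent dim, number of samples
X = rng.normal(size=(N, D))              # toy observations (placeholder data)

A_eq, b_eq = np.ones((1, D)), np.zeros(1)  # assumed equality constraint A_eq x = b_eq
rho = 10.0                                 # assumed penalty weight on violations

def unpack(theta):
    # Split the flat parameter vector into the mapping W and latent codes Z.
    W = theta[:D * d].reshape(D, d)
    Z = theta[D * d:].reshape(N, d)
    return W, Z

def objective(theta):
    W, Z = unpack(theta)
    X_hat = Z @ W.T                                  # model output per sample
    recon = np.sum((X - X_hat) ** 2)                 # data (reconstruction) term
    eq_viol = X_hat @ A_eq.T - b_eq                  # equality residuals per sample
    ineq_viol = np.maximum(X_hat[:, 0] - 1.0, 0.0)   # hinge on inequality violation
    return recon + rho * (np.sum(eq_viol ** 2) + np.sum(ineq_viol ** 2))

theta0 = rng.normal(scale=0.1, size=D * d + N * d)
result = minimize(objective, theta0, method="L-BFGS-B")
W_fit, Z_fit = unpack(result.x)
print("final objective:", result.fun)

Note that a quadratic penalty of this kind only encourages the constraints softly; enforcing them exactly, as the abstract's wording suggests, would require something like a Lagrangian formulation or a projection step rather than this simple sketch.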

Cited by 70 publications (59 citation statements). References 29 publications.
“…In the video, 81 features were tracked along 61 frames showing approximately two periods of bending movement. The training data in this experiment is the cardboard dataset obtained from [39], the same dataset used in the previous evaluations of the cardboard sequence. The results show a comparison of our reconstructed shapes with those obtained by the MP, PTA, and KSFM methods.…”
Section: Qualitative Evaluation (mentioning)
confidence: 99%
“…The testing data used for evaluation includes two articulated face sequences, surprise and talking, both captured using a 3D scanner with 3D tracking of 83 facial landmarks, and two surface models, cardboard and cloth [39]. This paper does not focus on feature detection and tracking.…”
Section: RF (mentioning)
confidence: 99%