2012
DOI: 10.1007/978-3-642-33765-9_48

No Bias Left behind: Covariate Shift Adaptation for Discriminative 3D Pose Estimation

Abstract: Discriminative, or (structured) prediction, methods have proved effective for a variety of problems in computer vision; a notable example is 3D monocular pose estimation. All methods to date, however, have relied on the assumption that training (source) and test (target) data come from the same underlying joint distribution. In many real cases, including standard datasets, this assumption is flawed. In the presence of training set bias, learning results in a biased model whose performance degrades on the (ta…
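The covariate-shift setting the abstract describes is commonly handled by importance weighting: source training examples are reweighted by an estimate of p_test(x)/p_train(x) before fitting the discriminative predictor. The sketch below is a minimal, generic illustration of that idea, not the paper's formulation; the logistic-regression density-ratio estimator, the ridge regressor, and all names (X_src, y_src, X_tgt) are assumptions introduced here.

```python
# Minimal sketch of covariate-shift adaptation via importance weighting.
# NOT the paper's method: the density-ratio estimator (probabilistic
# classification of source vs. target inputs) and the ridge regressor are
# illustrative choices only.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

def importance_weights(X_src, X_tgt):
    """Estimate w(x) = p_tgt(x) / p_src(x) for each source sample."""
    X = np.vstack([X_src, X_tgt])
    d = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])  # 0 = source, 1 = target
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_tgt = clf.predict_proba(X_src)[:, 1]
    # Posterior-odds ratio, corrected for the source/target sample-size imbalance.
    w = (p_tgt / (1.0 - p_tgt)) * (len(X_src) / len(X_tgt))
    return np.clip(w, 0.0, 50.0)  # clip to limit the variance of the estimate

def fit_weighted_regressor(X_src, y_src, X_tgt, alpha=1.0):
    """Fit a regressor on source feature-pose pairs, reweighted toward the target inputs."""
    w = importance_weights(X_src, X_tgt)
    return Ridge(alpha=alpha).fit(X_src, y_src, sample_weight=w)
```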

Cited by 28 publications (46 citation statements). References 29 publications.
“…However, as we show here, the entirely unsupervised approach may be ineffective in dealing with more severe biases (e.g., those that induce structured changes between the training and test data) that make the training and test domains largely disjoint (e.g., see Figure 3 (a)). Here we expand on the findings in Yamada et al (2012), and propose a much more aggressive semi-supervised approach that is able to work even in the cases where the earlier formulation becomes less effective. It's important to note that the proposed approach is a strict generalization.…”
Section: Introduction (mentioning)
confidence: 92%
“…To address these issues we propose a new Semi-supervised Domain Adaptation (SSDA) approach, which is a generalization of the Unsupervised Domain Adaptation (USDA) we proposed earlier in Yamada et al (2012). Our domain adaptation method allows us to easily adapt a model learned on a (source) training set of feature-pose pairs $\{(x^{tr}_i, y^{tr}_i)\}_{i=1}^{n_{tr}}$ to a partially labeled (target) test set consisting of very few labeled examples $\{(x^{te}_j, y^{te}_j)\}_{j=1}^{n^{te}_l}$ (for which outputs are known) and many unlabeled samples for which outputs should be inferred, $\{x^{te}_j\}_{j=n^{te}_l+1}^{n_{te}}$.…”
Section: Introduction (mentioning)
confidence: 99%
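As a rough illustration of the semi-supervised data setup quoted above (an importance-weighted source set, a handful of labeled target examples, and many unlabeled target inputs to predict), here is a minimal sketch. The fixed target_boost weighting and the ridge regressor are illustrative assumptions; this is not the formulation of Yamada et al (2012) or of its semi-supervised generalization.

```python
# Hedged sketch of a semi-supervised adaptation step: combine importance-weighted
# source pairs with a few labeled target pairs, then predict poses for the
# unlabeled target inputs. All weighting choices here are assumptions, not the
# cited authors' method.
import numpy as np
from sklearn.linear_model import Ridge

def fit_ssda(X_src, y_src, X_tgt_lab, y_tgt_lab, X_tgt_unlab,
             src_weights, target_boost=10.0, alpha=1.0):
    # y arrays are pose matrices of shape (n_samples, pose_dim).
    X = np.vstack([X_src, X_tgt_lab])
    y = np.vstack([y_src, y_tgt_lab])
    # Source samples keep their covariate-shift weights; the few labeled
    # target samples get a fixed, larger weight.
    w = np.concatenate([src_weights, target_boost * np.ones(len(X_tgt_lab))])
    model = Ridge(alpha=alpha).fit(X, y, sample_weight=w)
    return model.predict(X_tgt_unlab)  # inferred poses for the unlabeled target inputs
```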