2017 IEEE International Joint Conference on Biometrics (IJCB)
DOI: 10.1109/btas.2017.8272730
Subspace selection to suppress confounding source domain information in AAM transfer learning

Abstract: Active appearance models (AAMs) are a class of generative models that have seen tremendous success in face analysis. However, model learning depends on the availability of detailed annotation of canonical landmark points. As a result, when accurate AAM fitting is required on a different set of variations (expression, pose, identity), a new dataset is collected and annotated. To overcome the need for time-consuming data collection and annotation, transfer learning approaches have received recent attention. The …

Cited by 4 publications (3 citation statements) · References 18 publications

Citation statements:
“…While several effective transfer learning techniques are available for AAM models [24], [25], such methods are typically used when only a handful of training images from the target domain are available, e.g. as few as 5 target training images [24], [26]. In our case, with a few hundred training images available, it was possible to fully retrain the AAM model from scratch.…”
Section: Retraining and Fine-tuning
confidence: 99%
“…We compare the performance of different methods on the cognitively healthy older adult subset (H) versus the dementia subset (D) of T_f and T_p in terms of the convergence rate. To measure the convergence rate, we use its standard definition in the literature [2, 8, 32] as the percentage of test examples that converge to the ground truth landmark points given a tolerance in the root mean squared (RMS) fitting error (here, 5% of the face diagonal).…”
Section: Discussion
confidence: 99%
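For concreteness, the convergence-rate definition quoted above can be computed as in the short NumPy sketch below. The array shapes and the use of the bounding-box diagonal for normalization are our assumptions for illustration, not details taken from the cited paper.

```python
import numpy as np

def convergence_rate(pred, gt, face_diagonals, tol=0.05):
    """Fraction of test examples whose RMS landmark fitting error is within
    `tol` of the face diagonal (5% in the quoted passage).

    pred, gt:        (N, L, 2) arrays of predicted / ground-truth landmarks
    face_diagonals:  (N,) array of each face's bounding-box diagonal length
    """
    # Per-landmark Euclidean error, then RMS over the L landmarks per example
    per_point = np.linalg.norm(pred - gt, axis=-1)   # (N, L)
    rms = np.sqrt(np.mean(per_point ** 2, axis=1))   # (N,)
    # An example "converges" if its RMS error is within the tolerance
    return float(np.mean(rms <= tol * face_diagonals))
```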
“…Here ℓ(x) denotes the error for each sample and α is a hyper-parameter that controls the overall relative importance between source and target samples. Source sample weights {w_{x_j} = P_T(x_j) / P_S(x_j) | j ∈ {1, …, N_S}} play a major role in instance-based transfer learning methods, as they control the individual effect of source samples (Asgarian et al., 2017). We describe different weighting approaches, including our five baseline models and our proposed weighting strategy, in the following.…”
Section: Transfer Learning
confidence: 99%
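The density ratio P_T(x)/P_S(x) in these weights is rarely available in closed form. A common surrogate, sketched below, estimates it with a probabilistic domain classifier; the use of scikit-learn's LogisticRegression here is an illustrative assumption on our part, not the estimator used in the cited works.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio_weights(X_source, X_target):
    """Estimate w(x) = P_T(x) / P_S(x) for each source sample via a
    probabilistic domain classifier (a standard density-ratio surrogate;
    this specific estimator is an assumption, not the cited method)."""
    # Label source samples 0 and target samples 1, then fit a classifier
    X = np.vstack([X_source, X_target])
    y = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # p(target|x) / p(source|x) is proportional to P_T(x) / P_S(x)
    p = clf.predict_proba(X_source)
    ratio = p[:, 1] / np.clip(p[:, 0], 1e-12, None)
    # Rescale by the class prior ratio N_S / N_T to recover the density ratio
    return ratio * (len(X_source) / len(X_target))
```

Under this scheme, source samples that look more like target-domain data receive larger weights and thus contribute more to the source term of the loss.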