2016
DOI: 10.1007/s11042-016-3297-2

Transfer useful knowledge for headpose estimation from low resolution images

Cited by 2 publications (2 citation statements)
References 27 publications
“…The proposed method uses dense sampling intervals with multivariate labeling distributions (MLDs) to represent the head pose angles of an input face image. The unstructured behavior of a dynamic environment makes focus-of-attention (FOA) estimation challenging; it raises the possibility of poor quality, occlusion, low-resolution images, disturbance, and a non-linear relationship between the head pose angle and the ground-truth value [16]. This research gap encouraged us to incorporate HPE-based attention mapping.…”
Section: Introduction
confidence: 99%
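
The MLD idea quoted above can be illustrated with a small sketch: a Gaussian multivariate labeling distribution over densely sampled yaw/pitch bins, peaked at the ground-truth pose. The bin range, step, and sigma below are illustrative assumptions, not values from the cited paper.

```python
# Minimal sketch of a Gaussian multivariate labeling distribution (MLD)
# over densely sampled yaw/pitch bins; all parameters are illustrative.
import numpy as np

def mld(yaw, pitch, lo=-90.0, hi=90.0, step=3.0, sigma=6.0):
    """Return a normalized 2-D label distribution peaked at the true pose."""
    angles = np.arange(lo, hi + step, step)        # dense sampling interval
    dy = (angles - yaw) ** 2                       # squared yaw distances
    dp = (angles - pitch) ** 2                     # squared pitch distances
    grid = np.exp(-(dy[:, None] + dp[None, :]) / (2 * sigma ** 2))
    return grid / grid.sum()                       # normalize to a distribution

dist = mld(yaw=15.0, pitch=-10.0)
print(dist.shape, dist.sum())                      # (61, 61), sums to ~1.0
```
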
“…Transfer learning, which studies how to transfer knowledge learned from a source domain to a target domain, can solve new problems faster and better with a small amount of data at low cost [25]. Transfer learning is an effective method when data are lacking, and it has achieved many results in research on face pose estimation [26]–[29]. These successes motivate us to use transfer learning to solve the problem in this paper.…”
Section: Introduction
confidence: 99%
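
A minimal transfer-learning sketch in the spirit of the statement above, assuming a backbone pretrained on a large source domain (ImageNet, via torchvision) whose features are frozen and reused for target-domain head pose regression; the backbone choice and hyperparameters are assumptions, not the cited paper's method.

```python
# Hypothetical transfer-learning setup: freeze source-domain features,
# fine-tune only a small regression head for yaw/pitch/roll.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred (source-domain) feature extractor.
for p in backbone.parameters():
    p.requires_grad = False

# Replace the classifier with a pose-regression head for the target domain.
backbone.fc = nn.Linear(backbone.fc.in_features, 3)   # yaw, pitch, roll

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# One illustrative training step on a dummy batch of face crops.
images = torch.randn(8, 3, 224, 224)   # low-res faces upsampled to 224x224
targets = torch.randn(8, 3) * 30.0     # ground-truth angles in degrees
loss = criterion(backbone(images), targets)
loss.backward()
optimizer.step()
```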