2020
DOI: 10.1109/tip.2020.2984373
Web-Shaped Model for Head Pose Estimation: An Approach for Best Exemplar Selection

Cited by 43 publications (22 citation statements)
References 42 publications
“…Comparisons on the Pointing’04 Dataset. The proposed head-pose estimation method is compared with MGD [ 53 ], kCovGa [ 54 ], CovGA [ 54 ], CNN [ 55 ], fuzzy [ 56 ], MSHF [ 57 ], SAE-XGB [ 58 ], Hopenet [ 26 ], FSA-Net [ 27 ], hGLLiM [ 59 ], 3DDFA [ 37 ] and 4C_4S_var4 [ 36 ]. Among them, MGD, kCovGa, CovGA, CNN, fuzzy, MSHF, hGLLiM, and SAE-XGB have been trained on the Pointing’04 dataset following a five-fold cross-validation protocol; Hopenet and FSA-Net have been trained on another dataset called 300W-LP, while 3DDFA, 4C_4S_var4 and our method have not been trained with any head-pose label.…”
Section: Experimental Results
Mentioning confidence: 99%
“…Recently, Andrea et al. [ 35 ] proposed exploiting a quad-tree-based representation of facial features, estimating head pose by guiding the subdivision of the locations of a set of landmarks detected over the face image into smaller and smaller quadrants. In [ 36 ], a web-shaped model was proposed to associate each detected landmark with a specific face sector. Although this method does not need training on datasets with head-pose labels, it performs poorly under large poses.…”
Section: Related Work
Mentioning confidence: 99%
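The quad-tree subdivision described above can be sketched as a recursive partition of the landmark bounding box. This is a minimal illustration only: the split criterion (`max_points`), depth limit, and box convention are assumptions, not the cited method's actual parameters.

```python
def build_quadtree(points, box, max_points=1, depth=0, max_depth=4):
    """Recursively subdivide a bounding box over detected landmark
    locations into four quadrants until each quadrant holds at most
    max_points landmarks or max_depth is reached.

    points: list of (x, y) landmark coordinates.
    box: (x0, y0, x1, y1) bounding box.
    Returns a nested dict: {"box", "points", "children"}.
    (Hypothetical sketch; the method in [35] may use a different
    split rule and stopping criterion.)
    """
    x0, y0, x1, y1 = box
    node = {"box": box, "points": points, "children": []}
    if len(points) <= max_points or depth >= max_depth:
        return node  # leaf: few enough landmarks, stop subdividing
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    quads = [(x0, y0, mx, my), (mx, y0, x1, my),
             (x0, my, mx, y1), (mx, my, x1, y1)]
    for qx0, qy0, qx1, qy1 in quads:
        inside = [(x, y) for (x, y) in points
                  if qx0 <= x < qx1 and qy0 <= y < qy1]
        node["children"].append(
            build_quadtree(inside, (qx0, qy0, qx1, qy1),
                           max_points, depth + 1, max_depth))
    return node
```

The nesting pattern of occupied quadrants then serves as a pose-dependent descriptor, since landmark positions shift predictably as the head rotates.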
“…As the proposed approach deals with the normalization of faces of arbitrary poses, the head pose estimation may also be considered as part of related work. A recent approach proposed by Barra et al [12] first detects the facial landmarks, and applies the web-shaped model to associate each landmark to a specific face sector. The obtained information is used to build a feature vector to infer the head pose.…”
Section: Related Work
Mentioning confidence: 99%
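The landmark-to-sector association described above can be sketched as a "spider-web" grid of concentric rings and angular slices centred on the face, with a sector-occupancy histogram as the resulting feature vector. The ring/slice counts and the histogram form are assumptions for illustration; the paper's exact grid layout and feature encoding may differ.

```python
import math

def web_sector_features(landmarks, center, radius, n_rings=3, n_slices=8):
    """Assign each landmark to a sector of a web-shaped grid and return
    a sector-occupancy histogram of length n_rings * n_slices.

    landmarks: list of (x, y) coordinates.
    center: (cx, cy) face centre; radius: outer ring radius.
    Landmarks beyond the outer ring are clipped to it.
    (Hypothetical parameterisation, not the paper's exact model.)
    """
    cx, cy = center
    hist = [0] * (n_rings * n_slices)
    ring_width = radius / n_rings
    slice_width = 2 * math.pi / n_slices
    for x, y in landmarks:
        dx, dy = x - cx, y - cy
        r = math.hypot(dx, dy)
        ring = min(int(r / ring_width), n_rings - 1)      # which ring
        angle = math.atan2(dy, dx) % (2 * math.pi)
        slice_ = min(int(angle / slice_width), n_slices - 1)  # which slice
        hist[ring * n_slices + slice_] += 1
    return hist
```

Because head rotation redistributes landmarks across sectors in a systematic way, such a histogram can feed a simple regressor or nearest-exemplar lookup to infer pose without head-pose-labelled training data.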
“…This method significantly improves performance on datasets where subjects are in motion. Recently, [30] relies on a cascade of two models and applies a web-shaped model over the detected landmarks to associate each landmark with a specific target area. This method detects the target at a reasonable distance and resolution to capture the best frame in the video.…”
Section: A. Human Pose Estimation
Mentioning confidence: 99%