Learning to pool high-level features for face representation (2014)
DOI: 10.1007/s00371-014-1049-8

Cited by 5 publications (4 citation statements). References 28 publications.

“…Different features are combined by simply concatenating them, as proposed, for instance, for pedestrian detection in Liang et al. (2012). Feature pooling and/or dimensionality reduction techniques (Huang et al. 2014) might be used as well, but we prefer to stick with a simple approach, and the results reported in the following are promising. Similarly, we have employed an early-fusion strategy, combining the features from the very beginning, before the classification and decision take place.…”
Section: Methods (mentioning, confidence: 99%)
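
A minimal sketch of the early-fusion strategy quoted above: per-sample feature blocks are simply concatenated into a single descriptor, with an optional dimensionality-reduction step (of the kind the passage attributes to Huang et al. 2014) before classification. The feature names, sizes, and the PCA/SVM choices below are illustrative assumptions, not the cited authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def early_fusion(feature_blocks):
    """Concatenate per-sample feature matrices (each [n_samples, d_i])
    into one descriptor of shape [n_samples, sum(d_i)]."""
    return np.concatenate(feature_blocks, axis=1)

# Hypothetical feature matrices for the same 100 samples.
rng = np.random.default_rng(0)
hog_feat = rng.normal(size=(100, 256))   # e.g., HOG descriptors
lbp_feat = rng.normal(size=(100, 59))    # e.g., LBP histograms
labels = rng.integers(0, 2, size=100)

fused = early_fusion([hog_feat, lbp_feat])

# Optional pooling / dimensionality reduction before the classifier.
fused_reduced = PCA(n_components=64).fit_transform(fused)

clf = LinearSVC().fit(fused_reduced, labels)
```
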
“…(18). 3.2 Self-adaptive PCA dictionary learning: Recently, there have been many works on learning dictionaries [14, 40-43] from natural image patches. It is well known that KSVD [20, 21] can represent various local image structures by learning a universal dictionary from a natural image dataset.…”
Section: Split Bregman Based Iterative Algorithm for the Proposed ASN (mentioning, confidence: 99%)
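
The passage above concerns dictionaries learned from natural image patches. A common form of a "self-adaptive" PCA dictionary is to cluster the patches and keep the leading principal directions of each cluster as a sub-dictionary; the sketch below illustrates that general idea only, not the cited method's actual algorithm. The cluster count, atom count, and random patch data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_pca_dictionaries(patches, n_clusters=8, n_atoms=16):
    """Cluster image patches and keep one PCA sub-dictionary per cluster.

    patches: array of shape [n_patches, patch_dim] (e.g., flattened 8x8 patches).
    Returns a list of (centroid, basis) pairs, where basis has shape
    [patch_dim, n_atoms] and holds the cluster's leading principal directions.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(patches)
    dictionaries = []
    for k in range(n_clusters):
        cluster = patches[km.labels_ == k]
        centered = cluster - cluster.mean(axis=0)
        # PCA via SVD: rows of vt are the principal directions of this cluster.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        dictionaries.append((km.cluster_centers_[k], vt[:n_atoms].T))
    return dictionaries

# Hypothetical data: 5000 random 8x8 patches flattened to 64-dim vectors.
patches = np.random.default_rng(0).normal(size=(5000, 64))
dicts = learn_pca_dictionaries(patches)
```
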
“…To obtain the intensity for the whole frame, the Prkachin and Solomon pain intensity (PSPI) scale was computed as the sum of the intensity of AU4, the maximum intensity of AU6 and AU7, the maximum intensity of AU9 and AU10, and the intensity of AU43 (eye closure). The PSPI lies in the range [0, 16]. Note that the PSPI FACS pain scale is currently the only intensity metric that operates on a frame-by-frame basis.…”
Section: Pain Intensity Detection (mentioning, confidence: 99%)
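
For reference, the per-frame PSPI computation described above reduces to a one-line formula over FACS action-unit intensities. The sketch below assumes the AU intensities are already available (e.g., from an AU estimator) on the usual 0-5 ordinal scale, with AU43 coded as 0 or 1.

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """Prkachin and Solomon pain intensity for a single frame.

    au4, au6, au7, au9, au10: FACS intensities on the 0-5 scale.
    au43 (eye closure): 0 or 1.
    The result lies in [0, 16].
    """
    return au4 + max(au6, au7) + max(au9, au10) + au43

# Example frame: AU4=3, AU6=2, AU7=4, AU9=1, AU10=0, eyes open.
print(pspi(au4=3, au6=2, au7=4, au9=1, au10=0, au43=0))  # 3 + 4 + 1 + 0 = 8
```
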
“…The second-order feature pooling takes into account the pairwise correlation of local features. Given its superior performance compared with first-order pooling methods, second-order pooling has been applied to several topics such as object recognition, human pose estimation [13], touch saliency prediction [14], and face representation [15].…”
Section: Related Work (mentioning, confidence: 99%)
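
A minimal sketch of second-order (covariance-style) pooling as described in the quote: the pairwise correlations of feature dimensions are aggregated by averaging outer products over all local descriptors, and the symmetric result is vectorized into a fixed-length image-level representation. This illustrates the general technique, not the specific pipeline of the cited face-representation work [15]; the descriptor count and dimensionality are assumptions.

```python
import numpy as np

def second_order_pool(local_feats):
    """Second-order pooling of a set of local descriptors.

    local_feats: array of shape [n_locals, d], one row per local feature.
    Returns a vector of length d*(d+1)//2: the upper triangle of the
    averaged outer product, i.e. the pairwise correlations of feature dims.
    """
    # Average of x_i x_i^T over all local features: a symmetric d x d matrix.
    second_order = local_feats.T @ local_feats / local_feats.shape[0]
    iu = np.triu_indices(second_order.shape[0])
    return second_order[iu]

# Hypothetical input: 200 local descriptors of dimension 32 from one face image.
feats = np.random.default_rng(0).normal(size=(200, 32))
pooled = second_order_pool(feats)
print(pooled.shape)   # (528,) = 32 * 33 / 2
```
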