2010
DOI: 10.1007/978-3-642-13772-3_42

Recognition of Facial Expressions by Cortical Multi-scale Line and Edge Coding

Abstract: Face-to-face communication between humans involves emotions, which are often conveyed unconsciously by facial expressions and body gestures. Intelligent human-machine interfaces, for example in cognitive robotics, need to recognize emotions. This paper addresses facial expressions and their neural correlates on the basis of a model of the visual cortex: multi-scale line and edge coding. The recognition model links the cortical representation with Paul Ekman's Action Units, which are related to the…

Cited by 5 publications (4 citation statements). References: 19 publications (27 reference statements).

Citation statements (ordered by relevance):

“…This we expected due to our previous experience with cortical models: multi-scale keypoints, lines and edges provide very useful information to generate saliency maps for Focus of Attention or to detect faces by grouping facial landmarks defined by keypoints at eyes, nose and mouth [7]. In [11] we were able to use lines and edges to recognize facial expressions with success, and in [9] we have shown that lines and edges are very useful for face and object recognition. The method included here for the detection of head poses is not biological, but in principle we can integrate our methods for face detection and recognition of facial expressions.…”
Section: Discussion
confidence: 89%
“…An interesting aspect for future research is the incorporation of age and biometric differences (e.g., gender, colour of the skin, age, birth marks, etc.), as well as expression classification, already achieved by using multi-scale lines and edges (Sousa et al., 2010). As for now, face recognition with extreme expressions or newly grown beards, etc., remains a big challenge.…”
Section: Discussion
confidence: 99%
“…On the basis of models of neural processing schemes, it is now possible to create a cortical architecture bootstrapped by global and local gist (Martins et al., 2009), with face and figure-ground segregation (Farrajota et al., 2011; Rodrigues & du Buf, 2006, 2009a), focus-of-attention (Martins et al., 2009; Rodrigues & du Buf, 2006), face/object categorisation and recognition (Rodrigues & du Buf, 2006, 2009a), including recognition of facial expressions (Sousa et al., 2010).…”
Section: Cortical Background
confidence: 99%
“…Models of simple, complex and end-stopped cells in visual area V1 have been developed, and these models have been used for line, edge and keypoint detection (Rodrigues and du Buf, 2006, 2009). Lines and edges have been successfully used for multiple applications like object segregation, scale selection, saliency maps and disparity maps (Rodrigues et al., 2012), optical flow (Farrajota et al., 2011), face detection and recognition (Rodrigues and du Buf, 2006), facial expression recognition (Sousa et al., 2010), etc. The model for keypoint detection was computationally too expensive to be used in real-time applications at the time it was developed, but recent advances in computer hardware and code optimizations led to a much faster model that can now be used in real time (Terzic et al., 2015).…”
Section: Introduction
confidence: 99%
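
As a rough illustration of the simple/complex-cell modelling referred to in the statements above, the sketch below computes even and odd Gabor responses (a common stand-in for symmetric and antisymmetric simple-cell receptive fields) and labels positions as line-like or edge-like depending on which response dominates. This is a minimal, hedged approximation under assumed parameters (kernel size, wavelengths, the phase rule); it is not the implementation used in the cited papers.

    import numpy as np
    from scipy.ndimage import convolve

    def gabor_pair(size, wavelength, theta, sigma):
        """Even (cosine) and odd (sine) Gabor kernels for one scale and orientation."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)          # coordinate along the filter orientation
        envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        arg = 2.0 * np.pi * xr / wavelength
        return envelope * np.cos(arg), envelope * np.sin(arg)

    def line_edge_map(image, wavelength=8.0, theta=0.0):
        """Response energy plus a boolean map: True where the even (line-like)
        response dominates, False where the odd (edge-like) response does."""
        size = int(4 * wavelength) | 1                      # odd kernel size
        even_k, odd_k = gabor_pair(size, wavelength, theta, wavelength / 2.0)
        even = convolve(image.astype(float), even_k)
        odd = convolve(image.astype(float), odd_k)
        energy = np.hypot(even, odd)                        # complex-cell-like energy
        return energy, np.abs(even) > np.abs(odd)

    # Toy multi-scale usage; a face image would replace the random array.
    if __name__ == "__main__":
        img = np.random.rand(128, 128)
        for wl in (4.0, 8.0, 16.0):
            energy, is_line = line_edge_map(img, wavelength=wl)
            print(wl, float(energy.max()), float(is_line.mean()))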