2011
DOI: 10.1007/978-3-642-24571-8_51
Modeling Latent Discriminative Dynamic of Multi-dimensional Affective Signals

Abstract: During face-to-face communication, people continuously exchange para-linguistic information such as their emotional state through facial expressions, posture shifts, gaze patterns and prosody. These affective signals are subtle and complex. In this paper, we propose to explicitly model the interaction between the high level perceptual features using Latent-Dynamic Conditional Random Fields. This approach has the advantage of explicitly learning the sub-structure of the affective signals as well as th…


Cited by 71 publications (57 citation statements)
References 18 publications
“…Dahmane et al. [2] use Gabor filter energies to compute their visual features. Ramirez et al. [15], conversely, prefer to extract high-level features such as gaze direction, head tilt or smile intensity. Similarly, Gunes et al. [6] focus on spontaneous head movements.…”
Section: Introduction
confidence: 99%
“…In recent years, the field of dimensional continuous emotion analysis has gained rising attention, and a significant number of works has been published on this topic [2,3,12,10]. Introduced by Russel [11], this emotion description originated a radically different approach on describing emotional states.…”
Section: Introduction
confidence: 99%
“…For AVEC 2011: UCL (Meng and Bianchi-Berthouze 2011), Uni-ULM (Glodek et al 2011), GaTechKim (Kim et al 2011), LSU (Calix et al 2011), Waterloo (Sayedelahl et al 2011), NLPR (Pan et al 2011), USC (Ramirez et al 2011), GaTechSun (Sun and Moore 2011), I2R-SCUT (Cen et al 2011), UCR (Cruz et al 2011) and UMontreal (Dahmane and Meunier 2011a, b). For AVEC 2012: UPMC-UAG (Nicolle et al 2012), Supelec-Dynamixyz-MinesTelecom (Soladie et al 2012), UPenn (Savran et al 2012a), USC (Ozkan et al 2012), Delft (van der Maaten 2012), Uni-ULM (Glodek et al 2012), Waterloo2 (Fewzee and Karray 2012).…”
Section: Audio/Visual Emotion Challenge 2011/2012
confidence: 99%