2021
DOI: 10.48550/arxiv.2110.13570
Preprint
Learning Graph Representation of Person-specific Cognitive Processes from Audio-visual Behaviours for Automatic Personality Recognition

Abstract: This paper proposes to recognise the true (self-reported) personality from a learned simulation of the target subject's cognition. This approach builds on the following two findings in cognitive science: (i) human cognition partially determines expressed behaviour and is directly linked to true personality traits; and (ii) in dyadic interactions, individuals' nonverbal behaviours are influenced by their conversational partner's behaviours. In this context, we hypothesise that during a dyadic interaction, a target…

Cited by 3 publications (7 citation statements)
References: 80 publications
“…We train the proposed model for 40 epochs in total. Evaluation Metric. Following previous AU occurrence recognition studies [Shao et al, 2021a; Churamani et al, 2021; Li et al, 2019b; Song et al, 2021c], we use a common metric, the frame-based F1 score, defined as F1 = (2 · P · R) / (P + R), which takes both the recognition precision P and the recall R into account.…”
Section: Methods (mentioning, confidence: 99%)
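The frame-based F1 score in the quoted statement can be sketched as a small helper. The function name and example values are illustrative, not from the cited paper:

```python
def frame_f1(precision: float, recall: float) -> float:
    """Frame-based F1 score: the harmonic mean of precision (P) and
    recall (R), i.e. F1 = 2*P*R / (P + R)."""
    if precision + recall == 0.0:
        return 0.0  # convention: F1 is 0 when both P and R are 0
    return 2.0 * precision * recall / (precision + recall)

# Perfect precision and recall give the maximum score
print(frame_f1(1.0, 1.0))  # → 1.0
```

Because it is a harmonic mean, F1 is pulled toward the smaller of P and R, so a model cannot score well by trading one entirely for the other.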
“…As a result, [the quoted comparison tables of per-AU F1 scores follow; bracketed values appear as in the original, and the final column of each row is the average:]

[Li et al, 2018]                 39.0   35.2   48.6   76.1   72.9   81.9   86.2   58.8   37.5   59.1   35.9   35.8   | 55.9
JAA-Net [Shao et al, 2018]       47.2   44.0   54.9   77.5   74.6   84.0   86.9   61.9   43.6   60.3   42.7   41.9   | 60.0
LP-Net [Niu et al, 2019]         43.4   38.0   54.2   77.1   76.7   83.8   87.2   63.3   45.3   60.5   48.1   54.2   | 61.0
ARL [Shao et al, 2019]           45.8   39.8   55.1   75.7   77.2   82.3   86.6   58.8   47.6   62.1   47.4   [55.4] | 61.1
SEV-Net [Yang et al, 2021]       [58.2] [50.4] 58.3   [81.9] 73.9   [87.8] 87.5   61.6   [52.6] 62.2   44.6   47.6   | 63.9
FAUDT [Jacob and Stenger, 2021]  51.7   [49.3] [61.0] 77.8   79.5   82.9   86.3   [67.6] 51.9   63.0   43.7   [56.3] | 64.2
SRERL [Li et al, 2019a]          46.9   45. …

[Li et al, 2018]                 41.5   26.4   66.4   50.7   [80.5] [89.3] 88.9   15.6   | 48.5
JAA-Net [Shao et al, 2018]       43.7   46.2   56.0   41.4   44.7   69.6   88.3   58.4   | 56.0
LP-Net [Niu et al, 2019]         29.9   24.7   72.7   46.8   49.6   72.9   93.8   65.0   | 56.9
ARL [Shao et al, 2019]           43.9 …

the categorical cross-entropy loss is introduced as:…”
Section: Training Strategy (mentioning, confidence: 99%)
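The quoted statement is truncated before the loss equation itself. Assuming the standard categorical cross-entropy it names (for a one-hot target, L = −log p_t), a minimal sketch — not the cited paper's exact formulation — looks like:

```python
import math

def categorical_cross_entropy(probs, target_index):
    """Standard categorical cross-entropy for a single sample with a
    one-hot target: L = -log(p_t), where p_t is the predicted
    probability assigned to the true class."""
    return -math.log(probs[target_index])

# A confident, correct prediction yields a small loss;
# a uniform prediction over K classes yields log(K).
print(categorical_cross_entropy([0.1, 0.8, 0.1], 1))
```

The loss grows without bound as the probability placed on the true class approaches zero, which is what drives the model toward confident correct predictions.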