2018
DOI: 10.1609/aaai.v32i1.11502

Deception Detection in Videos

Abstract: We present a system for covert automated deception detection using information available in a video. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low-level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features which have been widely used for action…
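As a rough illustration of the two-stage pipeline the abstract describes (low-level video features predict micro-expressions, whose scores then feed a deception classifier), here is a minimal sketch on synthetic data. The feature dimensions, variable names, and the logistic-regression choice are all assumptions for illustration, not the authors' implementation.

```python
# Two-stage sketch: low-level video features -> predicted micro-expressions
# -> deception classifier. All data is synthetic; dimensions and model
# choices are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_videos, n_lowlevel, n_micro = 200, 64, 5

X_lowlevel = rng.normal(size=(n_videos, n_lowlevel))    # stand-in for IDT-style descriptors
y_micro = rng.integers(0, 2, size=(n_videos, n_micro))  # per-video micro-expression labels
y_deceptive = rng.integers(0, 2, size=n_videos)         # deception labels

# Stage 1: one binary classifier per micro-expression.
micro_models = [
    LogisticRegression(max_iter=1000).fit(X_lowlevel, y_micro[:, j])
    for j in range(n_micro)
]

# Stage 2: the predicted micro-expression probabilities become the
# feature vector for the deception classifier.
micro_scores = np.column_stack(
    [m.predict_proba(X_lowlevel)[:, 1] for m in micro_models]
)
deception_model = LogisticRegression().fit(micro_scores, y_deceptive)
print(deception_model.predict_proba(micro_scores)[:5, 1])
```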

Citation Types: 1 supporting, 45 mentioning, 0 contrasting

Cited by 72 publications (46 citation statements)
References 26 publications (37 reference statements)
“…Facial affect contributed towards the best multimodal approach, which obtained an AUC of 91% and an accuracy of 84% through adaptive boosting (AdaBoost) across the facial affect, visual, and vocal modalities. The 91% AUC achieved by our approach was higher than the AUC of the best-performing automated approach on this dataset (88% AUC), which did not use facial affect but instead used an SVM with interpretable visual, vocal, and verbal features (Wu et al. 2018). These results demonstrate the discriminative power of facial affect as a feature set in multimodal machine learning models for automated deception detection.…”
Section: Key Results and Discussion (mentioning)
confidence: 70%
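For readers unfamiliar with the setup this statement describes, the following is a hedged sketch of AdaBoost over fused multimodal features. The statement does not say how the modalities are combined, so simple concatenation (early fusion), the feature dimensions, and the sample size are all assumptions rather than the cited authors' setup.

```python
# Hedged AdaBoost sketch: concatenate facial-affect, visual, and vocal
# feature vectors, train a boosted classifier, report AUC. Everything
# here is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 120  # illustrative number of video clips
facial_affect = rng.normal(size=(n, 10))
visual = rng.normal(size=(n, 20))
vocal = rng.normal(size=(n, 13))
y = rng.integers(0, 2, size=n)  # 1 = deceptive, 0 = truthful

X = np.hstack([facial_affect, visual, vocal])  # concatenate the three modalities
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.decision_function(X_te)))
```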
“…Computer vision baselines. We compare our method with five computer vision baselines under the same experimental setup as our method (Baltrusaitis et al. 2018; Demyanov et al. 2015; Wu et al. 2018). These methods use features extracted from the video, including facial emotion, head and eye movement, facial action units, and time-aggregated features, as described below.…”
Section: Baselines (mentioning)
confidence: 99%
“…(Baltrusaitis et al. 2018) computes eye movements from estimated eyeball positions and uses the movement distributions over time as features. (Wu et al. 2018) extracts improved dense trajectory (IDT) features from videos, MFCC features from audio, micro-expression features, and text features from transcripts, and uses an ensemble method called late fusion to arrive at a joint prediction. Since our dataset has neither transcripts nor annotated micro-expressions, we remove the text features and replace the micro-expressions with FAU features (Demyanov et al. 2015).…”
Section: Baselines (mentioning)
confidence: 99%
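The late fusion this statement describes can be pictured as one classifier per modality whose outputs are averaged into a joint prediction. Below is a minimal sketch under that reading; the features are synthetic, and the cited work's exact classifiers and fusion weights may differ.

```python
# Minimal late-fusion sketch: train one classifier per modality, then
# average the per-modality probabilities into a joint deception score.
# Feature contents and dimensions are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 100
modalities = {
    "idt": rng.normal(size=(n, 30)),   # video: dense-trajectory-style features
    "mfcc": rng.normal(size=(n, 13)),  # audio: MFCC features
    "fau": rng.normal(size=(n, 17)),   # face: FAU scores (replacing micro-expressions)
}
y = rng.integers(0, 2, size=n)

# Late fusion: average the probabilities from independent per-modality
# classifiers. Scoring on the training data is only for illustration.
per_modality_probs = []
for name, X in modalities.items():
    clf = SVC(probability=True, random_state=0).fit(X, y)
    per_modality_probs.append(clf.predict_proba(X)[:, 1])
fused = np.mean(per_modality_probs, axis=0)
print("fused deception scores:", np.round(fused[:5], 3))
```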
“…However, most approaches in deception detection use only traditional (ensemble) machine learning methods, such as logistic regression, support vector machines, random forests, or naive Bayes classifiers, to predict veracity (Wu et al., 2018; Banerjee et al., 2015; Fornaciari and Poesio, 2013). Only a few studies have explored deep learning methods based on transformer networks (Fornaciari et al., 2021; Kao et al., 2020; Kennedy et al., 2019), which show improved prediction performance over the traditional approaches but do not focus on interpreting the linguistic properties of the texts.…”
Section: Introduction (mentioning)
confidence: 99%
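For concreteness, here is an illustrative side-by-side run of the traditional classifiers that statement lists, on synthetic stand-in verbal features; it mirrors no particular cited study's data or tuning.

```python
# Illustrative comparison of the classical classifiers named above,
# cross-validated on synthetic stand-in features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 40))    # stand-in for extracted verbal features
y = rng.integers(0, 2, size=150)  # veracity labels

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} mean CV accuracy")
```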