2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018)
DOI: 10.1109/fg.2018.00032
Human Behaviour-Based Automatic Depression Analysis Using Hand-Crafted Statistics and Deep Learned Spectral Features

Cited by 93 publications (58 citation statements)
References 24 publications
“…This paper will focus on techniques for accurate detection of depression levels over time based on facial expression captured in videos. Although spatial information is essential, the dynamics of facial behavior are also very important for interpreting depression [10], [11], [12]. Alghowinem et al. [13] have identified behavioural cues related to depressed individuals, such as slower head movements and avoidance of eye contact.…”
Section: Introduction
confidence: 99%
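The behavioural cues in the statement above are typically summarised as hand-crafted statistics over per-frame tracking output. The following is a minimal sketch, assuming a NumPy array of per-frame head pose angles from some face tracker; the function name, input format, and frame rate are illustrative assumptions, not the method of the cited work.

```python
# Minimal sketch of a hand-crafted behaviour statistic such as head-movement
# speed. Assumes per-frame head pose angles (yaw, pitch, roll) in degrees from
# a face tracker; the name and defaults here are illustrative assumptions.
import numpy as np


def head_movement_speed(yaw_pitch_roll: np.ndarray, fps: float = 30.0) -> float:
    """Mean angular head-movement speed in degrees per second for one clip."""
    # Frame-to-frame change in each pose angle.
    deltas = np.diff(yaw_pitch_roll, axis=0)      # shape: (num_frames - 1, 3)
    # Magnitude of the pose change per frame, averaged and scaled to seconds.
    step = np.linalg.norm(deltas, axis=1)         # degrees per frame
    return float(step.mean() * fps)               # degrees per second
```

Lower values of such a statistic over a clip would correspond to the "slower head movements" cue noted in the quote.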
“…Deep learning architectures, and in particular CNNs, provide state-of-the-art performance in many visual recognition applications, such as image classification [14] and object detection [15], as well as assisted medical diagnosis [16]. In depression detection, deep learning architectures that operate on videos typically exploit spatial and temporal information separately (e.g., by cascading a 2D CNN and then a recurrent NN), which deteriorates the modeling of spatio-temporal relationships [11], [17]. A deep two-stream architecture has also been proposed to exploit facial appearance and facial optical flow [10].…”
Section: Introduction
confidence: 99%
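As a concrete illustration of the "2D CNN followed by a recurrent NN" cascade that the statement above criticises, here is a minimal PyTorch sketch; the layer sizes, the choice of a GRU, and the regression head are assumptions for illustration, not the architecture of any cited paper.

```python
# Minimal sketch of a cascaded 2D-CNN + recurrent-NN depression-score
# regressor. All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn


class CnnRnnDepressionRegressor(nn.Module):
    """Per-frame 2D CNN features, then a GRU over time, then a score head."""

    def __init__(self, feat_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        # Spatial stream: a small 2D CNN applied to each frame independently.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # (B*T, 32, 1, 1)
            nn.Flatten(),                      # (B*T, 32)
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Temporal stream: a GRU over the sequence of per-frame features.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Regression head mapping the final hidden state to a depression score.
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.view(b * t, c, h, w)).view(b, t, -1)
        _, last_hidden = self.rnn(feats)        # last_hidden: (1, B, hidden_dim)
        return self.head(last_hidden.squeeze(0))  # (B, 1) predicted score


if __name__ == "__main__":
    model = CnnRnnDepressionRegressor()
    clip = torch.randn(2, 16, 3, 64, 64)        # 2 clips of 16 RGB frames, 64x64
    print(model(clip).shape)                    # torch.Size([2, 1])
```

Because the CNN sees each frame in isolation and only the GRU sees the ordering, spatial and temporal information are modelled separately, which is exactly the limitation the quoted passage points out.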
“…We examined 42 eyes from 42 adults with corrected-to-normal vision (median [interquartile range, IQR] age: 26 [22-29] years), and 14 eyes from seven adults with an established diagnosis of glaucoma (69 [64-74] years of age).…”
Section: Participants and Procedures
confidence: 99%
“…To understand which applies, we first need to briefly unpack the discussion around whether emotion data is biometric data, because if it is, processing will require the explicit consent of the data subject (Art 9, GDPR). Data concerning health, particularly mental health (Art 4(15)), could conceivably be read from facial coding (e.g., see the work of Song et al. [2018] on detecting depression using CV), and thus, we argue, requires explicit consent.…”
Section: GDPR: EAI Testing Established Data Protection Principles
confidence: 99%