2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019)
DOI: 10.1109/fg.2019.8756584
Encoding Visual Behaviors with Attentive Temporal Convolution for Depression Prediction

Cited by 26 publications (13 citation statements)
References 25 publications
“…To the best of our knowledge, this is the first work that extends the Neural Architecture Search (NAS) technique to automatic depression analysis. • The experimental results show that our approach achieved new state-of-the-art results with 27% RMSE and 30% MAE improvements over the previous state-of-the-art methods [21], [19].…”
Section: Introduction
confidence: 85%
“…Haque et al [22] use a Causal Convolutional Neural Network (C-CNN) to deep-learn sentence-level depression cues from 3D facial landmarks. Du et al [21] propose an Atrous Residual Temporal Convolutional Network (DepArt-Net) that generates multi-scale contextual features from several low-level visual behaviors, then temporally fuses them through an attention mechanism to capture long-range depression-related cues. Song et al [16], [17] propose to use Fourier transforms to encode facial attribute time-series (AUs, gazes, and head poses) of a clip into a length-independent spectral representation, incorporating multi-scale temporal information.…”
Section: A. Facial Attributes-based Depression Recognition
confidence: 99%
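The citation statement above describes DepArt-Net's core idea: multi-scale atrous (dilated) temporal convolutions whose outputs are fused by attention. A minimal NumPy sketch of that idea follows; the kernel values, dilation rates, and mean-activation attention scores are illustrative stand-ins, not the paper's actual learned architecture:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Causal dilated (atrous) 1-D convolution over a feature sequence.
    x: (T,) signal; w: (k,) kernel. Causal zero-padding keeps length T."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

def attentive_fusion(scales):
    """Fuse multi-scale feature maps with softmax attention weights derived
    from each scale's mean activation (a toy stand-in for a learned module)."""
    scores = np.array([s.mean() for s in scales])
    weights = np.exp(scores) / np.exp(scores).sum()
    return sum(w * s for w, s in zip(weights, scales))

x = np.sin(np.linspace(0, 6, 64))            # toy behavioural time-series
w = np.array([0.25, 0.5, 0.25])              # toy smoothing kernel
scales = [dilated_conv1d(x, w, d) for d in (1, 2, 4)]  # growing receptive field
fused = attentive_fusion(scales)             # one attention-weighted sequence
print(fused.shape)
```

Stacking such layers with exponentially growing dilation rates is what gives temporal convolutional networks their long-range receptive field without pooling.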
“…RELATED WORK Many methods have been developed for the task of automatic depression detection. Some methods are based on a single modality like audio [14,24,30], text [6,40] or visual cues [7,37], while others combine at least two modalities that tend to achieve a higher accuracy [26,31].…”
Section: Introduction
confidence: 99%
“…ADE features can either be hand-crafted or based on deep learning models. Examples of widely used hand-crafted features include Local Binary Patterns (LBP) [18], Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP) [19], Local Binary Patterns from Three Orthogonal Planes (LBP-TOP) [20], and others (e.g., Facial Action Units (FAUs), landmarks, head poses, gazes) [21]. However, since 2013, depression recognition challenges such as the Audio-Visual Emotion Recognition Challenge (AVEC 2013) [22] have recorded depression data via human-computer interaction.…”
Section: Introduction
confidence: 99%
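The hand-crafted LBP descriptor mentioned in the statement above can be sketched in its simplest 2-D, 8-neighbour form (the TOP variants apply the same operator on three orthogonal space-time planes of a video volume); the toy image below is purely illustrative:

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel is encoded
    by thresholding its 8 neighbours against the centre value, one bit each."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= c:   # neighbour >= centre sets the bit
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes

img = np.array([[5, 5, 5],
                [5, 1, 5],
                [5, 5, 5]], dtype=np.uint8)
print(lbp_codes(img))  # centre darker than every neighbour -> code 255
```

Histograms of these codes over image regions are what is typically used as the face descriptor.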