2017
DOI: 10.1109/taffc.2016.2537327

Automatic Pain Assessment with Facial Activity Descriptors

Cited by 124 publications (100 citation statements: 1 supporting, 93 mentioning, 0 contrasting). References 53 publications.
“…Despite the face still receiving more attention than other affective channels in building affect-aware systems (e.g., [32][33][34]), the emergence of low-cost movement-sensing technology is increasingly leading researchers to target movement as an affective modality, including in clinical contexts. Beyond work aimed at assessing affective states in sedentary clinical settings [35], there is also growing interest in their assessment during physical activity and in situ [36].…”
Section: Background: Automatic Detection Of Pain Related Affect From… (mentioning)
confidence: 99%
“…In cases where hybrid, mixed, or multiple features of the same type were used, the fusion of features was performed either before the learning step (cf. [75], [123]), or by fusing the decisions of classifiers trained separately for each feature (cf. [107], [116]).…”
Section: Feature Extraction (mentioning)
confidence: 99%
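The distinction drawn in this statement is the standard split between early (feature-level) and late (decision-level) fusion. Below is a minimal sketch of both strategies; the two feature sets, their shapes, and the SVM classifier are illustrative assumptions, not details taken from the cited works.

```python
# Sketch of early vs. late fusion, assuming two hypothetical feature sets
# (e.g., geometric and appearance features). Classifier choice and data
# shapes are illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_geo = rng.normal(size=(200, 16))   # hypothetical geometric features
X_app = rng.normal(size=(200, 59))   # hypothetical appearance features
y = rng.integers(0, 2, size=200)     # binary pain / no-pain labels

# Early fusion: concatenate feature vectors before the learning step.
early = SVC(probability=True).fit(np.hstack([X_geo, X_app]), y)
early_pred = early.predict(np.hstack([X_geo, X_app]))

# Late fusion: train one classifier per feature set, then fuse their
# decisions, here by averaging predicted class probabilities.
clf_geo = SVC(probability=True).fit(X_geo, y)
clf_app = SVC(probability=True).fit(X_app, y)
fused_proba = (clf_geo.predict_proba(X_geo) + clf_app.predict_proba(X_app)) / 2
fused_pred = fused_proba.argmax(axis=1)
```

Averaging probabilities is only one of several decision-fusion rules; weighted votes or a meta-classifier over the per-feature outputs are common alternatives.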
“…Next, the corresponding temporal series were employed for each facial descriptor obtained from all video frames, which is inspired by [28].…”
Section: Temporal Descriptors For Video Sequence (mentioning)
confidence: 99%
“…These signals could be treated as the state signal, speed signal and acceleration signal of the descriptor signal, respectively. Subsequently, several parameters could be extracted from the temporal signals to better depict the characteristics of the signal variance over time [28]. A total of 16 temporal geometric facial features were extracted for each video sequence, and we categorized these parameters into 6 groups as follows:…”
Section: Temporal Descriptors For Video Sequence (mentioning)
confidence: 99%
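The state/speed/acceleration construction described in these two statements amounts to taking a per-frame descriptor signal together with its first and second temporal differences, then summarizing each with scalar statistics. Below is a minimal sketch under that reading; the specific statistics (mean, standard deviation, min, max) are illustrative assumptions, not the citing paper's exact 16-feature set.

```python
# Sketch of temporal descriptors for a video sequence, assuming one facial
# descriptor value per frame stored as a 1-D array. The statistics chosen
# here are examples of the kind of parameters extracted, not the exact set.
import numpy as np

def temporal_features(descriptor: np.ndarray) -> np.ndarray:
    """Summarize a per-frame descriptor signal over a video sequence."""
    state = descriptor           # state signal: raw per-frame values
    speed = np.diff(state)      # speed signal: first temporal difference
    accel = np.diff(speed)      # acceleration signal: second difference
    feats = []
    for signal in (state, speed, accel):
        feats.extend([signal.mean(), signal.std(),
                      signal.min(), signal.max()])
    return np.asarray(feats)     # 12 illustrative parameters per descriptor

# Usage: one feature vector per video sequence for a single descriptor.
frames = np.sin(np.linspace(0, 4 * np.pi, 120))  # synthetic 120-frame signal
print(temporal_features(frames).shape)           # (12,)
```

Concatenating these vectors across all geometric descriptors of a face yields one fixed-length feature vector per video sequence, which is the form the quoted pipeline feeds to its learning step.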