2018 41st International Conference on Telecommunications and Signal Processing (TSP)
DOI: 10.1109/tsp.2018.8441252
Video-based Pain Level Assessment: Feature Selection and Inter-Subject Variability Modeling

Cited by 5 publications (6 citation statements)
References 26 publications
“…To address this, using the SAFEPA system, which is described in Figure 7 and operates on a frame-by-frame basis, we selected the highest-level prediction from all predictions made for the frames belonging to that video. The accuracy of the SAFEPA system on the entire unseen BioVid dataset was 33.28%, which is an improvement over the results reported in [37,38]. In [37], the authors proposed facial activity descriptors to detect pain and estimate its intensity.…”
Section: Experiments 3: Test in Real-Time: How Well the SAFEPA System…
Mentioning confidence: 83%
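The frame-to-video aggregation described in the excerpt above can be sketched as follows. This is a minimal illustration assuming per-frame pain-level predictions are already available; the function and variable names are illustrative and not taken from the SAFEPA implementation.

```python
from collections import defaultdict

def video_level_prediction(frame_predictions):
    """Aggregate per-frame pain-level predictions into one label per video.

    frame_predictions: list of (video_id, predicted_level) tuples, where
    predicted_level is an integer pain level (e.g., 0-4 for five levels).
    Returns a dict mapping video_id -> highest predicted level, mirroring
    the "highest-level prediction" rule quoted above.
    """
    per_video = defaultdict(list)
    for video_id, level in frame_predictions:
        per_video[video_id].append(level)
    return {vid: max(levels) for vid, levels in per_video.items()}

# Example: three frames of one video predicted as levels 0, 2, 1 -> video label 2
print(video_level_prediction([("v01", 0), ("v01", 2), ("v01", 1)]))  # {'v01': 2}
```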
“…Their model was applied to the BioVid dataset with leave-one-subject-out cross-validation, achieving an accuracy of 30.80% in recognizing five levels of pain. In [38], Bourou and his colleagues proposed a feature selection and inter-subject variability modeling approach for video-based pain level assessment; they utilized lasso regression. Furthermore, the FEAPAS model was trained on the JAFFE dataset using an ADAM optimizer and a 150 × 150 input size, and achieved a remarkable accuracy of 96.80% in 10-fold cross-validation, a highly competitive result compared to the ensemble classifier of VGG16 and ResNet50 models used in [17], which achieved an accuracy of 96.40%.…”
Section: Experiments 3: Test in Real-Time: How Well the SAFEPA System…
Mentioning confidence: 99%
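The lasso-based feature selection mentioned in this excerpt can be illustrated with a generic scikit-learn sketch. It assumes a precomputed feature matrix and per-video pain labels; the data shapes and the alpha value are hypothetical, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 87 videos x 120 statistical features, labels 0-4 (five pain levels).
rng = np.random.default_rng(0)
X = rng.normal(size=(87, 120))
y = rng.integers(0, 5, size=87)

# Standardize features so the L1 penalty treats them comparably.
X_std = StandardScaler().fit_transform(X)

# Fit lasso regression; the L1 penalty drives many coefficients to exactly zero.
lasso = Lasso(alpha=0.1).fit(X_std, y)

# Keep only the features with non-zero coefficients as the selected subset.
selected = np.flatnonzero(lasso.coef_)
print(f"Selected {selected.size} of {X.shape[1]} features:", selected[:10])
```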
“…Werner et al. [22] proposed a novel feature set, called facial activity descriptors, to describe facial actions for pain detection and pain intensity estimation. Bourou et al. [23] calculated several distance-based statistics (e.g., mean and median) from ROIs to classify pain expressions.…”
Section: Facial Expression
Mentioning confidence: 99%
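A toy sketch of the statistic-over-distance features the last excerpt alludes to is given below. It assumes facial landmark coordinates have already been extracted per frame; the landmark index pairs and feature names are illustrative only.

```python
import numpy as np

def distance_statistics(landmarks, pairs):
    """Summarize per-frame landmark distances over a video clip.

    landmarks: array of shape (n_frames, n_landmarks, 2) with (x, y) coordinates.
    pairs: list of (i, j) landmark index pairs (e.g., mouth corners, brow to eye).
    Returns the mean and median of each pairwise distance across frames.
    """
    feats = {}
    for i, j in pairs:
        d = np.linalg.norm(landmarks[:, i, :] - landmarks[:, j, :], axis=1)
        feats[f"dist_{i}_{j}_mean"] = d.mean()
        feats[f"dist_{i}_{j}_median"] = np.median(d)
    return feats

# Example with random coordinates for a 100-frame clip and two landmark pairs.
clip = np.random.rand(100, 68, 2)
print(distance_statistics(clip, [(48, 54), (17, 36)]))
```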