2019
DOI: 10.1515/teme-2019-0024
Deep-learned faces of pain and emotions: Elucidating the differences of facial expressions with the help of explainable AI methods

Abstract: Deep neural networks are successfully used for object and face recognition in images and videos. However, current procedures are suitable only to a limited extent for applying such networks in practice, for example as a pain recognition tool in hospitals. The advantage of deep neural methods is that they can learn complex non-linear relationships between raw data and target classes without being limited to a set of hand-crafted features provided by humans. The disadvantage, however, is that du…

Cited by 48 publications (27 citation statements)
References 20 publications
“…In addition, it is difficult for humans to interpret which features and feature combinations affect the decision-making, and in what way. Explainable Artificial Intelligence (AI) methods such as Layer-wise Relevance Propagation [142] and Local Interpretable Model-Agnostic Explanations (LIME) [143] could be used to make decision-making transparent and comprehensible to humans [23]. Some approaches (e.g.…”
Section: Discussion
confidence: 99%
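The citing passage above names LIME as a way to make a black-box classifier's decisions transparent. As a rough illustration of the idea — not the cited authors' implementation — the following minimal sketch perturbs interpretable image segments and fits a weighted linear surrogate model, which is the core of the LIME procedure; the function names and kernel width are illustrative assumptions:

```python
import numpy as np

def lime_explain(image, segments, predict_fn, n_samples=500, seed=0):
    """Minimal LIME-style sketch: one importance weight per segment.

    image:      (H, W) array
    segments:   (H, W) int array assigning each pixel to a segment
    predict_fn: maps a batch of images to target-class probabilities
    """
    rng = np.random.default_rng(seed)
    seg_ids = np.unique(segments)
    k = len(seg_ids)

    # Binary design matrix: which segments are switched "on" per sample.
    z = rng.integers(0, 2, size=(n_samples, k))
    z[0] = 1  # keep the unperturbed image as the first sample

    # Build perturbed images by blanking the switched-off segments.
    perturbed = np.stack([
        image * np.isin(segments, seg_ids[row.astype(bool)])
        for row in z
    ])
    probs = predict_fn(perturbed)

    # Weight samples by proximity to the original image.
    dist = 1.0 - z.mean(axis=1)
    weights = np.exp(-(dist ** 2) / 0.25)

    # Weighted least squares fit of the linear surrogate model.
    X = np.hstack([np.ones((n_samples, 1)), z])
    W = np.diag(weights)
    coef, *_ = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ probs, rcond=None)
    return coef[1:]  # per-segment importance (intercept dropped)
```

In a real pipeline the segments would come from a superpixel algorithm and `predict_fn` from the trained CNN; here both are left abstract.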
“…By adding diagnostic decision explanation capabilities to such technical solutions (e.g. [23]), they could be used to train caregivers, medical practitioners, and nursing staff to improve their ability to correctly assess pain.…”
Section: Introduction
confidence: 99%
“…Explanations in human-human interaction have the function of making something clear by giving a detailed description, a reason, or a justification [17,18]. In the context of explaining the decisions of black-box classifiers, there is a strong focus on visual explanations in which areas relevant to a decision are highlighted [26,33][7]. This is closely related to learning structural descriptions from near misses (the most similar instances not belonging to the target class), which has been shown to make learning more efficient [34].…”
Section: Visual, Verbal and Contrastive Explanations
confidence: 99%
“…Moreover, medical applications such as pain or depression detection [186] demand transparency in the decision models. That is, the learned models should be interpretable to humans, and it should be possible to generate comprehensible explanations of the predictions made by the models [59,194].…”
Section: Addressing Open Challenges in Auto…
confidence: 99%
“…Therefore, we trained a CNN model to discriminate between pain, happiness and disgust, and applied explainable AI methods to make the predictions of this CNN model transparent. 4 In [194] (Publication B.3.3), the use of two explainable AI methods, namely Local Interpretable Model-Agnostic Explanations (LIME) [149] and Layer-wise Relevance Propagation (LRP) [8], to explain the pain, happiness and disgust predictions made by the above-mentioned CNN model is illustrated with the help of image samples taken from the BioVid Heat Pain Database [190]. 5 LIME and LRP can be used to make CNN models transparent and subsequently help in identifying discrepancies in the models.…”
Section: Interpretable Models and Decision Explanations
confidence: 99%
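The second method named in the passage above, Layer-wise Relevance Propagation, redistributes a network's output score backwards through the layers so that each input feature receives a relevance value. The sketch below implements the LRP-ε rule for a tiny fully connected ReLU network; the network weights and shapes are illustrative assumptions, not the cited CNN:

```python
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """LRP-epsilon sketch: propagate output relevance back to the inputs."""
    # Forward pass, keeping the input activation of every linear layer.
    activations = [x]
    a = x
    n = len(weights)
    for i, (W, b) in enumerate(zip(weights, biases)):
        a = a @ W + b
        if i < n - 1:
            a = np.maximum(a, 0.0)  # ReLU between layers
        activations.append(a)

    # Backward pass: redistribute relevance layer by layer, using the
    # epsilon stabiliser to avoid division by near-zero activations.
    R = activations[-1].copy()
    for W, b, a in zip(reversed(weights), reversed(biases),
                       reversed(activations[:-1])):
        z = a @ W + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)
        s = R / z
        R = a * (s @ W.T)  # each input gets its share of the relevance
    return R
```

For small ε, the relevance scores are approximately conservative: they sum to the network's output score, so features the network ignores receive (close to) zero relevance.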