2020
DOI: 10.1109/access.2020.3041173

Facial Expression Rendering in Medical Training Simulators: Current Status and Future Directions

Abstract: Recent technological advances in robotic sensing and actuation methods have prompted development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret facial expressions of a patient during medical examinations or procedures has been one of the key focus areas in medical training. This paper reviews facial expression rendering systems in medical training simulators that have been reported to date. Facial expression rendering approaches in other domains are also …

Cited by 19 publications (15 citation statements). References 111 publications (113 reference statements).
“…Figure 6 shows the simulated pain facial expression generated using the median transient parameters from trials rated "strongly agree" and "agree" ("agree*"), and those rated "strongly disagree" and "disagree" ("disagree*"), from all participants. Activation intensities of the AUs are plotted alongside a simulated sine wave force profile following the method from Section "Simulation Using transient parameter pairs"; the shaded regions represent standard error across participants.…”
Section: Results (mentioning, confidence: 99%)
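The quoted passage describes driving Action Unit (AU) activation intensities from a sinusoidal palpation force profile. As an illustration only, the short Python sketch below maps a normalised sine force profile to per-AU intensities through hypothetical gain/threshold parameters; the parameter names, values, and the linear mapping are assumptions for illustration, not the transient parameters fitted in the cited study.

```python
# Illustrative sketch (not the cited study's code): map a sinusoidal
# palpation force profile to Action Unit (AU) activation intensities.
# The per-AU (gain, threshold) values below are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 2.0, 400)                       # time in seconds
force = 0.5 * (1.0 + np.sin(np.pi * t - np.pi / 2))  # one 0-to-1-to-0 force cycle

au_params = {          # AU name -> (gain, onset threshold), assumed values
    "AU4":  (1.0, 0.2),
    "AU7":  (0.8, 0.3),
    "AU43": (0.6, 0.5),
}

plt.plot(t, force, "k--", label="force (normalised)")
for au, (gain, threshold) in au_params.items():
    # Each AU activates only above its force threshold, then scales linearly
    # and is clipped to the valid intensity range [0, 1].
    intensity = np.clip(gain * (force - threshold) / (1.0 - threshold), 0.0, 1.0)
    plt.plot(t, intensity, label=au)

plt.xlabel("time (s)")
plt.ylabel("AU activation intensity")
plt.legend()
plt.show()
```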
“…We included six Action Units (AU4: Brow Lowering, AU7: Lid Tightening, AU9: Nose Wrinkling, AU10: Upper Lip Raising, AU26: Jaw Dropping and AU43: Eye Closing) 27, as these have been shown to be present in pain expressions of different intensities and across different cultures 6,25. Using MakeHuman 19 with the FACSHuman 20 plugin, a natural expression mesh and six maximum AU activation meshes (one per AU) were generated.…”
Section: Methods (mentioning, confidence: 99%)
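The neutral-plus-per-AU meshes described above lend themselves to a simple morph-target (blendshape) combination: a rendered expression can be formed by adding each AU's displacement from the neutral mesh, scaled by its activation intensity. The sketch below is a generic illustration of that idea under this assumption; it is not MakeHuman's or FACSHuman's actual API, and the function name and toy data are hypothetical.

```python
# Generic blendshape sketch (assumed workflow, not FACSHuman's API):
# combine a neutral face mesh with per-AU maximum-activation meshes.
import numpy as np

def blend_expression(neutral, au_max_meshes, activations):
    """neutral: (V, 3) vertex positions of the neutral mesh;
    au_max_meshes: dict mapping AU name -> (V, 3) mesh at full activation;
    activations: dict mapping AU name -> intensity in [0, 1]."""
    vertices = neutral.copy()
    for au, intensity in activations.items():
        # Add the AU's displacement relative to neutral, scaled by intensity.
        vertices += intensity * (au_max_meshes[au] - neutral)
    return vertices

# Toy example with a 4-vertex "mesh" standing in for a full face mesh.
rng = np.random.default_rng(0)
neutral = np.zeros((4, 3))
au_max = {"AU4": rng.normal(size=(4, 3)), "AU43": rng.normal(size=(4, 3))}
print(blend_expression(neutral, au_max, {"AU4": 0.5, "AU43": 1.0}))
```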
“…In both SPs and physical mannequins, simulating patients with different demographic identities such as age, gender, and ethnicity is challenging 6. Some commercially available mannequins, such as the Paediatric HAL 7, offer gender and skin colour variations, but options are limited and cannot easily be swapped between training sessions.…”
Section: Introduction (mentioning, confidence: 99%)
“…In contrast, virtual human face simulation systems frequently used in computer graphics (CG), such as FACSHuman [17], can render high-fidelity human avatars of different demographics, but cannot respond to physical inputs. The benefits and limitations of using these two approaches were evaluated in greater detail in our recent review of facial expression rendering in medical training simulators [18].…”
Section: Introduction (mentioning, confidence: 99%)