Physicians use the pain expressions shown on a patient's face to regulate their palpation methods during physical examination. Training to interpret the facial expressions of patients of different genders and ethnicities remains a challenge, and novices take a long time to learn through experience. This paper presents MorphFace: a controllable 3D physical-virtual hybrid face that represents pain expressions of patients from different ethnicity-gender backgrounds. It also serves as an intermediate step to expose trainee physicians to the gender and ethnic diversity of patients. We extracted four principal components from the Chicago Face Database to design a four-degrees-of-freedom (DoF) physical face, controlled via tendons, that spans ∼85% of facial variation across gender and ethnicity. Details such as skin colour, skin texture, and facial expressions are synthesized by a virtual model and projected onto the 3D physical face via a front-mounted LED projector to obtain a hybrid controllable patient face simulator. A user study revealed that certain differences in ethnicity between the observer and the MorphFace lead to different perceived pain intensities for the same pain level rendered by the MorphFace. This highlights the value of MorphFace as a controllable hybrid simulator for quantifying perceptual differences during physician training.
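The abstract's choice of four principal components spanning ∼85% of facial variation follows the standard PCA recipe of retaining the smallest number of components whose cumulative explained variance reaches a target threshold. The sketch below illustrates that recipe on synthetic data; it is a minimal illustration, not the authors' pipeline, and the data, function name, and threshold are assumptions for demonstration only.

```python
import numpy as np

def components_for_variance(X, threshold=0.85):
    """Smallest number of principal components whose cumulative
    explained variance reaches `threshold` (a fraction in (0, 1])."""
    Xc = X - X.mean(axis=0)                      # centre the data
    # Singular values give the variance captured by each component.
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    cumulative = np.cumsum(explained)
    return int(np.searchsorted(cumulative, threshold) + 1)

# Toy data: a few dominant directions of variation plus small noise,
# standing in for facial-measurement features.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3)) * np.array([10.0, 5.0, 2.0])
mixing = rng.normal(size=(3, 8))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 8))

k = components_for_variance(X, threshold=0.85)
```

On real facial data, `X` would hold one row of facial measurements per face, and `k` would play the role of the four DoF reported in the abstract.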
Recent technological advances in robotic sensing and actuation methods have prompted the development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret the facial expressions of a patient during medical examinations or procedures has been one of the key focus areas in medical training. This paper reviews the facial expression rendering systems in medical training simulators that have been reported to date. Facial expression rendering approaches in other domains are also summarized so that knowledge from those works can inform the development of systems for medical training simulators. Classifications and comparisons of medical training simulators with facial expression rendering are presented, and important design features, merits, and limitations are outlined. Medical educators, students, and developers are identified as the three key stakeholders of these systems, and their considerations and needs are presented. Physical-virtual (hybrid) approaches provide multimodal feedback, render facial expressions accurately, and can simulate patients of different age, gender, and ethnicity groups, making them more versatile than purely virtual or purely physical systems. The overall findings of this review and the proposed future directions will benefit researchers interested in initiating or developing facial expression rendering systems for medical training simulators.
Respiratory protective equipment (RPE) is traditionally designed through anthropometric sizing to enable mass production. However, this can lead to long-standing problems of low compliance, severe skin trauma, and higher fit-test failure rates among certain demographic groups, particularly females and non-white ethnic groups. Additive manufacturing could be a viable way to produce custom-fitted RPE, but the manual design process is time-consuming, cost-prohibitive, and unscalable for mass customization. This paper proposes an automated design pipeline that generates computer-aided design (CAD) models of custom-fit RPE from unprocessed three-dimensional (3D) facial scans. The pipeline successfully processed 197 of 205 facial scans, taking under 2 min per scan. The average and maximum geometric errors of the mask were 0.62 mm and 2.03 mm, respectively. No statistically significant differences in mask fit were found between male and female, Asian and White, White and Other, healthy and overweight, overweight and obese, or middle-aged and senior groups.
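The average and maximum geometric errors quoted above are typically computed as nearest-neighbour distances between the generated mask surface and the source facial scan. The sketch below shows that computation on two toy point clouds; the function name and brute-force approach are illustrative assumptions, not the paper's actual implementation, which would operate on dense scan meshes.

```python
import numpy as np

def geometric_error(scan_pts, mask_pts):
    """Mean and max distance from each scan point to its nearest mask point.

    Both inputs are (N, 3) arrays of 3D points. Brute-force pairwise
    distances; fine for small clouds (real pipelines use a KD-tree).
    """
    diffs = scan_pts[:, None, :] - mask_pts[None, :, :]   # (N, M, 3)
    dists = np.linalg.norm(diffs, axis=-1)                # (N, M)
    nearest = dists.min(axis=1)                           # per-scan-point error
    return nearest.mean(), nearest.max()

# Toy example: the "mask" is the scan shifted 0.5 mm along z,
# so every point's nearest-neighbour distance is exactly 0.5.
scan = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
mask = scan + np.array([0.0, 0.0, 0.5])
mean_err, max_err = geometric_error(scan, mask)
# → mean_err = 0.5, max_err = 0.5
```

With real data, `mean_err` and `max_err` would correspond to the 0.62 mm average and 2.03 mm maximum errors reported in the abstract.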