Purpose
Evaluation of a novel ultrasound simulation app for training fetal echocardiography as a potentially useful addition for students, residents, and specialist physicians, and comparison with a conventional learning method, with particular attention to orientation and the recognition of physiological structures.

Methods
Prospective two-arm study with the participation of 226 clinical students. 108 students were given an extract from a textbook on fetal echocardiography to study for 30 min (PDF group, n = 108); 118 students were able to use the new ultrasound simulator app to learn for 30 min (Simulator group, n = 118). The students' knowledge was examined both before and after the learning period by having them identify sonographic structures in videos using single-choice selection.

Results
There were no significant differences between the two groups regarding age (p = 0.87), gender (p = 0.28), or the number of previously performed ultrasound examinations (p = 0.45). The Simulator group showed a significantly greater learning effect, measured as the proportion of students whose number of correct answers in the video test increased (p = 0.005). By the end of the learning period, students in the Simulator group also needed significantly less time to display the structures in the app's simulation (median 10.9 s initially vs. 6.8 s at the end; p < 0.001).

Conclusions
The novel ultrasound simulation app appears to be a useful addition to and improvement of ultrasound training. Previous difficulties, such as the need to have patients, ultrasound machines, and professors available simultaneously, can thus be avoided. This is another important step towards remote learning, which the COVID-19 pandemic has made increasingly essential.
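The group comparison reported above (proportion of students with an increased number of correct answers, p = 0.005) is the kind of result typically obtained with a chi-square test of independence on a 2×2 contingency table. A minimal pure-Python sketch, using hypothetical counts (the abstract reports only the resulting p-value, not the raw numbers):

```python
def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 contingency table (list of two rows)."""
    n = sum(sum(row) for row in table)
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# HYPOTHETICAL counts of improved vs. not-improved students per group;
# only the group sizes (n = 118 and n = 108) come from the abstract.
table = [
    (80, 38),   # Simulator group (n = 118)
    (55, 53),   # PDF group (n = 108)
]

stat = chi_square_2x2(table)
# With df = 1, the critical value at alpha = 0.05 is 3.841.
print(f"chi2 = {stat:.2f}, significant at 0.05: {stat > 3.841}")
```

With these illustrative counts the statistic is about 6.7, i.e. above the 3.841 threshold, matching the direction of the reported significant difference.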
Introduction
To date, most approaches to facial expression recognition rely on two-dimensional images; more advanced approaches using three-dimensional data exist, but they demand stationary apparatus and thus lack portability and scalability of deployment. Because human emotions, intent, and even diseases can manifest in distinct facial expressions or changes therein, there is a clear need for a portable yet capable solution. Given the superior informative value of three-dimensional data on facial morphology, and because certain syndromes are expressed in specific facial dysmorphisms, such a solution should allow portable acquisition of true three-dimensional facial scans in real time. In this study we present a novel solution for the three-dimensional acquisition of facial geometry data and the recognition of facial expressions from it. The technology presented here requires only a smartphone or tablet with an integrated TrueDepth camera and enables real-time acquisition of the facial geometry and its categorization into distinct facial expressions.

Material and Methods
Our approach consisted of two parts. First, training data were acquired by asking a collective of 226 medical students to adopt defined facial expressions while their current facial morphology was captured by our specially developed app running on iPads placed in front of the students. The facial expressions to be shown by the participants were "disappointed", "stressed", "happy", "sad", and "surprised". The set of all factors describing the facial expression at a given moment is referred to as a "snapshot". Second, these data were used to train a self-normalizing neural network.

Results
In total, over half a million snapshots were recorded in the study. After 400 epochs of training, the network achieved an overall accuracy of 80.54%; on the test set, an overall accuracy of 81.15% was determined.
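The self-normalizing neural network mentioned above relies on the SELU activation, which keeps activations near zero mean and unit variance across layers without batch normalization. A minimal NumPy sketch of this property (the layer width and the input dimension of 52 are assumptions for illustration, not values from the study):

```python
import numpy as np

# SELU constants used by self-normalizing networks.
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit."""
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

# One hypothetical dense layer: LeCun-normal weight init
# (std = 1/sqrt(fan_in)) is what SELU needs to self-normalize.
rng = np.random.default_rng(0)
fan_in = 52                      # assumed feature count per snapshot
x = rng.standard_normal((10_000, fan_in))
w = rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_in, 64))
out = selu(x @ w)

# Mean stays near 0 and variance near 1 after the layer -- the
# self-normalizing property.
print(out.mean(), out.var())
```

In a full network this behavior repeats layer after layer, which is why such architectures can be trained deeply on tabular snapshot-style features without explicit normalization layers.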
Recall differed by snapshot category, ranging from 74.79% for "stressed" to 87.61% for "happy". Precision showed similar results, with "sad" achieving the lowest value (77.48%) and "surprised" the highest (86.87%).

Conclusions
The present work demonstrates that respectable results can be achieved even with data sets that pose some challenges. Through various measures, already incorporated into an optimized version of our app, the training results can be expected to improve significantly and become more precise in the future. A follow-up study with the new version of the app, encompassing the suggested alterations and adaptations, is currently being conducted. We aim to build a large, open database of facial scans, not only for facial expression recognition but also to perform disease recognition and to monitor the treatment progress of diseases.
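The per-category recall and precision values reported above follow from a confusion matrix over the five expression classes. A short sketch of that computation, using a hypothetical confusion matrix (the study reports only the derived percentages):

```python
CLASSES = ["disappointed", "stressed", "happy", "sad", "surprised"]

def per_class_metrics(confusion):
    """confusion[i][j] = snapshots of true class i predicted as class j.

    Returns {class_name: (recall, precision)}.
    """
    metrics = {}
    for i, name in enumerate(CLASSES):
        tp = confusion[i][i]
        recall = tp / sum(confusion[i])                    # row sum = all actual i
        precision = tp / sum(row[i] for row in confusion)  # col sum = all predicted i
        metrics[name] = (recall, precision)
    return metrics

# HYPOTHETICAL counts for illustration only.
confusion = [
    [80,  5,  3,  7,  5],   # disappointed
    [ 6, 75,  4, 10,  5],   # stressed
    [ 2,  3, 88,  4,  3],   # happy
    [ 5,  9,  2, 78,  6],   # sad
    [ 4,  6,  3,  5, 82],   # surprised
]

m = per_class_metrics(confusion)
for name, (rec, prec) in m.items():
    print(f"{name:>12}: recall {rec:.2%}  precision {prec:.2%}")
```

Recall answers "of all true instances of a class, how many were found", while precision answers "of all predictions of a class, how many were correct" -- the two quantities the abstract contrasts per category.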