Purpose: For the visualization of pulmonary ventilation with Electrical Impedance Tomography (EIT), most devices use standard reconstruction models featuring common thorax dimensions and predetermined electrode locations. Any discrepancy between the available model and the patient in terms of body shape and electrode position leads to incorrectly displayed impedance distributions. This work addresses that problem by presenting and evaluating a method for generating a 3D model of the thorax and any affixed electrodes from handheld video footage.
Methods: A process was developed that lets users capture a patient's chest and the attached electrodes with a smartphone. From the collected footage, extracted images are used to generate a 3D model with a structure-from-motion approach and to locate electrodes via ArUco markers. To evaluate the method, multiple tests were performed in laboratory environments; the results were compared with manually created reference models, and differences were quantified by mean distance, standard deviation, and maximum distance.
Results: The implemented workflow allows automated model reconstruction from videos or selected images captured with a handheld device. It generates sparse point clouds, reconstructs a surface mesh from them, and returns relative coordinates of every identified ArUco marker. Across two model generations, the average mean distance error was 5.4 mm and the mean standard deviation 6.0 mm. The average runtime of twelve reconstructions was 5:17 min, with a minimum of 3:22 min and a maximum of 7:29 min.
Conclusion: The presented methods and results show that reconstructing a model of a patient's thorax and the applied electrodes at an emergency site is feasible with already available devices. This is a first step toward the automated generation of patient-specific reconstruction models for Electrical Impedance Tomography from images recorded with handheld devices.
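The abstract does not publish the implementation, but the marker-detection stage it describes can be illustrated with a minimal sketch. The following assumes OpenCV (opencv-contrib-python 4.7 or newer, for the ArucoDetector API); the marker dictionary, frame stride, and file name are illustrative assumptions, not the paper's actual parameters.

```python
import cv2

# Assumptions: OpenCV >= 4.7 (ArucoDetector API) and a 4x4 marker
# dictionary; the paper's actual dictionary and settings may differ.
ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(ARUCO_DICT, cv2.aruco.DetectorParameters())

def extract_frames(video_path, stride=10):
    """Yield every `stride`-th frame of the handheld video."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            yield index, frame
        index += 1
    cap.release()

def detect_electrode_markers(frame):
    """Detect ArUco markers (one per electrode) in a single frame.

    Returns a mapping from marker id to the marker's image-space corner
    coordinates; in a full pipeline these 2D detections would be
    triangulated against the structure-from-motion model.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is None:
        return {}
    return {int(i): c.reshape(4, 2) for i, c in zip(ids.flatten(), corners)}

for idx, frame in extract_frames("thorax_capture.mp4"):  # hypothetical file
    markers = detect_electrode_markers(frame)
    print(f"frame {idx}: electrodes seen -> {sorted(markers)}")
```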
We introduce a wearable-based recognition system for the classification of natural hand gestures during dynamic activities with surgical instruments. An armband-based circular setup of eight EMG sensors was used to measure muscle activation signals at the surface of the broadest cross-section of the lower arm. Instrument-specific surface EMG (sEMG) data were acquired for five distinct instruments. In a first proof-of-concept study, the EMG data were analyzed for unique signal courses and features, and both decision tree (DTR) and shallow artificial neural network (ANN) classifiers were subsequently trained. For the DTR, an ensemble bagging approach reached precision and recall rates of 0.847 and 0.854, respectively. The ANN architecture was configured to mimic the ensemble-like structure of the DTR and achieved precision and recall rates of 0.952 and 0.953, respectively. In a subsequent multi-user study, classification achieved 70% precision. The main errors likely arise from instruments with similar gripping styles and performed actions, from interindividual variations in the acquisition procedure, and from differences in muscle tone and activation magnitude. Compared to hand-mounted sensor systems, the lower-arm setup does not alter the haptic experience or the instrument grip, which is critical, especially in an intraoperative environment. Current drawbacks of the fixed consumer-product setup are the limited sampling rate and the resulting exclusion of frequency-domain features from the processing pipeline.
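A minimal sketch of the described ensemble bagging stage is shown below, using scikit-learn. The window length, the feature choice (per-channel RMS and mean absolute value), the estimator count, and the synthetic stand-in data are all illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

def amplitude_features(emg, window=200):
    """Per-channel RMS and mean absolute value over non-overlapping windows.

    emg: array of shape (n_samples, 8) -- one column per armband sensor.
    Returns an array of shape (n_windows, 16).
    """
    n = (len(emg) // window) * window
    segments = emg[:n].reshape(-1, window, emg.shape[1])
    rms = np.sqrt(np.mean(segments ** 2, axis=1))
    mav = np.mean(np.abs(segments), axis=1)
    return np.hstack([rms, mav])

# Illustrative random data standing in for recorded sEMG of 5 instruments.
rng = np.random.default_rng(0)
X = np.vstack([amplitude_features(rng.normal(scale=s, size=(4000, 8)))
               for s in (0.5, 0.8, 1.1, 1.4, 1.7)])
y = np.repeat(np.arange(5), len(X) // 5)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Ensemble bagging over decision trees, as in the DTR classifier described.
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                        random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred, average="macro"))
print("recall:   ", recall_score(y_te, pred, average="macro"))
```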
Purpose: For an in-depth analysis of the learning benefits that a stereoscopic view offers during endoscopic training, surgeons need a custom evaluation system that assesses endoscopic skills independently of the simulator. Automated surgical skill assessment is sorely needed, since supervised training sessions and video analysis of recorded endoscope data are very time-consuming. This paper presents a first step toward a multimodal training evaluation system that is not restricted to specific training setups or fixed evaluation metrics.
Methods: With our system we fused motion and muscle-activity measurements recorded during multiple endoscopic exercises. The exercises were performed by medical experts with different surgical skill levels, using either two- or three-dimensional endoscopic imaging. From the multimodal measurements, training features were calculated and their significance assessed by distance and variance analysis. Finally, the features were used for automatic classification of the endoscope mode in use.
Results: During the study, 324 datasets were recorded from 12 participating volunteers, consisting of spatial information on the participants' joints and electromyographic information from the right forearm. The feature significance analysis showed distinct differences, with amplitude-related muscle information and velocity information from hand and wrist among the most significant features. The resulting classification models exceeded a correct prediction rate of 90% for the endoscope type used.
Conclusion: The results support the validity of our setup and feature calculation. Their analysis reveals significant distinctions and can identify the endoscopic view mode in use, something not apparent when analyzing only the timing of each exercise attempt. The presented work is therefore a first step toward future developments in which multivariate feature vectors are classified automatically in real time to evaluate endoscopic training and track learning progress.
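The feature-fusion and significance-analysis steps can be sketched as follows. This is only an illustration under stated assumptions: the feature dimensions and random stand-in data are hypothetical, the variance analysis is approximated with a per-feature one-way ANOVA F-test, and the random forest is a stand-in classifier, since the abstract names neither the exact tests nor the model family.

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative stand-ins for the fused per-exercise features: velocity
# statistics from joint tracking and amplitude statistics from forearm sEMG.
rng = np.random.default_rng(1)
n = 324                              # datasets reported in the study
motion = rng.normal(size=(n, 6))     # e.g. mean/peak velocity of hand, wrist
emg = rng.normal(size=(n, 4))        # e.g. RMS amplitude features per channel
X = np.hstack([motion, emg])         # simple feature-level fusion
y = rng.integers(0, 2, size=n)       # endoscope mode label: 0 = 2D, 1 = 3D

# Variance-based significance ranking (one-way ANOVA F-test per feature);
# with real data, the most discriminative features would rank highest.
f_scores, p_values = f_classif(X, y)
ranking = np.argsort(f_scores)[::-1]
print("most significant feature indices:", ranking[:3])

# Automatic classification of the endoscope mode from the fused features.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```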