Birth asphyxia is a major cause of newborn mortality in low-resource countries. International guidelines provide treatment recommendations; however, the importance and effect of the different treatments are not fully explored. The available data were collected in Tanzania during newborn resuscitation, for analysis of the resuscitation activities and the responses of the newborns. An important step in the analysis is to create activity timelines of the episodes, where activities include ventilation, suction, stimulation, etc. Methods: The available recordings are noisy real-world videos with large variations. We propose a two-step process to detect activities that may overlap in time. The first step is to detect and track the relevant objects, such as the bag-mask resuscitator and heart rate sensors, and the second step is to use this information to recognize the resuscitation activities. The topic of this paper is the first step, where object detection and tracking are based on convolutional neural networks followed by post-processing. Results: The performance of the object detection during activities was 96.97 % (ventilations), 100 % (attaching/removing heart rate sensor), and 75 % (suction) on a test set of 20 videos. The system also estimates the number of health care providers present, with a performance of 71.16 %. Conclusion: The proposed object detection and tracking system provides promising results on noisy newborn resuscitation videos. Significance: This is the first step in a thorough analysis of newborn resuscitation episodes, which could provide important insight into the importance and effect of different newborn resuscitation activities.
Birth asphyxia is one of the leading causes of neonatal deaths. A key to survival is performing immediate and continuous quality newborn resuscitation. A dataset of signals recorded during newborn resuscitation, including videos, has been collected in Haydom, Tanzania, and the aim is to analyze the treatment and its effect on the newborn outcome. An important step is to generate timelines of relevant resuscitation activities, including ventilation, stimulation, suction, etc., during the resuscitation episodes. Methods: We propose a two-step deep neural network system, ORAA-net, utilizing low-quality video recordings of resuscitation episodes to perform activity recognition during newborn resuscitation. The first step is to detect and track relevant objects using Convolutional Neural Networks (CNNs) and post-processing, and the second step is to analyze the proposed activity regions from step 1 using 3D CNNs for activity recognition. Results: The system recognized the activities newborn uncovered, stimulation, ventilation, and suction with a mean precision of 77.67 %, a mean recall of 77.64 %, and a mean accuracy of 92.40 %. Moreover, the accuracy of the estimated number of Health Care Providers (HCPs) present during the resuscitation episodes was 68.32 %. Conclusion: The results indicate that the proposed CNN-based two-step ORAA-net could be used for object detection and activity recognition in noisy low-quality newborn resuscitation videos. Significance: A thorough analysis of the effect the different resuscitation activities have on the newborn outcome could potentially allow us to optimize treatment guidelines, training, debriefing, and local quality improvement in newborn resuscitation.
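The timeline-generation step described above can be illustrated with a small sketch. This is not the authors' ORAA-net code: it only shows, under assumed inputs, how per-frame activity labels (such as step 2 of a two-step pipeline might produce) could be merged into an activity timeline of (activity, start, end) intervals, with very short segments discarded as noise.

```python
def labels_to_timeline(frame_labels, fps, min_duration=0.5):
    """Merge runs of identical frame-level labels into (activity, start_s, end_s)
    intervals, dropping background ("none") and segments shorter than
    min_duration seconds. Illustrative only; thresholds are assumptions."""
    timeline = []
    start = 0
    for i in range(1, len(frame_labels) + 1):
        # Close the current run at the end of the list or on a label change.
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            label = frame_labels[start]
            duration = (i - start) / fps
            if label != "none" and duration >= min_duration:
                timeline.append((label, start / fps, i / fps))
            start = i
    return timeline

# Hypothetical per-frame output at 5 fps: a suction event followed by ventilation.
frames = ["none"] * 5 + ["suction"] * 10 + ["none"] * 2 + ["ventilation"] * 15
print(labels_to_timeline(frames, fps=5))
# [('suction', 1.0, 3.0), ('ventilation', 3.4, 6.4)]
```

Activities that overlap in time would be handled by running this merge once per activity class over parallel per-class label streams.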
Out-of-hospital cardiac arrest (OHCA) is recognized as a global mortality challenge, and digital strategies could help increase the chance of survival. In this paper, we investigate whether real-time cardiopulmonary resuscitation (CPR) quality measurement using smartphone video analysis is feasible under a range of conditions. Using a web-connected smartphone application that utilizes the smartphone camera, we detect inactivity and chest compressions and measure chest compression rate, with real-time feedback both to the caller who performs chest compressions and, over the web, to the dispatcher who coaches the caller on chest compressions. The application estimates compression rate with a 0.5 s update interval, time to first stable compression rate (TFSCR), active compression time (TC), hands-off time (TWC), average compression rate (ACR), and total number of compressions (NC). Four experiments were performed to test the accuracy of the calculated chest compression rate under different conditions, and a fifth experiment was done to test the accuracy of the CPR summary parameters TFSCR, TC, TWC, ACR, and NC. The average compression rate detection error was 2.7 compressions per minute (±5.0 cpm), the calculated chest compression rate was within ±10 cpm 98 % (±5.5) of the time, and the average error of the summary CPR parameters was 4.5 % (±3.6). The results show that real-time chest compression quality measurement by smartphone camera in simulated cardiac arrest is feasible under the conditions tested.
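To make the summary parameters concrete, the following sketch derives NC, ACR, TC, and TWC from a list of detected compression timestamps. This is not the app's actual implementation: the input format, the bout-splitting threshold, and the rate formula are assumptions chosen for illustration.

```python
def cpr_summary(compression_times, episode_len, gap_thresh=2.0):
    """Derive summary CPR parameters from compression timestamps (seconds,
    sorted, non-empty): NC = total compressions, TC = active compression time,
    TWC = hands-off time, ACR = average rate during active time (cpm).
    A gap longer than gap_thresh seconds ends an active bout (assumed value)."""
    nc = len(compression_times)
    tc = 0.0
    bout_start = prev = compression_times[0]
    for t in compression_times[1:]:
        if t - prev > gap_thresh:
            tc += prev - bout_start  # close the current active bout
            bout_start = t
        prev = t
    tc += prev - bout_start
    twc = episode_len - tc
    acr = 60.0 * nc / tc if tc > 0 else 0.0
    return {"NC": nc, "ACR": acr, "TC": tc, "TWC": twc}

# Hypothetical data: 10 compressions at 2 Hz, a 4 s pause, then 10 more.
ts = [i * 0.5 for i in range(10)] + [8.5 + i * 0.5 for i in range(10)]
print(cpr_summary(ts, episode_len=15.0))
```

In the real system the timestamps would come from the camera-based compression detector, and TFSCR would additionally require tracking when the instantaneous rate first stabilizes.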
Background Approximately 3-8% of all newborns do not breathe spontaneously at birth and require time-critical resuscitation. Resuscitation guidelines are mostly based on best practice, and more research on newborn resuscitation is urgently needed. Methods The NewbornTime project will develop artificial intelligence (AI) based solutions for activity recognition during newborn resuscitation based on both visible light spectrum videos and infrared spectrum (thermal) videos. In addition, time-of-birth detection will be developed using thermal videos from the delivery rooms. Deep neural network models will be developed, focusing on methods for limited supervision and solutions adapting to on-site environments. A timeline description of the video analysis output enables objective analysis of resuscitation events. The project further aims to use machine learning to find patterns in large amounts of such timeline data to better understand how newborn resuscitation treatment is given and how it can be improved. The automatic video analysis and timeline generation will be developed for on-site usage, allowing for data-driven simulation and clinical debrief for health-care providers, and paving the way for automated real-time feedback. This brings added value to the medical staff, mothers and newborns, and society at large. Discussion The project is an interdisciplinary collaboration, combining AI, image processing, blockchain and cloud technology with medical expertise, which will lead to increased competence and capacity in these various fields. Trial registration ISRCTN registry, number ISRCTN12236970