Hollow organ perforation can precipitate a life-threatening emergency: peritonitis followed by fulminant sepsis and fatal circulatory collapse. Pneumoperitoneum is typically detected as subphrenic free air on frontal chest X-ray images, so timely treatment depends on accurate and prompt interpretation of the radiographs. Unfortunately, misdiagnoses are not uncommon among emergency physicians who lack sufficient experience or who are overloaded by multitasking. An automated method for reviewing frontal chest X-ray images is therefore needed to alert emergency physicians, in a timely manner, to the life-threatening condition of hollow organ perforation, which mandates an immediate second look. In this study, a deep learning approach based on convolutional neural networks is proposed for detecting subphrenic free air. A total of 667 chest X-ray images (267 positive, 400 negative) were collected at a local hospital; 587 were used for training and 80 (40 positive, 40 negative) for testing. The method achieved a sensitivity of 0.875, a specificity of 0.825, and an AUC of 0.889. It may serve as a sensitive adjunctive screening tool for detecting pneumoperitoneum on images read by emergency physicians who have limited clinical experience or who are overloaded by multitasking.
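As a minimal sketch of how the reported evaluation metrics (sensitivity, specificity, AUC) can be computed from a classifier's outputs, the following assumes per-image binary labels (1 = free air present) and model scores; the labels, scores, and 0.5 threshold here are illustrative, not from the study.

```python
# Hypothetical sketch: computing sensitivity, specificity, and AUC
# from binary labels and classifier scores. Labels: 1 = subphrenic
# free air present, 0 = absent. Scores and threshold are illustrative.

def sensitivity_specificity(labels, scores, threshold=0.5):
    # Count confusion-matrix cells at the given decision threshold.
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    # Mann-Whitney U formulation of AUC: the probability that a random
    # positive case scores higher than a random negative (ties count 0.5).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.6, 0.1]
sens, spec = sensitivity_specificity(labels, scores)
area = auc(labels, scores)
```

The threshold-free AUC complements the thresholded sensitivity/specificity pair, which is why screening studies typically report all three.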
In traditional agricultural quality control, products are screened manually before being packaged and transported. Long-term fruit storage is challenging in tropical climates, however, particularly for cherry tomatoes. Cherry tomatoes that appear rotten must be discarded immediately during grading; otherwise, neighboring fruit may also rot. An insufficient agricultural workforce is one reason for the increasing number of rotten tomatoes, and smart-technology agriculture has therefore become a primary trend. This study proposed a You Only Look Once version 4 (YOLOv4)-driven appearance-grading mechanism for cherry tomatoes. Images of different cherry-tomato appearance grades captured under different light sources were used as training data, and the cherry tomatoes were divided into four appearance categories: perfect (with pedicel), good (without pedicel), defective, and discardable. An AI server running the YOLOv4 deep-learning framework performed the training. Dataset groups were constructed in increments of 100 images per category, giving totals of 400, 800, 1200, 1600, and 2000 images. Each dataset group was split into an 80% training set, a 10% validation set, and a 10% test set to cope with the variation in appearance and light-source intensity. The experimental results revealed that models trained on 400–2000 images were approximately 99.9% accurate. Thus, we propose a new mechanism for rapidly grading agricultural products.
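The 80/10/10 split described above can be sketched as follows; the function name, seed, and use of a plain item list are assumptions for illustration, since the abstract does not describe the actual pipeline.

```python
# Hypothetical sketch of an 80% / 10% / 10% train/validation/test split
# over a dataset group, as described in the abstract. The item list and
# seed are illustrative placeholders.
import random

def split_dataset(items, seed=0):
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# For the largest dataset group of 2000 images:
train, val, test = split_dataset(range(2000))
# -> 1600 training, 200 validation, 200 test items
```

Assigning the remainder to the test set ensures every item lands in exactly one split even when the total is not divisible by ten.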