Deep learning algorithms have shown excellent performance in medical image recognition, and practical applications have emerged in several medical domains. Little is known about the feasibility and impact of undetectable adversarial attacks, which can disrupt an algorithm by modifying as little as a single pixel of the image to be interpreted. The aim of this study was to test the feasibility and impact of an adversarial attack on the accuracy of a deep learning-based dermatoscopic image recognition system. The study used the HAM-10000 dataset, an open-source database of 10,015 dermatoscopic images, which was split into a training set and a test set. First, the pre-trained convolutional neural network DenseNet-201 was trained to classify images from the training set into 7 categories. Second, an adversarial neural network was trained to generate undetectable perturbations on images from the test set, so that all perturbed images would be classified as melanocytic nevi. The perturbed images were then classified using the model generated in the first step. The accuracy of the classification model was evaluated on the test set and compared with and without perturbed images. The ability of 2 observers to detect image perturbations was evaluated, and the interobserver agreement was calculated. The overall accuracy of the classification model dropped from 84% (95% confidence interval (CI): 82–86) for unperturbed images to 67% (95% CI: 65–69) for perturbed images (McNemar test, P < .0001). The fooling ratio reached 100% for all categories of skin lesions. The sensitivity and specificity of the combined observers, calculated on a random sample of 50 images, were 58.3% (95% CI: 45.9–70.8) and 42.5% (95% CI: 27.2–57.8), respectively. The kappa agreement coefficient between the 2 observers was negative, at −0.22 (95% CI: −0.49 to −0.04).
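The study trains a dedicated adversarial network to craft its perturbations; the underlying idea can be illustrated with a much simpler targeted gradient-sign (FGSM-style) attack on a toy linear classifier. Everything below (the weights, the 4-pixel "image", the epsilon) is an illustrative assumption, not the study's actual model or data.

```python
import math

# Toy "classifier": logistic regression over a 4-pixel image.
# In the study, a fine-tuned DenseNet-201 plays this role.
W = [0.9, -1.2, 0.8, -0.5]
B = 0.1

def predict(x):
    """Probability that x belongs to the attacker's target class."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_targeted(x, eps):
    """Nudge every pixel by eps in the direction that raises the
    target-class logit -- the sign of the input gradient.
    For a linear model, d(logit)/dx_i is simply W[i]."""
    return [xi + eps * (1.0 if w > 0 else -1.0) for xi, w in zip(W and x, W)]

x = [0.2, 0.8, 0.1, 0.9]            # clean image, low target-class probability
x_adv = fgsm_targeted(x, eps=0.3)   # small, visually similar perturbation

clean_p = predict(x)
adv_p = predict(x_adv)
print(round(clean_p, 3), round(adv_p, 3))
```

Even this crude one-step attack pushes the prediction toward the chosen class; the study's learned generator does the same job while keeping the perturbation imperceptible across an entire dataset.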
Adversarial attacks on medical image databases can distort interpretation by image recognition algorithms; they are easy to create and undetectable by humans. It seems essential to improve our understanding of deep learning-based image recognition systems and to strengthen their security before putting them into practical, daily use.
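The two statistics reported in the abstract above, the McNemar test on paired classification outcomes and Cohen's kappa between the two observers, are short enough to compute directly. A minimal sketch, with purely illustrative counts (not the study's data):

```python
# McNemar compares paired accuracies via the discordant pairs only;
# Cohen's kappa measures chance-corrected agreement between raters.

def mcnemar_chi2(b, c):
    """Continuity-corrected McNemar statistic.
    b = images classified correctly only when unperturbed,
    c = images classified correctly only when perturbed."""
    return (abs(b - c) - 1) ** 2 / (b + c)

def cohens_kappa(table):
    """table[i][j] = number of images observer 1 rated i and observer 2 rated j."""
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    p_expected = sum(
        sum(table[i]) * sum(row[i] for row in table)
        for i in range(len(table))
    ) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

chi2 = mcnemar_chi2(b=180, c=10)            # hypothetical discordant counts
kappa = cohens_kappa([[10, 20], [15, 5]])   # hypothetical 2x2 rating table
print(round(chi2, 1), round(kappa, 3))
```

A large chi-squared value corresponds to the reported P < .0001, and a negative kappa, as in the example table, means the observers agreed less often than chance would predict, matching the abstract's finding that the perturbations were effectively undetectable.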
Background Chest radiographs are routinely performed in the intensive care unit (ICU) to confirm the correct position of an endotracheal tube (ETT) relative to the carina. However, their interpretation is often challenging and requires substantial time and expertise. The aim of this study was to propose an externally validated deep learning model with uncertainty quantification and image segmentation for the automated assessment of ETT placement on ICU chest radiographs. Methods The CarinaNet model was constructed by applying transfer learning to the RetinaNet model using an internal dataset of ICU chest radiographs. The accuracy of the model in predicting the positions of the ETT tip and the carina was externally validated on a dataset of 200 images extracted from the MIMIC-CXR database. Uncertainty quantification was based on the level of confidence in the ETT–carina distance prediction. Segmentation of the ETT was carried out using edge detection and pixel clustering. Results The interrater agreement was 0.18 cm for the ETT tip position, 0.58 cm for the carina position, and 0.60 cm for the ETT–carina distance. The mean absolute error of the model on the external test set was 0.51 cm for the ETT tip position, 0.61 cm for the carina position, and 0.89 cm for the ETT–carina distance. The assessment of ETT placement was improved by complementing the human interpretation of chest radiographs with the CarinaNet model. Conclusions The CarinaNet model is an efficient and generalizable deep learning algorithm for the automated assessment of ETT placement on ICU chest radiographs. Uncertainty quantification can direct the attention of intensivists to chest radiographs that require experienced human interpretation. Image segmentation provides intensivists with quickly interpretable chest radiographs and allows them to immediately assess the validity of model predictions.
The CarinaNet model is ready to be evaluated in clinical studies.
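The post-processing described in the abstract, turning the two detected landmarks into an ETT–carina distance and flagging low-confidence predictions for human review, can be sketched in a few lines. The pixel spacing, confidence cutoff, and coordinates below are illustrative assumptions, not CarinaNet's actual parameters.

```python
# Convert detected ETT-tip and carina pixel coordinates into a
# distance in cm, and flag uncertain cases for intensivist review.

PIXEL_SPACING_CM = 0.14     # assumed cm per pixel for the radiograph
CONFIDENCE_CUTOFF = 0.5     # assumed threshold for human review

def ett_carina_distance_cm(tip_px, carina_px):
    """Euclidean distance between the two (row, col) detections, in cm."""
    dy = (tip_px[0] - carina_px[0]) * PIXEL_SPACING_CM
    dx = (tip_px[1] - carina_px[1]) * PIXEL_SPACING_CM
    return (dx * dx + dy * dy) ** 0.5

def assess_placement(tip_px, carina_px, tip_conf, carina_conf):
    """Return the ETT-carina distance and whether the prediction is
    uncertain enough to warrant experienced human interpretation."""
    dist = ett_carina_distance_cm(tip_px, carina_px)
    needs_review = min(tip_conf, carina_conf) < CONFIDENCE_CUTOFF
    return dist, needs_review

dist, review = assess_placement(
    tip_px=(210, 300), carina_px=(240, 305),
    tip_conf=0.92, carina_conf=0.41,
)
print(round(dist, 2), review)
```

Gating on the weaker of the two detection confidences is one simple way to implement the uncertainty flag the abstract describes: a radiograph is only auto-cleared when both landmarks are detected confidently.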