2020
DOI: 10.11610/isij.4615
Model Fooling Attacks Against Medical Imaging: A Short Survey

Abstract: This study aims to find a list of methods to fool artificial neural networks used in medical imaging. We collected a short list of publications related to machine learning model fooling to see if these methods have been used in the medical imaging domain. Specifically, we focused our interest on pathological whole slide images used to study human tissues. While useful, machine learning models such as deep neural networks can be fooled by quite simple attacks involving purposefully engineered images. Such attac…

Cited by 8 publications (6 citation statements). References 29 publications.
“…They changed as few as five pixels in the general CIFAR-10 dataset using a gradient-based dual iterative fusion method [8]. A short survey of model fooling attacks in the medical domain by Sipola et al. shows that at least adversarial images and patches have been used in experiments [17]. The one-pixel attack is a more advanced method, in which only one pixel of an image is modified in order to fool the classifier [20].…”
Section: Adversarial Attacks (mentioning)
confidence: 99%
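To illustrate the one-pixel attack mentioned in the statement above, the Python sketch below tries single-pixel changes until the predicted class flips. The classify function is a hypothetical stand-in for a trained network, and the published attack [20] searches for the pixel with differential evolution rather than random scanning, so this is only an assumed minimal illustration, not the cited method.

import numpy as np

def classify(image: np.ndarray) -> np.ndarray:
    """Hypothetical classifier stand-in: returns per-class confidence scores."""
    # Deterministic pseudo-scores derived from the image so that changing a
    # pixel can change the prediction; a real attack would query a trained CNN.
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    scores = rng.random(10)
    return scores / scores.sum()

def one_pixel_attack(image: np.ndarray, true_label: int, trials: int = 500):
    """Search for a single-pixel change that flips the predicted class."""
    rng = np.random.default_rng(0)
    h, w, c = image.shape
    for _ in range(trials):
        y, x = rng.integers(0, h), rng.integers(0, w)
        colour = rng.integers(0, 256, size=c, dtype=np.uint8)
        candidate = image.copy()
        candidate[y, x] = colour                        # exactly one pixel modified
        if np.argmax(classify(candidate)) != true_label:
            return candidate, (y, x, colour)            # adversarial image found
    return None, None

toy_image = np.zeros((32, 32, 3), dtype=np.uint8)       # CIFAR-10-sized dummy input
adversarial, change = one_pixel_attack(toy_image, true_label=0)
print("fooled the classifier" if adversarial is not None else "no success", change)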
“…Furthermore, these threats are real in medical imaging [8], [9]. The following categories of threats have been identified for model fooling against medical imaging: (i) adversarial images, (ii) adversarial patches, (iii) one-pixel attacks and (iv) training process tampering [10]. The first three fall into the category of adversarial examples.…”
Section: Introduction (mentioning)
confidence: 99%
“…They are specifically crafted images that deceive a classifier into making false predictions about input images. If such an attack does not need knowledge of the inner workings of the classifier, it is known as a black-box attack, because the only output needed is the prediction confidence score of the classifier [10], [11]. When an adversarial example is given Fig.…”
Section: Introduction (mentioning)
confidence: 99%
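The black-box setting described above can be sketched as follows: the attacker repeatedly queries the model and observes only the prediction confidence scores, never its weights or gradients. The query_model function and the random-search loop below are hypothetical stand-ins used for illustration, not the procedure from [10], [11].

import numpy as np

def query_model(image: np.ndarray) -> np.ndarray:
    """Hypothetical deployed classifier: the attacker sees only confidence scores."""
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    scores = rng.random(2)                 # e.g. a binary tumour / no-tumour task
    return scores / scores.sum()

def black_box_random_search(image, true_label, budget=200, eps=8):
    """Lower the true-class confidence using only score queries (no gradients)."""
    rng = np.random.default_rng(1)
    best, best_conf = image.copy(), query_model(image)[true_label]
    for _ in range(budget):
        noise = rng.integers(-eps, eps + 1, size=image.shape)
        candidate = np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)
        conf = query_model(candidate)[true_label]   # only the score is observed
        if conf < best_conf:                        # keep the most confusing input
            best, best_conf = candidate, conf
    return best, best_conf

patch = np.full((64, 64, 3), 128, dtype=np.uint8)    # dummy grey tile input
adversarial, conf = black_box_random_search(patch, true_label=1)
print(f"true-class confidence after black-box search: {conf:.3f}")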
“…Our present analysis is a natural extension of our prior studies related to the attack method. Earlier, we introduced a list of methods to fool artificial neural networks used in medical imaging [14]. The one-pixel attack appeared to be a comprehensive and realistic attack vector, so we decided to investigate it further as a conceptual framework in the medical imaging domain [15].…”
Section: Introduction (mentioning)
confidence: 99%