2021
DOI: 10.1109/access.2021.3064838

These do not Look Like Those: An Interpretable Deep Learning Model for Image Recognition

Abstract: Interpretation of the reasoning process behind a prediction made by a deep learning model is always desirable. However, when the predictions of a deep learning model directly impact people's lives, interpretation becomes a necessity. In this paper, we introduce a deep learning model: the negative-positive prototypical part network (NP-ProtoPNet). This model attempts to imitate human reasoning for image recognition by comparing the parts of a test image with the corresponding parts of…

Cited by 47 publications (45 citation statements)
References 29 publications (30 reference statements)
“…A substantial improvement over the above work was made by Chen et al with the development of their model ProtoPNet [ 16 ]. The models Gen-ProtoPNet [ 17 ] and NP-ProtoPNet [ 18 ] are close variations of ProtoPNet.…”
Section: Materials and Methods
confidence: 99%
“…So, we propose an interpretable deep learning model: pseudo prototypical part network (Ps-ProtoPNet), and experiment it over the dataset of CT-scan images, see Section 2.4 . Ps-ProtoPNet is closely related to ProtoPNet [ 16 ], Gen-ProtoPNet [ 17 ] and NP-ProtoPNet [ 18 ], but strikingly different from these models.…”
Section: Introduction
confidence: 99%
“…As mentioned in the introduction, many networks have been emerged to classify the X-ray images of Covid-19 patients along with X-ray images of normal people and pneumonia patients, see [4], [12], [22], [23], [25], [36], [37], [40], [41], [58]. A study summarizes some papers on Covid-19, and it points out some problems, such as: lack of reliable and adequate amount of data for deep learning algorithms [5].…”
Section: Literature Review
confidence: 99%
“…The objective of this work is to find an interpretable method to do image classification so that we can tell why an image is classified in a certain way. In this work, we introduce an interpretable deep learning model: generalized prototypical part network (Gen-ProtoPNet), and experiment it over the dataset of three different classes of X-rays, see Section V. Gen-ProtoPNet is a close variation of ProtoPNet [7] and NP-ProtoPNet [41]. To predict the class of a test image, ProtoPNet calculates the similarity scores between learned prototypical parts (with square spatial dimensions 1 × 1) of images from each class and parts of the test image using the L2 distance function.…”
Section: Introduction
confidence: 99%
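The similarity computation described in the statement above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the array shapes, the `eps` value, and the `prototype_similarity` helper are assumptions, and the log-activation of the squared L2 distance follows the form used in the ProtoPNet family of models.

```python
import numpy as np

def prototype_similarity(feature_map, prototype, eps=1e-4):
    """Score a 1x1 prototypical part against every spatial position
    of a convolutional feature map (shapes are illustrative):
      feature_map: (H, W, D) conv output for one test image
      prototype:   (D,)      a learned 1x1 prototypical part
    """
    # Squared L2 distance between the prototype and each spatial patch.
    dists = np.sum((feature_map - prototype) ** 2, axis=-1)  # (H, W)
    # Map distances to similarities with log((d + 1) / (d + eps)),
    # so a small distance yields a large similarity score.
    sims = np.log((dists + 1.0) / (dists + eps))
    # The image-level score for this prototype is the best match.
    return sims.max()

rng = np.random.default_rng(0)
fmap = rng.standard_normal((7, 7, 128))
proto = fmap[3, 4].copy()  # a prototype identical to one patch
score = prototype_similarity(fmap, proto)
```

A class's final logit is then a weighted sum of such per-prototype scores, which is what lets these models point at the image part responsible for each score.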