DOI: 10.29007/rp6q

Interpretable Image Classification Model Using Formal Concept Analysis Based Classifier

Abstract: Massive amounts of data gathered over the last decade have contributed significantly to the applicability of deep neural networks. Deep learning is well suited to processing huge amounts of data because such models improve as more data is fed into them. However, in the existing literature, a deep neural classifier is often treated as a "black box" because the process is not transparent and researchers cannot determine how the input is associated with the output. In many domains like med…

Cited by 3 publications (1 citation statement)
References 0 publications
“…Some limitations of the proposal regard the lack of experimentation on higher-dimensional datasets like images. In this sense, to reduce the FCA complexity, the literature suggests techniques like clustering or Linear Discriminant Analysis to group common characteristics and reduce the number of Formal Context attributes [34, 35].…”
Section: Discussion
Confidence: 99%
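The citation statement above concerns the complexity of Formal Concept Analysis, which grows with the number of Formal Context attributes. As a rough illustration (the toy context, object names, and attribute names below are hypothetical, not taken from the paper), the formal concepts of a small binary context can be enumerated naively via the two derivation operators; since the candidate attribute subsets grow exponentially with the attribute count, techniques such as clustering or Linear Discriminant Analysis that merge or reduce attributes directly shrink this search space:

```python
from itertools import combinations

# Hypothetical toy formal context: objects x attributes (binary incidence).
objects = ["o1", "o2", "o3"]
attributes = ["round", "red", "small"]
incidence = {
    "o1": {"round", "red"},
    "o2": {"round", "small"},
    "o3": {"round", "red", "small"},
}

def extent(attrs):
    """Objects possessing every attribute in `attrs`."""
    return {o for o in objects if attrs <= incidence[o]}

def intent(objs):
    """Attributes shared by every object in `objs`."""
    common = set(attributes)
    for o in objs:
        common &= incidence[o]
    return common

# A (extent, intent) pair is a formal concept when each derives the other.
# Naive enumeration: close every attribute subset; duplicates collapse in a set.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(attributes, r):
        e = extent(set(combo))
        i = intent(e)
        concepts.add((frozenset(e), frozenset(i)))

for e, i in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(e), sorted(i))
```

With three attributes this loop closes only eight subsets; with the thousands of pixel-level attributes an image would induce, the same enumeration becomes infeasible, which is the motivation for attribute reduction mentioned in the citation.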