DOI: 10.18297/etd/248

Learning understandable classifier models.


Cited by 1 publication (2 citation statements)
References 129 publications
“…Therefore, most of the existing DL models can only be used as black-boxes despite their good performance because their knowledge is hidden and can hardly be used to explain their decision making process. Thus limiting their applicability in domains where both justifications of decisions and interpretable inference are required from machines as in medical applications and business intelligence [20].…”
Section: XIII
Confidence: 99%
“…Initial number of filters for each layer is shown in the legend [3]. Figure 37 shows the sensitivity of ResNet-56 layers to pruning and it can be observed that layers such as Conv 10,14,16,18,20,34,36,38,52 and 54 are more sensitive to filter pruning than other convolutional layers. Likewise for ResNet-110, the layer sensitivity to pruning is depicted in Figure 38 and it can be observed that Conv 1, 2, 38, 78, and 108 are sensitive to pruning.…”
Section: ResNet-56/110 on CIFAR-10
Confidence: 99%
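The layer-sensitivity discussion quoted above concerns filter pruning of convolutional layers, as in the cited work [3], where filters are typically ranked by their L1 norm and the smallest-norm filters are removed. A minimal sketch of that ranking criterion follows; the function names, the toy layer shape, and the pruning ratio are illustrative, not taken from the cited paper:

```python
import numpy as np

def filter_l1_norms(weights):
    """Per-filter L1 norms for a conv weight tensor of shape
    (out_channels, in_channels, k, k)."""
    return np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

def filters_to_prune(weights, ratio):
    """Indices of the filters with the smallest L1 norms;
    `ratio` is the fraction of filters to remove."""
    norms = filter_l1_norms(weights)
    m = int(len(norms) * ratio)
    return np.argsort(norms)[:m]

# Toy conv layer: 16 filters, 3 input channels, 3x3 kernels.
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 3, 3, 3))
pruned = filters_to_prune(w, 0.25)
print(len(pruned))  # 4 filters marked for removal
```

Sensitivity analysis of the kind described (per-layer pruning curves) would repeat this at several ratios per layer and measure the accuracy drop; layers whose accuracy degrades quickly at small ratios are the "sensitive" ones.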