2021
DOI: 10.48550/arxiv.2109.04236
Preprint

ECQ$^{\text{x}}$: Explainability-Driven Quantization for Low-Bit and Sparse DNNs

Abstract: The remarkable success of deep neural networks (DNNs) in various applications is accompanied by a significant increase in network parameters and arithmetic operations. Such increases in memory and computational demands make deep learning prohibitive for resource-constrained hardware platforms such as mobile devices. Recent efforts aim to reduce these overheads while preserving model performance as much as possible, and include parameter reduction techniques, parameter quantization, and lossless compression techniques […]
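The abstract names three generic ingredients of DNN compression: parameter reduction (sparsity), parameter quantization, and lossless compression. As a rough illustration of the first two, the sketch below applies magnitude-based pruning followed by symmetric uniform low-bit quantization to a weight matrix. This is a minimal baseline sketch, not the paper's ECQ^x method (which, per the title, drives quantization with explainability information); all function names and parameter choices here are illustrative assumptions.

```python
# Baseline sketch of sparsification + low-bit quantization (NOT ECQ^x itself).
import numpy as np

def sparsify_by_magnitude(w: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_uniform(w: np.ndarray, num_bits: int = 4) -> np.ndarray:
    """Symmetric uniform 'fake' quantization to at most 2**num_bits - 1 levels."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 7 for 4-bit signed integers
    scale = np.abs(w).max() / qmax          # map the largest weight onto qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                        # dequantized weights for evaluation

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_c = quantize_uniform(sparsify_by_magnitude(w, sparsity=0.5), num_bits=4)
print("distinct values:", np.unique(w_c).size, "| zero fraction:", (w_c == 0).mean())
```

Once the weights take only a few distinct values and are mostly zero, a lossless entropy coder can store them compactly, which is where the third ingredient comes in.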


Cited by 1 publication (4 citation statements)
References 27 publications
“…Prominent examples are DeepLIFT [122], Deconvolution [147], LRP [11] and Deep Taylor Decomposition [83]. Notably, LRP has been shown to achieve faithful intermediate attribution as can be seen in [18,144]. These methods have the advantage that they efficiently provide attributions for intermediate neurons "for free" as a by-product, without any additional algorithmic extensions of the aforementioned.…”
Section: Gradient-based and Modified Backpropagation-based Methods (mentioning, confidence: 99%)
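To make the quoted "for free" by-product concrete, below is a minimal sketch (not code from any of the cited works) of the LRP epsilon-rule on a tiny two-layer ReLU network. The hidden-layer relevances R_h appear as an intermediate result of the same backward pass that produces the input attributions R_x; the shapes, seed, and eps stabilizer value are illustrative assumptions.

```python
# Hedged sketch of the LRP epsilon-rule; intermediate-neuron relevances
# fall out as a by-product of the layer-by-layer backward pass.
import numpy as np

def lrp_epsilon(a_in, W, R_out, eps=1e-6):
    """Redistribute relevance R_out at a linear layer's output onto its inputs."""
    z = a_in @ W                       # pre-activations: z_j = sum_i a_i * W_ij
    z = z + eps * np.sign(z)           # epsilon stabilizer against near-zero z_j
    s = R_out / z                      # relevance per unit of pre-activation
    return a_in * (W @ s)              # R_i = a_i * sum_j W_ij * s_j

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
x = rng.normal(size=8)
h = np.maximum(0.0, x @ W1)            # hidden ReLU activations
y = h @ W2                             # output logits

R_y = np.zeros(4)
R_y[y.argmax()] = y[y.argmax()]        # explain the winning logit
R_h = lrp_epsilon(h, W2, R_y)          # hidden-neuron attributions: the "free" by-product
R_x = lrp_epsilon(x, W1, R_h)          # input attributions
print("relevance sums (output/hidden/input):", R_y.sum(), R_h.sum(), R_x.sum())
```

Because this toy network has no biases, the relevance sums at each layer agree up to the eps term, illustrating LRP's conservation property.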
“…These methods have the advantage that they efficiently provide attributions for intermediate neurons "for free" as a by-product, without any additional algorithmic extensions of the aforementioned. However, this by-product has usually been ignored in the literature except for a few works that use this information to directly improve specific aspects of deep models [81,144,128,18], or regard them as proxy representations of explanations for the identification and eradication of systematic Clever Hans behavior [6].…”
Section: Gradient-based and Modified Backpropagation-based Methods (mentioning, confidence: 99%)