2016 IEEE Spoken Language Technology Workshop (SLT)
DOI: 10.1109/slt.2016.7846335

Entropy-based pruning of hidden units to reduce DNN parameters
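The page itself does not spell out the paper's method, so purely as an illustration of what the title names, here is a minimal NumPy sketch of one common reading of entropy-based unit pruning: estimate each hidden unit's activation entropy from binned outputs over a data sample, then drop the least informative units together with the weights attached to them. The function names, the histogram estimator, and the keep ratio are all assumptions, not the paper's actual procedure.

```python
import numpy as np

def unit_entropies(activations, n_bins=32):
    """Estimate each hidden unit's activation entropy (in nats)
    by histogramming its outputs over a data sample.
    activations: array of shape (n_samples, n_units)."""
    _, n_units = activations.shape
    entropies = np.empty(n_units)
    for j in range(n_units):
        counts, _ = np.histogram(activations[:, j], bins=n_bins)
        p = counts / counts.sum()
        p = p[p > 0]                            # skip empty bins
        entropies[j] = -(p * np.log(p)).sum()
    return entropies

def prune_low_entropy_units(W_in, b, W_out, activations, keep_ratio=0.75):
    """Drop the lowest-entropy hidden units of one layer.
    W_in: (n_units, n_inputs), b: (n_units,), W_out: (n_outputs, n_units).
    Returns the reduced (W_in, b, W_out)."""
    h = unit_entropies(activations)
    n_keep = int(keep_ratio * len(h))
    keep = np.sort(np.argsort(h)[-n_keep:])     # most informative units
    return W_in[keep], b[keep], W_out[:, keep]
```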

Cited by 3 publications (1 citation statement)
References 19 publications
“…can be adjusted to control the rank of the effective offset matrix and the complexity of the adaptation model. Unlike other work on low-rank model compression [28, 25, 29], we apply low-rank approximation only to the domain-dependent parameters without compromising the performance of the base model.…”
Section: Factorized Hidden Layer (mentioning)
Confidence: 99%
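The citation statement above describes adapting a base layer through a low-rank offset whose rank bounds the complexity of the adaptation model. As a hedged sketch of that idea (not the cited paper's implementation; all names and sizes below are made up), a rank-r offset U V added to a frozen base weight matrix leaves the base model untouched while storing only r * (d_in + d_out) domain-dependent values per domain:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 512, 8   # rank r controls adaptation complexity

# Frozen, shared base weights (domain-independent).
W_base = rng.standard_normal((d_out, d_in)) * 0.01

# Domain-dependent low-rank factors: only these vary per domain.
U = rng.standard_normal((d_out, rank)) * 0.01
V = rng.standard_normal((rank, d_in)) * 0.01

def adapted_layer(x):
    """Hidden layer with a rank-r offset: y = (W_base + U V) x.
    The offset matrix U @ V has rank at most r, so per-domain
    storage is r * (d_in + d_out) values instead of d_in * d_out."""
    return (W_base + U @ V) @ x

x = rng.standard_normal(d_in)
y = adapted_layer(x)
```

Raising `rank` trades more per-domain parameters for a more expressive offset, which matches the statement's point that the rank knob governs both the effective offset matrix and the complexity of the adaptation model while the base model's performance is preserved.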