2020
DOI: 10.48550/arxiv.2004.10275
Preprint
How to Train your DNN: The Network Operator Edition

Cited by 1 publication (1 citation statement)
References 0 publications
“…Nevertheless, while performance continuously improves, the models become larger with a massive increase in the number of parameters. In fact, modern Neural Network (NN) models may have billions and even trillions of parameters, which makes the deployment of these models a challenging task (Chang et al, 2020). One way to mitigate this issue is compressing the model's parameters to reduce its overall memory footprint while satisfying an accuracy constraint.…”
Section: Introduction
Confidence: 99%
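The citing passage mentions compressing a model's parameters to shrink its memory footprint while respecting an accuracy constraint. A minimal sketch of one such technique, post-training uniform quantization, is shown below; the NumPy setup, function names, and tolerance check are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

def quantize_weights(weights, num_bits=8):
    """Uniformly quantize float32 weights to num_bits-wide integer codes.

    Storing 8-bit codes instead of 32-bit floats cuts the parameter
    memory footprint by roughly 4x.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    levels = 2 ** num_bits - 1
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    codes = np.round((weights - w_min) / scale).astype(np.uint8)
    return codes, scale, w_min

def dequantize(codes, scale, w_min):
    """Reconstruct approximate float weights from the integer codes."""
    return codes.astype(np.float32) * scale + w_min

# Illustrative stand-in for a layer's weight tensor.
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)

codes, scale, w_min = quantize_weights(w, num_bits=8)
w_hat = dequantize(codes, scale, w_min)

# Bound the reconstruction error -- a crude stand-in for the
# accuracy constraint described in the passage above.
max_err = np.abs(w - w_hat).max()
print(f"max reconstruction error: {max_err:.4f}")
```

In practice the constraint would be checked against validation accuracy of the dequantized model rather than raw weight error, and the bit width would be lowered only while that accuracy target holds.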