2021
DOI: 10.48550/arxiv.2107.04191
Preprint

Structured Model Pruning of Convolutional Networks on Tensor Processing Units

Cited by 11 publications (9 citation statements)
References 13 publications
“…In the future, we will design loss functions that can better reflect the quality difference between HR and super-resolved video frames. To reduce complexity, model pruning can be used to minimize computational and storage requirements for model inference [6].…”
Section: Discussion (mentioning)
confidence: 99%
“…The LASAO detector takes an average of 0.142 seconds to detect whether the subclavian artery is obstructed in a chest MRI image. Deep learning has recently enabled many artificial intelligence applications in image classification [4,7,13,15,19,22,24,27,33,40-43,45]. In this paper, the last experiment probes the performance of the deep learning method in detecting the left subclavian artery and the aortic arch in a chest MRI image. It is difficult for a convolutional neural network model to precisely classify objects in an image with multiple scales, small object sizes, and a complicated background [19,43].…”
Section: Methods (mentioning)
confidence: 99%
“…Several layers, such as convolutional and fully connected layers, have parameters, whereas pooling and ReLU layers may not. The performance of a structured model after deployment is mostly dominated by the model's complexity [10]. The lighter the model, the faster the inference, and vice versa, creating a trade-off between efficacy and the performance of a structured model deployed on a tensor processing unit (TPU).…”
Section: Convolutional Neural Network and Pretrained Models (mentioning)
confidence: 99%
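The excerpt above frames structured pruning as a size-versus-speed trade-off for models deployed on a TPU. Below is a minimal sketch of one common form of structured pruning, removing whole convolution filters ranked by L1 norm, written in TensorFlow/Keras. The helper name `prune_conv_filters`, the L1 ranking criterion, and the `keep_ratio` parameter are illustrative assumptions, not the method or code of the cited paper.

```python
import numpy as np
import tensorflow as tf

def prune_conv_filters(conv: tf.keras.layers.Conv2D, keep_ratio: float = 0.5):
    """Illustrative helper (not from the paper): drop the output filters of a
    built Conv2D layer with the smallest L1 norm, returning a narrower layer."""
    weights = conv.get_weights()
    kernel = weights[0]                               # shape: (kH, kW, in_ch, out_ch)
    n_keep = max(1, int(kernel.shape[-1] * keep_ratio))

    # Rank output filters by L1 norm and keep the strongest ones,
    # preserving their original order.
    l1 = np.abs(kernel).sum(axis=(0, 1, 2))
    keep_idx = np.sort(np.argsort(l1)[::-1][:n_keep])

    pruned = tf.keras.layers.Conv2D(
        n_keep, conv.kernel_size, strides=conv.strides,
        padding=conv.padding, use_bias=conv.use_bias)
    # Build the new layer on a dummy input so its variables exist,
    # then copy over the surviving filters (and biases, if any).
    pruned(tf.zeros((1, 8, 8, kernel.shape[2])))
    new_weights = [kernel[..., keep_idx]]
    if conv.use_bias:
        new_weights.append(weights[1][keep_idx])
    pruned.set_weights(new_weights)
    return pruned

# Usage: a 64-filter layer shrunk to 32 filters. Downstream layers must be
# rebuilt to accept the reduced channel count.
layer = tf.keras.layers.Conv2D(64, 3, padding="same")
layer(tf.zeros((1, 32, 32, 3)))                       # build the original layer
smaller = prune_conv_filters(layer, keep_ratio=0.5)
print(smaller.get_weights()[0].shape)                 # (3, 3, 3, 32)
```

Because whole filters are removed, the pruned layer remains a dense convolution and needs no sparse-execution support, which is what makes this style of pruning attractive on accelerators such as TPUs.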