2021
DOI: 10.48550/arxiv.2112.13896
Preprint

Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks

Abstract: In principle, sparse neural networks should be significantly more efficient than traditional dense networks. Neurons in the brain exhibit two types of sparsity: they are sparsely interconnected and sparsely active. These two types of sparsity, called weight sparsity and activation sparsity, when combined, offer the potential to reduce the computational cost of neural networks by two orders of magnitude. Despite this potential, today's neural networks deliver only modest performance benefits using just weight s…
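The "two orders of magnitude" figure in the abstract follows from the fact that the savings from the two sparsities multiply: a multiply-accumulate is only needed where both the weight and the corresponding input activation are nonzero. The sketch below is ours, not the authors' implementation; the layer sizes, density values, and function name are illustrative assumptions chosen only to make that arithmetic explicit.

```python
# Hedged illustration (not from the paper): expected multiply-accumulates
# for one linear layer y = W @ x when nonzero weights and nonzero
# activations are treated as independently placed.

def sparse_sparse_macs(weight_density, activation_density, n_in, n_out):
    """Return (dense MACs, expected sparse-sparse MACs) for one layer."""
    dense_macs = n_in * n_out
    expected = dense_macs * weight_density * activation_density
    return dense_macs, expected

dense, sparse = sparse_sparse_macs(
    weight_density=0.1,       # e.g. 90% of weights pruned
    activation_density=0.1,   # e.g. ~10% of units active (k-winners)
    n_in=1024, n_out=1024,
)
print(f"dense MACs: {dense:,}  sparse-sparse MACs: {sparse:,.0f}")
# 0.1 * 0.1 = 0.01 of the dense work: roughly two orders of magnitude
# fewer MACs, matching the abstract's back-of-the-envelope potential.
```

Either sparsity alone gives only its own factor (here 10x); realizing the combined 100x in practice is the hardware-mapping problem the paper addresses.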

Cited by 1 publication (1 citation statement)
References 51 publications
“…The latter are difficult to exploit on modern hardware technology that is typically designed for regular dense data structures. Recently, a few approaches such as [58] have shown lower resource utilisation based on complementary kernel sparsity, however their application to real-world big data problems is yet to be demonstrated.…”
Section: Results (mentioning, confidence: 99%)