2023
DOI: 10.1145/3570305

YaConv: Convolution with Low Cache Footprint

Abstract: This paper introduces YaConv, a new algorithm to compute convolution using GEMM microkernels from a BLAS library that is efficient for multiple CPU architectures. Previous approaches either create a copy of each image element for each filter element or reload these elements into cache for each GEMM call, leading to redundant instances of the image elements in cache. Instead, YaConv loads each image element once int…
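
The redundancy described in the abstract is easiest to see in the classic im2col lowering, where every image element is copied once for each overlapping filter position before a single large GEMM. The sketch below is a minimal single-channel C illustration under assumed conditions (unit stride, no padding, row-major layout); it is not code from the paper.

    #include <stddef.h>

    /* im2col for a single-channel H x W image and a KH x KW filter,
     * unit stride, no padding. Each image element is duplicated once
     * per overlapping filter position, which is the packing redundancy
     * that YaConv avoids. */
    void im2col(const float *image, int H, int W, int KH, int KW, float *cols)
    {
        int OH = H - KH + 1;   /* output height */
        int OW = W - KW + 1;   /* output width  */
        for (int oh = 0; oh < OH; ++oh)
            for (int ow = 0; ow < OW; ++ow)
                for (int kh = 0; kh < KH; ++kh)
                    for (int kw = 0; kw < KW; ++kw)
                        /* row = filter element, column = output pixel */
                        cols[(size_t)(kh * KW + kw) * OH * OW + oh * OW + ow] =
                            image[(oh + kh) * W + (ow + kw)];
    }

Convolution then reduces to a single GEMM between the flattened 1 x (KH*KW) filter and the (KH*KW) x (OH*OW) column matrix, at the cost of roughly a KH*KW-fold blow-up of the image in memory and in cache, the redundant footprint that motivates YaConv.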

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1

Citation Types

0
1
0

Year Published

2023
2023
2024
2024

Publication Types

Select...
3
2

Relationship

0
5

Authors

Journals

citations
Cited by 5 publications
(1 citation statement)
references
References 15 publications
0
1
0
Order By: Relevance
“…Korostelev et al. [17] combine the ideas of modifying the GEMM routine and decomposing convolution into multiple GEMM operations [2] to create a new method that avoids packing redundancy while keeping the overall GEMM structure (tiling, packing, micro-kernel). Contrary to the approach in SConv, [17] works only on unit-stride convolutions and improves only non-pointwise convolutions. Moreover, their experiments focus on isolated convolutions and do not provide results for complete models.…”
Section: SConv Reduces Cache Misses In All Levels Of Cache
confidence: 99%
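
The decomposition mentioned in the statement can be pictured as a shift-and-add scheme: one small accumulating GEMM per filter offset, reading the image in place instead of packing a shifted copy. The following C sketch assumes illustrative layouts (image[C_in][H][W], filter[KH][KW][C_out][C_in], output[C_out][OH][OW]), unit stride, and no padding; it is not the authors' implementation, and the naive gemm_acc stands in for a tuned BLAS micro-kernel.

    #include <string.h>

    /* Naive GEMM with leading dimensions: C[M x N] += A[M x K] * B[K x N],
     * row-major with row strides lda/ldb/ldc. A stand-in for a BLAS
     * micro-kernel call. */
    static void gemm_acc(int M, int N, int K,
                         const float *A, int lda,
                         const float *B, int ldb,
                         float *C, int ldc)
    {
        for (int m = 0; m < M; ++m)
            for (int k = 0; k < K; ++k)
                for (int n = 0; n < N; ++n)
                    C[m * ldc + n] += A[m * lda + k] * B[k * ldb + n];
    }

    /* Unit-stride, valid convolution computed as KH*KW*OH small GEMMs,
     * one per filter offset and output row, accumulating into the output.
     * The image is read in place (ldb = H*W steps to the next input
     * channel), so no packed, shifted copy of the image is materialized. */
    void conv_as_gemms(const float *image, int C_in, int H, int W,
                       const float *filter, int KH, int KW,
                       float *output, int C_out)
    {
        int OH = H - KH + 1, OW = W - KW + 1;
        memset(output, 0, (size_t)C_out * OH * OW * sizeof(float));
        for (int kh = 0; kh < KH; ++kh)
            for (int kw = 0; kw < KW; ++kw)
                for (int oh = 0; oh < OH; ++oh)
                    gemm_acc(C_out, OW, C_in,
                             filter + (size_t)(kh * KW + kw) * C_out * C_in, C_in,
                             image + (size_t)(oh + kh) * W + kw, H * W,
                             output + (size_t)oh * OW, OH * OW);
    }

In a BLAS-backed version, the tiling, packing, and micro-kernel of a library GEMM would take the place of gemm_acc, which is the structure the citation statement says is preserved.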