2022 IEEE 33rd International Conference on Application-Specific Systems, Architectures and Processors (ASAP)
DOI: 10.1109/asap54787.2022.00024
FusedGCN: A Systolic Three-Matrix Multiplication Architecture for Graph Convolutional Networks

Abstract: Systolic Array (SA) architectures are well suited for accelerating matrix multiplications through the use of a pipelined array of Processing Elements (PEs) communicating with local connections and pre-orchestrated data movements. Even though most of the dynamic power consumption in SAs is due to multiplications and additions, pipelined data movement within the SA constitutes an additional important contributor. The goal of this work is to reduce the dynamic power consumption associated with the feeding of data…
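As context for the title, a graph convolutional layer evaluates the three-matrix product Z = A·X·W (adjacency, node features, layer weights), which is the computation a fused accelerator such as FusedGCN targets. The NumPy sketch below only illustrates that arithmetic and checks a row-wise "fused" accumulation against the unfused two-step product; the matrix names and sizes are illustrative assumptions, and it does not model the paper's systolic array or its data-movement schedule.

```python
# Minimal sketch of the three-matrix product Z = A @ X @ W that a GCN layer
# computes. Illustrative only: it does not model FusedGCN's systolic array.
import numpy as np

rng = np.random.default_rng(0)
N, F_in, F_out = 6, 4, 3                        # nodes, input/output feature sizes (assumed)

A = (rng.random((N, N)) < 0.4).astype(float)    # toy, unnormalized adjacency matrix
X = rng.random((N, F_in))                       # node feature matrix
W = rng.random((F_in, F_out))                   # layer weight matrix

# Unfused reference: two separate matrix multiplications, materializing X @ W.
Z_unfused = A @ (X @ W)

# "Fused" reference: accumulate each output row in one pass over A's nonzeros,
# without storing the intermediate X @ W as a whole matrix.
Z_fused = np.zeros((N, F_out))
for i in range(N):
    for k in range(N):
        if A[i, k] != 0.0:
            Z_fused[i] += A[i, k] * (X[k] @ W)

assert np.allclose(Z_unfused, Z_fused)
```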

Cited by 5 publications (4 citation statements)
References 30 publications
“…Recent research already investigated low-power coding to decrease the power consumption in DNNs accelerators [14]. In particular, it uses bus-invert for coding the mantissa of the bfloat16 numbers used in a systolic array.…”
Section: Related Work
confidence: 99%
“…Second, Ref. [14] focuses only on the data-path of the core accelerator, disregarding potential savings in the memories and interconnects, which however typically exhibit the highest power needs. Moreover, Ref.…”
Section: Related Work
confidence: 99%
“…Matrix multiplications are at the heart of deep learning algorithms and their computation in hardware maps naturally onto Systolic Arrays (SA) [5]. Tensor processing units [6] and other related architectures [7]-[10] are characteristic examples of newly designed SAs.…”
Section: Introduction
confidence: 99%