2022 IEEE 30th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)
DOI: 10.1109/fccm53951.2022.9786179
CoMeFa: Compute-in-Memory Blocks for FPGAs

Cited by 11 publications (4 citation statements) · References 23 publications
“…[While] Binary Neural Network (1b) [30,31] customises and constrains the training process to achieve LUT compatibility, TLMAC derives optimised LUT initialisations from state-of-the-art quantised models directly. […] [Prior proposals] function as both a compute engine and traditional memory [3,7,32]. However, these circuits were largely tested through simulations.…”
Section: Routing Optimisation (mentioning)
Confidence: 99%
“…To implement bit-serial operations close to the memory port, it had to add logic for serialisation and enhance memory cell access. CoMeFa [3] enhances both ports of dual-port BRAMs with write drivers and sense amplifiers. Circuits for processing elements are proposed that write input and read output data.…”
Section: Related Research 2.1 Computing-in-Memory on FPGAs (mentioning)
Confidence: 99%
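The quotation above describes bit-serial processing in compute-in-memory BRAM blocks, where each memory column acts as a one-bit processing element and an operation on N-bit operands takes roughly N cycles. As a minimal sketch of that idea (not CoMeFa's actual circuit; all names here are illustrative), a bit-serial ripple addition over LSB-first bit streams looks like this:

```python
# Sketch of bit-serial addition, the style of operation performed by
# compute-in-memory blocks such as CoMeFa. Each loop iteration models
# one cycle in which a 1-bit processing element consumes one bit of
# each operand and latches a carry for the next cycle.
# This is an illustrative model, not the paper's hardware design.

def bit_serial_add(a_bits, b_bits):
    """Add two equal-length, LSB-first bit lists, one bit per 'cycle'."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)                 # sum bit for this cycle
        carry = (a & b) | (carry & (a ^ b))       # carry into next cycle
    out.append(carry)                             # final carry-out bit
    return out

def to_bits(x, n):
    """Unpack integer x into n LSB-first bits."""
    return [(x >> i) & 1 for i in range(n)]

def from_bits(bits):
    """Pack LSB-first bits back into an integer."""
    return sum(b << i for i, b in enumerate(bits))
```

For 8-bit operands the addition completes after 8 cycles plus the carry-out, which is why such designs trade per-operation latency for massive column-level parallelism across the memory array.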
“…The new developments in the programmability of FPGAs allow these devices to abandon their original niche and be used to solve general-purpose problems. In particular, several recent works on FPGAs focus on the implementation of numerical linear algebra (NLA) kernels [4][5][6], which are a crucial component of a myriad of applications. However, most works focus on dense NLA [7][8][9], as well as other algorithms more suited for FPGAs such as image processing [10,11], machine learning [12], etc.…”
Section: Introduction (mentioning)
Confidence: 99%