2020
DOI: 10.1007/s11265-020-01555-w
FARM: A Flexible Accelerator for Recurrent and Memory Augmented Neural Networks

Cited by 7 publications
(15 citation statements)
References 22 publications
“…The history-based read weighting is computed in three steps: 1) the write weighting 𝑤𝑤 is first expanded to an 𝑁 × 𝑁 matrix to derive the linkage matrix 𝐿, which tracks the order in which memory locations are written to; 2) the precedence vector 𝑝 is updated to track the degree to which each memory entry was most recently written to; and 3) a forward and backward pass merges the read weighting 𝑤𝑟 from the previous time step with the linkage matrix 𝐿 and the content-based read weighting to update 𝑤𝑟.…”
Section: Soft
mentioning confidence: 99%
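The three steps in the statement above follow the standard DNC temporal-linkage equations. A minimal NumPy sketch under those equations; the function names and the read-mode mixing vector `pi` are illustrative, not taken from the paper:

```python
import numpy as np

def update_temporal_link(L, p, w_w):
    """One write step of DNC temporal linkage (standard DNC equations).

    L[i, j] tracks the degree to which location i was written
    immediately after location j; p is the precedence vector."""
    # Linkage update uses the *previous* precedence vector p.
    outer = w_w[:, None] + w_w[None, :]
    L = (1.0 - outer) * L + np.outer(w_w, p)
    np.fill_diagonal(L, 0.0)  # a location never links to itself
    # Precedence: degree to which each entry was most recently written.
    p = (1.0 - w_w.sum()) * p + w_w
    return L, p

def read_weighting(L, w_r_prev, c, pi):
    """Merge forward/backward passes over L with the content-based
    weighting c, mixed by read modes pi (pi sums to 1)."""
    f = L @ w_r_prev    # forward pass: "what was written after"
    b = L.T @ w_r_prev  # backward pass: "what was written before"
    return pi[0] * b + pi[1] * c + pi[2] * f
```

With an initially empty memory, two write steps make the linkage point from the second written location back to the first, and the read weighting is a convex mix of the three attention sources.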
“…MANNA's distributed architecture provides more memory bandwidth and compute parallelism, but its H-tree NoC still incurs a traffic bottleneck when running DNC's history-based attention mechanisms. DNC accelerators have been developed recently [4,30]. In [4], the efficiency mainly comes from analog-based processing elements such as analog-to-digital converters (ADCs), which are more sensitive to variations and noise, and less portable between process technologies.…”
Section: Introduction
mentioning confidence: 99%
“…Resistive memory in its memristive crossbar array (MCA) configuration can significantly accelerate the matrix multiply operations commonly found in neural network, graph, and image processing workloads [19, 82–89]. In this manner, the MCA is able to perform the matrix multiplication operation in just one step. Figure 5a shows a mathematical abstraction of one such operation for a single-layer perceptron implementing binary classification, where x are the inputs to the perceptron, w are the weights of the perceptron, sgn is the signum function, and y is the output of the perceptron [90].…”
Section: Memory Types
mentioning confidence: 99%
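The perceptron abstraction above maps directly onto a crossbar read: with the weights programmed as conductances, Ohm's law and Kirchhoff's current law make each column current an analog dot product, so the whole matrix-vector multiply completes in one read step. A minimal sketch of this idealized (noise-free, linear) model; all names are illustrative:

```python
import numpy as np

def crossbar_mvm(G, v):
    """Idealized memristive crossbar array read.

    Row voltages v drive a conductance matrix G; each column current is
    I_j = sum_i G[i, j] * v[i] (Ohm's law + Kirchhoff's current law),
    i.e. the matrix-vector product in a single step."""
    return G.T @ v  # one entry per column

def perceptron(x, w):
    """Single-layer perceptron binary classifier: y = sgn(w . x),
    with the dot product evaluated as a one-column crossbar read."""
    return np.sign(crossbar_mvm(w[:, None], x))[0]
```

Real devices add nonlinearity, sneak-path currents, and programming noise; this sketch only captures the mathematical abstraction in Figure 5a.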
“…In contrast, DRAM requires multiple loads/stores and computation steps to perform the same operation. This advantage is exploited by management policies which map matrix-vector multiplication to ReRAM crossbar arrays [19, 82–89].…”
Section: Energy Efficiency
mentioning confidence: 99%
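The one-step-versus-many-steps contrast can be sketched by counting multiply-accumulate steps in a conventional load/store evaluation against a single idealized crossbar read (a toy model; the step counts are illustrative, not measured):

```python
import numpy as np

def mvm_loadstore(W, x):
    """Conventional (DRAM-like) evaluation: each weight is fetched and
    used in its own multiply-accumulate, i.e. rows * cols sequential steps."""
    rows, cols = W.shape
    y = np.zeros(cols)
    steps = 0
    for i in range(rows):
        for j in range(cols):
            y[j] += W[i, j] * x[i]  # one load + one MAC per weight
            steps += 1
    return y, steps

def mvm_crossbar(G, v):
    """Idealized ReRAM crossbar: the entire product in one analog read."""
    return G.T @ v, 1
```

Both paths compute the same product; the crossbar's advantage is that the summation happens in the analog domain rather than as a sequence of memory accesses.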