2018 IEEE 36th International Conference on Computer Design (ICCD)
DOI: 10.1109/iccd.2018.00070
R-Cache: A Highly Set-Associative In-Package Cache Using Memristive Arrays

Cited by 11 publications (4 citation statements) | References 27 publications
“…Several works propose hardware accelerators for basecalling [63,77,78] or read mapping [54,[56][57][58]62,[65][66][67][68]71,[79][80][81][82][83]. Among these accelerators, non-volatile memory (NVM)-based processing in memory (PIM) accelerators offer high performance and efficiency since NVM-based PIM provides in-situ and highly-parallel computation support for matrix-vector multiplications (MVM) [101][102][103][104][105][106][107][108][109][110][111] and string matching operations [112][113][114][115][116][117][118][119][120][121][122][123][124][125][126][127][128][129][130]…”
Section: State-of-the-art Solutionsmentioning
confidence: 99%
“…Therefore, in any time frame, disregarding the time limit, the maximum number of swap operations equals the number of cache sets multiplied by the number of WF-based ways per cache set. This upper bound on migrations in each time frame is given by relations (2) and (3).…”
Section: Migration Constraintmentioning
confidence: 99%
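The upper bound quoted above can be sketched as a one-line calculation. This is an illustrative snippet, not code from the cited work; the function name, parameters, and example sizes are all assumptions.

```python
# Illustrative sketch of the migration upper bound described in the
# citation above: max swaps per time frame = (number of cache sets)
# x (number of WF-based ways per set), ignoring any timing limit.
# Names and example values are hypothetical, not from the cited paper.

def max_swap_operations(num_sets: int, wf_ways_per_set: int) -> int:
    """Upper bound on swap (migration) operations in one time frame."""
    return num_sets * wf_ways_per_set

# Example: a hypothetical cache with 2048 sets, 4 WF-based ways per set
print(max_swap_operations(2048, 4))  # 8192
```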
“…The growth in the required data for complex applications and an increase in the number of cores on a single chip lead to high demands for memory capacity and bandwidth [1, 2]. To mitigate the speed gap between processor and off-chip memory, it is indispensable to exploit a large, multi-level, on-chip cache hierarchy [3].…”
Section: Introductionmentioning
confidence: 99%
“…The bandwidth-efficient write probe and on-chip neighboring tag cache (NTC) help to optimize DRAM cache bandwidth by removing some of the tag-check accesses to DRAM. R-Cache [32] proposes an RRAM-based in-package memory that eliminates the bandwidth overhead of tag checks via in-situ comparison.…”
Section: Existing In-package Cache Proposalsmentioning
confidence: 99%