2019
DOI: 10.1016/j.micpro.2019.102868

Near-memory computing: Past, present, and future

Abstract: The conventional approach of moving data to the CPU for computation has become a significant performance bottleneck for emerging scale-out data-intensive applications due to their limited data reuse. At the same time, the advancement in 3D integration technologies has made the decade-old concept of coupling compute units close to the memory, called near-memory computing (NMC), more viable. Processing right at the "home" of data can significantly diminish the data movement problem of data-intensive applications.…

Cited by 62 publications (29 citation statements)
References 86 publications (171 reference statements)
“…RMCs' approach is similar to previous efforts to enable near-memory compute [37] and programmable SSD controllers [9], except RMCs target remote memory access over the network fabric.…”
Section: RMC
mentioning
confidence: 99%
“…It is shown that the power consumption of data movement is hundreds of times higher than that of floating-point operations, [1] so system performance is mainly limited by memory speed, a constraint also known as “the memory wall.” [2] Meanwhile, the rise of emerging technologies such as the Internet of Things (IoT), artificial intelligence (AI), and Big Data, which require data-centric and low-power computing platforms, further magnifies the drawbacks of traditional computing systems. Strategies like adopting multiple levels of cache, [3] increasing memory bandwidth, [4] and near-memory computing [5] have been proposed to alleviate this problem, yet they still seem insufficient to meet the increasing computing demands.…”
Section: Introduction
mentioning
confidence: 99%
“…traditional computing systems. Strategies like adopting multiple levels of cache, [3] increasing memory bandwidth, [4] and near-memory computing [5] have been proposed to alleviate this problem, yet they still seem insufficient to meet the increasing computing demands. In-memory computing (IMC), which refers to the realization of computational tasks within the memory unit, aims to reduce the frequent movement of data across the bus (Figure 1b), providing new insights for building highly efficient computing systems. IMC, with its capability for parallel computation that effectively reduces either the computational complexity or the amount of data accessed, can be performed on both volatile and non-volatile memories.…”
mentioning
confidence: 99%
“…Researchers found that a crossbar array with programmable devices at cross-points has in-memory computing (IMC) capability, where data storage and computing can happen at the same place. IMC demonstrations have been carried out for different cross-point programmable devices, such as ReRAM, FLASH, SRAM, and others [1] [5] [10] [17] [19]. Among all the candidates, ReRAM is considered one of the most promising for its non-volatility, scalability, linearity, and multi-level programmability [6] [14] [18].…”
Section: Introduction
mentioning
confidence: 99%
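
The crossbar IMC principle described in the excerpt above can be illustrated with a short, idealized sketch: the stored matrix is encoded as device conductances at the cross-points, an input vector is applied as row voltages, and each column current accumulates a dot product in place via Ohm's law and Kirchhoff's current law. The Python snippet below is only a minimal behavioural model under those assumptions; the array size, conductance range, and number of programmable levels are illustrative choices, not values from the cited works.

import numpy as np

# Idealized ReRAM crossbar model: weights are stored as device conductances
# at the cross-points, so a matrix-vector product is computed "in place"
# by Ohm's law (I = G * V) and Kirchhoff's current law (column summation).

ROWS, COLS = 4, 3          # crossbar dimensions (illustrative)
G_MIN, G_MAX = 1e-6, 1e-4  # conductance range in siemens (assumed)
LEVELS = 16                # multi-level cell with 16 programmable states (assumed)

def program_crossbar(weights):
    # Map a normalized weight matrix (values in [0, 1]) to quantized conductances.
    steps = np.round(weights * (LEVELS - 1)) / (LEVELS - 1)   # multi-level quantization
    return G_MIN + steps * (G_MAX - G_MIN)                    # conductance per cell

def crossbar_mvm(conductances, voltages):
    # Each column current is the sum of G[i, j] * V[i]: an analog dot product.
    return conductances.T @ voltages                          # column (bit-line) currents

rng = np.random.default_rng(0)
weights = rng.random((ROWS, COLS))        # matrix to be stored in the array
v_in = rng.random(ROWS) * 0.2             # read voltages applied to the rows

G = program_crossbar(weights)
i_out = crossbar_mvm(G, v_in)             # currents read out at the columns
print("bit-line currents (A):", i_out)

In a physical array, nonidealities such as wire resistance, device variation, and read noise would perturb these currents, which is one reason the excerpt highlights linearity and multi-level programmability as strengths of ReRAM.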