2019
DOI: 10.1016/j.micpro.2019.01.009

Processing data where it makes sense: Enabling in-memory computation

Cited by 169 publications (95 citation statements)
References 20 publications
“…The latency and energy associated with accessing data from the memory units are key performance bottlenecks for a range of applications, in particular, for the increasingly prominent AI-related workloads. [11] The energy cost associated with moving data is a key challenge for both severely energy constrained mobile and edge computing as well as high-performance computing in a cloud environment due to cooling constraints. The current approaches, such as using hundreds of processors in parallel [12] or application-specific processors, [13] are not likely to fully overcome the challenge of data movement.…”
Section: In-memory Computing (mentioning)
confidence: 99%
“…Many studies have already pointed out the memory bottleneck problem and highlighted the need to shift the paradigm to data centric computing [14], [15]. They have shown that the consideration of PIM-specific constraints, a new architecture for the PIM, the set of memory intensive benchmarks to benefit from the PIM and the methods to accurately identify the PIM offloading candidates are necessary.…”
Section: Related Work (mentioning)
confidence: 99%
“…They have shown that the consideration of PIM-specific constraints, a new architecture for the PIM, the set of memory intensive benchmarks to benefit from the PIM and the methods to accurately identify the PIM offloading candidates are necessary. Also, the simulation infrastructure to measure the performance of the PIM should be established [14], [15].…”
Section: Related Work (mentioning)
confidence: 99%
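The "methods to accurately identify the PIM offloading candidates" mentioned in the excerpt above are typically profile-driven. As a minimal sketch, not a method taken from the cited papers, the hypothetical heuristic below flags functions as offloading candidates when profiling suggests they are memory-bound; the metric names and threshold values are illustrative assumptions.

```python
# Hypothetical sketch: flagging PIM offloading candidates from profiling data.
# Metric names and thresholds are illustrative assumptions, not values from the
# cited works.

from dataclasses import dataclass

@dataclass
class FunctionProfile:
    name: str
    llc_mpki: float        # last-level cache misses per kilo-instruction
    dram_bw_gbps: float    # average DRAM bandwidth demand while the function runs
    instructions: int      # dynamic instruction count attributed to the function

def pim_offload_candidates(profiles, mpki_threshold=10.0, bw_threshold=20.0,
                           min_instructions=1_000_000):
    """Return functions that look memory-bound enough to benefit from PIM.

    A function qualifies when it misses frequently in the last-level cache,
    demands high DRAM bandwidth, and is hot enough that offloading it could
    matter for overall runtime.
    """
    return [
        p.name
        for p in profiles
        if p.llc_mpki >= mpki_threshold
        and p.dram_bw_gbps >= bw_threshold
        and p.instructions >= min_instructions
    ]

# Example usage with made-up profile numbers.
profiles = [
    FunctionProfile("pointer_chase_kernel", llc_mpki=45.2, dram_bw_gbps=38.0,
                    instructions=12_000_000),
    FunctionProfile("small_compute_loop", llc_mpki=0.8, dram_bw_gbps=2.1,
                    instructions=9_000_000),
]
print(pim_offload_candidates(profiles))   # ['pointer_chase_kernel']
```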
“…Allowing PBF to modify these code pages enables dynamic code rewriting [62]. PBF can also be used for replicating data between hosts [82] or within a host [18], checkpointing [16], persistent memory logging [79], memory compression [63] and encryption [33], near-memory processing [55], offloading memory management tasks, etc. Table 2.…”
Section: Other Use Cases (mentioning)
confidence: 99%