1999
DOI: 10.1109/54.748803
Computational RAM: implementing processors in memory

Cited by 180 publications (67 citation statements)
References 9 publications
“…Several processor-in-memory architectures have been proposed, such as computational RAM (C·RAM) [4], smart memories [5], and intelligent RAM (IRAM) [6]. These approaches maintain the separation between a low-swing/low-SNR memory array and high-swing/high-SNR logic.…”
Section: Introduction
Confidence: 99%
“…Since PIM processors are usually not as sophisticated as state-of-the-art microprocessors due to on-chip space constraints, systems using PIMs alone in a multiprocessor may sacrifice performance on uniprocessor computations [Saulsbury96][Kogge94], while SoC (System-on-a-Chip) solutions (e.g., the IRAM [Patterson97] and the Mitsubishi M32R/D [Mitsubishi99]) limit the application domain. DIVA's support for a broad range of familiar parallel programming paradigms, including task parallelism for irregular computations, distinguishes it from systems with restricted applicability (such as to SIMD parallelism [Elliot99][Gokhale95][Patterson97]), as well as from systems requiring a novel programming methodology or compiler technology to configure logic [Babb99] or to manage a complex memory, computation, and communication hierarchy [Kang99]. DIVA's PIM-to-PIM interconnect improves upon approaches that serialize communication through the host, which decrease bandwidth by introducing added traffic on the processor memory bus [Oskin98][Gokhale95].…”
Section: Introduction
Confidence: 99%
“…Computation in main memory (e.g., Computational RAM [16]) has been studied, but it did not gain much traction due to the rapid advancement of I/O interfaces. However, in light of power-consumption walls and Big Data, moving computation to high-capacity secondary storage is becoming an attractive option.…”
Section: Related Work
Confidence: 99%