Second NASA/ESA Conference on Adaptive Hardware and Systems (AHS 2007) 2007
DOI: 10.1109/ahs.2007.30
An RLDRAM II Implementation of a 10Gbps Shared Packet Buffer for Network Processing

Cited by 9 publications (4 citation statements); references 10 publications.
“…This ensures its stability and future adoption. It is also already adopted in high-speed networking solutions [27], [28]. Moreover, RLDRAM is supported by several industry players such as Intel [37], Xilinx [38], Lattice [39], and Northwest Logic [40].…”
Section: Other Considerations: A Discussion
confidence: 99%
“…RLDRAM. RLDRAM memory is originally targeted at high-speed routers [27] and network processing [28]. Researchers also envisioned the usage of RLDRAM as a low-latency memory module in a heterogeneous memory system to increase overall memory performance [29], [30].…”
Section: Related Work
confidence: 99%
“…Reduced Latency DRAM (RLDRAM3 [30]) was designed as a deterministic-latency DRAM module for use in high-speed applications such as network controllers [40]. While the pin-bandwidth of RLDRAM3 is comparable to DDR3, its core latencies are extremely small, due to the use of many small arrays.…”
Section: RLDRAM
confidence: 99%
“…As a result, long DRAM latency remains a significant system performance bottleneck for many modern applications [243,252], such as in-memory databases [11,37,64,221,361], data analytics (e.g., Spark) [22,23,64,366], graph traversals [7,351,365], pointer chasing workloads [116], Google's datacenter workloads [154], and buffers for network packets in routers or network processors [16,110,174,344,356,371]. For example, a recent study by Google reported that memory latency is more important than memory bandwidth for the applications running in Google's datacenters [154].…”
Section: Introduction
confidence: 99%