2004 Workshop on High Performance Switching and Routing (HPSR 2004)
DOI: 10.1109/hpsr.2004.1303414

On the design of hybrid DRAM/SRAM memory schemes for fast packet buffers

Abstract: In this paper we address the design of a packet buffer for future high-speed routers that support line rates as high as OC-3072 (160 Gb/s) and a high number of ports and service classes. We describe a general design for hybrid DRAM/SRAM packet buffers that exploits the bank organization of DRAM. This general scheme includes some previously proposed designs as particular cases. Based on this general scheme, we propose a new scheme that randomly chooses a DRAM memory bank for every transfer between SRAM and D…
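
The abstract's key idea, accumulating each queue's tail in SRAM and moving fixed-size blocks to a randomly chosen DRAM bank, can be illustrated with a small simulation sketch in Python. This is not the paper's implementation: the class name, the parameters (number of banks, block size, bank busy time) and the idle-bank selection policy below are assumptions made for illustration only.

```python
import random
from collections import deque

# Illustrative parameters, not taken from the paper: number of DRAM banks,
# block size b moved per SRAM<->DRAM transfer, and how many time slots a
# bank stays busy after accepting a transfer.
NUM_BANKS = 16
BLOCK_SIZE_B = 64
BANK_BUSY_SLOTS = 4

class HybridQueueBuffer:
    """Sketch of one logical queue in a hybrid DRAM/SRAM packet buffer.

    The queue tail accumulates in a small SRAM buffer; whenever a full
    block of b bytes is available it is written to a randomly chosen,
    currently idle DRAM bank, mirroring the random bank choice described
    in the abstract.
    """

    def __init__(self):
        self.sram_tail = bytearray()             # fast tail buffer (SRAM)
        self.dram_blocks = deque()               # (bank, payload) interior blocks
        self.bank_busy_until = [0] * NUM_BANKS   # slot at which each bank is free

    def enqueue(self, data: bytes, now: int) -> None:
        self.sram_tail.extend(data)
        while len(self.sram_tail) >= BLOCK_SIZE_B:
            block = bytes(self.sram_tail[:BLOCK_SIZE_B])
            del self.sram_tail[:BLOCK_SIZE_B]
            # Pick uniformly among banks that are idle in this slot; with
            # enough banks an idle one exists with high probability. The
            # fallback to a busy bank stands in for conflict handling,
            # which is not modelled here.
            idle = [bk for bk in range(NUM_BANKS) if self.bank_busy_until[bk] <= now]
            bank = random.choice(idle) if idle else random.randrange(NUM_BANKS)
            self.bank_busy_until[bank] = now + BANK_BUSY_SLOTS
            self.dram_blocks.append((bank, block))
```

A dequeue path would symmetrically refill a per-queue SRAM head buffer from these DRAM blocks; it is omitted here for brevity.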

Cited by 5 publications (5 citation statements)
References 10 publications

“…The static variety of RAM employs only latching circuitry to manage the storage, whereas the dynamic RAM variety uses a refreshing circuit to maintain the bit information constantly. SRAM has a faster response time but a lower packing density than DRAM [16]. In the comparative study, we have visualized the contribution of these storage types on a particular board.…”
Section: Memory Interfaces (mentioning)
confidence: 99%
“…Notice that an RList FIFO queue, due to its sequential data access, can be mostly kept in DRAM while supporting SDRAM-like access speed (more than 100 Gb/s). This is achieved by using SRAM-based buffers for the head and tail parts of each list, and storing internal items in several interleaved DRAM banks [19].…”
Section: PPQ - The Power Approach (mentioning)
confidence: 99%
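
The head/tail organization described in the snippet above can be sketched as follows. This is a minimal, purely illustrative Python model, assuming small SRAM head and tail buffers per FIFO and treating the interleaved DRAM banks as a single queue of interior items; the buffer sizes and names are hypothetical, not taken from the cited papers.

```python
from collections import deque

# Hypothetical per-FIFO SRAM buffer sizes; the snippet only states that head
# and tail parts live in SRAM and interior items in interleaved DRAM banks,
# not how large those buffers are.
HEAD_BUF_ITEMS = 8
TAIL_BUF_ITEMS = 8

class HybridFifo:
    """FIFO whose head and tail sit in SRAM and whose interior sits in DRAM."""

    def __init__(self):
        self.head = deque()   # SRAM: items about to be read
        self.tail = deque()   # SRAM: items recently written
        self.dram = deque()   # interior items, conceptually spread over DRAM banks

    def push(self, item) -> None:
        self.tail.append(item)
        if len(self.tail) > TAIL_BUF_ITEMS:
            # Spill the oldest tail item to DRAM; sequential access lets
            # these writes be batched across interleaved banks.
            self.dram.append(self.tail.popleft())

    def pop(self):
        if not self.head:
            self._refill_head()
        return self.head.popleft() if self.head else None

    def _refill_head(self) -> None:
        # Prefetch a batch into the SRAM head buffer, from DRAM first and
        # directly from the tail when the interior is empty, so that reads
        # are normally served at SRAM speed.
        while len(self.head) < HEAD_BUF_ITEMS and (self.dram or self.tail):
            self.head.append(self.dram.popleft() if self.dram else self.tail.popleft())
```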
“…The merge phase starts by initialization of the BPQ with the smallest keys of the sublists (lines 17-20). From now on, until all keys have been merged, we extract the smallest key in the list (line 23), put it in the output array, delete it from the BPQ, and insert a new one taken from the sublist that the extracted key originally came from (line 27), i.e.…”
Section: Power Sorting (mentioning)
confidence: 99%
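
The merge loop quoted above (seed a priority queue with the smallest key of each sublist, extract the minimum, emit it, and insert its successor from the same sublist) corresponds to a standard k-way merge. A sketch using Python's heapq in place of the cited paper's BPQ, with hypothetical function and variable names:

```python
import heapq

def kway_merge(sublists):
    """Merge sorted sublists as the snippet describes: seed a priority
    queue with the smallest key of every sublist, then repeatedly extract
    the minimum, append it to the output, and insert the successor key
    from the sublist the extracted key came from."""
    heap = []
    # Initialization: smallest key of each non-empty sublist, tagged with
    # its sublist index and position so the successor can be located.
    for i, sub in enumerate(sublists):
        if sub:
            heapq.heappush(heap, (sub[0], i, 0))

    output = []
    while heap:
        key, src, pos = heapq.heappop(heap)       # extract the smallest key
        output.append(key)                        # put it in the output array
        if pos + 1 < len(sublists[src]):          # insert its successor
            heapq.heappush(heap, (sublists[src][pos + 1], src, pos + 1))
    return output

# Example: kway_merge([[1, 4, 9], [2, 3, 8], [5, 6, 7]])
# returns [1, 2, 3, 4, 5, 6, 7, 8, 9]
```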
“…In this section, we describe a scheme, first presented in [18], that exploits DRAM bank organization as in [17] (see Section 4). It allows a data granularity of b < B and, thus, reduces the SRAM size.…”
Section: Random BAU Scheme (RBAU) (mentioning)
confidence: 99%
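
A rough back-of-envelope reading of "granularity b < B reduces the SRAM size": if each queue must hold one partially filled transfer unit in SRAM, that component of the SRAM scales with the transfer size. The numbers below are illustrative assumptions, not figures from the paper, and this is not the paper's sizing analysis.

```python
# Back-of-envelope only: assumes the tail SRAM holds one partially filled
# transfer unit per queue. Q, B and b are illustrative numbers, not values
# from the paper.
Q = 512    # queues (ports x service classes)
B = 512    # bytes per transfer with coarse granularity
b = 64     # bytes per transfer with the finer granularity (b < B)

tail_sram_coarse = Q * B   # 262144 bytes (~256 KiB)
tail_sram_fine = Q * b     # 32768 bytes (~32 KiB)
print(tail_sram_coarse, tail_sram_fine, tail_sram_coarse // tail_sram_fine)
```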
“…SRAM stores only the tails and heads of the queues in order to sustain the line rate, while DRAM stores the rest in order to provide the large storage that is needed. To our knowledge, the hybrid design proposals made in this field are [29], [17], and [18].…”
Section: Introduction (mentioning)
confidence: 99%