Proceedings. IEEE INFOCOM '98, the Conference on Computer Communications. Seventeenth Annual Joint Conference of the IEEE Computer and Communications Societies
DOI: 10.1109/infcom.1998.659664

Implementing distributed packet fair queueing in a scalable switch architecture

Abstract: To support the Internet's explosive growth and expansion into a true integrated services network, there is a need for cost-effective switching technologies that can simultaneously provide high capacity switching and advanced QoS. Unfortunately, these two goals are largely believed to be contradictory in nature. To support QoS, sophisticated packet scheduling algorithms, such as Fair Queueing, are needed to manage queueing points. However, the bulk of current research in packet scheduling algorithms as…
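The abstract refers to Fair Queueing-style packet schedulers managing the switch's queueing points. As background only, the following is a minimal sketch of a self-clocked fair queueing scheduler of the kind the abstract alludes to; it is not the paper's distributed architecture, and the class and field names are invented for illustration.

import heapq

class FairQueueScheduler:
    # Minimal self-clocked fair queueing sketch (illustrative only, not the
    # paper's distributed design). Each packet is tagged with a virtual
    # finish time, and the packet with the smallest tag is served first.
    def __init__(self):
        self.virtual_time = 0.0   # advances to the tag of the packet in service
        self.last_finish = {}     # per-flow finish tag of the most recent arrival
        self.heap = []            # entries: (finish_tag, seq, flow_id, length)
        self.seq = 0              # tie-breaker for equal tags

    def enqueue(self, flow_id, length, weight):
        # Timestamp: start at max(current virtual time, flow's previous finish
        # tag), then add the packet's normalized service time.
        start = max(self.virtual_time, self.last_finish.get(flow_id, 0.0))
        finish = start + length / weight
        self.last_finish[flow_id] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow_id, length))
        self.seq += 1

    def dequeue(self):
        # Serve the packet with the smallest finish tag and advance the
        # virtual time to that tag.
        if not self.heap:
            return None
        finish, _, flow_id, length = heapq.heappop(self.heap)
        self.virtual_time = finish
        return flow_id, length

A flow with twice the weight accumulates finish tags half as fast, so over a busy period it receives roughly twice the bandwidth; this per-queueing-point discipline is what the paper's distributed architecture aims to provide at high switching capacity.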

Cited by 79 publications (56 citation statements)
References 22 publications
“…Additionally, the loosely-coupled input and output schedulers are able to find very efficient long-term solutions to the crossbar scheduling problem, with capability for advanced QoS, without requiring speedup [6] [7] [8] [9] [10]. These facts allow significant cost reductions, since they eliminate the need for speedup and egress buffering.…”
Section: Introduction
confidence: 99%
“…With multicell output buffers, a credit scheduler may produce grants in consecutive time-slots, in addition to a first pending grant, thus providing matching opportunities for other inputs as well. This means that two or more input (grant) schedulers may select a grant/credit from the same output at the same time, and thus multiple cells may reach an output buffer at the same time.…”
Section: Switch Description
confidence: 99%
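The excerpt above describes a per-output credit (grant) scheduler that, thanks to a multicell output buffer, can keep issuing grants in consecutive time-slots rather than stalling on a single pending grant. A rough sketch of that bookkeeping is shown below; the class and method names are hypothetical and the selection policy is simplified to round-robin.

class OutputCreditScheduler:
    # Sketch of per-output credit/grant bookkeeping with a multicell output
    # buffer (names and policy are illustrative, not taken from the cited papers).
    def __init__(self, num_inputs, buffer_cells):
        self.requests = [0] * num_inputs   # pending requests per input
        self.credits = buffer_cells        # free cells in this output's buffer
        self.pointer = 0                   # round-robin pointer over inputs

    def request(self, input_port, n=1):
        self.requests[input_port] += n

    def grant(self):
        # Issue at most one grant this time-slot if a buffer cell is free.
        # Because the buffer holds several cells, grants can be produced in
        # consecutive slots before earlier granted cells have departed.
        if self.credits == 0:
            return None
        n = len(self.requests)
        for i in range(n):
            port = (self.pointer + i) % n
            if self.requests[port] > 0:
                self.requests[port] -= 1
                self.credits -= 1
                self.pointer = (port + 1) % n
                return port
        return None

    def cell_departed(self):
        # A cell left the output buffer; its credit is returned.
        self.credits += 1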
“…The buffered crossbar has one buffer per crosspoint (combined input-crosspoint queueing, CICQ), and has received much research attention recently because it features simple and efficient scheduling [5] [6] [7] [8].…”
Section: Introduction
confidence: 99%
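The buffered-crossbar (CICQ) organization described in this excerpt keeps a small buffer at every crosspoint and relies on backpressure to the ingress line cards to keep those buffers from overflowing. A toy model of the data structure and the two decoupled schedulers is sketched below; the class name, the two-cell crosspoint limit, and the round-robin egress policy are assumptions for illustration.

from collections import deque

class BufferedCrossbar:
    # Toy CICQ model: one small FIFO per (input, output) crosspoint, with
    # backpressure when a crosspoint buffer is full (illustrative only).
    def __init__(self, ports, crosspoint_cells=2):
        self.ports = ports
        self.limit = crosspoint_cells
        self.xp = [[deque() for _ in range(ports)] for _ in range(ports)]
        self.out_ptr = [0] * ports   # per-output round-robin pointer

    def can_accept(self, inp, out):
        # Backpressure signal toward the ingress line card for VOQ (inp, out).
        return len(self.xp[inp][out]) < self.limit

    def ingress_send(self, inp, out, cell):
        # Input-side scheduler forwards a cell only if backpressure allows it.
        if not self.can_accept(inp, out):
            return False   # line card keeps the cell in its virtual output queue
        self.xp[inp][out].append(cell)
        return True

    def egress_pick(self, out):
        # Output-side scheduler independently drains non-empty crosspoints
        # with a rotating pointer, decoupled from the input schedulers.
        for i in range(self.ports):
            inp = (self.out_ptr[out] + i) % self.ports
            if self.xp[inp][out]:
                self.out_ptr[out] = (inp + 1) % self.ports
                return inp, self.xp[inp][out].popleft()
        return None

Because admission is gated only by the local crosspoint occupancy, the input and output schedulers never have to agree on a global matching within a single time-slot, which is what the cited work credits for removing the need for internal speedup.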
“…Recently, buffered crossbars (combined input-crosspoint queueing, CICQ) have emerged as an advantageous architecture; they contain small buffers at their crosspoints, and use backpressure to the ingress line cards to prevent these crosspoint buffers from overflowing. The first observation about buffered crossbars concerned the simplicity and high efficiency of their scheduling [6]-[10]; no internal speedup is needed to compensate for scheduler inefficiencies, thus allowing the increase of port speed. A subsequent observation was that buffered crossbars can directly switch variable-size packets [6] [11].…”
confidence: 99%
“…The first observation about buffered crossbars concerned the simplicity and high efficiency of their scheduling [6]-[10]; no internal speedup is needed to compensate for scheduler inefficiencies, thus allowing the increase of port speed. A subsequent observation was that buffered crossbars can directly switch variable-size packets [6] [11]. Doing so, without any segmentation, eliminates the need for speedup to cope with cell-padding overhead; in turn, the lack of speedup eliminates egress queueing, and the lack of segmentation eliminates reassembly buffers, thus reducing cost [12].…”
confidence: 99%
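The last excerpt points out that a buffered crossbar can switch variable-size packets directly, avoiding segmentation into fixed cells and therefore the cell-padding overhead that otherwise has to be absorbed by internal speedup, as well as the egress reassembly buffers. The arithmetic behind that claim can be illustrated with a small sketch; the 64-byte cell payload is an assumed figure, not one taken from the cited papers.

import math

CELL_PAYLOAD = 64  # bytes carried per fixed cell in the segmented case (assumption)

def cells_needed(packet_len):
    # Cells required if a variable-size packet is segmented into fixed cells;
    # the final cell is padded out to the full cell size.
    return math.ceil(packet_len / CELL_PAYLOAD)

def padding_overhead(packet_len):
    # Bytes of padding the switch core must carry in addition to the packet.
    return cells_needed(packet_len) * CELL_PAYLOAD - packet_len

# A 65-byte packet occupies two 64-byte cells (63 bytes of padding), roughly
# doubling the internal load; switching it as a single variable-size unit
# through the crosspoint buffer avoids the padding and the reassembly stage.
for length in (65, 500, 1500):
    print(length, cells_needed(length), padding_overhead(length))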