In this paper we analyse a multiplexer handling a number of identical and independent Worst Case Traffic (WCT) sources. Each WCT source produces a periodic stream of cells consisting of a constant number of back-to-back cells followed by a silent period of constant duration. WCT can model the traffic produced by a "malicious" user who sends ON/OFF traffic in which a burst of back-to-back cells, whose length is the largest compatible with the tolerance introduced in the control function, alternates with an idle period whose length is the smallest compatible with the policed peak cell rate. WCT can also model, for example, the traffic produced by some ATM Adaptation Layer multiplexing schemes in the Terminal Equipment. Exact results are obtained, both for the discrete and the fluid-flow model. The numerical examples show the dramatic impact that WCT can have on the multiplexer buffer requirements. The model presented can be useful to assess the convenience of using a traffic shaping device at the entry point of the ATM network.
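The WCT source described above can be sketched as a simple discrete-time generator. This is an illustrative sketch only, not code from the paper; the parameter names `burst_cells` and `silent_slots` are hypothetical.

```python
from itertools import islice

def wct_source(burst_cells, silent_slots):
    """Yield 1 for a slot carrying a cell, 0 for a silent slot, forever.

    Models a periodic WCT source: a constant-length burst of back-to-back
    cells followed by a silent period of constant duration.
    """
    while True:
        for _ in range(burst_cells):
            yield 1  # back-to-back cells
        for _ in range(silent_slots):
            yield 0  # constant-duration silence

# Example: one full period of a source with 3 cells then 5 silent slots.
period = list(islice(wct_source(3, 5), 8))
# period == [1, 1, 1, 0, 0, 0, 0, 0]
```

The "malicious" user of the abstract corresponds to picking `burst_cells` as large and `silent_slots` as small as the policing function tolerates.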
The explosive and robust growth of the Internet owes a lot to the "end-to-end principle", which pushes stateful operations to the end-points. The Internet has grown both in traffic volume and in the richness of the applications it supports. This growth has also brought along new security issues and network monitoring applications. Edge devices, in particular, tend to perform upper-layer packet processing, and a whole new class of applications requires stateful processing. In this paper we study the impact of stateful networking applications on architectural bottlenecks. The analysis covers applications with a variety of statefulness levels. The study emphasizes data cache behavior, but we also discuss other issues, such as branch prediction and ILP. Additionally, we analyze the architectural impact over the TCP connection lifetime. Our results show an important memory bottleneck due to maintaining state. Moreover, depending on the target of the application, the memory bottleneck may be concentrated within a set of packets or distributed along the TCP connection lifetime.
Abstract-In this paper we address the design of a packet buffer for future high-speed routers that support line rates as high as OC-3072 (160 Gb/s) and a high number of ports and service classes. We describe a general design for hybrid DRAM/SRAM packet buffers that exploits the bank organization of DRAM. This general scheme includes some previously proposed designs as particular cases. Based on this general scheme we propose a new scheme that randomly chooses a DRAM memory bank for every transfer between SRAM and DRAM. The numerical results show that this scheme would require an SRAM size almost an order of magnitude lower than previously proposed schemes, without the problem of memory fragmentation.
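The core idea of the randomized scheme can be illustrated as follows. This is a hedged sketch under assumptions, not the paper's exact algorithm: it simply picks a DRAM bank uniformly at random among banks that are not currently busy, which is the spirit of randomizing bank choice to avoid bank conflicts.

```python
import random

def choose_bank(num_banks, busy):
    """Return a random DRAM bank index not in `busy`, or None if all busy.

    Sketch of randomized bank selection for an SRAM-to-DRAM transfer:
    spreading transfers uniformly over free banks avoids the systematic
    bank conflicts that deterministic mappings can suffer.
    """
    free = [b for b in range(num_banks) if b not in busy]
    return random.choice(free) if free else None
```

In a real buffer the `busy` set would track banks still serving a previous access; here it is just a plain set for illustration.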
In this paper, an analytic approximation is derived for the end-to-end delay jitter incurred by a periodic traffic stream with constant packet size. It is assumed that the periodic traffic is multiplexed with a background packet stream under the FCFS service discipline in each queue along the path to its destination. The processes governing the packet arrivals and the packet sizes of the background traffic are assumed to be general renewal processes. A very simple analytical approximation is derived and its accuracy is assessed by means of event-driven simulations.
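The kind of event-driven simulation mentioned above can be sketched for a single FCFS queue. This is an illustrative sketch, not the paper's simulator: the tagged stream is periodic with constant service time, the background stream is taken as Poisson with exponential service times (one instance of a renewal process), and all rates are hypothetical.

```python
import random

random.seed(1)

period, s_tagged = 10.0, 1.0  # hypothetical period and service time
tagged = [(k * period, s_tagged, True) for k in range(100)]

# Hypothetical background traffic: Poisson arrivals, exponential sizes.
bg, t = [], 0.0
while t < 1000.0:
    t += random.expovariate(0.5)
    bg.append((t, random.expovariate(1.0), False))

# FCFS single server: serve packets in arrival order, record tagged delays.
delays = []
free_at = 0.0
for t, s, is_tagged in sorted(tagged + bg):
    start = max(t, free_at)   # wait if the server is still busy
    free_at = start + s
    if is_tagged:
        delays.append(free_at - t)  # waiting time + service time

jitter = max(delays) - min(delays)  # one common peak-to-peak jitter measure
```

Because the tagged packets have constant size, all variation in `delays` comes from queueing behind background packets, which is exactly the jitter the approximation targets.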
The trend in network processing is to increase the intelligence of routers (e.g. security capabilities). This means that the workload generated per packet is increasing and new types of applications are emerging, such as stateful programs. On the other hand, Internet traffic continues to grow vigorously, which raises traffic aggregation levels and overloads the processing capacity of the routers. In this paper we show the importance of the traffic aggregation level in networking application studies. We also classify the applications according to the data management of their packet processing, and we present the different impacts on data cache performance depending on the application category. Our results show that the traffic aggregation level may affect cache performance depending on the networking application category; stateful applications show a significant sensitivity to this effect.
Abstract-In order to support the enormous growth of the Internet, innovative research in every router subsystem is needed. In this paper we focus on packet buffer design for routers supporting high-speed line rates. More specifically, we address the design of packet buffers using the Virtual Output Queuing (VOQ) discipline, which is used in most modern router architectures. The design is based on a previously proposed scheme that uses a combination of SRAM and DRAM modules. We propose a storage scheme that achieves a conflict-free memory bank organization. This leads to a reduction of the granularity of DRAM accesses, resulting in a decrease of the storage capacity needed by the SRAM. In the DRAM/SRAM scheme, the SRAM memory bandwidth needs to match the line rate. Since memory bandwidth is limited by its size, searching for memory schemes with a small SRAM size arises as an essential issue for high-speed line rates (e.g. OC-768, 40 Gb/s, and OC-3072, 160 Gb/s).