The online collection of coarse-grained traffic information, such as the total number of flows, is gaining importance due to a wide range of applications, such as congestion control and network security. In this paper, we focus on an active queue management scheme called SRED because it estimates the number of active flows and uses that quantity to indicate the level of congestion. However, SRED has several limitations, such as instability in estimating the number of active flows and underestimation of active flows in the presence of non-responsive traffic. We present a Markov model to examine the capability of SRED in estimating the number of flows. We show how the SRED cache hit rate can be used to quantify the number of active flows. We then propose a modified SRED scheme, called hash-based two-level caching (HaTCh), which uses hashing and a two-level caching mechanism to accurately estimate the number of active flows under various workloads. Simulation results indicate that the proposed scheme provides a more accurate estimation of the number of active flows than SRED, stabilizes the estimation with respect to workload fluctuations, and prevents performance degradation by efficiently isolating non-responsive flows.

Keywords: Flow estimation, Markov model, non-responsive flows, SRED, HaTCh.

Manuscript received July 25, 2007; revised Oct. 18, 2007. A preliminary version of this paper was presented at IEEE CDC, Dec. 2003.

I. Introduction

Measuring and monitoring traffic is an important but admittedly difficult problem, primarily because of the huge volume of traffic to be processed in high-speed networks. To accurately capture the characteristics of network traffic, measuring devices have to maintain per-flow information. However, the complexity of these operations has been the main obstacle to deploying such devices in high-speed networks. To address this problem, various techniques have been proposed. For example, a sampling technique was standardized by the IETF Internet Protocol Flow Information Export (IPFIX) working group [1], and a variation focused on implementation issues was investigated in [2]. Recently, a variation of a hashing scheme called a space-code Bloom filter (SCBF) [3] was proposed to avoid the overhead of per-flow maintenance. An SCBF collects network traffic on the arrival of each packet and periodically stores it in permanent storage devices. The required network information can then be obtained offline. Therefore, SCBF can support fine-grained traffic information, which is essential for network management, planning, accounting, and billing.

On the other hand, obtaining coarse-grained traffic information, such as the total number of active flows, online is important for congestion control and network security [4]-[7]. In this context, flow counting techniques, called direct bitmap and multiresolution bitmap, have been proposed by Estan and others in [7]. These techniques are based on a bitmap data structure, in which each source is hashed into a bit, and a bit is marked when the source is...
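The direct-bitmap idea just described admits a compact sketch. The version below assumes the standard construction (hash each source to one bit position, mark it, and estimate the flow count from the fraction of unmarked bits via linear counting); the bitmap size and estimator are illustrative assumptions rather than the exact algorithm of [7].

```python
import hashlib
import math

class DirectBitmap:
    """Sketch of a direct-bitmap active-flow counter: each source is hashed to one bit,
    and the flow count is estimated from how many bits remain unset (linear counting)."""

    def __init__(self, num_bits: int = 4096):    # illustrative size; a real deployment sizes this to the expected flow count
        self.num_bits = num_bits
        self.bits = bytearray(num_bits)           # one byte per bit position, for simplicity

    def record(self, flow_id: str) -> None:
        # Hash the flow identifier (e.g., the source address) to a bit position and mark it.
        pos = int(hashlib.sha1(flow_id.encode()).hexdigest(), 16) % self.num_bits
        self.bits[pos] = 1

    def estimate_flows(self) -> float:
        # Linear-counting estimate: with z of b bits still zero, n is approximately -b * ln(z / b).
        zero = self.num_bits - sum(self.bits)
        if zero == 0:
            return float("inf")                   # bitmap saturated; a larger or multiresolution bitmap is needed
        return -self.num_bits * math.log(zero / self.num_bits)
```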
In addition to unresponsive UDP traffic, aggressive TCP flows pose a serious challenge to congestion control and the stability of the future Internet. This paper considers the problem of dealing with unresponsive TCP sessions that can collectively be considered to constitute a Denial-of-Service (DoS) attack on conforming TCP sessions. The proposed policing scheme, called HaDQ (HaTCh-based Dynamic Quarantine), is based on the recently proposed HaTCh mechanism, which accurately estimates the number of active flows without maintaining per-flow state in a router. We augment HaTCh with a small Content Addressable Memory (CAM), called quarantine memory, to dynamically quarantine and penalize unresponsive TCP flows. We exploit the smaller, first-level cache of HaTCh to isolate and detect the aggressive flows. The aggressive flows detected in the smaller cache are then moved to the quarantine memory and precisely monitored so that appropriate punitive action can be taken. While the proposed HaDQ technique is quite generic in that it can work with or without an AQM scheme, in this paper we integrate HaDQ with an AQM scheme to compare it against some of the existing techniques. For this, we extend the HaTCh scheme into a complete AQM mechanism, called HRED. Simulation-based performance analysis indicates that with a proper configuration of the monitoring period and the detection threshold, the proposed HaDQ scheme achieves a low false drop rate (false positives) of less than 0.1%. Comparison with two AQM schemes (CHOKe and FRED), which were proposed for handling unresponsive UDP flows, shows that HaDQ is more effective in penalizing bandwidth attackers and enforcing fairness between conforming and aggressive TCP flows.
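A rough sketch of the quarantine logic described above, under a simple interpretation in which flows whose hit counts in the first-level cache exceed a detection threshold within a monitoring period are moved to a quarantine table. The names MONITORING_PERIOD and DETECTION_THRESHOLD and the dictionary-based quarantine memory are illustrative stand-ins for the CAM-based design, not details taken from the paper.

```python
import time
from collections import defaultdict

# Illustrative parameters; the paper tunes the monitoring period and detection threshold empirically.
MONITORING_PERIOD = 1.0      # seconds
DETECTION_THRESHOLD = 100    # first-level-cache hits per period before a flow is suspected

class QuarantineSketch:
    """Toy model of HaDQ-style detection: count per-flow hits in a small first-level
    cache over a monitoring period, then quarantine flows that exceed the threshold."""

    def __init__(self):
        self.hit_counts = defaultdict(int)   # stands in for the small first-level cache
        self.quarantine = {}                 # stands in for the CAM quarantine memory
        self.window_start = time.time()

    def on_packet(self, flow_id: str) -> bool:
        """Return True if the packet should be penalized (e.g., dropped with higher probability)."""
        now = time.time()
        if now - self.window_start >= MONITORING_PERIOD:
            self._end_of_period()
            self.window_start = now
        if flow_id in self.quarantine:
            self.quarantine[flow_id] += 1    # precise per-flow monitoring of quarantined flows
            return True
        self.hit_counts[flow_id] += 1
        return False

    def _end_of_period(self):
        # Move flows that exceeded the detection threshold into the quarantine memory.
        for flow_id, hits in self.hit_counts.items():
            if hits > DETECTION_THRESHOLD:
                self.quarantine[flow_id] = 0
        self.hit_counts.clear()
```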
When individuals interact with one another to accomplish specific goals, they learn from others' experiences to achieve the tasks at hand. The same holds for learning in virtual environments, such as video games. Deep multiagent reinforcement learning has shown promising results on many challenging tasks. Most such algorithms rely on value decomposition, in which the combined Q-value of the agents is decomposed into individual agent Q-values that guide each agent's behavior. Value decomposition algorithms such as QMIX and QVMix use different mixing methods built on a monotonicity assumption, but they select each agent's action through a greedy policy and require large numbers of training trials. In this paper, we propose a novel hybrid policy for individual-agent action selection called Q-value Selection using Optimization and DRL (QSOD), in which a grey wolf optimizer (GWO) determines the agents' action choices. As in GWO, coordination among the agents provides appropriate attention to one another. We used the StarCraft 2 Learning Environment to compare our proposed algorithm with the state-of-the-art algorithms QMIX and QVMix. Experimental results demonstrate that our algorithm outperforms QMIX and QVMix in all scenarios and requires fewer training trials.
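The GWO-based selection step lends itself to a small illustrative sketch. The fitness function below (scoring a candidate joint action by the sum of the agents' individual Q-values) and all parameter settings are assumptions made for illustration; they are not the QSOD formulation itself, which couples GWO with the learned value decomposition.

```python
import numpy as np

def gwo_select_actions(q_values: np.ndarray, wolves: int = 20, iters: int = 30, seed: int = 0):
    """Select a joint action for n_agents agents from per-agent Q-values of shape
    (n_agents, n_actions) by searching with a grey wolf optimizer instead of a
    per-agent greedy argmax. The fitness below is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    n_agents, n_actions = q_values.shape

    def fitness(pos):
        # Round the relaxed (continuous) positions to discrete action indices and score them.
        acts = np.clip(np.rint(pos), 0, n_actions - 1).astype(int)
        return q_values[np.arange(n_agents), acts].sum()

    # Initialize wolf positions uniformly over the relaxed action space.
    X = rng.uniform(0, n_actions - 1, size=(wolves, n_agents))
    for t in range(iters):
        scores = np.array([fitness(x) for x in X])
        leaders = X[np.argsort(scores)[::-1][:3]]   # alpha, beta, delta: the three best wolves
        a = 2 - 2 * t / iters                        # control parameter decays from 2 toward 0
        for i in range(wolves):
            candidates = []
            for leader in leaders:
                r1, r2 = rng.random(n_agents), rng.random(n_agents)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])
                candidates.append(leader - A * D)
            X[i] = np.clip(np.mean(candidates, axis=0), 0, n_actions - 1)
    best = X[np.argmax([fitness(x) for x in X])]
    return np.clip(np.rint(best), 0, n_actions - 1).astype(int)
```

In a QMIX-style pipeline, a routine like this would take the place of the per-agent greedy argmax at action-selection time.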