Abstract: In this paper we consider the general problem of resource provisioning in cloud computing. We analyze how to allocate resources to different clients such that the service level agreements (SLAs) of all of these clients are met. We propose a model with multiple service request classes, generated by different clients, to evaluate the performance of a cloud computing center when multiple SLAs are negotiated between the service provider and its customers. For each class, the SLA is specified by the request rejection probability of the clients in that class. The proposed solution supports cloud service providers in decisions about 1) defining realistic SLAs, 2) dimensioning data centers, 3) accepting new clients, and 4) reserving resources for high-priority clients. We illustrate the potential of the solution with a number of experiments conducted for a large, and therefore realistic, number of resources.
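The abstract's multi-class model is not reproduced here, but the single-class special case of a request rejection probability is the classical Erlang-B blocking probability of an M/M/c/c loss system. A minimal sketch, using the standard numerically stable Erlang-B recursion (the function name and parameters are illustrative, not from the paper):

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Rejection (blocking) probability of an M/M/c/c loss system,
    via the recursion B(n) = a*B(n-1) / (n + a*B(n-1)), B(0) = 1."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

A provider could, for instance, sweep `servers` upward until `erlang_b(servers, load)` drops below the rejection probability promised in the SLA, which corresponds to the data-center dimensioning decision mentioned above.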
Network robustness research aims at finding a measure to quantify network robustness. Once such a measure has been established, we will be able to compare networks, to improve existing networks, and to design new networks that continue to perform well when they are subject to failures or attacks. In this paper we survey a large number of robustness measures on simple, undirected, unweighted graphs, in order to offer network administrators a tool to evaluate and improve the robustness of their networks. The measures discussed in this paper are based on the concepts of connectivity (including reliability polynomials), distance, betweenness, and clustering. Other measures are notions from spectral graph theory; more precisely, they are functions of the Laplacian eigenvalues. In addition to surveying these graph measures, the paper also discusses their suitability as measures of topological network robustness.
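As a concrete instance of a spectral measure of the kind surveyed, the algebraic connectivity (the second-smallest Laplacian eigenvalue) is a standard robustness indicator: it is zero exactly when the graph is disconnected, and larger values indicate graphs that are harder to cut. A minimal sketch for a simple, undirected, unweighted graph given as an adjacency matrix (this is a textbook computation, not the paper's code):

```python
import numpy as np

def algebraic_connectivity(adj: np.ndarray) -> float:
    """Second-smallest eigenvalue of the Laplacian L = D - A.
    Zero iff the graph is disconnected; larger means more robust."""
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return float(eigenvalues[1])
```

For example, the triangle K3 has Laplacian spectrum {0, 3, 3} and thus algebraic connectivity 3, while the 3-node path has spectrum {0, 1, 3} and algebraic connectivity 1, matching the intuition that the triangle is the more robust topology.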
In this chapter we give an overview of statistical methods for anomaly detection (AD), targeting an audience of practitioners with a general knowledge of statistics. We focus on the applicability of the methods by stating and comparing the conditions under which they can be applied and by discussing the parameters that need to be set.
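To make the "parameters that need to be set" concern concrete, here is a sketch of one widely used statistical outlier rule, the modified z-score based on the median absolute deviation (a robust rule in the spirit of the methods surveyed, not necessarily one from the chapter); its only parameter is the flagging threshold:

```python
import statistics

def mad_outliers(samples, threshold=3.5):
    """Flag points whose modified z-score |0.6745*(x - median)/MAD|
    exceeds the threshold. The single tunable parameter is the
    threshold; 3.5 is a commonly recommended default."""
    med = statistics.median(samples)
    mad = statistics.median(abs(x - med) for x in samples)
    if mad == 0:  # degenerate case: more than half the data identical
        return [False] * len(samples)
    return [abs(0.6745 * (x - med) / mad) > threshold for x in samples]
```

Because it is based on medians rather than the mean and standard deviation, the rule tolerates the very anomalies it is meant to detect, which illustrates the kind of applicability condition the chapter compares across methods.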
DNS tunnels allow access and security policies in firewalled networks to be circumvented. Such a security breach can be misused for activities like free web browsing, but also for command & control traffic or cyber espionage, thus motivating the search for effective automated DNS tunnel detection techniques. In this paper we develop such a technique, based on the monitoring and analysis of network flows. Our methodology combines flow information with statistical methods for anomaly detection. The contribution of our paper is twofold. Firstly, based on flow-derived variables that we identified as indicative of DNS tunnelling activities, we identify and evaluate a set of non-parametric statistical tests that are particularly useful in this context. Secondly, the efficacy of the resulting tests is demonstrated by extensive validation experiments in an operational environment, covering many different usage scenarios.
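The paper's specific tests and flow variables are not reproduced here, but a representative non-parametric test of this kind is the two-sample Kolmogorov-Smirnov statistic, which compares a flow-derived variable (e.g., bytes per DNS flow) against a clean baseline without assuming any distribution. A minimal sketch (function name and scenario are illustrative):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    distance between the two empirical CDFs. Non-parametric, so no
    distributional assumption on the flow variable is needed."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = bisect.bisect_right(a, x) / len(a)
        cdf_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d
```

A detector would raise an alarm when the statistic between the current measurement window and the baseline exceeds a critical value; in practice a library routine such as `scipy.stats.ks_2samp` would also supply the p-value.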
Power grid vulnerability is a key issue for society. A component failure may trigger cascades of failures across the grid and lead to a large blackout. Complex network approaches have shown a direction for studying some of the problems faced by power grids. Within complex network analysis, structural vulnerabilities of power grids have mostly been studied using purely topological approaches, which assume that the flow of power is dictated by shortest paths. However, this fails to capture the real flow characteristics of power grids. We propose a flow redistribution mechanism that closely mimics the flow in power grids, using the Power Transfer Distribution Factor (PTDF). With this mechanism we enhance existing cascading failure models to study the vulnerability of power grids. We apply the model to the European high-voltage grid to carry out a comparative study of a number of centrality measures. 'Centrality' gives an indication of the criticality of network components. Our model offers a way to find the centrality measures that best indicate node vulnerability in the context of power grids, by considering not only the network topology but also the power flowing through the network. In addition, we use the model to determine the spare capacity that is needed to make the grid robust to targeted attacks. We also briefly compare the end results with other power grid systems to generalise our findings.
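The paper's redistribution mechanism itself is not reproduced here, but the PTDF it builds on is standard: under the DC power-flow approximation, the PTDF matrix gives the sensitivity of each branch flow to a unit injection at each bus (withdrawn at a slack bus). A minimal sketch (the function name and data layout are illustrative assumptions):

```python
import numpy as np

def ptdf_matrix(n_bus, branches, slack=0):
    """PTDF under the DC power-flow approximation.
    branches: list of (from_bus, to_bus, susceptance).
    Returns a (n_branches x n_bus) matrix: change in branch flow
    per unit injection at each bus, withdrawn at the slack bus."""
    n_br = len(branches)
    Bf = np.zeros((n_br, n_bus))      # branch-to-bus susceptance incidence
    Bbus = np.zeros((n_bus, n_bus))   # nodal susceptance matrix
    for k, (i, j, b) in enumerate(branches):
        Bf[k, i], Bf[k, j] = b, -b
        Bbus[i, i] += b
        Bbus[j, j] += b
        Bbus[i, j] -= b
        Bbus[j, i] -= b
    keep = [i for i in range(n_bus) if i != slack]
    X = np.zeros((n_bus, n_bus))      # reduced inverse, zero row/col at slack
    X[np.ix_(keep, keep)] = np.linalg.inv(Bbus[np.ix_(keep, keep)])
    return Bf @ X
```

On a 3-bus ring with equal line susceptances and bus 2 as slack, a unit injection at bus 0 splits 2/3 over the direct line to the slack and 1/3 over the two-hop path, which is exactly the non-shortest-path flow behaviour that purely topological models miss.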