Recent developments in intelligent transport systems (ITS) based on smart mobility significantly improve safety and security on roads and highways. ITS networks comprise Internet-connected vehicles (mobile nodes), roadside units (RSUs), cellular base stations and conventional core network routers, forming a complete data transmission platform that provides real-time traffic information and enables prediction of future traffic conditions. However, the heterogeneity and complexity of the underlying ITS networks raise new challenges in intrusion prevention for mobile network nodes and in the detection of security attacks exploiting such highly vulnerable nodes. In this paper, we consider a new type of security attack referred to as the crossfire attack, in which a large number of compromised nodes generate low-intensity traffic in a temporally coordinated fashion such that target links or hosts (victims) are disconnected from the rest of the network. Detecting such attacks is challenging because the attack traffic flows are indistinguishable from legitimate flows. With the support of software-defined networking, which enables dynamic network monitoring and traffic characteristic extraction, we develop a machine learning model that learns the temporal correlation among traffic flows traversing the ITS network, thus differentiating legitimate flows from coordinated attack flows. We use different deep learning algorithms to train the model and study its performance on the Mininet-WiFi emulation platform. The results show that our approach achieves a detection accuracy of at least 80%.
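The key idea above, that coordinated attack flows share temporal structure which legitimate flows lack, can be sketched with a toy classifier. This is an illustrative example only, not the paper's actual architecture: the synthetic "flows" (windows of per-flow rate samples), the shared burst schedule, and the single-hidden-layer network are all assumptions made here for demonstration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
T = 20  # rate samples per flow window

# Legitimate flows: rates vary independently over time.
legit = rng.uniform(0.0, 1.0, size=(500, T))

# Attack flows: low-intensity, but temporally coordinated --
# every flow follows a shared schedule plus small noise.
schedule = rng.uniform(0.1, 0.3, size=T)
attack = schedule + rng.normal(0.0, 0.02, size=(500, T))

X = np.vstack([legit, attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = legitimate, 1 = attack
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A small neural network picks up the correlation pattern
# that individual-flow rate thresholds would miss.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"detection accuracy: {acc:.2f}")
```

On this easily separable synthetic data the classifier performs near-perfectly; the paper's reported 80%+ accuracy is against realistic emulated traffic, which is a much harder setting.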
Having received significant attention in the industry, the cloud market is nowadays fiercely competitive, with many cloud providers. On one hand, cloud providers compete against each other for both existing and new cloud users. To retain existing users and attract newcomers, it is crucial for each provider to offer an optimal price policy that maximizes its final revenue and improves its competitive advantage. The competition among providers drives the evolution of the market and makes resource prices dynamic over time. On the other hand, cloud providers may cooperate with each other to improve their final revenue. Based on a Service Level Agreement, a provider can outsource its users' resource requests to a partner to reduce operation costs and thereby improve its final revenue. This leads to the problem of determining the cooperating parties in a cooperative environment. This paper tackles these two issues of the current cloud market. First, we address the competition among providers and propose a dynamic price policy. We employ a discrete choice model to describe a user's choice behavior based on the benefit value the user obtains; the choice model yields the probability of a user choosing to be served by a certain provider. The competition among providers is formulated as a non-cooperative stochastic game in which the players are providers who act by simultaneously proposing price policies. The game is modelled as a Markov Decision Process whose solution is a Markov Perfect Equilibrium. Then, we address the cooperation among providers by presenting a novel algorithm for determining a cooperation strategy that tells providers whether to satisfy users' resource requests locally or outsource them to a certain provider. The algorithm yields the optimal cooperation structure, from which no provider can unilaterally deviate to gain more revenue. Numerical simulations are carried out to evaluate the performance of the proposed models.
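A common instance of the discrete choice model mentioned above is the multinomial logit, in which the probability of a user choosing a provider grows with the benefit value that provider offers. The paper's exact utility specification may differ; the benefit values below are hypothetical numbers chosen purely for illustration.

```python
import math

def logit_choice_probabilities(benefits):
    """Multinomial logit: P(i) = exp(b_i) / sum_j exp(b_j)."""
    m = max(benefits)  # subtract the max for numerical stability
    exps = [math.exp(b - m) for b in benefits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical benefit values a user derives from three competing
# providers (e.g. perceived utility minus price).
probs = logit_choice_probabilities([2.0, 1.0, 0.5])
print([round(p, 3) for p in probs])
```

The probabilities sum to one, and the provider offering the highest benefit is chosen most often, which is exactly the mechanism a provider's price policy manipulates in the game formulation.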
With the proliferation of network devices and rapid development in information technology, networks such as the Internet of Things are growing in size and becoming more complex, with heterogeneous wired and wireless links. In such networks, link faults may result in a link disconnection without immediate replacement or in a link reconnection, e.g., when a wireless node changes its access point. Identifying whether a link disconnection or a link reconnection has occurred, and localizing the failed link, becomes a challenging problem. An active probing approach requires a long time to probe the network by sending signaling messages along different paths, thus incurring significant communication delay and overhead. In this paper, we adopt a passive approach and develop a three-stage machine learning-based technique, namely ML-LFIL, that identifies and localizes link faults by analyzing measurements captured from normal traffic flows, including aggregate flow rate, end-to-end delay and packet loss. ML-LFIL learns the traffic behavior under normal working conditions and under different link fault scenarios. We train the learning model using support vector machines, multi-layer perceptrons and random forests. We implement ML-LFIL and carry out extensive experiments on the Mininet platform. Performance studies show that ML-LFIL achieves high accuracy while requiring much less fault localization time than the active probing approach.
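The passive classification step can be sketched as follows, using one of the learners the abstract names (a random forest) on synthetic measurements. The feature choice (aggregate rate, end-to-end delay, packet loss) follows the abstract, but the class definitions, distributions, and all numeric values below are assumptions for illustration, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def sample(n, rate, delay, loss):
    """Synthetic (aggregate rate, end-to-end delay, packet loss) rows."""
    return np.column_stack([
        rng.normal(rate, 0.5, n),
        rng.normal(delay, 2.0, n),
        rng.normal(loss, 0.005, n),
    ])

# Three hypothetical traffic regimes: normal operation,
# link disconnection, and link reconnection.
X = np.vstack([
    sample(300, rate=10.0, delay=20.0, loss=0.01),  # normal
    sample(300, rate=4.0,  delay=45.0, loss=0.08),  # disconnection
    sample(300, rate=8.0,  delay=30.0, loss=0.03),  # reconnection
])
y = np.repeat([0, 1, 2], 300)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```

Because the classifier only consumes measurements that normal traffic already produces, no extra probing messages are injected, which is where the delay and overhead savings over active probing come from.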
Cloud computing infrastructures provide resources on demand to meet the needs of large-scale distributed applications. Determining the amount of resources to allocate for a given computation is, however, a difficult problem. This paper introduces and compares four automated resource allocation strategies that rely on the expertise captured in workflow-based applications. The evaluation of these strategies was carried out on the Aladdin/Grid'5000 testbed using a real application from the area of medical image analysis. Experimental results show that optimized allocation can help find a tradeoff between the amount of resources consumed and application makespan.
Through the recent emergence of joint resource and network virtualization, dynamic composition and provisioning of time-limited and isolated virtual infrastructures is now possible. Another benefit of infrastructure virtualization is the capability of transparent reliability provisioning: reliability becomes a service provided by the infrastructure. In this context, we discuss the motivations for and gains of introducing customizable reliability of virtual infrastructures when executing large-scale distributed applications, and present a framework to specify, allocate and deploy virtualized infrastructures with reliability capabilities. An approach to efficiently specify and control reliability at runtime is proposed. We illustrate these ideas by analyzing the introduction of reliability at the virtual-infrastructure level for a real application. Experimental results, obtained with an actual medical-imaging application running in virtual infrastructures provisioned on the experimental large-scale Grid'5000 platform, show the benefits of the virtualization of reliability.