It has recently been observed that Software Defined Networks (SDN) can change the paths of different connections in the network at a relatively frequent pace to improve overall network performance, including delay and packet loss, or to respond to other needs such as security. These changes mean that a network controlled by SDN will seldom operate in steady state; rather, it may often be in a transient mode, especially when the network is heavily loaded and path changes are critically important. Hence, we propose a transient analysis of such networks to better understand how frequent changes in paths and in the switches' workloads may affect the performance of multi-hop networks. Since conventional queueing models are difficult to solve for transient behaviour, and simulations require excessive computation time to reach statistical accuracy, we use a diffusion approximation to study a multi-hop network controlled by SDN. The results show that network optimization should consider the transient effects of SDN, and that transients need to be included in the design of algorithms for SDN controllers that optimize network performance.
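As a rough illustration of the diffusion approach (not the paper's multi-hop SDN model), the sketch below simulates the standard one-dimensional diffusion approximation of a single queue, with drift β = λ − μ, instantaneous variance α = λC_A² + μC_B², and a reflecting barrier at zero; starting the process away from its stationary regime exposes the transient mean queue length. All parameter values and function names here are illustrative.

```python
import random

def diffusion_queue(lam, mu, ca2, cb2, x0, t_end, dt=0.05, n_paths=500, seed=1):
    """Euler-Maruyama simulation of the diffusion approximation of a single
    queue: dX = beta*dt + sqrt(alpha)*dW, reflected at zero, where
    beta = lam - mu and alpha = lam*ca2 + mu*cb2 (heavy-traffic form).
    Returns the mean queue length at time t_end, starting from x0, so the
    transient (not just steady-state) behaviour can be observed."""
    rng = random.Random(seed)
    beta = lam - mu
    alpha = lam * ca2 + mu * cb2
    steps = int(t_end / dt)
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(steps):
            x += beta * dt + (alpha * dt) ** 0.5 * rng.gauss(0.0, 1.0)
            x = max(x, 0.0)  # reflecting barrier: queue cannot go negative
        total += x
    return total / n_paths
```

For a stable queue (λ < μ) started with a large backlog, the mean decays toward its small stationary value; after a path change that overloads a switch (λ > μ), the mean grows roughly linearly, which is the transient effect the abstract argues SDN controllers should account for.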
The Internet of Things (IoT) is paving the way for the transition to the fourth industrial revolution through the rush to connect physical devices and systems to the internet. IoT is a promising technology for driving the agricultural industry, which is the backbone of sustainable development, especially in developing countries like those in Africa that are experiencing rapid population growth, stressed natural resources, reduced agricultural productivity due to climate change, and massive food wastage. In this paper, we assess the challenges to the adoption of IoT in agriculture in developing countries. We propose a cost-effective, energy-efficient, secure, reliable, and heterogeneous (IoT-protocol-independent) three-layer architecture for IoT-driven agriculture. The first layer consists of IoT devices and is made up of IoT-driven agriculture systems such as smart poultry, smart irrigation, theft detection, pest detection, crop monitoring, food preservation, and food supply chain systems. The IoT devices are connected to the gateways by a low-power LoRaWAN network. The gateways and the local processing servers co-located with them form the second layer. The third layer is the cloud layer, which exploits the open-source FIWARE platform to provide a set of public, free-to-use API specifications that come with open-source reference implementations.
Queueing theory has been extensively used in the modelling and performance analysis of cloud computing systems. The phenomenon of task (or request) reneging, that is, the dropping of requests from the request queue, often occurs in cloud computing systems, and it is important to consider it when developing performance evaluation models for cloud computing infrastructures. The majority of queueing-theoretic studies of cloud computing data centres do not consider the fact that tasks may be removed from the queue without being serviced. Such removal may be due to user impatience, execution deadline expiration, security reasons, or an active queue management strategy. Reneging can be correlated in nature: if a request is dropped (or reneges) at one time epoch, then there is a probability that a request may or may not be dropped at the next time epoch. This kind of dropping of requests is referred to as correlated request reneging. In this paper we model a cloud computing infrastructure with correlated request reneging using queueing theory. An M/M/1/N queueing model with correlated reneging is used to analyse the performance of the load-balancing server of a cloud computing system. Both steady-state and transient performance analyses are carried out. Important performance measures, such as the average queue size, average delay, probability of task blocking, and probability of no waiting in the queue, are studied. Finally, comparisons are performed that describe the effect of correlated task reneging relative to simple exponential reneging.
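A minimal sketch of the simple exponential-reneging baseline the paper compares against (not the correlated-reneging model itself): in an M/M/1/N queue where each waiting task independently reneges at rate ξ, the steady-state probabilities follow the standard birth-death balance equations, from which the average queue size and blocking probability are obtained. Parameter names are ours.

```python
def mm1n_reneging(lam, mu, xi, N):
    """Steady-state of an M/M/1/N queue in which each *waiting* task
    reneges independently at rate xi (plain exponential reneging).
    With n tasks in the system, n-1 are waiting, so the downward
    transition rate from state n+1 is mu + n*xi; the birth-death
    balance gives p[n+1] = p[n] * lam / (mu + n*xi).
    Returns (mean queue size, blocking probability p[N])."""
    p = [1.0]                       # unnormalised p[0]
    for n in range(N):
        p.append(p[-1] * lam / (mu + n * xi))
    z = sum(p)
    p = [x / z for x in p]          # normalise
    mean_q = sum(n * pn for n, pn in enumerate(p))
    return mean_q, p[N]
```

With ξ = 0 this reduces to the ordinary M/M/1/N queue, and increasing ξ shrinks both the mean queue size and the blocking probability, which is the qualitative effect reneging has on the load-balancing server.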
Today's telecommunication network infrastructure and services are changing dramatically, partly due to the rapid increase in the amount of traffic generated and transported. This rapid change is also driven by the demand for a high quality of service and by the recent interest in green networking, motivated by cutting carbon emissions and operating costs. Access networks generate short electronic packets of different sizes, which are aggregated into larger optical packets at the ingress edge nodes of the optical backbone network. These optical packets are transported transparently in the optical domain, reconverted into the electronic domain at the egress edge nodes, and delivered to the destination access networks. Packet aggregation provides many benefits at the level of metropolitan area networks (MANs) and core networks, such as increased spectral efficiency, energy efficiency, optimal resource utilisation, and simplified traffic management, which significantly reduces protocol and signalling overhead. However, packet aggregation introduces a performance bottleneck at the edge node, because packets from the access networks are temporarily stored in the aggregation buffers during the aggregation process. In this article, we apply the diffusion approximation model and other stochastic modelling methods to analytically evaluate the performance of a new packet aggregation mechanism developed specifically for an N-GREEN (Next Generation of Routers for Energy Efficiency) metro network. We obtain the distribution of the packet queue in the aggregation buffer, which determines the distribution of the waiting time (delay) experienced by packets in that buffer. We then demonstrate the influence of the probability p of successfully inserting the packet data units from the aggregation queue into the optical ring within a defined timeslot ∆. We also evaluate the performance of the complete ring by deriving the utilisation of each link.
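A toy slot-level simulation of the role of the insertion probability p (a strong simplification of the paper's diffusion model, which also accounts for packet sizes and the complete ring): in each timeslot ∆ a packet arrives at the aggregation buffer with some probability, and with probability p the buffer contents are successfully inserted onto the optical ring. The arrival model and all parameter values are our assumptions.

```python
import random

def simulate_aggregation(a, p, slots=200_000, seed=7):
    """Mean aggregation-buffer occupancy, observed at slot ends.
    Per timeslot: a packet arrives with probability a (Bernoulli
    arrivals, our simplifying assumption), then with probability p
    the whole buffer is inserted onto the optical ring."""
    rng = random.Random(seed)
    q = 0
    total = 0
    for _ in range(slots):
        if rng.random() < a:
            q += 1              # packet joins the aggregation buffer
        if rng.random() < p:
            q = 0               # buffer inserted onto the ring
        total += q
    return total / slots
```

In this simplified setting the mean occupancy is approximately a(1 − p)/p, so a low insertion probability inflates the buffer occupancy, and with it the waiting time, which is the bottleneck effect the abstract describes.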
Unmanned Aerial Vehicles (UAVs) are rapidly gaining popularity in a wide variety of applications, e.g., agriculture, health care, environmental management, supply chains, law enforcement, surveillance, and photography. Drones are often powered by batteries, making energy a critical resource that must be optimised during the mission. The duration of a drone's mission depends on the amount of energy required to perform manoeuvring actions (takeoff, level flight, hovering, and landing), the energy required to power the ICT modules on board, the drone's speed, its payload, and the wind. In this paper, we present a model that minimizes the energy consumption of a low-power drone, maximizing the time until the drone's battery is completely drained while ensuring a safe landing.
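An illustrative energy-budget sketch in the spirit of the abstract, summing the manoeuvring terms it lists (takeoff, level flight, hovering, landing) plus the ICT load. All power and energy figures below are invented placeholders, not values from the paper, and the effects of payload and wind are folded into the power constants.

```python
def mission_energy(d_m, v_ms, t_hover_s,
                   p_level_w=180.0,    # level-flight power (placeholder)
                   p_hover_w=210.0,    # hovering power (placeholder)
                   e_takeoff_wh=2.0,   # takeoff energy (placeholder)
                   e_land_wh=1.0,      # landing energy (placeholder)
                   p_ict_w=10.0):      # onboard ICT power (placeholder)
    """Total mission energy in watt-hours for a flight of d_m metres at
    v_ms m/s plus t_hover_s seconds of hovering. Division by 3600
    converts watt-seconds to watt-hours."""
    t_level = d_m / v_ms
    t_total = t_level + t_hover_s
    return (e_takeoff_wh + e_land_wh
            + (p_level_w * t_level
               + p_hover_w * t_hover_s
               + p_ict_w * t_total) / 3600.0)
```

Comparing this total against the battery capacity (with a reserve for landing) gives the kind of feasibility check an energy-aware mission planner would perform before committing to a flight.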