Fog computing is a recent computing paradigm that extends the cloud to the edge of the network. It is designed for applications that require low latency and was proposed to mitigate the drawbacks of cloud computing. The system must contend with dynamic resources that are heterogeneous and distributed, so efficient scheduling and resource allocation are necessary to maximize both resource utilization and user satisfaction. In this paper, a resource-aware scheduler, RACE (Resource Aware Cost-Efficient Scheduler), is proposed to distribute incoming application modules to fog devices in a way that maximizes resource utilization at the fog layer and reduces the monetary cost of using cloud resources while minimizing application execution time and bandwidth usage. RACE comprises two algorithms: ModuleScheduler categorizes incoming application modules according to their computation and bandwidth requirements, and CompareModule then places them. Comprehensive experimental results obtained from simulation with the iFogSim simulator show that our approach outperforms traditional cloud placement and the baseline algorithm in most cases.

INDEX TERMS Cloud/fog environment, scheduling, Internet of Things (IoT)
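The abstract does not specify RACE's exact categorization and placement rules, but the idea of sorting modules by their computation and bandwidth demands and placing them on fog devices before falling back to the cloud can be sketched as follows. All class and function names here are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    mips: float   # computation requirement (million instructions per second)
    bw: float     # bandwidth requirement

@dataclass
class Device:
    name: str
    mips_free: float
    bw_free: float
    placed: list = field(default_factory=list)

def schedule(modules, fog_devices, cloud):
    """Greedy sketch: sort modules by combined demand, place each on the
    first fog device with enough spare capacity; overflow goes to the cloud."""
    for m in sorted(modules, key=lambda m: m.mips + m.bw, reverse=True):
        target = next((d for d in fog_devices
                       if d.mips_free >= m.mips and d.bw_free >= m.bw), cloud)
        target.mips_free -= m.mips
        target.bw_free -= m.bw
        target.placed.append(m.name)
    return {d.name: d.placed for d in fog_devices + [cloud]}
```

Preferring fog devices and using the cloud only as overflow reflects the abstract's goals of maximizing fog-layer utilization and reducing cloud monetary cost.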
The ever-growing number of Internet of Things (IoT) devices increases the amount of data produced on a daily basis. To handle such a massive amount of data, cloud computing provides storage, processing, and analytical services. However, real-time applications such as online gaming, smart traffic management, and smart healthcare cannot tolerate high latency and bandwidth consumption. The fog computing paradigm brings cloud services closer to the network edge to provide quality of service (QoS) to such applications. Because fog resources are heterogeneous, resource-constrained, and distributed, efficient task scheduling becomes critical for improving performance. With an efficient task scheduling algorithm, the response time to application requests can be reduced along with bandwidth and cloud resource costs. This paper presents a genetic algorithm-based solution for mapping application modules in a cloud-fog computing environment. Our proposed solution uses execution time as the fitness function to determine an efficient module schedule on the available fog devices. The proposed approach has been evaluated against baseline algorithms in terms of execution time, monetary cost, and bandwidth. Comprehensive simulation results show that the proposed approach offers a better scheduling strategy than the existing scheduler.
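A genetic algorithm with execution time as the fitness function can be sketched as below. The chromosome encoding (module index to device index), elitist selection, single-point crossover, and mutation rate are generic GA assumptions on my part; the paper's actual operators and parameters are not given in the abstract:

```python
import random

def makespan(assign, mod_mips, dev_mips):
    """Fitness: per-device load divided by device speed; the slowest
    device determines the schedule's execution time."""
    load = [0.0] * len(dev_mips)
    for m, d in enumerate(assign):
        load[d] += mod_mips[m]
    return max(l / s for l, s in zip(load, dev_mips))

def ga_schedule(mod_mips, dev_mips, pop=30, gens=100, seed=0):
    """Evolve module-to-device assignments, minimizing makespan."""
    rng = random.Random(seed)
    n, k = len(mod_mips), len(dev_mips)
    popn = [[rng.randrange(k) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: makespan(c, mod_mips, dev_mips))
        elite = popn[: pop // 2]                 # keep the fitter half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)            # single-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # occasional mutation
                child[rng.randrange(n)] = rng.randrange(k)
            children.append(child)
        popn = elite + children
    return min(popn, key=lambda c: makespan(c, mod_mips, dev_mips))
```

In practice the fitness function could be extended with bandwidth and monetary-cost terms, matching the evaluation metrics the paper reports.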
In the past few years, mobile data traffic has grown exponentially with the emergence of smart applications. Although throughput-enhancement techniques such as macro- and femtocells reduce cell size, they are relatively expensive to deploy. Mobile device-to-device (D2D) communication has emerged as a solution to support the growing popularity of multimedia content for local service in next-generation 5G cellular networks. Content sharing is its prominent feature: it helps D2D communication offload traffic from the network, improve the energy efficiency of devices, and reduce backhaul connectivity costs. In traditional mapping approaches such as one-to-one or one-to-many, a massive amount of traffic is distributed among the devices, resulting in high energy consumption. In this paper, we propose a novel energy-efficient content sharing scheme, the Energy-Efficient Collaboration-based Content (EECC) sharing strategy for D2D communication, which shares content across devices in proportion to their capacities and battery life under mobility. The proposed work includes cluster formation, cluster head selection, and helper node selection. In addition, we rely on a cooperative caching policy to ensure that content is distributed efficiently. The simulation results indicate a 12.05% reduction in energy consumption compared to the state-of-the-art technique with a 2-gigabyte video file. To evaluate scalability, we increased the file size from 3 to 4 gigabytes, and energy consumption remained unchanged.
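One plausible reading of sharing content "based on capacities and battery life" is a weighted split among the selected helper nodes. The following is a minimal sketch under that assumption; the weighting formula and function name are hypothetical, not taken from the paper:

```python
def share_content(file_mb, helpers):
    """Split a file among helper devices in proportion to
    capacity * battery level, so stronger devices carry larger chunks.

    helpers: dict mapping device name -> (capacity, battery_level)
    Returns: dict mapping device name -> chunk size in MB.
    """
    weights = {h: cap * batt for h, (cap, batt) in helpers.items()}
    total = sum(weights.values())
    return {h: file_mb * w / total for h, w in weights.items()}
```

For example, with a 2-gigabyte (2048 MB) file, a helper with twice the capacity and full battery would receive roughly twice the chunk of a weaker or half-drained helper, avoiding the uniform traffic distribution the abstract criticizes in one-to-one and one-to-many mapping.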