The promise of low-latency connectivity and efficient bandwidth utilization has driven the recent shift from vehicular cloud computing (VCC) towards vehicular edge computing (VEC). This paper presents a deep learning-based computation offloading algorithm for multilevel vehicular edge-cloud computing networks. To conserve energy and guarantee efficient use of the resources shared among multiple vehicles, an integrated model of computation offloading and resource allocation is formulated as a binary optimization problem that minimizes the total system cost in terms of time and energy. This problem is NP-hard and computationally prohibitive to solve, particularly for large numbers of vehicles, owing to the curse of dimensionality. Therefore, an equivalent reinforcement learning formulation is derived, and a distributed deep learning algorithm is proposed to find near-optimal offloading decisions, in which a set of deep neural networks is used in parallel. Simulation results show that the proposed algorithm converges quickly and significantly reduces the overall cost of the system compared with benchmark solutions.
INDEX TERMS: Computation offloading, vehicular edge-cloud computing, autonomous vehicles, 5G, resource allocation, deep reinforcement learning.
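The abstract describes a distributed scheme in which several deep neural networks propose binary offloading decisions in parallel and the lowest-cost proposal is fed back as a training label. The sketch below illustrates that idea under stated assumptions; the network sizes, the placeholder `system_cost` function, and the omission of the replay memory are simplifications, not the paper's actual design.

```python
# Minimal sketch of parallel-DNN offloading: K networks each propose a binary
# decision vector, the cheapest proposal is kept, and all networks are trained
# towards it. The cost model below is a placeholder, not the paper's.
import torch
import torch.nn as nn

NUM_VEHICLES, NUM_DNNS = 10, 4

def system_cost(state, decision):
    # Placeholder weighted time/energy cost; the real model depends on channel
    # gains, task sizes, and allocated computation/radio resources.
    return torch.sum(decision * state) + torch.sum((1 - decision) * 0.5)

dnns = [nn.Sequential(nn.Linear(NUM_VEHICLES, 64), nn.ReLU(),
                      nn.Linear(64, NUM_VEHICLES), nn.Sigmoid())
        for _ in range(NUM_DNNS)]
optims = [torch.optim.Adam(net.parameters(), lr=1e-3) for net in dnns]
loss_fn = nn.BCELoss()

state = torch.rand(NUM_VEHICLES)            # e.g. normalized channel gains
# 1) each DNN proposes a relaxed decision, quantized to binary (offload or not)
candidates = [(net(state) > 0.5).float() for net in dnns]
# 2) keep the candidate with the lowest total time/energy cost
best = min(candidates, key=lambda d: system_cost(state, d).item())
# 3) use the best decision as a label to train every DNN (a full implementation
#    would store (state, best) pairs in a replay memory and train on batches)
for net, opt in zip(dnns, optims):
    opt.zero_grad()
    loss = loss_fn(net(state), best)
    loss.backward()
    opt.step()
```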
Tomato is one of the most important vegetables worldwide and is considered a mainstay of many countries' economies. However, tomato crops are vulnerable to many diseases that reduce or destroy production, so early and accurate diagnosis of tomato diseases is critical. For this reason, many deep learning models have been developed to automate tomato leaf disease classification. Deep learning is far superior to traditional machine learning when training data are abundant, but traditional machine learning may outperform deep learning when training data are limited. The authors propose a tomato leaf disease classification method that exploits transfer learning and feature concatenation. They extract features using pre-trained kernels (weights) from MobileNetV2 and NASNetMobile, then concatenate these features and reduce their dimensionality using kernel principal component analysis. Finally, they feed the resulting features into a conventional learning algorithm. The experimental results confirm the effectiveness of concatenated features for boosting classifier performance. The authors evaluated the three most popular traditional machine learning classifiers, random forest, support vector machine, and multinomial logistic regression; among them, multinomial logistic regression achieved the best performance, with an average accuracy of 97%.
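A minimal sketch of the pipeline described above: frozen ImageNet backbones extract features, which are concatenated, reduced with kernel PCA, and classified with multinomial logistic regression. The random arrays standing in for tomato leaf images and labels, the image size, and the number of PCA components are assumptions for illustration only.

```python
# Feature concatenation from two pre-trained CNNs + kernel PCA + multinomial
# logistic regression (synthetic data stands in for the tomato leaf dataset).
import numpy as np
from tensorflow.keras.applications import MobileNetV2, NASNetMobile
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input as prep_mnv2
from tensorflow.keras.applications.nasnet import preprocess_input as prep_nas
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression

# Frozen ImageNet backbones used purely as feature extractors
mnv2 = MobileNetV2(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(224, 224, 3))
nas = NASNetMobile(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(224, 224, 3))

X = np.random.rand(32, 224, 224, 3) * 255.0   # stand-in for leaf images
y = np.random.randint(0, 10, size=32)         # stand-in for disease labels

f1 = mnv2.predict(prep_mnv2(X.copy()))        # deep features from MobileNetV2
f2 = nas.predict(prep_nas(X.copy()))          # deep features from NASNetMobile
features = np.concatenate([f1, f2], axis=1)   # concatenated feature vector

reduced = KernelPCA(n_components=16, kernel="rbf").fit_transform(features)
clf = LogisticRegression(max_iter=1000).fit(reduced, y)  # multinomial by default
print(clf.score(reduced, y))
```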
The Internet of Things (IoT) is permeating our daily lives through continuous environmental monitoring and data collection. The promise of low-latency communication, enhanced security, and efficient bandwidth utilization has led to the shift from mobile cloud computing to mobile edge computing. In this study, we propose a deep reinforcement learning-based resource allocation and security-aware data offloading model that accounts for the constrained computation and radio resources of industrial IoT devices and guarantees efficient sharing of resources among multiple users. The model is formulated as an optimization problem with the goal of decreasing energy consumption and computation delay. This type of problem is non-deterministic polynomial-time hard owing to the curse-of-dimensionality challenge; thus, a deep learning optimization approach is presented to find an optimal solution. In addition, a 128-bit Advanced Encryption Standard-based cryptographic approach is proposed to satisfy the data security requirements. Experimental results show that the proposed model can reduce offloading overhead, in terms of energy and time, by up to 64.7% compared with the local execution approach. It also outperforms the full offloading scenario by up to 13.2%, as it can select some computation tasks for offloading while optimally rejecting others. Finally, it is adaptable and scalable to a large number of mobile devices.
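The 128-bit AES protection of task data before offloading is the most concrete component here; a small sketch using the `cryptography` package follows. The abstract does not specify the block mode, so AES-GCM, the key-exchange assumption, and the sample payload are illustrative choices only.

```python
# 128-bit AES encryption of a task payload before offloading (AES-GCM assumed).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # 128-bit key, as in the paper
aesgcm = AESGCM(key)

task_data = b"sensor readings and task payload to offload"
nonce = os.urandom(12)                      # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, task_data, associated_data=None)

# The edge server, assumed to hold the shared key, decrypts before execution
plaintext = aesgcm.decrypt(nonce, ciphertext, associated_data=None)
assert plaintext == task_data
```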
With recent advances in autonomous vehicle control systems, the design of reliable infrastructure and communication networks has become a necessity. The recent release of the fifth-generation cellular system (5G) promises a step towards, if not a panacea for, this reliability. However, designing autonomous vehicle networks imposes additional requirements because of the high mobility and traffic density of such networks and the latency and reliability demands of the applications that run over them. To this end, we propose a multilevel cloud system for autonomous vehicles built over the Tactile Internet. In addition, base stations with different antenna technologies are deployed at the edge of the radio access network (RAN) in our system. Simulation results show that the proposed multilevel clouding system can significantly reduce round-trip latency and network congestion. Moreover, the system adapts well to mobility scenarios.
Conserving energy resources and enhancing computation capability are key design challenges in the era of the Internet of Things (IoT). The recent development of energy harvesting (EH) and mobile edge computing (MEC) technologies has been recognized as a promising way to tackle these challenges. Computation offloading enables heavy computation workloads to be executed on powerful MEC servers, so the quality of the computation experience, for example, the execution latency, can be significantly improved. When mobile devices can move arbitrarily and multiple servers are available for offloading, computation offloading strategies face new challenges: competition for resource allocation and server selection becomes intense in such environments. In this paper, an optimized computation offloading algorithm based on integer linear optimization is proposed. The algorithm chooses the execution mode for each mobile device from local execution, offloading execution, and task dropping. The proposed system relies on an improved, energy-efficient computing strategy. Mobile devices, including energy harvesting (EH) devices, are considered in the simulations. Simulation results show that the energy level starts at 0.979% and gradually decreases to 0.87%. Therefore, the proposed algorithm can trade off the energy of computation offloading tasks efficiently.
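The per-device choice among local execution, offloading, and task dropping can be expressed as a binary integer program. The sketch below, using PuLP, is a toy formulation under stated assumptions: the costs are synthetic, and the server-capacity constraint and its limit are invented for illustration rather than taken from the paper.

```python
# Toy integer-linear formulation: each device picks exactly one execution mode
# (local, offload, drop) so as to minimize a weighted energy/latency cost.
import pulp
import random

devices = range(6)
modes = ["local", "offload", "drop"]
cost = {(i, m): random.uniform(0.1, 1.0) for i in devices for m in modes}
SERVER_CAPACITY = 3   # assumed limit on simultaneously offloaded tasks

prob = pulp.LpProblem("offloading", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (devices, modes), cat="Binary")

# Objective: total weighted time/energy cost over all devices
prob += pulp.lpSum(cost[i, m] * x[i][m] for i in devices for m in modes)
# Each device selects exactly one execution mode
for i in devices:
    prob += pulp.lpSum(x[i][m] for m in modes) == 1
# The MEC server can only serve a limited number of offloaded tasks at once
prob += pulp.lpSum(x[i]["offload"] for i in devices) <= SERVER_CAPACITY

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in devices:
    chosen = next(m for m in modes if x[i][m].value() == 1)
    print(f"device {i}: {chosen}")
```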
COVID-19 comes from a large family of viruses identified in 1965; to date, seven groups that affect humans have been recorded. In the healthcare industry, there is ample evidence that AI and machine learning algorithms can provide effective models for predicting confirmed cases, recovered cases, and deaths. Many researchers and scientists in the field of machine learning are also working on this problem, seeking to understand the patterns and characteristics of virus outbreaks so that the right decisions can be made and specific actions taken. Furthermore, several models have been considered for predicting the coronavirus outbreak, such as the retro-prediction model, Kaplan's pandemic model, and neural forecasting models. Other research has used the time series-based Facebook Prophet model for COVID-19 prediction across various regions of India. Thus, we propose a prediction and analysis model for COVID-19 in Saudi Arabia. The time series-based Facebook Prophet model is used to fit the data and provide future predictions. This study aimed to forecast the COVID-19 pandemic in Saudi Arabia, using time series analysis to observe and predict the spread of the coronavirus daily or weekly. We found that the proposed model has a low ability to forecast the recovered cases in the COVID-19 dataset, whereas the model for death cases forecasts the dataset well. Finally, obtaining more data could strengthen the model for further validation.
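A minimal sketch of fitting the Facebook Prophet model to a daily case series and forecasting 30 days ahead, as described above. The synthetic counts, date range, seasonality settings, and forecast horizon are placeholders, not the study's actual data or configuration.

```python
# Fit Prophet to a daily series and produce a 30-day-ahead forecast.
import pandas as pd
from prophet import Prophet

# Prophet expects a dataframe with a date column 'ds' and a value column 'y'
dates = pd.date_range("2020-03-01", periods=120, freq="D")
df = pd.DataFrame({"ds": dates, "y": range(120)})   # placeholder case counts

model = Prophet(daily_seasonality=False, weekly_seasonality=True)
model.fit(df)

future = model.make_future_dataframe(periods=30)    # extend 30 days ahead
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```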