The promise of low-latency connectivity and efficient bandwidth utilization has driven the recent shift from vehicular cloud computing (VCC) towards vehicular edge computing (VEC). This paper presents an advanced deep learning-based computation offloading algorithm for multilevel vehicular edge-cloud computing networks. To conserve energy and guarantee the efficient utilization of resources shared among multiple vehicles, an integrated model of computation offloading and resource allocation is formulated as a binary optimization problem that minimizes the total cost of the entire system in terms of time and energy. However, this problem is NP-hard, and solving it is computationally prohibitive, particularly for large numbers of vehicles, due to the curse of dimensionality. Therefore, an equivalent reinforcement learning formulation is derived, and we propose a distributed deep learning algorithm that finds near-optimal computation offloading decisions using a set of deep neural networks run in parallel. Finally, simulation results show that the proposed algorithm converges quickly and significantly reduces the overall cost of the entire system compared to the benchmark solutions.

INDEX TERMS: Computation offloading, vehicular edge-cloud computing, autonomous vehicles, 5G, resource allocation, deep reinforcement learning.
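The core idea of the distributed scheme above can be illustrated with a minimal sketch: several independently initialised "networks" each propose a candidate binary offloading decision, and the candidate with the lowest total time-plus-energy cost is kept. This is a toy stand-in, not the paper's actual architecture; the linear scorers, the two-feature cost inputs, and the function names (`system_cost`, `parallel_dnn_offloading`) are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_cost(decision, local_cost, edge_cost):
    # decision[i] = 1 -> task i is offloaded to the edge, else executed locally
    return np.where(decision == 1, edge_cost, local_cost).sum()

def parallel_dnn_offloading(local_cost, edge_cost, n_dnns=4):
    """Toy stand-in for a distributed deep-learning offloading scheme:
    several randomly initialised linear scorers each threshold per-task
    scores into a candidate binary decision; the lowest-cost candidate
    (including the all-local baseline) is returned."""
    features = np.stack([local_cost, edge_cost], axis=1)
    best_decision = np.zeros(len(local_cost), dtype=int)   # all-local baseline
    best_cost = system_cost(best_decision, local_cost, edge_cost)
    for _ in range(n_dnns):
        w = rng.normal(size=2)                 # random "network" weights
        candidate = (features @ w > 0).astype(int)
        cost = system_cost(candidate, local_cost, edge_cost)
        if cost < best_cost:
            best_decision, best_cost = candidate, cost
    return best_decision, best_cost
```

In the paper's setting, the candidate generators would be trained deep networks refined against replayed experience rather than fixed random scorers; only the "generate candidates in parallel, keep the cheapest" skeleton is shown here.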
In recent years, rapid progress has been made in Internet of Things communication technologies, infrastructure, and physical resource management. These developments and research trends address challenges such as heterogeneous communication, quality-of-service requirements, unpredictable network conditions, and a massive influx of data. One major contribution is software-defined networking, which deploys rule-based management to control the network and add intelligence to it through high-level policies, giving integral control of the network without dealing with low-level configuration issues. Machine learning techniques coupled with software-defined networking can make networking decisions more intelligent and robust. Internet of Things applications have recently adopted virtualization of resources and network control under software-defined networking policies to make traffic more controlled and maintainable. However, the requirements of software-defined networking and the Internet of Things must be aligned to make these adaptations possible. This paper discusses possible ways to build software-defined-networking-enabled Internet of Things applications, along with the challenges that can be solved by leveraging software-defined networks in the Internet of Things. We provide a topical survey of the application and impact of software-defined networking on Internet of Things networks. We also study the impact of machine learning techniques applied to software-defined networking from an application perspective. The study is carried out from the different perspectives of software-based Internet of Things networks, including wide-area networks, edge networks, and access networks. Machine learning techniques are presented from the perspectives of network resource management, security, traffic classification, quality of experience, and quality-of-service prediction.
Finally, we discuss challenges and issues in adopting machine learning and software-defined networking for the Internet of Things applications.
Decarbonisation, energy security and expanding energy access are the main driving forces behind the worldwide increase in attention to renewable energy. This paper focuses on solar photovoltaic (PV) technology because it currently receives the most attention in the energy sector, owing to the sharp drop in solar PV system cost, which was one of the main barriers to large-scale PV deployment. Firstly, the paper extensively reviews the technical challenges, potential technical solutions and the research carried out on integrating high shares of small-scale PV systems into the distribution network of the grid, to give a clearer picture of the impact, since most PV installations have been small-scale and connected to the distribution network. The paper then reviews the localised technical challenges, grid stability challenges and technical solutions for integrating large-scale PV systems into the transmission network of the grid. In addition, the current practices of grid operators for managing the variability of large-scale PV systems are discussed. Finally, the paper concludes by summarising the critical technical aspects of integrating PV systems into the grid according to their size, providing a strong point of reference and a useful framework for researchers planning to explore this field further.
Virtual reality (VR) is considered one of the main use cases of the fifth-generation cellular system (5G). It has also been categorized as an ultra-low-latency application: VR applications require an end-to-end latency of 5 ms. However, the limited battery capacity and computing resources of mobile devices restrict the execution of VR applications on these devices. As a result, mobile edge-cloud computing is considered a new paradigm that mitigates the resource limitations of these devices through a low-latency computation offloading process. To this end, this paper introduces an efficient multi-player, multi-task computation offloading model with guaranteed performance in network latency and energy consumption for VR applications based on mobile edge-cloud computing. The model is formulated as an integer optimization problem whose objective is to minimize the sum cost of the entire system in terms of network latency and energy consumption. Afterwards, a low-complexity algorithm is designed that efficiently derives the optimal computation offloading decision. Furthermore, we provide a prototype and a real implementation of the proposed system using OpenAirInterface software. Finally, simulations validate the proposed model and show that network latency and energy consumption can be reduced by up to 26.2% and 27.2% compared with edge execution, and by up to 10.9% and 12.2% compared with cloud execution, respectively.
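The weighted latency-plus-energy sum cost behind this kind of formulation can be sketched as follows. This is a generic textbook-style offloading cost model, not the paper's exact formulation: the parameter names (`kappa` for the switched-capacitance coefficient, `w_t`/`w_e` for the time/energy weights) and the specific edge/cloud parameter tuples are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    data_bits: float    # input size to upload (bits)
    cycles: float       # CPU cycles needed to process the task

def local_cost(task, f_local, kappa, w_t=0.5, w_e=0.5):
    """Weighted time/energy cost of on-device execution.
    kappa is an assumed effective switched-capacitance coefficient."""
    t = task.cycles / f_local
    e = kappa * task.cycles * f_local**2
    return w_t * t + w_e * e

def remote_cost(task, rate, p_tx, f_remote, w_t=0.5, w_e=0.5):
    """Weighted cost of offloading: upload time and energy plus remote compute time."""
    t_up = task.data_bits / rate
    t_exec = task.cycles / f_remote
    e_up = p_tx * t_up
    return w_t * (t_up + t_exec) + w_e * e_up

def best_decision(task, f_local, kappa, edge, cloud):
    """Return the (decision, cost) pair minimising the weighted sum cost.
    edge and cloud are (rate, p_tx, f_remote) tuples; a cloud typically
    offers a faster CPU but a slower effective upload rate (backhaul)."""
    options = {
        'local': local_cost(task, f_local, kappa),
        'edge': remote_cost(task, *edge),
        'cloud': remote_cost(task, *cloud),
    }
    return min(options.items(), key=lambda kv: kv[1])
```

For a 1 Mb task needing 10^9 cycles, an edge with a fast uplink usually wins over both the battery-costly local execution and the high-latency cloud path, which mirrors the trade-off the abstract's percentages describe.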
With the recent advances in autonomous car control systems, the design and development of reliable infrastructure and communication networks have become a necessity. The recent release of the fifth-generation cellular system (5G) promises a step towards such reliability, if not a panacea. However, designing autonomous vehicle networks imposes further requirements due to the high mobility and traffic density of such networks and the latency and reliability requirements of the applications that run over them. To this end, we propose a multilevel cloud system for autonomous vehicles built over the Tactile Internet. In addition, our system uses base stations with different antenna technologies at the edge of the radio access network (RAN). Finally, simulation results show that the proposed multilevel cloud system can significantly reduce round-trip latency and network congestion, and that it can adapt to mobility scenarios.
A salinity gradient solar pond (SGSP) can store a significant quantity of heat for an extended period of time, making it a great option for providing hot water at a reduced energy cost. Additionally, SGSPs are used in low-temperature industrial applications such as saltwater desalination, space heating, and power generation. Solar pond thermal performance depends on a variety of operational variables, including soil conditions, the climate of the particular site, the thickness of the solar pond layers, the depth of the water table, and the salt content of the pond. This study therefore examines the thermal performance of a solar pond under a variety of operational conditions. The solar pond model tests the thermal performance by simulating the two-dimensional heat and mass transport equations, which are solved with the finite difference technique using MATLAB® scripts. Salt distributions and temperature profiles are computed for a variety of factors influencing the SGSP's thermal performance. The main distinguishing variables are soil conditions, such as soil texture and type, soil moisture level, and water table depth. The findings indicate that fine dry sand performed better than the other soil types owing to its poor heat conductivity. The economic results indicate that the period of return (POR) of the intended system is around 2 years; the solar pond construction costs, such as excavation, transportation, salt and lining, were estimated from local prices. The modelled study extracted a maximum energy of 110 W/m², with the fine dry sand case reaching a lowest temperature of 62.48 °C. The study also suggests that the climatic conditions of Lahore are better suited than those of Islamabad, and that deeper water tables improve the thermal performance of the pond.
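The finite-difference approach mentioned above can be illustrated with a minimal one-dimensional sketch of heat conduction into the soil below the pond. This is a deliberately simplified stand-in for the paper's two-dimensional MATLAB model: the explicit FTCS scheme, the grid sizes, and the dry-sand-like diffusivity value are all assumptions chosen for a short runnable example.

```python
import numpy as np

def step_heat_1d(T, alpha, dx, dt):
    """One explicit (FTCS) finite-difference step of dT/dt = alpha * d2T/dx2
    with fixed-temperature (Dirichlet) boundaries at both ends.
    Numerically stable when alpha * dt / dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T_new

# Toy soil column: 1 m split into 51 nodes, hot pond bottom above cool ground
T = np.full(51, 20.0)      # initial soil temperature, deg C
T[0] = 60.0                # heated pond-bottom boundary (Dirichlet)
alpha = 1e-6               # thermal diffusivity, m^2/s (assumed dry-sand-like)
dx, dt = 0.02, 100.0       # grid spacing (m) and time step (s); r = 0.25
for _ in range(1000):
    T = step_heat_1d(T, alpha, dx, dt)
```

A low-diffusivity soil (such as the fine dry sand favoured in the study) keeps the computed profile steep near the pond bottom, i.e. less heat leaks downward; the full model couples a second spatial dimension and a salt-transport equation to the same finite-difference machinery.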
In this paper, a drain-engineered double-gate tunnel FET (DE-DG-TFET) that enhances the electrical characteristics and analog parameters of a conventional DG-TFET is proposed and examined through calibrated TCAD simulations. Unlike the DG-TFET, a constant n-type doping N_cd (5 × 10^17 cm^-3 to 2 × 10^18 cm^-3) is used in the channel/drain regions of the DE-DG-TFET, resulting in a p+-n-n structure instead of the conventional p+-i-n structure. Further, the p+-n-n structure is modified to p+-n-n+ using the electrostatic doping (ED) method on the drain side, with Hafnium (ϕ_m = 3.9 eV) as the lateral (top and bottom) and side metal electrode. The high n+ drain doping ensures that the drain contact remains ohmic. The higher electric field at the p+-n source-channel junction enhances the ON-state band-to-band tunneling (BTBT) current, while the absence of a metallurgical junction provides a large tunneling width across the channel/drain junction, suppressing the ambipolar current (I_AMB). At an N_cd doping of 1 × 10^18 cm^-3, the DE-DG-TFET demonstrates a 7-times increase in I_ON while I_AMB is suppressed by 5 orders of magnitude. In addition, the proposed device improves the analog/RF figures of merit, with a 45% improvement in voltage gain and a 5-times increase in peak f_T.

KEYWORDS: ambipolarity, band-to-band tunneling (BTBT), charge plasma, electrostatic doping (ED), JLTFET