Abstract: Recently, 5G, as the next-generation network, has become a popular and widely discussed research topic. The 5G architecture is a heterogeneous network that can support more network types, such as ultra-dense networks, traditional cellular networks, and machine-to-machine communication. Although high frequencies and larger bandwidths are used in 5G, resource allocation is still a critical issue that needs to be discussed and solved. Considering that the spectrum resource is limited, while almost all users hope that eq…
“…In order to discover the ideal TS, an updated particle swarm optimisation technique is also employed to solve resource allocation and reduce energy usage. The results of the study show that the method can schedule tasks efficiently [13]. D. Wang et al. put forth a strategy to boost task performance using hybrid scheduling of CPU and network I/O resources; according to experimental data, the proposed CPU/IO scheduling approach significantly enhanced the benchmark's total performance [14].…”
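The cited particle swarm optimisation technique is not detailed in this snippet. As a rough illustration of the general idea, the following is a minimal, generic PSO loop minimising a toy quadratic "energy" cost; the cost function, search bounds, and hyper-parameters here are illustrative assumptions, not the authors' settings.

```python
import random

def pso_minimize(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `cost` over [-10, 10]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy "energy" cost: quadratic bowl centred at (3, -2)
best, val = pso_minimize(lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2, dim=2)
```

A real scheduler would replace the toy cost with an energy or makespan model of the task-to-resource assignment.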
With the continuous integration of IoT technology and information technology, edge computing, as an emerging computing paradigm, makes full use of terminal devices to process and analyse real-time data. The explosion of Internet of Things (IoT) devices has created challenges for traditional cloud-based data-processing models due to high latency and availability requirements. This paper proposes a new edge-computing framework for IoT data processing and scheduling using deep reinforcement learning. The system architecture incorporates distributed IoT data access, real-time processing, and an intelligent scheduler based on Deep Q-Networks (DQN). Extensive experiments show that, compared with traditional scheduling methods, the average task completion time is reduced by 20% and resource utilization is increased by 15%. The integration of edge computing and deep reinforcement learning provides a flexible and efficient platform for low-latency IoT applications. Key results obtained from testing the proposed system include reduced task completion time and increased resource utilization.
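The paper's DQN scheduler is not reproduced in this snippet. As a sketch of the underlying idea, the following uses a linear Q-function with an epsilon-greedy temporal-difference update as a stand-in for the deep network, applied to a hypothetical two-node queueing toy; the state, reward, and dynamics are all illustrative assumptions, not the authors' design.

```python
import random

class LinearQScheduler:
    """Epsilon-greedy TD scheduler. State: the queue length of each edge node;
    action: which node receives the next task. A linear Q-function stands in
    for the paper's deep network (a deliberate simplification)."""
    def __init__(self, n_nodes=2, alpha=0.01, gamma=0.9, eps=0.1, seed=0):
        self.n, self.alpha, self.gamma, self.eps = n_nodes, alpha, gamma, eps
        self.rng = random.Random(seed)
        self.w = [[0.0] * n_nodes for _ in range(n_nodes)]  # weights per action

    def q(self, state, a):
        return sum(wi * si for wi, si in zip(self.w[a], state))

    def act(self, state):
        if self.rng.random() < self.eps:
            return self.rng.randrange(self.n)   # explore
        return max(range(self.n), key=lambda a: self.q(state, a))

    def update(self, s, a, r, s2):
        # Q-learning TD target: r + gamma * max_a' Q(s', a')
        target = r + self.gamma * max(self.q(s2, a2) for a2 in range(self.n))
        td = target - self.q(s, a)
        for d in range(self.n):
            self.w[a][d] += self.alpha * td * s[d]

# Toy episode: one task arrives per step; each node drains a task with p=0.6;
# the reward penalises sending work to an already-busy queue.
sched = LinearQScheduler()
queues = [0, 0]
for _ in range(2000):
    s = queues[:]
    a = sched.act(s)
    queues[a] += 1
    r = -queues[a]
    for i in range(2):
        if sched.rng.random() < 0.6:
            queues[i] = max(0, queues[i] - 1)
    sched.update(s, a, r, queues[:])
```

A DQN replaces the linear `q` with a neural network trained on minibatches from a replay buffer, but the action-selection and TD-target structure is the same.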
“…In the Q-learning algorithm, the intelligent agent can converge to the optimal strategy by continuously updating its value estimate of the current state. The agent's update formula in the Q-learning algorithm is expressed by equation (1).…”
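Equation (1) itself is not reproduced in this snippet, but the update it refers to is presumably the standard tabular Q-learning rule, Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)], which can be sketched as:

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Q is a dict keyed by (state, action); unseen entries default to 0."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q

Q = {}
q_update(Q, "s0", "a0", 1.0, "s1", actions=["a0", "a1"])
# With an empty table: Q[("s0","a0")] = 0 + 0.1 * (1.0 + 0.9*0 - 0) = 0.1
```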
Section: Related Work
“…With the rapid development of information technology in modern society, significant research results have been achieved in the field of wireless communication. As the carrier of information transmission, the electromagnetic spectrum plays an important role in wireless communication [1]. To ensure that information can be transmitted wirelessly without interference and damage from the external environment, many communication countermeasure technologies have gradually been developed.…”
The continuous development of communication technology and of various deep-learning models has led to the invention and application of many anti-interference technologies in the field of communication countermeasures. Existing communication interference models suffer from defects such as a low anti-interference rate and low accuracy in communication spectrum prediction. To address these problems, this study constructs a Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) model and applies it to a communication jamming system for spectrum-state prediction. First, the framework of the communication interference system was designed on the USRP RIO software radio platform, and on this basis the communication interference channel was optimized using the reinforcement-learning Q-learning algorithm. Next, to further predict the signal spectrum state during communication, neural networks were used to construct a communication spectrum-state prediction model. According to tests of the interference-channel optimization and spectrum-prediction performance, the communication model under the Q-learning algorithm can achieve a 100% effective interference probability against fixed communication strategies. The CNN-LSTM model has a prediction accuracy of 95.2% and can accurately predict changes in the communication spectrum. In summary, the CNN-LSTM network constructed in this research provides a new solution and achieves good results for communication spectrum prediction.
“…The mobile network environment refers to a mobile network that connects to the Internet of Things through mobile terminal devices such as mobile phones and tablets [1][2]. Micro-animated videos are a short, creative, and engaging form of video that can depict animated characters.…”
In the mobile network environment, the accuracy of image-matching algorithms is affected by factors such as bandwidth uncertainty and channel interference, which significantly limits image feature matching. This article designs a high-precision multi-image-segmentation matching algorithm for micro-animation videos in mobile network environments. First, micro-animation video images are denoised using a 2D high-density discrete wavelet transform (HD-DWT), and fixed-block-count segmentation is applied to the images. The Harris algorithm then performs corner detection to obtain the corner features of each sub-image. Next, K-means clustering divides the SIFT feature vectors into clusters, and each cluster is paired with the nearest-neighbour cluster in another sub-image to form a sub-image matching pair, completing block-based sub-image matching. Finally, all sub-image matching results are combined to obtain the video-image matching result, and an improved Random Sample Consensus (RANSAC) algorithm removes incorrect matches during the matching process, improving matching accuracy. The experimental results show that the designed algorithm can effectively reduce image noise, improve image quality, and generate a large number of matching pairs in mobile network environments. Applying the designed algorithm can significantly improve the production quality of micro-animated videos in mobile networks.
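The improved RANSAC variant is not specified in this snippet. A minimal generic RANSAC for filtering outlier matches, assuming a pure-translation motion model for simplicity (real pipelines typically fit an affine or homography model instead), might look like:

```python
import random

def ransac_translation(matches, n_iters=200, tol=2.0, seed=0):
    """Keep the largest set of point matches consistent with one translation.
    `matches` is a list of ((x1, y1), (x2, y2)) corresponding-point pairs."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.choice(matches)  # minimal sample: one pair
        dx, dy = x2 - x1, y2 - y1                 # hypothesised shift
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) < tol
                   and abs(m[1][1] - m[0][1] - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# 8 correct matches shifted by (5, -3), plus 2 gross outliers
good = [((x, y), (x + 5, y - 3))
        for x, y in [(0, 0), (1, 2), (3, 1), (4, 4), (2, 5), (6, 0), (7, 3), (5, 6)]]
bad = [((0, 0), (40, 40)), ((3, 3), (-20, 10))]
kept = ransac_translation(good + bad)
```

Sampling any one correct match recovers the shared shift (5, -3), so the consensus set is exactly the eight correct pairs and the two outliers are rejected.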