A novel location-aware beamforming scheme for millimeter wave (mmWave) communication is proposed for line-of-sight (LOS), low-mobility scenarios, in which computer vision is introduced to derive the required position or spatial angle information from images or video captured by camera(s) co-located with the mmWave antenna array at the base station. A wireless coverage model is built to investigate the coverage performance and the influence of the positioning accuracy achieved by a convolutional neural network (CNN) used for image processing. In addition, videos can be intentionally blurred, or low-resolution videos can be used directly, to protect users' privacy while retaining acceptable positioning precision, lower computational complexity, and lower camera cost. Simulations show that the beamforming scheme is practicable and that the mainstream CNN we employed is sufficient in terms of both beam directivity accuracy and processing speed in frames per second.
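The core idea of the abstract above, steering a beam from a camera-derived position estimate, can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the function names, the 64-beam codebook, and the uniform angular grid are all illustrative assumptions, and the CNN position estimate is represented by a plain (x, y) coordinate.

```python
import math

def position_to_azimuth(user_xy, bs_xy=(0.0, 0.0)):
    """Azimuth (radians) from the base station to the estimated user position."""
    dx = user_xy[0] - bs_xy[0]
    dy = user_xy[1] - bs_xy[1]
    return math.atan2(dy, dx)

def select_beam(azimuth, n_beams=64):
    """Pick the codebook beam whose steering angle is closest to `azimuth`.

    Assumes a simple codebook of beams uniformly covering the half-plane
    sector [-pi/2, pi/2] in front of the array.
    """
    step = math.pi / (n_beams - 1)
    beam_angles = [-math.pi / 2 + k * step for k in range(n_beams)]
    idx = min(range(n_beams), key=lambda k: abs(beam_angles[k] - azimuth))
    return idx, beam_angles[idx]

# A user estimated at (10 m, 10 m) relative to the base station lies at 45 degrees;
# the selected beam angle is within half a beam spacing of that direction.
idx, angle = select_beam(position_to_azimuth((10.0, 10.0)))
```

In a real system the beam index would select a precoding vector for the antenna array; the quantization step of the codebook bounds how much the CNN's positioning error can degrade beam directivity.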
Adaptation of handover parameters in ultra-dense networks has long been one of the key issues in optimizing network performance. Targeting the effective handover ratio as the optimization goal, this paper proposes a deep Q-network (DQN) method that dynamically selects handover parameters according to wireless signal fading conditions, while retaining good backward compatibility. To enhance the efficiency and performance of the DQN method, a Long Short-Term Memory (LSTM) network is used to build a digital twin that assists the DQN algorithm in searching more efficiently. Simulation experiments show that the enhanced method converges faster than the ordinary DQN method while achieving an average effective handover ratio increase of 2.7%. Moreover, across different wireless signal fading intervals, the proposed method consistently achieves better performance.
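The learning loop described above can be sketched in miniature. The toy below is a tabular Q-learning stand-in for the paper's DQN, kept dependency-free for brevity: the three fading states, the candidate (hysteresis, time-to-trigger) pairs, and the reward function are all illustrative assumptions, with the reward acting as a synthetic placeholder for the effective-handover-ratio feedback a real network would measure.

```python
import random

FADING_STATES = 3                              # e.g. light / moderate / severe fading
ACTIONS = [(1.0, 40), (2.0, 80), (3.0, 160)]   # assumed (hysteresis dB, TTT ms) pairs

def toy_reward(state, action_idx):
    # Placeholder reward: pretend each fading state has one best-matching
    # parameter pair, with small noise standing in for measurement variance.
    base = 1.0 if action_idx == state else 0.3
    return base + random.uniform(-0.05, 0.05)

def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning over handover parameter choices.

    Treated as a contextual bandit (no next-state term), since each
    episode here is a single parameter decision under an observed
    fading condition.
    """
    random.seed(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(FADING_STATES)]
    for _ in range(episodes):
        s = random.randrange(FADING_STATES)        # observed fading condition
        if random.random() < eps:                  # explore
            a = random.randrange(len(ACTIONS))
        else:                                      # exploit current estimate
            a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
        r = toy_reward(s, a)
        q[s][a] += alpha * (r - q[s][a])           # incremental value update
    return q

q = train()
# The learned policy: preferred action index per fading state.
policy = [max(range(len(ACTIONS)), key=lambda a: q[s][a]) for s in range(FADING_STATES)]
```

The paper replaces the table with a neural network (DQN) and accelerates the search with an LSTM-based digital twin; the state, action, and reward structure is what carries over conceptually.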
In this paper, computer vision (CV) sensing technology based on a convolutional neural network (CNN) is introduced to process topographic maps for predicting wireless signal propagation models, applied here to forestry security monitoring. In this way, terrain-related radio propagation characteristics, including diffraction loss and the shadow-fading correlation distance, can be predicted or extracted accurately and efficiently. Two data sets are generated, one for each prediction task, and are used to train the CNN. To improve efficiency when predicting diffraction losses, the CNN outputs values for multiple locations on the map in parallel, greatly boosting calculation speed. The proposed scheme achieves good prediction accuracy and efficiency: for the diffraction loss prediction task, 50% of the normalized prediction errors are below 0.518% and 95% are below 8.238%; for the correlation distance extraction task, 50% of the normalized prediction errors are below 1.747% and 95% are below 6.423%. Moreover, under the settings in this paper, diffraction losses at 100 positions are predicted simultaneously in one run of the CNN, so the processing time for one map is about 6.28 ms and the average processing time per location point is as low as 62.8 µs. These results show that the proposed CV sensing technology processes geographic information in the target area efficiently: by using a convolutional neural network to couple the prediction model closely with geographic information, it improves both the efficiency and the accuracy of prediction.
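For context on what "diffraction loss" labels for such a CNN might look like, the classical single knife-edge approximation from ITU-R P.526 gives the loss as a function of the Fresnel diffraction parameter. The sketch below is illustrative only: the geometry values are example assumptions, not taken from the paper's data sets, and a terrain-based pipeline would derive the obstacle height and distances from the map profile.

```python
import math

def knife_edge_loss_db(h, d1, d2, freq_hz):
    """Approximate diffraction loss (dB) over a single knife edge (ITU-R P.526).

    h  : obstacle height above the line of sight (m, negative if below it)
    d1 : distance from transmitter to the edge (m)
    d2 : distance from the edge to the receiver (m)
    """
    lam = 3e8 / freq_hz                                    # wavelength (m)
    v = h * math.sqrt(2.0 / lam * (1.0 / d1 + 1.0 / d2))   # Fresnel parameter
    if v <= -0.78:
        return 0.0                                         # loss is negligible
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# Example: a 10 m ridge midway along a 2 km link at 1.5 GHz.
loss = knife_edge_loss_db(h=10.0, d1=1000.0, d2=1000.0, freq_hz=1.5e9)
```

Computing such losses point by point over a map is what makes classical prediction slow; the paper's contribution is having the CNN emit predictions for many map locations in a single forward pass.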