The problem of insufficient 5G signal coverage can be addressed by building new base stations in areas with weak coverage. However, because of construction costs and other factors, it is not possible to cover every area; in general, areas with high traffic and weak coverage should be given priority. Although much research has been carried out, accurate large-scale calculation has not been possible due to the lack of supporting data: the central point must be sought through repeated hypothesis testing, which introduces a large systematic error, and it is difficult to obtain a unique solution. In this paper, the weak-coverage points were divided into three categories according to the number of users and the traffic demand. Taking the lowest cost as the objective, with constraints such as the minimum distance between base stations and the proportion of total traffic that must be covered, a single-objective nonlinear programming model was established to solve the base-station layout problem. Through a traversal search, the optimal thresholds for the traffic and the number of base stations were obtained, and a kernel function was then added to the mean shift clustering algorithm. The centers of new macro stations were determined in dense areas, the locations of micro base stations were determined in scattered and outlying areas, and a unique optimal planning scheme was finally obtained. Under the assumptions made in this paper, the minimum total cost is 3752 when the numbers of macro and micro base stations are 31 and 3442, respectively, and the signal coverage rate reaches 91.43%.
Compared with existing methods such as K-means clustering, K-medoids clustering, and simulated annealing, the method proposed in this paper achieves good economic benefits; once the traffic threshold and the base-station number threshold are determined, a unique solution can be obtained.
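As an illustration of the clustering step described above, the following is a minimal NumPy sketch of mean shift with a Gaussian kernel added; the kernel choice, the bandwidth, and the synthetic weak-coverage points are assumptions for illustration, not the paper's data or exact algorithm:

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, n_iter=50, tol=1e-4):
    """Drift every point toward its local density maximum under a Gaussian kernel."""
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        max_move = 0.0
        for i in range(len(shifted)):
            # Gaussian kernel weight of every original point relative to shifted[i]
            d2 = ((points - shifted[i]) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))
            new_p = (w[:, None] * points).sum(axis=0) / w.sum()
            max_move = max(max_move, np.linalg.norm(new_p - shifted[i]))
            shifted[i] = new_p
        if max_move < tol:
            break
    # Merge points that converged to (almost) the same mode into one cluster center
    centers = []
    for p in shifted:
        if all(np.linalg.norm(p - c) >= bandwidth / 2 for c in centers):
            centers.append(p)
    return np.array(centers)

# Two artificial groups of weak-coverage points (hypothetical data)
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
centers = mean_shift(pts, bandwidth=1.0)
```

Each recovered center would serve as a candidate macro-station location in a dense area; outlying points that form their own tiny modes would instead be candidates for micro stations.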
Fatigue driving has always received much attention, but few studies have focused on the fact that human fatigue accumulates over time, and no models are available to reflect this phenomenon. Furthermore, the problem of incorrect detections caused by facial expressions is still not well addressed. In this article, a model based on a BP neural network and the time-cumulative effect was proposed to solve these problems. Experimental data were used to carry out this work and validate the proposed method. First, the Adaboost algorithm was applied to detect faces, and the Kalman filter was used to track face movement. Then, a cascaded-regression-tree method was used to detect the 68 facial landmarks, and an improved method combining key points and image processing was adopted to calculate the eye aspect ratio (EAR). After that, a BP neural network model was developed and trained on three selected characteristics: the longest period of continuous eye closure, the number of yawns, and the percentage of eye closure time (PERCLOS); the detection results with and without facial expressions were then discussed and analyzed. Finally, by introducing the Sigmoid function, a fatigue detection model considering the time-accumulation effect was established, and the drivers' fatigue state was identified segment by segment from the recorded video. Compared with the traditional BP neural network model, the detection accuracies of the proposed model without and with facial expressions increased by 3.3% and 8.4%, respectively. The number of incorrect detections in the awake state also decreased markedly. The experimental results show that the proposed model can effectively filter out incorrect detections caused by facial expressions and truly reflect that driver fatigue is a time-accumulating process.
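The EAR mentioned above is commonly computed from the six landmarks around each eye in the 68-point model; a minimal sketch of that standard formula follows (the paper's improved key-point/image-processing variant is not reproduced here, and the sample landmark coordinates are made up):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks p1..p6 around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|): roughly 0.3+ when the eye is
    open, dropping toward 0 as the eye closes."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Hypothetical landmark sets: a wide-open eye and a nearly closed one
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], float)
```

Thresholding the EAR over consecutive frames is what yields features such as the longest continuous eye closure and PERCLOS.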
For the problem of 5G network planning, a certain number of locations must be selected for new base stations in order to remedy the weak coverage of the existing network. Considering construction costs and other factors, it is impossible to cover all the weak-coverage areas, so it is necessary to take the traffic volume into account and give priority to building new stations in weak-coverage areas with high traffic. To address these problems, the weak-point data were clustered using the k-means algorithm. With the minimization of the total construction cost of the new base stations as the objective function, and the minimum distance between adjacent base stations and the minimum coverage of the communication traffic as constraints, single-objective nonlinear programming models were established to obtain the layout of macro and micro base stations. To illustrate the impact of the shape of the station coverage area, circular and "shamrock"-shaped coverage areas were compared in this paper. For the "shamrock" base station, a secondary clustering was undertaken to determine the main directions of the three sector coverage areas. An improved model taking coverage overlap into consideration was then proposed to correct the coverage area of the different sectors. Finally, the optimal layout was obtained by adjusting the distribution of all base stations globally. The results show that the optimal planning method proposed in this paper has good practicability and provides a useful reference for solving similar dynamic resource allocation problems.
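The k-means clustering of weak-coverage points described above can be sketched as follows; this is a plain Lloyd iteration with a deterministic farthest-point initialization on made-up data, not the paper's implementation or data set:

```python
import numpy as np

def kmeans(points, k, n_iter=100):
    """Plain Lloyd k-means with deterministic farthest-point seeding."""
    centers = [points[0]]
    for _ in range(k - 1):
        # next seed: the point farthest from all already-chosen centers
        d = ((points[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(1)
        centers.append(points[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        # assign each point to its nearest center, then move centers to cluster means
        labels = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        new_centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Hypothetical weak-coverage points forming two geographic groups
rng = np.random.default_rng(1)
weak_points = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(10, 1, (20, 2))])
centers, labels = kmeans(weak_points, k=2)
```

Each cluster center is then a candidate site, and the nonlinear programming model decides which candidates become macro or micro stations subject to the distance and coverage constraints.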
The aim of this paper is to extract the image information from a given image of mold flux and to obtain features that describe the dynamical differences. The melting and crystallization dynamics of the slag were analyzed using the autoregressive integrated moving average (ARIMA) time-series model and data-fitting methods. First, the binary image of the digital region of the original image was obtained by image processing and segmentation, and the number in the original image was determined by comparing the similarity between the information matrices of the given and standard images: the standard number with the highest similarity was taken as the number of the original image. MATLAB was used to solve the problem, and the digital information in all the images was successfully extracted. Second, ten eigenvalues were extracted from the given image after removing the background, and three principal components were obtained by principal component analysis. A scoring model was then constructed based on the percentage of variance, and the comprehensive scores of the three principal components were used to analyze the melting and crystallization process of the mold flux. Finally, based on the above work, the dynamic relationship between temperature, time, and the melting and crystallization process of the mold flux was investigated. Since temperature is approximately linearly correlated with time, the problem was transformed into finding the relationship between the melting and crystallization process and time. The least-squares method, polynomial fitting, and other methods were used to derive the relationship function, and the relationship between the melting and crystallization process of the mold flux and temperature and time was quantitatively analyzed.
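The variance-weighted scoring step described above can be sketched in a few lines of NumPy; the standardization choice, the random stand-in for the ten extracted eigen-features, and the exact weighting are assumptions for illustration:

```python
import numpy as np

def pca_composite_score(X, n_comp=3):
    """Composite score = principal-component projections weighted by
    each component's explained-variance ratio (percentage of variance)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each feature
    vals, vecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
    order = np.argsort(vals)[::-1][:n_comp]        # keep the top components
    comps = Xs @ vecs[:, order]                    # per-sample component scores
    weights = vals[order] / vals.sum()             # percentage of variance
    return comps @ weights                         # weighted comprehensive score

# Hypothetical data: 20 image samples, 10 extracted eigen-features each
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 10))
scores = pca_composite_score(X)
```

Each sample's score summarizes the three principal components in a single number, which can then be fitted against time (and hence temperature) to track the melting and crystallization process.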
Research on visibility detection in foggy conditions is of great significance to both road traffic and air transport safety. Based on meteorological and video data collected at an airport, a deep recurrent neural network (RNN) model was established in this study to predict visibility. First, the Fourier transform was used to extract feature variables from the video data. Then, principal component analysis was used to reduce the dimensionality of the features. After that, 462 sets of sample data, including image features, air pressure, temperature, and wind speed, were used as inputs to train the RNN model. Comparing the predicted results with the actual visibility data, as well as with other state-of-the-art methods, shows that the proposed model makes up for the deficiency of models based only on meteorological or image data and achieves higher accuracy across different grades of visibility. When the meteorological data are included, the accuracy of the RNN model improves by 18.78%. In addition, with the aid of correlation analysis, the influence of the meteorological factors on the predicted visibility was analyzed; for fog at night, temperature is the dominant factor affecting visibility.
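One plausible way to turn a video frame into Fourier-based features, as the pipeline above describes, is to keep the magnitudes of the lowest spatial frequencies (fog blurs fine detail, shifting spectral energy toward low frequencies). The abstract does not specify the exact feature definition, so everything below is an illustrative assumption:

```python
import numpy as np

def fft_features(frame, n_low=4):
    """Magnitudes of the (2*n_low)^2 lowest spatial frequencies of a
    grayscale frame, centered via fftshift."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    c0, c1 = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    low = spectrum[c0 - n_low:c0 + n_low, c1 - n_low:c1 + n_low]
    return np.abs(low).ravel()

# Hypothetical 64x64 grayscale frame
rng = np.random.default_rng(3)
frame = rng.random((64, 64))
feats = fft_features(frame)
```

These 64 spectral magnitudes, after PCA dimensionality reduction, would be concatenated with air pressure, temperature, and wind speed to form one RNN input vector.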
In this paper, a support vector regression (SVR) adaptive rolling composite model optimized with the sooty tern optimization algorithm (STOA) is proposed for temperature prediction. First, to address the algorithm's tendency to fall into local optima, an adaptive Gauss–Cauchy mutation operator is introduced to effectively increase the population diversity and the search space, and the improved algorithm is used to optimize the key parameters of the SVR model so that it can capture both the linear and nonlinear information in the data. Second, rolling prediction is integrated into the SVR model: the principles of real-time updating and self-regulation are used to continuously refresh the forecast, which greatly improves prediction accuracy. Finally, the optimized STOA-SVR rolling forecast model is used to predict the final temperature. In this study, the global mean temperature data set from 1880 to 2022 is used for empirical analysis, and comparative experiments are set up to verify the accuracy of the model. The results show that, compared with the seasonal autoregressive integrated moving average (SARIMA) model, the feedforward neural network (FNN), and the unoptimized STOA-SVR-LSTM, the proposed model performs better: the root-mean-square error is reduced by 6.33–29.62%, the mean relative error is reduced by 2.74–47.27%, and the goodness of fit increases by 4.67–19.94%. Finally, the global mean temperature is predicted to increase by about 0.4976 °C over the next 20 years, an increase of 3.43%. The proposed model not only has good prediction accuracy but can also provide an effective reference for the development and formulation of future meteorological policies.
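The rolling-prediction idea above can be sketched with scikit-learn's SVR: at every step the model is refit on the most recent lagged samples, the one-step-ahead prediction is made, and the window rolls forward. The STOA hyperparameter search is omitted here, and the window size, kernel settings, and synthetic series are assumptions:

```python
import numpy as np
from sklearn.svm import SVR

def rolling_forecast(series, window=10, horizon=5):
    """Refit an SVR on lagged windows at every step, predict one step ahead,
    then append the prediction and roll forward (real-time update)."""
    history = list(series)
    preds = []
    for _ in range(horizon):
        # lagged design matrix: each row is `window` consecutive values
        X = np.array([history[i:i + window] for i in range(len(history) - window)])
        y = np.array(history[window:])
        model = SVR(kernel="rbf", C=10.0).fit(X, y)
        next_val = model.predict(np.array([history[-window:]]))[0]
        preds.append(next_val)
        history.append(next_val)   # self-regulation: the forecast feeds back in
    return np.array(preds)

# Hypothetical smooth series standing in for the temperature data
series = np.sin(np.linspace(0, 6 * np.pi, 80))
preds = rolling_forecast(series)
```

In the paper's setting, STOA (with the Gauss–Cauchy mutation) would tune C, epsilon, and the kernel parameter before each refit rather than using fixed values.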
Gasoline is the primary fuel used in small cars, and the exhaust emissions from gasoline combustion have a significant impact on the atmosphere. Efforts to clean up gasoline have therefore focused primarily on reducing its olefin and sulfur content while preserving as much of the octane number as possible. With the aim of minimizing octane loss, this study investigated various machine learning algorithms to identify the best fitness function. An improved octane-loss optimization model was developed, and the best octane-loss calculation algorithm was identified. First, the operational and non-operational variables were separated during data pre-processing, and the variables were then filtered using the random forest method and the grey correlation degree, respectively. Second, octane-loss prediction models were built using four machine learning techniques: back propagation (BP), radial basis function (RBF) networks, extreme gradient boosting (XGBoost) as a representative of ensemble learning, and support vector regression (SVR). The prediction results show that the XGBoost model is optimal. Finally, taking the minimum octane loss as the optimization objective and a sulfur content of less than 5 µg/g as the constraint, an octane-loss optimization model was established. The trained XGBoost prediction model was substituted as the fitness function into the genetic algorithm (GA), the sparrow search algorithm (SSA), particle swarm optimization (PSO), and the grey wolf optimization (GWO) algorithm, and the optimization results of the four algorithms were compared. The findings demonstrate that, among nine randomly selected sample points, SSA outperforms the other three methods in optimization stability and slightly outperforms them in optimization accuracy.
For the RON loss, 252 of the 326 samples (about 77%) reached a 30% reduction, which is better than the optimization results published in the previous literature.
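The "trained model as fitness function" pattern described above can be sketched as follows, illustrated here with PSO (one of the four compared algorithms; SSA, GA, and GWO follow the same pattern). The surrogate fitness is a hypothetical stand-in for the trained XGBoost predictor, and the sulfur constraint handling is omitted:

```python
import numpy as np

def surrogate_octane_loss(x):
    """Stand-in for the trained XGBoost model (assumption): maps operating
    variables to a predicted octane loss, with its minimum at x = 0.3."""
    return np.sum((x - 0.3) ** 2, axis=-1)

def pso_minimize(fitness, dim=4, n_particles=30, n_iter=100, seed=0):
    """Standard particle swarm: inertia plus pulls toward personal/global bests."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), fitness(pos)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        val = fitness(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, fitness(gbest)

best_x, best_loss = pso_minimize(surrogate_octane_loss)
```

In the study, the real XGBoost model would replace the surrogate, the search space would be the filtered operating variables, and candidates violating the 5 µg/g sulfur constraint would be penalized or rejected.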