Abstract: Simulated annealing (SA) is an effective method for solving unconstrained optimization problems and has been widely used in machine learning and neural networks. To optimize complex problems involving big data, the SA algorithm has been implemented on big data platforms and achieves a certain speedup. However, the efficiency of such implementations is still limited, because the conventional SA algorithm runs with low parallelism on these new platforms and the computing resources cannot be fully utiliz…
“…In the practical production environment, a tremendous amount of multisource heterogeneous data is generated in the custom manufacturing process of complex heavy equipment, and accordingly, its cloud service side will also generate tens of millions of system logs. In order to process massive logs offline in a timely manner, we use Apache Spark as the implementation platform of the proposed model [23]. By transforming system logs into resilient distributed datasets and using the machine learning tool MLlib in Apache Spark, we can quickly analyze stored files and provide timely feedback to clients for deletion.…”
The massive amount of sensing and communication data that must be processed during the production of complex heavy equipment places heavy storage pressure on the cloud server side, thus limiting the convergence of sensing, communication, and computing in intelligent factories. To address this problem, a storage optimization model based on machine learning techniques is proposed in this paper to reduce the storage pressure on the cloud server and enhance the coupling between communication and sensing data. First, based on the operation rules of the distributed file system on the cloud server, the proposed model screens and organizes the system logs. From the filtered logs, the model sets feature labels, constructs feature vectors, and builds sample sets. Then, based on the ID3 decision tree, a file elimination model is trained to analyze the files stored on the cloud server and predict their reusability. In practice, the proposed model is applied in the Hadoop Distributed File System, where it helps the system delete underutilized, low-value files and save storage space. Experiments show that the proposed model can effectively reduce the storage load on the cloud server and improve the integration efficiency of multisource heterogeneous data during complex heavy equipment production.
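The pipeline described above (log-derived feature vectors classified by an ID3 decision tree) can be sketched in plain Python. The feature names (`access_freq`, `age`, `size`), the toy sample set, and the `keep`/`delete` labels are illustrative assumptions, not values taken from the paper:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, feature):
    """Entropy reduction achieved by splitting on `feature` (ID3 criterion)."""
    n = len(rows)
    remainder = 0.0
    for value in set(row[feature] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[feature] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

def build_id3(rows, labels, features):
    """Recursively build an ID3 tree; leaves are label strings."""
    if len(set(labels)) == 1:
        return labels[0]
    if not features:
        return Counter(labels).most_common(1)[0][0]
    best = max(features, key=lambda f: information_gain(rows, labels, f))
    tree = {"feature": best, "branches": {}}
    for value in set(row[best] for row in rows):
        idx = [i for i, row in enumerate(rows) if row[best] == value]
        tree["branches"][value] = build_id3(
            [rows[i] for i in idx],
            [labels[i] for i in idx],
            [f for f in features if f != best],
        )
    return tree

def predict(tree, row):
    """Walk the tree until a leaf label is reached."""
    while isinstance(tree, dict):
        tree = tree["branches"][row[tree["feature"]]]
    return tree

# Hypothetical sample set: categorical features extracted from system logs.
samples = [
    {"access_freq": "high", "age": "new", "size": "small"},
    {"access_freq": "high", "age": "old", "size": "large"},
    {"access_freq": "low",  "age": "old", "size": "large"},
    {"access_freq": "low",  "age": "old", "size": "small"},
    {"access_freq": "low",  "age": "new", "size": "small"},
]
labels = ["keep", "keep", "delete", "delete", "keep"]

tree = build_id3(samples, labels, ["access_freq", "age", "size"])
```

A file that the logs show to be rarely accessed and old would be classified `delete` and become a candidate for removal from the distributed file system.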
“…When the development levels of the districts are similar, water is allocated to each district in proportion to its water demand; when the development levels of two districts differ, priority should be given to efficiency. Allocation is tilted appropriately toward high-efficiency industries, so that the supply guarantee rate for water-efficient industries is higher than that for water-inefficient ones [21].…”
With the continuous deterioration of the water environment, ensuring the basic water demand of the ecological environment has become a key task of water resources utilization and control in China. In view of the uneven distribution of domestic water resources ("more in the south and less in the north, more in the east and less in the west"), optimizing the allocation of water resources is essential. This paper aims to optimize water resource allocation through the simulated annealing algorithm (SAA), using measures such as diversion, water intake, and storage via pipelines. On this basis, an improved SAA pipeline construction algorithm is proposed. For the distribution of water sources in the Yangtze River Basin, the algorithm optimizes the objective-function path to address the imbalance between water-rich and water-scarce regions. After the improvement, the simulation optimization ability of the algorithm increases significantly. Experiments show that the improved SAA can improve the optimal configuration by more than 50% and up to 96%, indicating that the improved algorithm optimizes the objectives more stably and can often find a better route.
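The abstract does not specify the improved SAA, but the core simulated annealing loop it builds on can be sketched as follows. The demand figures, quadratic shortage cost, and water-transfer neighbor move are illustrative assumptions, not the paper's actual model:

```python
import math
import random

def simulated_annealing(cost, initial, neighbor,
                        t0=1.0, t_min=1e-4, alpha=0.95, steps=100, seed=0):
    """Generic SA: geometric cooling, Metropolis acceptance, best-so-far tracking."""
    rng = random.Random(seed)
    current = initial
    best = current
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = neighbor(current, rng)
            delta = cost(cand) - cost(current)
            # Accept improvements always; accept worse moves with prob exp(-delta/t).
            if delta < 0 or rng.random() < math.exp(-delta / t):
                current = cand
                if cost(current) < cost(best):
                    best = current
        t *= alpha  # cool down
    return best

# Hypothetical instance: split a fixed supply among three districts.
demand = [30.0, 50.0, 20.0]  # per-district water demand (arbitrary units)
supply = 90.0                # total available supply (less than total demand)

def cost(alloc):
    # Penalize unmet demand quadratically; oversupply is wasted but not rewarded.
    return sum(max(d - a, 0.0) ** 2 for d, a in zip(demand, alloc))

def neighbor(alloc, rng):
    # Move a small amount of water between two randomly chosen districts.
    a = list(alloc)
    i, j = rng.sample(range(len(a)), 2)
    amt = min(a[i], rng.uniform(0.0, 2.0))  # never drive an allocation negative
    a[i] -= amt
    a[j] += amt
    return a

start = [supply / 3] * 3
best = simulated_annealing(cost, start, neighbor)
```

With a quadratic shortage penalty and a supply deficit of 10 units, the optimum spreads the shortfall evenly across districts; the neighbor move conserves total supply by construction.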
“…SA provides a simple framework which can be implemented on systems with arbitrary energy landscapes, and it statistically guarantees an optimal solution. SA has hence been employed to solve optimization problems in a wide variety of domains such as circuit design [5], data analysis [6], imaging [7], neural networks [8], geophysics [9], finance [10], and the Ising model of magnetism [11]. SA draws inspiration from physical annealing, in which a material is heated above its recrystallization temperature to allow atoms to rearrange and is then slowly cooled down to improve its crystallinity and reach a low energy state.…”
Metaheuristic algorithms such as simulated annealing (SA) are often implemented for optimization in combinatorial problems, especially for discrete problems. SA employs a stochastic search, where high‐energy transitions (“hill‐climbing”) are allowed with a temperature‐dependent probability to escape local optima. Ising spin glass systems have properties such as spin disorder and “frustration” and provide a discrete combinatorial problem with a high number of metastable states and ground‐state degeneracy. In this work, subthreshold Boltzmann transport is exploited in complementary 2D field‐effect transistors (p‐type WSe2 and n‐type MoS2) integrated with an analog, nonvolatile, and programmable floating‐gate memory stack to develop in‐memory computing primitives necessary for energy‐ and area‐efficient hardware acceleration of SA for Ising spin systems. Search acceleration of >800× is demonstrated for 4 × 4 ferromagnetic, antiferromagnetic, and spin glass systems using SA compared to an exhaustive brute‐force search, at a minuscule total energy expenditure of ≈120 nJ. Hardware‐realistic numerical simulations further highlight the benefits of SA in accelerating the search for larger spin lattices.
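Although the acceleration above is realized in hardware, the SA search it implements can be illustrated in software on a small spin system. The sketch below anneals a fully connected four-spin antiferromagnet (a frustrated system with degenerate ground states); the coupling matrix, schedule parameters, and lattice size are illustrative assumptions, not the paper's device configuration:

```python
import itertools
import math
import random

def ising_energy(spins, J):
    """E = -sum_{i<j} J[i][j] * s_i * s_j for spins s_i in {-1, +1}."""
    n = len(spins)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            e -= J[i][j] * spins[i] * spins[j]
    return e

def anneal_ising(J, n, t0=2.0, t_min=1e-3, alpha=0.9, sweeps=20, seed=1):
    """SA with single-spin-flip Metropolis updates and geometric cooling."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    best, best_e = list(spins), ising_energy(spins, J)
    t = t0
    while t > t_min:
        for _ in range(sweeps * n):
            k = rng.randrange(n)
            # Energy change of flipping spin k, computed locally (no full recompute).
            delta = 2 * spins[k] * sum(J[k][j] * spins[j] for j in range(n) if j != k)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                spins[k] = -spins[k]
                e = ising_energy(spins, J)
                if e < best_e:
                    best, best_e = list(spins), e
        t *= alpha
    return best, best_e

def brute_force_min(J, n):
    """Exhaustive search over all 2^n configurations, for comparison with SA."""
    return min(ising_energy(list(s), J) for s in itertools.product((-1, 1), repeat=n))

# Fully connected antiferromagnet (J = -1 off-diagonal): every pair of spins
# prefers to anti-align, which cannot be satisfied simultaneously (frustration).
n = 4
J = [[0 if i == j else -1 for j in range(n)] for i in range(n)]
best_spins, best_e = anneal_ising(J, n)
```

For this instance the ground states are the six two-up/two-down configurations, so SA should recover the same minimum energy as the exhaustive search while visiting far fewer configurations on larger lattices.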