2018
DOI: 10.1002/cpe.4429

A speculative parallel simulated annealing algorithm based on Apache Spark

Abstract: Simulated annealing (SA) is an effective method for solving unconstrained optimization problems and has been widely used in machine learning and neural networks. Nowadays, in order to optimize complex problems with big data, the SA algorithm has been implemented on big data platforms and achieves a certain speedup. However, the efficiency of such implementations is still limited, because the conventional SA algorithm runs with low parallelism on the new platforms and the computing resources cannot be fully utiliz…
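The abstract's idea of raising SA's parallelism can be illustrated with a toy sketch in plain Python (not the paper's Spark implementation): at each temperature step, several candidate moves are generated and costed up front, as a cluster would do in parallel, and the first one the Metropolis rule accepts is committed. The cost function, neighbor generator, and geometric cooling schedule below are invented for illustration.

```python
import math
import random

def parallel_sa(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=200, width=4, seed=0):
    """Toy 'speculative' SA: each step generates `width` candidate moves
    up front (these evaluations are what a cluster would run in parallel)
    and commits the first one the Metropolis rule accepts."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    for _ in range(steps):
        for c in (neighbor(x, rng) for _ in range(width)):
            fc = cost(c)
            # Accept improving moves always; worsening moves with prob exp(-dE/t).
            if fc < fx or rng.random() < math.exp((fx - fc) / t):
                x, fx = c, fc
                break  # discard the remaining speculative candidates
        t *= alpha  # geometric cooling schedule (an assumption, not the paper's)
    return x, fx

# Minimize an illustrative 1-D quadratic, starting far from the optimum at 3.
best_x, best_f = parallel_sa(lambda v: (v - 3.0) ** 2,
                             lambda v, r: v + r.uniform(-1.0, 1.0),
                             x0=10.0)
print(best_x, best_f)
```

With a fixed seed the run is deterministic; the point of the sketch is only the control flow, not solution quality.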


Cited by 19 publications (9 citation statements)
References 22 publications
“…In the practical production environment, a tremendous amount of multisource heterogeneous data is generated in the custom manufacturing process of complex heavy equipment, and accordingly, its cloud service side will also generate ten-million-level system logs. In order to process massive logs offline in real time, we use Apache Spark as the implementation platform of the proposed model [23]. By transforming system logs into elastic datasets using the machine learning tool called MLlib in Apache Spark, we can quickly analyze stored files and provide timely feedback to clients for deletion.…”
Section: Model Implementation (mentioning)
confidence: 99%
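The quoted workflow (ingest system logs, transform them into distributed datasets, analyze, and feed results back) can be mimicked in plain Python; the log format and the `ERROR` filter here are invented for illustration, and in a real deployment the map/filter steps would run as Spark RDD or DataFrame operations rather than list comprehensions.

```python
from collections import Counter

# Hypothetical log lines; a Spark job would read these from HDFS instead.
logs = [
    "2024-01-01 INFO  job started",
    "2024-01-01 ERROR disk quota exceeded",
    "2024-01-01 WARN  retrying upload",
    "2024-01-02 ERROR disk quota exceeded",
]

def parse(line):
    # Split into date, level, and the remainder of the message.
    date, level, msg = line.split(maxsplit=2)
    return {"date": date, "level": level, "msg": msg}

# This map/filter/count pipeline mirrors rdd.map(parse).filter(...).countByValue().
records = [parse(line) for line in logs]
error_counts = Counter(r["msg"] for r in records if r["level"] == "ERROR")
print(error_counts.most_common(1))  # → [('disk quota exceeded', 2)]
```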
“…When the development of each district is similar, water is allocated to each district in proportion to water demand; if the development levels of the two places are inconsistent, priority should be given to efficiency, tilting appropriately toward high-efficiency industries so that the guarantee rate of high-efficiency water industries is higher than that of low-efficiency water industries [21].…”
Section: E. Characteristics of the SA (mentioning)
confidence: 99%
“…SA provides a simple framework which can be implemented on systems with arbitrary energy landscapes, and it statistically guarantees an optimal solution. SA has hence been employed to solve optimization problems in a wide variety of domains such as circuit design,[5] data analysis,[6] imaging,[7] neural networks,[8] geophysics,[9] finance,[10] and the Ising model of magnetism.[11]…”
Section: Introduction (mentioning)
confidence: 99%
“…SA draws inspiration from physical annealing, in which a material is heated above its recrystallization temperature to allow atoms to rearrange and is then slowly cooled down to improve its crystallinity and reach a low energy state.…”
(mentioning)
confidence: 99%
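The cooling analogy in the snippet above maps directly onto SA's Metropolis acceptance rule: a worsening move of size ΔE is accepted with probability exp(-ΔE/T), so such moves are common while the system is "hot" and all but vanish as it cools. A minimal self-contained check of that behavior (the temperatures and ΔE value are arbitrary illustration choices):

```python
import math
import random

def accept(delta_e, temperature, rng):
    """Metropolis criterion: always accept improving moves; accept a
    worsening move with probability exp(-dE/T)."""
    if delta_e <= 0:
        return True
    return rng.random() < math.exp(-delta_e / temperature)

rng = random.Random(1)
# Empirical acceptance rate of a worsening move (dE = 1) at two temperatures.
hot = sum(accept(1.0, 10.0, rng) for _ in range(10_000)) / 10_000
cold = sum(accept(1.0, 0.1, rng) for _ in range(10_000)) / 10_000
print(hot, cold)  # hot ~ exp(-0.1) ~ 0.90; cold ~ exp(-10), essentially 0
```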