Simulated Annealing Algorithm for Feature Selection

2015 · DOI: 10.24297/ijct.v15i2.565

Abstract: In physical annealing, a solid is heated until all of its particles arrange themselves randomly, forming the liquid state; a slow cooling process is then used to crystallize the liquid. Simulated annealing is a stochastic computational technique, inspired by this process, that searches for globally optimal solutions to optimization problems. The main goal is to give the algorithm more time to explore the search space by accepting moves that may degrade the solution quality, wit…
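
The acceptance rule the abstract describes is easy to make concrete. Below is a minimal sketch of simulated annealing applied to feature selection; the fitness function, geometric cooling schedule, and all parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import math
import random

def simulated_annealing_fs(n_features, fitness, t0=1.0, t_min=1e-3,
                           alpha=0.95, iters_per_temp=20):
    """Select a feature subset by simulated annealing.

    fitness(mask) should return a score to maximize (e.g. cross-validated
    accuracy of a classifier trained on the selected features).
    """
    # Start from a random feature subset (boolean mask).
    current = [random.random() < 0.5 for _ in range(n_features)]
    current_score = fitness(current)
    best, best_score = list(current), current_score

    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            # Neighbor move: flip one randomly chosen feature in or out.
            candidate = list(current)
            i = random.randrange(n_features)
            candidate[i] = not candidate[i]
            cand_score = fitness(candidate)
            delta = cand_score - current_score
            # Always accept improvements; accept degrading moves with
            # probability exp(delta / t), which shrinks as t cools.
            if delta >= 0 or random.random() < math.exp(delta / t):
                current, current_score = candidate, cand_score
                if current_score > best_score:
                    best, best_score = list(current), current_score
        t *= alpha  # geometric cooling schedule (assumed)
    return best, best_score

# Toy usage: score subsets by closeness to exactly 5 selected features.
mask, score = simulated_annealing_fs(20, lambda m: -abs(sum(m) - 5))
print(sum(mask), score)
```
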

Cited by 4 publications (4 citation statements) · References 9 publications

“…Finally, the Relief algorithm was used to evaluate the importance of features. The Relief algorithm was first proposed by Kira (Kira and Rendell, 1992) and has been widely applied in selecting features for classification (Rosario and Thangadurai, 2015). As shown in Figure 5, to verify the validity of the proposed method, the brain region feature, subgraph feature, and multi-feature methods were each evaluated by the Relief algorithm.…”
Section: Results · Confidence: 99%
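
Several citing papers score features with Relief, so a compact sketch may help. It follows the core idea of the original algorithm (Kira and Rendell, 1992): reward features that differ on a sample's nearest miss (other class) and penalize those that differ on its nearest hit (same class). The Euclidean distance, squared differences, and toy data are simplifying assumptions.

```python
import numpy as np

def relief_weights(X, y, n_iter=100, rng=None):
    """Relief feature weighting for binary classification."""
    rng = np.random.default_rng(rng)
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for _ in range(n_iter):
        i = rng.integers(n_samples)
        x, label = X[i], y[i]
        # Distances from the sampled instance to all other samples.
        d = np.linalg.norm(X - x, axis=1)
        d[i] = np.inf  # exclude the sample itself
        hit = np.argmin(np.where(y == label, d, np.inf))
        miss = np.argmin(np.where(y != label, d, np.inf))
        w -= (x - X[hit]) ** 2   # nearest hit: differing features penalized
        w += (x - X[miss]) ** 2  # nearest miss: differing features rewarded
    return w / n_iter

# Toy usage: feature 0 carries the class signal, feature 1 is noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.column_stack([y + 0.1 * rng.standard_normal(200),
                     rng.standard_normal(200)])
print(relief_weights(X, y, rng=0))  # feature 0's weight should dominate
```
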
“…This illustrates the potential for the integration of the two different types of features to significantly improve classification performance. We used the Relief method [49] to calculate the average weights of subgraph pattern features, the minimum spanning tree of quantifiable local network features, and both types of features combined (Figure 3). The average weight of the subgraph pattern features was 550.31, that of the minimum spanning tree of the quantifiable local network features was 915.42, and that for both feature types together was 945.16.…”
Section: Results · Confidence: 99%
“…In the field of data mining, a large number of frequent subgraph mining methods have been proposed [45, 46], including Apriori-based graph mining [47] and the frequent subgraph discovery algorithm [48]. Here, we used the well-known gSpan algorithm [49] to extract the frequent subnetworks from the functional connectivity network. Because of its high efficiency in graph traversal and subgraph mining, the gSpan algorithm has been widely applied in many research fields, including neural imaging [25–27].…”
Section: Methods · Confidence: 99%
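
gSpan itself grows patterns through canonical DFS codes, which is beyond a short snippet, but the quantity it thresholds on, the support of a subgraph pattern across a collection of networks, can be sketched with networkx. This is a brute-force illustration, not gSpan; the example graphs and the 50% minimum-support threshold are assumptions.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def support(pattern, graphs):
    """Fraction of graphs that contain `pattern` as a subgraph.

    gSpan avoids this brute-force isomorphism test by enumerating
    patterns via canonical DFS codes; this sketch only illustrates
    the support measure that frequent-subgraph miners threshold on.
    """
    hits = sum(
        isomorphism.GraphMatcher(g, pattern).subgraph_is_isomorphic()
        for g in graphs
    )
    return hits / len(graphs)

# Toy collection of networks and a triangle pattern.
graphs = [nx.cycle_graph(3), nx.path_graph(4), nx.complete_graph(4)]
triangle = nx.cycle_graph(3)
s = support(triangle, graphs)
print(f"support = {s:.2f}")  # the triangle occurs in 2 of the 3 graphs
if s >= 0.5:                 # assumed minimum-support threshold
    print("triangle is a frequent subgraph")
```
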
“…The resulting weights from each feature selection model were normalized between 0 and 1 to ensure comparable significance. A weight closer to 1 indicated higher importance of a gene in distinguishing infected and healthy conditions, as determined by the employed feature selection model [22–25].…”
Section: Machine Learning Models · Confidence: 99%
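
The normalization step this last statement describes is standard min-max scaling of each model's weight vector. A small sketch follows; the raw weight values are made up for illustration.

```python
import numpy as np

def minmax_normalize(weights):
    """Rescale a weight vector to [0, 1] so that scores produced by
    different feature-selection models are directly comparable."""
    w = np.asarray(weights, dtype=float)
    lo, hi = w.min(), w.max()
    if hi == lo:  # constant weights: avoid division by zero
        return np.zeros_like(w)
    return (w - lo) / (hi - lo)

# Toy usage: raw importances from two hypothetical selectors
# end up on the same [0, 1] scale.
print(minmax_normalize([3.2, 7.5, 0.4]))   # -> [0.394..., 1.0, 0.0]
print(minmax_normalize([0.02, 0.9, 0.45]))
```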