2018
DOI: 10.1007/s10489-018-1320-1

Joint neighborhood entropy-based gene selection method with fisher score for tumor classification

Cited by 85 publications (39 citation statements)
References 43 publications
“…where x_{j,k}^{t+1} is the kth element of x_j in generation t+1; similarly, x_{best,k}^{t} is the kth element of x_best in generation t, which is the best location for monarch butterflies in Land1 and Land2; x_{r3,k}^{t} is the kth element of x_{r3} in generation t, where the monarch butterfly r3 is randomly selected from Subpopulation2; and BAR is the butterfly adjusting rate. If BAR is less than the random number rand, the kth element of x at generation t+1 is updated further, where α is the weighting factor, α = S_max / t^2, and S_max is the maximum walk step.…”
Section: Related Work
confidence: 99%
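The butterfly adjusting operator quoted above can be sketched in code. This is a minimal illustration, not the paper's implementation: the migration ratio `p`, the Gaussian walk step standing in for the Lévy flight `dx`, and the default parameter values are assumptions.

```python
import random

def butterfly_adjusting(x_j, x_best, x_r3, t, p=5/12, BAR=5/12, S_max=1.0):
    """One application of the MBO butterfly adjusting operator to butterfly x_j.

    x_best : best location found by monarch butterflies in Land1 and Land2
    x_r3   : butterfly r3 drawn at random from Subpopulation2
    t      : current generation (1-based)
    """
    alpha = S_max / t**2              # weighting factor, shrinks over generations
    new = list(x_j)
    for k in range(len(x_j)):
        if random.random() <= p:      # inherit the kth element from the best butterfly
            new[k] = x_best[k]
        else:                         # otherwise copy it from the Subpopulation2 butterfly
            new[k] = x_r3[k]
            if random.random() > BAR: # BAR < rand: perturb with the weighted walk step
                dx = random.gauss(0.0, 1.0)   # stands in for the Lévy-flight step
                new[k] += alpha * (dx - 0.5)
    return new
```

With `p=1.0` every element is copied from `x_best`, which makes the operator's structure easy to check by hand.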
“…Research on tackling such problems with optimization techniques has since become a fruitful field, especially work on solving global optimization problems. The swarm intelligence optimization (SIO) algorithm is a kind of bionic stochastic method inspired by natural phenomena and biological behaviors, and it can deal with certain high-dimensional, complex, and variable optimization problems because of its good computing performance and simple model [3,4].…”
Section: Introduction
confidence: 99%
“…Lyu et al [7] investigated a filter method based on the maximal information coefficient, which eliminates redundant information without requiring additional processing. Wrapper methods use a classifier to find the most discriminative feature subset by minimizing an error prediction function [8]. Jadhav et al [9] designed a wrapper feature selection method and performed feature ranking based on an information-gain-directed genetic algorithm.…”
Section: Introduction
confidence: 99%
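The filter/wrapper distinction in the excerpt above can be sketched as follows. This is a generic illustration, not the cited methods: the filter here scores features by absolute Pearson correlation with the labels (not the maximal information coefficient), and the wrapper is a greedy forward search driven by a caller-supplied error estimate.

```python
def filter_select(X, y, k):
    """Filter: rank features by a relevance score computed without any classifier."""
    n, d = len(X), len(X[0])

    def corr(j):
        # absolute Pearson correlation between feature j and the labels
        xs = [row[j] for row in X]
        mx, my = sum(xs) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, y))
        sx = sum((a - mx) ** 2 for a in xs) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return abs(cov / (sx * sy)) if sx and sy else 0.0

    return sorted(range(d), key=corr, reverse=True)[:k]

def wrapper_select(X, y, k, error):
    """Wrapper: greedy forward search minimizing a classifier's error estimate.

    `error(features, X, y)` should train a classifier on the given feature
    subset and return its estimated prediction error.
    """
    chosen = []
    while len(chosen) < k:
        best = min((j for j in range(len(X[0])) if j not in chosen),
                   key=lambda j: error(chosen + [j], X, y))
        chosen.append(best)
    return chosen
```

The filter never consults a classifier, while the wrapper re-evaluates the classifier for every candidate subset, which is the usual accuracy/cost trade-off between the two families.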
“…The selection of the empirical risk loss depends mostly on the type of noise [11, 12]. For example, squared loss is suitable for Gaussian noise [13, 14, 15], least absolute deviation loss for Laplacian noise [16], and Beta loss for Beta noise [17, 18, 19]. From the formulation of the optimization method, a series of optimization algorithms have been developed [20].…”
Section: Introduction
confidence: 99%
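The noise-to-loss correspondence above follows from maximum-likelihood reasoning: minimizing a loss L(r) over residuals r is equivalent to assuming noise with density proportional to exp(−L(r)). A minimal sketch of the three loss shapes; the Beta loss is written as the negative log-density of a Beta(a, b) noise model with illustrative shape parameters, which are assumptions here, not values from the cited works.

```python
import math

def squared_loss(r):
    """Matches Gaussian noise: -log N(r | 0, sigma) is proportional to r^2."""
    return r * r

def lad_loss(r):
    """Least absolute deviation, matches Laplacian noise: -log density ∝ |r|."""
    return abs(r)

def beta_loss(r, a=2.0, b=2.0):
    """Negative log-density of a Beta(a, b) noise model on residuals in (0, 1),
    up to the constant log B(a, b); infinite outside the support."""
    if not 0.0 < r < 1.0:
        return float("inf")
    return -((a - 1.0) * math.log(r) + (b - 1.0) * math.log(1.0 - r))
```

The bounded support of `beta_loss` is exactly why it suits quantities like wind-turbine output that are confined between zero and a maximum.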
“…Suykens et al [22, 23] constructed least squares SVR with Gaussian noise (LS-SVR). Wu [13] and Pontil et al [24] constructed ε-SVR with Gaussian noise (GN-SVR). In 2002, Bofinger et al [25] discovered that the output of a wind turbine system is limited between zero and maximum power and that the error statistics do not follow a normal distribution.…”
Section: Introduction
confidence: 99%