2020
DOI: 10.1186/s12864-020-07038-3
Multi-scale supervised clustering-based feature selection for tumor classification and identification of biomarkers and targets on genomic data

Abstract: Background The small number of samples and the curse of dimensionality hamper the application of deep-learning techniques to disease classification. Additionally, the performance of clustering-based feature selection algorithms is still far from satisfactory due to their reliance on unsupervised learning methods. To enhance interpretability and overcome this problem, we developed a novel feature selection algorithm. In the meantime, complex genomic data brought great challenges for the id…

Cited by 16 publications (12 citation statements)
References 66 publications (58 reference statements)
“…The general framework of CBFS is first to group the attributes into a set of distinct clusters of highly correlated features and then to select a representative feature from each cluster. Feature selection by feature grouping has recently attracted the interest of researchers in pattern recognition and data mining [33,38-44], as it is less susceptible to yielding a suboptimal feature subset. These methods are distinguished by their clustering algorithm, similarity measure, and relevance metric.…”
Section: Related Work
confidence: 99%
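The generic CBFS recipe quoted above — group highly correlated features into clusters, then keep one representative per cluster — can be sketched as follows. The greedy seeding strategy, the 0.8 correlation threshold, and the label-correlation relevance score are illustrative assumptions, not the exact choices of the cited papers.

```python
import numpy as np

def cbfs_select(X, y, corr_threshold=0.8):
    """Greedy correlation clustering plus representative selection.

    X: (samples, features) data matrix; y: class labels / target.
    Returns the indices of the selected representative features.
    """
    n_features = X.shape[1]
    # absolute Pearson correlation between all feature pairs
    C = np.abs(np.corrcoef(X, rowvar=False))
    # relevance of each feature: |corr(feature, label)|
    relevance = np.abs(
        [np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)]
    )
    unassigned = set(range(n_features))
    selected = []
    while unassigned:
        # seed a new cluster with the most label-relevant remaining feature
        seed = max(unassigned, key=lambda j: relevance[j])
        # the cluster is everything still unassigned that correlates with the seed
        cluster = {j for j in unassigned if C[seed, j] >= corr_threshold}
        cluster.add(seed)
        unassigned -= cluster
        selected.append(seed)  # the seed acts as the cluster representative
    return sorted(selected)
```

On synthetic data where two features are near-duplicates, the sketch keeps only one of them plus any uncorrelated features, which is the behavior the quoted framework describes.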
“…In addition, disregarding the class labels and losing information in the final stage are further drawbacks of this technique. The multiscale clustering-based feature selection algorithm is a wrapper approach with a novel dissimilarity function that exploits the potential of feature clustering for genomic data analysis [42]. It is in fact a supervised learning method, with a multiscale dissimilarity function formed from multiple distance functions.…”
Section: Related Work
confidence: 99%
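A dissimilarity "formed from multiple distance functions," as the cited algorithm [42] is described, might look like the following sketch: a weighted combination of distances that operate at different scales. The specific component distances and weights here are assumptions for illustration, not the published definition.

```python
import numpy as np

def multiscale_dissimilarity(u, v, weights=(0.5, 0.3, 0.2)):
    """Weighted combination of several distance functions (illustrative)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    d_euclidean = np.linalg.norm(u - v)          # sensitive to large deviations
    d_manhattan = np.abs(u - v).sum()            # robust, coordinate-wise scale
    # correlation distance captures shape rather than magnitude
    d_correlation = 1.0 - np.corrcoef(u, v)[0, 1]
    parts = np.array([d_euclidean, d_manhattan, d_correlation])
    return float(np.dot(weights, parts))
```

Because every component is symmetric and vanishes for identical non-constant vectors, the combined function behaves as a dissimilarity measure suitable for feature clustering.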
“…In the past, physical education in colleges and universities mainly focused on competitive minority sports, but our requirements should now develop toward popular artistic and ecological sports, so that sport can become a necessity of people's lives. Therefore, in college physical education, teachers should add emotional education instead of simply teaching physical skills [3], improve students' interest in learning, harmonize the relationship between teachers and students, form a harmonious teaching atmosphere, and improve teaching quality, which can even have an important influence on a student's whole life. The modern educational process emphasizes the imparting of rational knowledge and neglects the accumulation of emotional experience.…”
Section: Introduction
confidence: 99%
“…In 2017, Huang et al. [24] presented the NGRHMDA method by integrating two single recommendation methods (a graph-based scoring model and a neighbor-based collaborative filtering prediction model), achieving good prediction results. With the fast development of machine learning technology [25, 26], some machine learning-based methods have also been proposed for MDA prediction. For example, in 2017, Wang et al. [27] proposed a semi-supervised method called LRLSHMDA based on the Laplacian regularized least squares method.…”
Section: Introduction
confidence: 99%
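The neighbor-based collaborative filtering idea mentioned for NGRHMDA can be sketched generically on a binary microbe-disease association matrix: each microbe's scores for unobserved diseases are a similarity-weighted vote over other microbes' association profiles. The cosine similarity choice and all names below are assumptions for illustration, not the published method.

```python
import numpy as np

def cf_scores(A):
    """Neighbor-based collaborative filtering on a binary association matrix.

    A: (microbes, diseases) 0/1 matrix. Returns a score matrix where each
    microbe's row is the similarity-weighted average of the other microbes'
    association profiles (cosine similarity between microbe rows).
    """
    A = np.asarray(A, float)
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                  # guard against empty profiles
    R = A / norms
    S = R @ R.T                              # microbe-microbe cosine similarity
    np.fill_diagonal(S, 0.0)                 # exclude self-similarity
    weights = S.sum(axis=1, keepdims=True)
    weights[weights == 0] = 1.0              # guard against isolated microbes
    return (S @ A) / weights                 # weighted neighbour vote
```

For instance, if microbe 1 shares a disease with microbe 0, it inherits a high score for microbe 0's other diseases, which is the core intuition behind such recommendation-style association predictors.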