2016
DOI: 10.1007/s12021-015-9292-3
Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia

Abstract: We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-train…

Cited by 78 publications (72 citation statements)
References 64 publications
“…The principal regularization parameter of the RLR (λ), which sets the balance between the regularization and the data terms, was chosen from the set of values {10^−10, 10^−4, 10^−3, 5·10^−3, 10^−2, 5·10^−2, 10^−1, 5·10^−1}. We previously demonstrated that also selecting the parameter α by cross-validation did not yield advantages over fixing its value to 0.5 [31]. However, we confirmed that this is the case with the setup of this paper by experimenting with different values of α (see Table 12 of the Supplementary Material).…”
supporting
confidence: 69%
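The λ-grid search with α fixed at 0.5, as described in the excerpt above, can be sketched with scikit-learn. This is an illustrative reconstruction, not the paper's actual pipeline: the data are synthetic, and scikit-learn parameterizes the regularization strength as C = 1/λ and the mixing parameter α as `l1_ratio`.

```python
# Hedged sketch: cross-validating the regularization strength lambda of an
# elastic-net-penalized logistic regression with the mixing parameter
# alpha fixed at 0.5. Data and solver settings are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for voxel features (many features, few samples).
X, y = make_classification(n_samples=80, n_features=40, n_informative=8,
                           random_state=0)

# The lambda grid quoted in the excerpt; scikit-learn uses C = 1/lambda.
lambdas = [1e-10, 1e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1, 5e-1]
param_grid = {"C": [1.0 / lam for lam in lambdas]}

rlr = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5,  # alpha fixed at 0.5
                         max_iter=2000)
search = GridSearchCV(rlr, param_grid, cv=3)
search.fit(X, y)
print(search.best_params_["C"])
```

Fixing `l1_ratio` and searching only over `C` keeps the cross-validation grid one-dimensional, which is exactly the saving the excerpt argues for.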
“…Moreover, a high number of predictors may cause the curse of dimensionality, i.e., a lack of generality caused by over-fitting. To avoid the curse of dimensionality, many variable/feature selection methods have been proposed for neuroimaging data (Tohka et al, 2016; Mwangi et al, 2014). Among them, regularization methods have gained considerable attention (Miller, 2002).…”
Section: Penalized Linear Regression
mentioning
confidence: 99%
“…A limitation of the elastic net penalty is that it does not consider spatial relationships among the voxels: neighboring voxels are not required to receive similar weights. While there are regularizers that take the spatial relationships among the voxels into account, such as GraphNet (Grosenick et al, 2013), these come with more parameters to select and longer computation times, and have been found to produce more variable estimates of the generalization error in dementia-related classification tasks (Tohka et al, 2016).…”
Section: Penalized Linear Regression
mentioning
confidence: 99%
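For reference, a standard form of the two penalties contrasted in this excerpt; the notation is illustrative (w denotes the voxel weight vector, L a graph Laplacian encoding voxel adjacency), not copied from either cited paper:

```latex
% Elastic net: a convex mix of an L1 (sparsity) and an L2 (grouping) term,
% with a single mixing parameter alpha. No spatial structure enters.
P_{\mathrm{EN}}(\mathbf{w}) = \lambda \left[ \alpha \|\mathbf{w}\|_1
    + \frac{1-\alpha}{2} \|\mathbf{w}\|_2^2 \right]

% GraphNet: replaces the L2 term with a Laplacian smoother w^T L w that
% couples neighboring voxels, at the cost of a second strength parameter.
P_{\mathrm{GN}}(\mathbf{w}) = \lambda_1 \|\mathbf{w}\|_1
    + \lambda_G \, \mathbf{w}^{\top} L \mathbf{w}
```

The comparison makes the trade-off in the excerpt concrete: GraphNet buys spatial coherence with an additional hyperparameter (λ_G) to cross-validate.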
“…To circumvent the problem, various machine learning techniques have been proposed in the literature. Depending on their strategy, these techniques can be categorized into 1) feature embedding (Roweis and Saul, 2000; He et al, 2006; Liu et al, 2013) and 2) feature selection (Wang et al, 2011; Zhu et al, 2014; Tohka et al, 2016). The common goal of both categories of methods is to learn dimension-reduced features or representations from a small number of training samples while still retaining useful information for the target tasks.…”
Section: Related Work
mentioning
confidence: 99%
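The two strategy categories named in this excerpt can be illustrated side by side. The specific methods below (PCA as an embedding, a univariate F-test as a selector) and the synthetic data are illustrative choices, not the methods of the cited works:

```python
# Hedged illustration of the two categories in the excerpt:
# (1) feature embedding learns a new low-dimensional representation;
# (2) feature selection keeps a subset of the original features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=60, n_features=100, n_informative=5,
                           random_state=0)

# Feature embedding: project all 100 features onto 10 learned components.
X_embed = PCA(n_components=10).fit_transform(X)

# Feature selection: keep the 10 original features with highest F-scores.
selector = SelectKBest(f_classif, k=10).fit(X, y)
X_select = selector.transform(X)

print(X_embed.shape, X_select.shape)  # (60, 10) (60, 10)
```

Both routes yield a 10-dimensional representation, but only the selected features remain interpretable as individual voxels, which is why feature selection is attractive for whole-brain MRI analyses like the one in the abstract above.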