2012
DOI: 10.1016/j.patcog.2011.09.011

Model sparsity and brain pattern interpretation of classification models in neuroimaging

Cited by 114 publications (119 citation statements)
References 59 publications
Citation statement types: 8 supporting, 111 mentioning, 0 contrasting
“…However, sparsity alone is not sufficient for making reasonable and stable inferences when the voxel space is very high-dimensional and the number of training samples is small. In such cases, plain sparse learning models such as the ℓ1-norm regularized model, also called LASSO [22], often provide overly sparse and hard-to-interpret solutions [23]. We therefore need to extend the plain sparse learning model to incorporate important structural features of brain imaging data in order to achieve more stable, reliable, and interpretable support identification.…”
Section: B. Advantages and Limitations of Sparsity Applied to Neuroimaging (mentioning)
confidence: 99%
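The over-sparsity this statement describes is easy to reproduce. Below is a minimal sketch (synthetic data and parameter values of my choosing, not from the cited papers): with few samples and a block of highly correlated "voxel" features, a plain ℓ1 penalty keeps only a few arbitrary representatives of the informative block.

```python
# Minimal sketch of LASSO over-sparsity on correlated features.
# Data, block size, and alpha are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_voxels = 50, 500          # few samples, many voxels

# One latent signal copied (with noise) into a block of 20 correlated voxels.
latent = rng.standard_normal(n_samples)
X = rng.standard_normal((n_samples, n_voxels))
X[:, :20] = latent[:, None] + 0.1 * rng.standard_normal((n_samples, 20))
y = latent + 0.1 * rng.standard_normal(n_samples)

lasso = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(lasso.coef_)
# Typically far fewer than 20 voxels survive: LASSO picks arbitrary
# representatives from the correlated block, which hurts interpretability.
print(f"{support.size} of 20 informative voxels selected: {support}")
```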
“…A recent work [8] studies the impact of model selection on the reproducibility and stability of the estimated models for simpler learning methods. In the present work, we select the hyper-parameters in the usual way, by maximizing classification accuracy over the internal leave-one-subject-out cross-validation (LOSO-CV), and we assess the stability of the resulting models in the external LOSO-CV.…”
Section: Experimental Protocol and Assessment (mentioning)
confidence: 99%
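As a concrete reading of this protocol, here is a hedged sketch of nested LOSO-CV: hyper-parameters are chosen by accuracy on the internal leave-one-subject-out loop, and the selected model is scored on the external loop. The data, subject grouping, classifier, and C grid are placeholders, not the cited study's setup.

```python
# Nested leave-one-subject-out cross-validation (sketch, synthetic data).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 100))
y = rng.integers(0, 2, size=80)
subjects = np.repeat(np.arange(8), 10)          # 8 subjects, 10 samples each

logo = LeaveOneGroupOut()
outer_scores = []
for train, test in logo.split(X, y, groups=subjects):      # external LOSO-CV
    best_C, best_acc = None, -np.inf
    for C in (0.01, 0.1, 1.0):                              # candidate hyper-parameters
        accs = []
        # internal LOSO-CV over the training subjects only
        for itr, ival in logo.split(X[train], y[train], groups=subjects[train]):
            clf = LinearSVC(C=C, max_iter=5000).fit(X[train][itr], y[train][itr])
            accs.append(clf.score(X[train][ival], y[train][ival]))
        if np.mean(accs) > best_acc:                        # pick C by internal accuracy
            best_C, best_acc = C, np.mean(accs)
    clf = LinearSVC(C=best_C, max_iter=5000).fit(X[train], y[train])
    outer_scores.append(clf.score(X[test], y[test]))        # assess externally

print("external LOSO-CV accuracy:", np.mean(outer_scores))
```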
“…However, these models often provide overly sparse solutions, or activation patterns in which the non-zero coefficients are assigned to disparate regions scattered across the brain, because they do not exploit any spatial or temporal prior information [3], [4], [8].…”
Section: Introduction (mentioning)
confidence: 99%
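One common remedy for such scattered supports, shown below purely as an illustration (the cited works [3], [4], [8] incorporate explicit spatial or temporal priors, which this sketch does not), is to add an ℓ2 term to the ℓ1 penalty (elastic net), which spreads weight over a correlated block of features rather than keeping isolated representatives.

```python
# Lasso vs. elastic net on a correlated "voxel" block (illustrative data).
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
n_samples, n_voxels = 50, 500
latent = rng.standard_normal(n_samples)
X = rng.standard_normal((n_samples, n_voxels))
X[:, :20] = latent[:, None] + 0.1 * rng.standard_normal((n_samples, 20))
y = latent + 0.1 * rng.standard_normal(n_samples)

for model in (Lasso(alpha=0.1), ElasticNet(alpha=0.1, l1_ratio=0.5)):
    coef = model.fit(X, y).coef_
    kept = np.count_nonzero(coef[:20])
    # Elastic net usually retains far more of the informative block.
    print(type(model).__name__, f"keeps {kept}/20 voxels of the informative block")
```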
“…Almost all collaborative filtering algorithms come with one or more adjustable hyperparameters, such as the regularization strength and the learning rate. The settings of these values play a key role in the generalizability and accuracy of the predictive model [20]. The problem of finding better hyperparameter values for an algorithm, whether to improve prediction accuracy or to optimize other goals, is called hyperparameter optimization or model selection.…”
Section: Related Work (mentioning)
confidence: 99%
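To make this model-selection framing concrete, here is a small sketch (placeholder model and data, not the setup of [20]) of cross-validated grid search over exactly the two hyperparameters named above, regularization strength and learning rate.

```python
# Grid search over regularization strength and learning rate (sketch).
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
y = X @ rng.standard_normal(30) + 0.1 * rng.standard_normal(200)

grid = GridSearchCV(
    SGDRegressor(learning_rate="constant", max_iter=1000),
    param_grid={
        "alpha": [1e-4, 1e-3, 1e-2],   # regularization strength
        "eta0": [1e-3, 1e-2, 1e-1],    # learning rate
    },
    cv=5,                              # score each setting by cross-validation
)
grid.fit(X, y)
print("best hyper-parameters:", grid.best_params_)
```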