2013
DOI: 10.1016/j.neuroimage.2012.09.065
Random forest-based similarity measures for multi-modal classification of Alzheimer's disease

Abstract: Neurodegenerative disorders, such as Alzheimer’s disease, are associated with changes in multiple neuroimaging and biological measures. These may provide complementary information for diagnosis and prognosis. We present a multi-modality classification framework in which manifolds are constructed based on pairwise similarity measures derived from random forest classifiers. Similarities from multiple modalities are combined to generate an embedding that simultaneously encodes information about all the available …

Cited by 400 publications (285 citation statements)
References 69 publications (82 reference statements)
“…In [6], a feature is considered important if it is selected in the first three levels of the trees. A more sophisticated approach was used in [5], where the decrease in the Gini impurity criterion was measured for the individual features at each node. In this work, we adopt the former, simpler approach.…”
Section: Methods
Mentioning confidence: 99%
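The simpler criterion quoted above (a feature counts as important if it appears in the first three levels of the forest's trees) can be sketched with scikit-learn's tree internals. This is an illustrative reconstruction, not the cited authors' code; the dataset and all variable names are assumptions.

```python
# Hypothetical sketch: count how often each feature is chosen as a split
# variable in the first three levels (depths 0-2) of a random forest's trees.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

counts = np.zeros(X.shape[1], dtype=int)
for est in forest.estimators_:
    tree = est.tree_
    frontier = [(0, 0)]  # (node id, depth), starting at the root
    while frontier:
        node, depth = frontier.pop()
        if tree.children_left[node] == -1 or depth >= 3:
            continue  # leaf node, or split below the first three levels
        counts[tree.feature[node]] += 1
        frontier.append((tree.children_left[node], depth + 1))
        frontier.append((tree.children_right[node], depth + 1))

important = np.flatnonzero(counts > 0)  # features used in the top levels
```

Counting selections rather than thresholding also gives a simple ranking, which is the practical appeal of this criterion over per-node Gini bookkeeping.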
“…In [5], they were applied to manifold learning by deriving the pairwise similarity measures from random forest classifiers for different modalities. Additionally, the most important features for the classification problem could be extracted.…”
Section: Introduction
Mentioning confidence: 99%
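The pairwise similarity measure referenced above follows the general random-forest proximity idea: two samples are similar when many trees route them to the same leaf. Below is a minimal sketch of that idea using scikit-learn's `apply`; it illustrates the technique in general, not the paper's exact multi-modal pipeline, and the data is synthetic.

```python
# Hedged sketch of a random-forest proximity matrix: the similarity of two
# samples is the fraction of trees in which they fall into the same leaf.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=100, n_features=8, random_state=1)
forest = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

leaves = forest.apply(X)  # shape (n_samples, n_trees): leaf index per tree
n_samples, n_trees = leaves.shape
proximity = np.zeros((n_samples, n_samples))
for t in range(n_trees):
    same_leaf = leaves[:, t][:, None] == leaves[:, t][None, :]
    proximity += same_leaf
proximity /= n_trees  # fraction of trees in which each pair co-occurs
```

The resulting matrix is symmetric with ones on the diagonal, so `1 - proximity` can serve directly as a pairwise distance for manifold-learning embeddings of the kind the paper describes.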
“…This is an ensemble classifier consisting of many decision trees, where the final predicted class for a test example is obtained by combining the predictions of all individual trees, as described further in this work (chapter 4.6) (46). This method can provide measures of the similarity between pairs of examples in the dataset and is often applied to high-dimensional data sets (47).…”
Section: Random Forest
Mentioning confidence: 99%
“…An RF algorithm uses random feature selection: at the split of every node, a random subset of the input features (predictive variables) is considered rather than the best variables overall, which reduces the generalization error. Additionally, to increase the diversity of the trees, an RF uses bootstrap aggregation (bagging) to grow each tree from a different training data subset [212].…”
Section: Random Forests
Mentioning confidence: 99%
“…Additionally, to increase the diversity of the trees, an RF uses bootstrap aggregation (bagging) to grow each tree from a different training data subset [212]. The samples left out of the bootstrap sample of each tree [212] are referred to as the 'out-of-bag' (OOB) data of the tree, for which internal test predictions can be made.…”
Section: Random Forests
Mentioning confidence: 99%
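The bagging and out-of-bag mechanism quoted above maps directly onto scikit-learn's `bootstrap` and `oob_score` options. The following is a minimal sketch of that setup, assuming a synthetic dataset; the parameter values are illustrative, not taken from the cited work.

```python
# Minimal sketch of bagging with out-of-bag (OOB) evaluation: each tree is
# grown on a bootstrap sample, and every sample is scored only by the trees
# that never saw it, giving an internal test-set estimate of accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=12, random_state=2)
forest = RandomForestClassifier(
    n_estimators=200,
    bootstrap=True,   # bagging: each tree sees a different bootstrap sample
    oob_score=True,   # score each sample on the trees that did not train on it
    random_state=2,
).fit(X, y)

oob_accuracy = forest.oob_score_  # internal accuracy estimate in [0, 1]
```

Because the OOB estimate comes for free from the bootstrap, it is often used in place of a held-out validation split when data is scarce, as in many neuroimaging cohorts.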