Hierarchical brain tumour segmentation using extremely randomized trees (2018)
DOI: 10.1016/j.patcog.2018.05.006

Cited by 79 publications (41 citation statements) · References 30 publications
“…In Table III we present the results on BRATS 2013 Leaderboard and Challenge sets. Even though these datasets are older and smaller, they allow us to compare with a larger variety of methods, such as CNN-based [6], [37], [38], [39], Random Forest and handcrafted features-based methods [40], [26], and multi-atlas patch-based methods [41]. Comparing our Baseline with the FCN with RR SegSE block, we can observe that the latter achieves an overall better performance in terms of DC in both the Leaderboard and Challenge set.…”
Section: Baseline + RR SegSE, Baseline + RR Var 2, Baseline + RR Var
mentioning
confidence: 99%
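The excerpt above compares methods on BRATS 2013 in terms of the Dice coefficient (DC), the standard overlap measure between a predicted segmentation and the reference annotation. As a point of reference only, here is a minimal sketch of the metric; the function name and toy masks are illustrative and not taken from the cited papers:

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy masks, not BRATS data.
pred = np.array([[0, 1, 1], [0, 1, 0]])
ref  = np.array([[0, 1, 0], [0, 1, 1]])
print(dice_coefficient(pred, ref))  # ~0.67
```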
“…In addition, they design an ensemble learning algorithm for best segmentation accuracy. Pinto, Pereira, Rasteiro, and Silva (2018) described a hierarchical framework for brain tumor segmentation using extremely randomized trees and context features. They used the BRATS 2013 Challenge dataset and achieved notable performance.…”
Section: Related Work
mentioning
confidence: 99%
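For context on this citation, the hierarchical idea is to first separate the whole tumour from background and then refine tumour voxels into sub-regions, with extremely randomized trees as the voxel-wise classifier. Below is a minimal two-stage sketch using scikit-learn's ExtraTreesClassifier; the random feature matrix, label scheme, and exact staging are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

# Illustrative voxel-wise features (e.g. intensity/context features per voxel)
# and labels: 0 = background, 1..3 = tumour sub-regions (assumed label scheme).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 40))    # 5000 voxels, 40 hand-crafted features (toy)
y = rng.integers(0, 4, size=5000)  # toy labels, not real BRATS annotations

# Stage 1: whole tumour vs. background.
stage1 = ExtraTreesClassifier(n_estimators=100, random_state=0)
stage1.fit(X, (y > 0).astype(int))

# Stage 2: sub-region labels, trained only on tumour voxels.
tumour_mask = y > 0
stage2 = ExtraTreesClassifier(n_estimators=100, random_state=0)
stage2.fit(X[tumour_mask], y[tumour_mask])

# Inference: apply stage 1, then refine predicted tumour voxels with stage 2.
pred = np.zeros(len(X), dtype=int)
is_tumour = stage1.predict(X).astype(bool)
if is_tumour.any():
    pred[is_tumour] = stage2.predict(X[is_tumour])
```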
“…It consists of an ensemble of single decision trees, each trained on a data subset resampled randomly from the training data [76]. The ET is an ensemble of a certain number of randomized trees that adds more randomization than the RF [73,77]. Both RF and ET are computationally efficient and capable of handling very high-dimensional features [75,77].…”
Section: Machine Learning Models
mentioning
confidence: 99%
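To make the RF/ET distinction in this excerpt concrete, the sketch below instantiates both ensembles in scikit-learn; the synthetic dataset and hyperparameters are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

# Toy high-dimensional problem standing in for voxel-wise features.
X, y = make_classification(n_samples=2000, n_features=200, n_informative=20,
                           random_state=0)

# RF: each tree sees a bootstrap resample and searches for the best threshold
# among a random subset of features at every node.
rf = RandomForestClassifier(n_estimators=100, bootstrap=True, random_state=0)

# ET: by default each tree sees the full training set, and candidate split
# thresholds are drawn at random per feature, adding further randomization.
et = ExtraTreesClassifier(n_estimators=100, bootstrap=False, random_state=0)

rf.fit(X, y)
et.fit(X, y)
```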
“…The ET is an ensemble of a certain number of randomized trees that adds more randomization than the RF [73,77]. Both RF and ET are computationally efficient and capable of handling very high-dimensional features [75,77]. However, the ET has a shorter training time than the RF because it takes a simpler approach to selecting the split threshold at each node.…”
Section: Machine Learning Models
mentioning
confidence: 99%
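The shorter training time mentioned here follows from that cheaper split selection. A rough way to check the claim on toy data (timings are hardware- and data-dependent, and the dataset below is an assumption):

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

X, y = make_classification(n_samples=5000, n_features=300, n_informative=30,
                           random_state=0)

for name, model in [
    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("Extra Trees", ExtraTreesClassifier(n_estimators=200, random_state=0)),
]:
    start = time.perf_counter()
    model.fit(X, y)
    print(f"{name}: {time.perf_counter() - start:.2f} s")  # ET is typically faster
```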