2011
DOI: 10.1007/978-3-642-18421-5_8
Comparative Validation of Graphical Models for Learning Tumor Segmentations from Noisy Manual Annotations

Cited by 11 publications (9 citation statements)
References 11 publications
“…Optimizing the ensemble prediction by balancing variability reduction (fuse many predictors) and bias removal (fuse a few selected only) can be done on a test set representing the overall population, or for the individual image volume when partial annotation is available—for example from the limited user interaction mentioned above. Statistical methods that estimate and weight the performance of individual contributions—for example, based on appropriate multi-class extensions of STAPLE [69] and related probabilistic models [19], [84]—may also be used to trade bias and variance in an optimal fashion.…”
Section: Discussion (mentioning)
confidence: 99%
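The performance-weighted fusion the statement above refers to can be sketched as a simplified binary STAPLE: an EM loop alternating between a soft consensus estimate and per-annotator sensitivity/specificity estimates. This is an illustrative single-label sketch, not the multi-class extension cited; the function name and the 0.9 initial values are choices of this example.

```python
def staple_binary(votes, n_iter=20):
    """Simplified binary STAPLE: EM-estimate each rater's sensitivity and
    specificity, then fuse noisy annotations into a consensus probability.

    votes: list of per-rater label lists, votes[r][i] in {0, 1}.
    Returns per-item consensus probabilities and per-rater sens/spec lists.
    """
    n_raters = len(votes)
    n_items = len(votes[0])
    # Initialise the soft consensus with the mean vote (majority-style prior).
    w = [sum(votes[r][i] for r in range(n_raters)) / n_raters
         for i in range(n_items)]
    sens = [0.9] * n_raters   # P(vote = 1 | true = 1)
    spec = [0.9] * n_raters   # P(vote = 0 | true = 0)
    prior = sum(w) / n_items  # P(true = 1)
    for _ in range(n_iter):
        # E-step: posterior P(true = 1 | all votes) for each item.
        for i in range(n_items):
            a, b = prior, 1.0 - prior
            for r in range(n_raters):
                if votes[r][i] == 1:
                    a *= sens[r]
                    b *= 1.0 - spec[r]
                else:
                    a *= 1.0 - sens[r]
                    b *= spec[r]
            w[i] = a / (a + b) if a + b > 0 else 0.5
        # M-step: re-estimate rater performance from the soft consensus.
        pos = sum(w)
        neg = n_items - pos
        for r in range(n_raters):
            sens[r] = sum(w[i] for i in range(n_items)
                          if votes[r][i] == 1) / pos
            spec[r] = sum(1 - w[i] for i in range(n_items)
                          if votes[r][i] == 0) / neg
        prior = pos / n_items
    return w, sens, spec
```

The estimated sensitivity/specificity pairs are exactly the per-contributor weights the statement describes: an unreliable annotator ends up with low estimates and correspondingly little influence on the fused result.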
“…Some methods in this category improve standard classifiers such as support vector machines, decision trees, and neural networks by proposing novel training procedures that are more robust to label noise (Khardon and Wachman, 2007; Lin et al., 2004). Alternatively, different forms of probabilistic models have been used to model the label noise and thereby improve various classifiers (Kaster et al., 2010; Kim and Ghahramani, 2006).…”
(mentioning)
confidence: 99%
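As an illustration of the kind of probabilistic label-noise modelling this statement describes, here is a minimal logistic-regression sketch whose likelihood folds in a known symmetric flip rate. The flip-rate parameter and the gradient-ascent settings are assumptions of this example, not details of the cited models.

```python
import math

def fit_noisy_logreg(X, y, flip_rate=0.2, lr=0.5, n_iter=500):
    """Logistic regression whose likelihood models a symmetric label-flip
    rate rho:  P(observed = 1 | x) = (1 - rho)*s + rho*(1 - s),
    where s = sigmoid(w.x + b).  Fitted by batch gradient ascent."""
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    rho = flip_rate
    for _ in range(n_iter):
        gw = [0.0] * d
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            s = 1.0 / (1.0 + math.exp(-z))
            p = (1 - rho) * s + rho * (1 - s)  # P(observed label = 1)
            # d log-lik / dz = (y/p - (1-y)/(1-p)) * dp/dz,
            # with dp/dz = (1 - 2*rho) * s * (1 - s).
            g = ((yi / p) - ((1 - yi) / (1 - p))) * (1 - 2 * rho) * s * (1 - s)
            for j in range(d):
                gw[j] += g * xi[j]
            gb += g
        w = [wj + lr * gwj / len(X) for wj, gwj in zip(w, gw)]
        b += lr * gb / len(X)
    return w, b

def predict_proba(w, b, x):
    """P(clean label = 1 | x) under the fitted model."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

Because the observed-label probability is squashed into [rho, 1 - rho], a single mislabelled point cannot drive the likelihood to minus infinity, which is what makes the fit robust compared with plain logistic regression.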
“…[11,12] detect label noise based on how removing a sample changes the classification of other samples in a leave-one-out framework. Furthermore, probabilistic models of a novel form have been used and also improve classifier performance [13,14].…”
Section: Related Work (mentioning)
confidence: 99%
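The leave-one-out idea in [11,12] can be illustrated with a much-simplified proxy: flag a sample whose label disagrees with the majority of its k nearest neighbours. This is not the cited algorithm, only a compact stand-in for the change-of-classification criterion; the function name and parameters are this example's own.

```python
def flag_suspect_labels(X, y, k=3):
    """Return indices of samples whose label disagrees with the majority
    label of their k nearest neighbours (squared Euclidean distance).
    Such samples are likely annotation errors."""
    flagged = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        # Distances from sample i to every other sample, paired with labels.
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(xi, xj)), yj)
            for j, (xj, yj) in enumerate(zip(X, y)) if j != i
        )
        neighbour_labels = [lab for _, lab in dists[:k]]
        majority = max(set(neighbour_labels), key=neighbour_labels.count)
        if majority != yi:
            flagged.append(i)
    return flagged
```

On two well-separated clusters with one mislabelled point, only that point is flagged; the flagged set could then be relabelled or down-weighted before training, in the spirit of the noise-filtering methods discussed above.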