2015
DOI: 10.1007/978-3-319-24553-9_80

Uncertainty-Driven Forest Predictors for Vertebra Localization and Segmentation

Abstract: Accurate localization, identification and segmentation of vertebrae is an important task in medical as well as biological image analysis. The prevailing approach to solve such a task is to first generate pixel-independent features for each vertebra, e.g. via a random forest predictor, which are then fed into an MRF-based objective to infer the optimal MAP solution of a constellation model. We abandon this static, two-stage approach and mix feature generation with model-based inference in a new, more …

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

1
4
0

Year Published

2016
2016
2017
2017

Publication Types

Select...
4
2
1
1

Relationship

2
6

Authors

Journals

Cited by 11 publications (5 citation statements)
References 16 publications
“…The DF-initialized ConvNet achieved a Dice score of 0.66 after re-training, corresponding to a 10% relative improvement (see Figure 9(d) and Table 1 (ConvNet)). This result matches previous state-of-the-art results on this data set [39], but without the need for time-consuming model-based inference. It is interesting to note that training a deep ConvNet with 8 hidden layers using hyperbolic tangent activation functions, and without batch normalization [21], is typically extremely difficult, but it works well here, likely due to the good initialization of the network.…”
Section: Zebrafish Somite Classification (supporting)
confidence: 90%
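
As a rough illustration of the kind of network described in this excerpt, the sketch below builds a ConvNet with 8 hidden convolutional layers, tanh activations, and no batch normalization. It is a minimal PyTorch sketch with illustrative layer widths and kernel sizes, not the authors' implementation, and it omits the decision-forest-based initialization the excerpt credits for making training feasible.

```python
import torch
import torch.nn as nn

def make_convnet(in_channels=1, hidden_channels=32, n_classes=3, n_hidden=8):
    layers = []
    c = in_channels
    for _ in range(n_hidden):
        # Hidden layers: 3x3 convolutions with tanh activations, no BatchNorm.
        layers += [nn.Conv2d(c, hidden_channels, kernel_size=3, padding=1), nn.Tanh()]
        c = hidden_channels
    # Final 1x1 convolution producing per-pixel class scores.
    layers.append(nn.Conv2d(c, n_classes, kernel_size=1))
    return nn.Sequential(*layers)

net = make_convnet()
scores = net(torch.randn(1, 1, 64, 64))  # -> shape (1, n_classes, 64, 64)
```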
“…A common strategy in stacked classification is to introduce smoothing between the layers of the stack (e.g. [23,26,39]), and it appears that a similar strategy is naturally learned by the deep ConvNet.…”
Section: Kinect Body Part Classification (mentioning)
confidence: 99%
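
The smoothing-between-stages idea mentioned in this excerpt can be sketched as follows: each stage's per-class probability maps are Gaussian-smoothed and appended to the raw features before the next stage is trained. This is a hedged Python sketch with illustrative names and parameters (stacked_predict, n_stages, sigma), not the pipeline of any of the cited papers, and it trains and predicts on the same pixels only for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def stacked_predict(features, labels, n_stages=2, sigma=2.0):
    """features: (H, W, F) per-pixel features; labels: (H, W) integer class map."""
    H, W, F = features.shape
    X = features.reshape(-1, F)
    y = labels.ravel()
    proba = None
    for _ in range(n_stages):
        clf = RandomForestClassifier(n_estimators=50).fit(X, y)
        proba = clf.predict_proba(X).reshape(H, W, -1)
        # Smooth each class probability map before passing it to the next stage.
        smoothed = np.stack(
            [gaussian_filter(proba[..., c], sigma) for c in range(proba.shape[-1])],
            axis=-1)
        # New feature set = raw features + smoothed previous-stage probabilities.
        X = np.concatenate([features, smoothed], axis=-1).reshape(H * W, -1)
    return proba
```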
“…A labeled set of 23 images containing a total of ~2000 nuclei was used to train a stacked Random Forest (RF) classifier, using contextual "offset" features adapted from Refs. 49 and 50, as well as newly developed circularity features. The RF was trained on three labels: background, nuclei contour, and nuclei, therefore producing three respective probability maps.…”
Section: Methods (mentioning)
confidence: 99%
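
The "circularity features" are only named in this excerpt, so the following is a hypothetical sketch of one way such a feature could look: averaging the contour-class probability along a circle of a given radius around each pixel, so that pixels near the centre of a round nucleus respond strongly. The function name and parameters are assumptions made for illustration, not the cited work's definition.

```python
import numpy as np

def circularity_feature(contour_prob, radius, n_samples=16):
    """contour_prob: (H, W) float contour-class probability map.
    Returns an (H, W) map of the mean probability on a circle of `radius`."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    feat = np.zeros_like(contour_prob, dtype=float)
    for a in angles:
        dy = int(round(radius * np.sin(a)))
        dx = int(round(radius * np.cos(a)))
        # Sample the probability map at a point on the circle (wrap-around borders).
        feat += np.roll(np.roll(contour_prob, dy, axis=0), dx, axis=1)
    return feat / n_samples
```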
“…For example, in the Auto-context model, new features are generated by sampling predictions over a contextual grid, and as a result the stacked classifier is able to learn stereotypical class layouts [26]. This approach has been applied to numerous different tasks, including facade segmentation [11] and bio-medical image segmentation [13,18]. Here we adapt this idea to address the challenge of splitting touching nuclei by introducing "circularity features" that leverage prior knowledge of nucleus shape to better identify their boundaries.…”
Section: Related Work (mentioning)
confidence: 99%
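
The Auto-context mechanism described in this excerpt, sampling previous-stage predictions over a contextual grid and appending them as extra features, can be sketched as below. The offsets and function name are illustrative assumptions; the cited papers' exact feature sets may differ.

```python
import numpy as np

def autocontext_features(prob_maps, offsets):
    """prob_maps: (H, W, C) previous-stage class probabilities.
    offsets: list of (dy, dx) displacements forming the contextual grid.
    Returns (H, W, C * len(offsets)) stacked context features."""
    feats = []
    for dy, dx in offsets:
        # Probability of each class at a displaced location (wrap-around borders).
        feats.append(np.roll(np.roll(prob_maps, dy, axis=0), dx, axis=1))
    return np.concatenate(feats, axis=-1)

# Example: a sparse 3x3 contextual grid with 16-pixel spacing.
grid = [(dy, dx) for dy in (-16, 0, 16) for dx in (-16, 0, 16)]
```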