2015
DOI: 10.1007/978-3-319-24553-9_78
Scale-Adaptive Forest Training via an Efficient Feature Sampling Scheme

Abstract: In the context of forest-based segmentation of medical data, modeling the visual appearance around a voxel requires choosing the scale at which contextual information is extracted, a choice of crucial importance for the final segmentation performance. Building on Haar-like visual features, we introduce a simple yet effective modification of the forest training which automatically infers the most informative scale at each stage of the procedure. Instead of the standard uniform sampling during nod…
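As an illustrative sketch of the idea in the abstract (all function names, parameters, and the log-uniform scale distribution are assumptions for illustration, not taken from the paper), a Haar-like contextual feature can be computed as the difference of mean intensities over two boxes offset from a voxel, with the box size and offset range controlled by a scale that is sampled per node rather than fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

def box_mean(volume, center, offset, size):
    """Mean intensity of a cubic box displaced from `center` by `offset`."""
    lo = np.clip(np.asarray(center) + np.asarray(offset) - size // 2, 0, None)
    hi = np.minimum(lo + size, volume.shape)
    sl = tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))
    region = volume[sl]
    return float(region.mean()) if region.size else 0.0

def haar_feature(volume, center, scale):
    """Haar-like feature: difference of two box means whose offsets and
    box size both grow with the sampled `scale`."""
    size = max(1, int(scale))
    offsets = rng.integers(-int(scale), int(scale) + 1, size=(2, 3))
    return (box_mean(volume, center, offsets[0], size)
            - box_mean(volume, center, offsets[1], size))

def sample_scale(lo=2.0, hi=16.0):
    """Per-node scale draw; a log-uniform distribution is one plausible
    alternative to the standard uniform sampling."""
    return float(np.exp(rng.uniform(np.log(lo), np.log(hi))))

volume = rng.random((32, 32, 32))           # toy 3D "medical" volume
feat = haar_feature(volume, center=(16, 16, 16), scale=sample_scale())
```

In a scale-adaptive variant, the distribution that `sample_scale` draws from would be updated at each stage of forest training toward scales that proved informative, instead of staying uniform.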

Cited by 13 publications (5 citation statements). References 15 publications.
“…Depending on the focus of method design, existing studies in microscopy image classification can be categorized into two groups. The first group focuses on feature extraction, in which customized features are designed (Su et al, 2012;Sparks and Madabhushi, 2013;Peter et al, 2015;Xu et al, 2015;Jiang et al, 2015;Barker et al, 2016) or automated feature learning is conducted (Zhou et al, 2014;Otalora et al, 2015;BenTaieb et al, 2015;Wang et al, 2015). The second group focuses on the classifier design while standard and simple feature descriptors are used.…”
Section: Related Work
confidence: 99%
“…To obtain π, we use an AdaBoost classifier [10] based on Haar features [11], which we define and sample more precisely as in [12]. We denote the stumps h_t for t = {1, ..., T}, where T is the number of boosting iterations.…”
Section: Probability Map
confidence: 99%
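The boosted-stump construction described in the excerpt above can be sketched as follows. This is a generic discrete-AdaBoost formulation with axis-aligned decision stumps h_t, shown on plain feature vectors; the exact stump form and weighting used in [10] may differ:

```python
import numpy as np

def train_adaboost_stumps(X, y, T=10):
    """Discrete AdaBoost over T rounds with decision stumps h_t.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # sample weights
    stumps = []
    for _ in range(T):
        best = None
        for j in range(d):                       # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)    # weight of stump h_t
        w *= np.exp(-alpha * y * pred)           # upweight misclassified
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict(stumps, X):
    """Sign of the weighted sum of the T stump outputs."""
    score = sum(a * s * np.where(X[:, j] > t, 1, -1)
                for a, j, t, s in stumps)
    return np.sign(score)

# Tiny separable example
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
model = train_adaboost_stumps(X, y, T=3)
```

In the cited setting, each stump would threshold one sampled Haar feature response, and the weighted vote over the T stumps yields the probability-map score π at each voxel.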
“…With these extracted descriptors, supervised classification models such as support vector machine (SVM) [12, 13, 18, 19, 22–24, 27, 29, 31], subspace learning [10, 11, 14–16, 26], multiple instance learning [17, 25] and sparse representation [21, 32] are applied. However, the classification performance is often largely affected by the small number of training data available for bioimage research.…”
Section: Introduction
confidence: 99%