Proceedings of the British Machine Vision Conference 2016
DOI: 10.5244/c.30.144
Mapping Auto-context Decision Forests to Deep ConvNets for Semantic Segmentation

Cited by 13 publications (11 citation statements)
References 33 publications (68 reference statements)
“…Going further along this path, it may be interesting to revisit existing works on the equivalence between CNN and auto-context random forests [40] in order to find a better trade-off between the ability to automatically extract contextual features and the computational complexity acceptable in an operational map production system.…”
Section: Discussion
confidence: 99%
“…As long as the structure of the trees is preserved, the optimized parameters of the neural network can also be mapped back to the random forest. Subsequently, [25] cast stacked decision forests to convolutional neural networks and found an approximate mapping back. In [9,17] several models of neural networks with separate, conditional data flows are discussed.…”
Section: Related Work
confidence: 99%
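The excerpt above refers to the classic construction in which a hard decision tree is cast as a two-layer network: each split node becomes a first-layer threshold unit, and each leaf becomes a second-layer unit that fires only when every split on its root-to-leaf path went the matching way. As a minimal sketch (the tree, thresholds, and leaf labels here are invented for illustration, not taken from the cited papers):

```python
import numpy as np

# Hypothetical depth-2 tree on x = (x0, x1):
#   if x0 < 0.5: leaf A
#   elif x1 < 0.3: leaf B
#   else: leaf C
def tree_predict(x):
    if x[0] < 0.5:
        return "A"
    return "B" if x[1] < 0.3 else "C"

step = lambda z: (z > 0).astype(float)  # hard threshold activation

# Layer 1: one unit per split node; d_i = 1 iff the split sends x right.
W1 = np.array([[1.0, 0.0],   # split 0 tests x0 - 0.5 > 0
               [0.0, 1.0]])  # split 1 tests x1 - 0.3 > 0
b1 = np.array([-0.5, -0.3])

# Layer 2: one unit per leaf; each row ANDs the path decisions via its bias.
# leaf A: d0 == 0          -> -d0 + 0.5 > 0
# leaf B: d0 == 1, d1 == 0 ->  d0 - d1 - 0.5 > 0
# leaf C: d0 == 1, d1 == 1 ->  d0 + d1 - 1.5 > 0
W2 = np.array([[-1.0,  0.0],
               [ 1.0, -1.0],
               [ 1.0,  1.0]])
b2 = np.array([0.5, -0.5, -1.5])

def net_predict(x):
    d = step(W1 @ x + b1)        # split-node activations
    leaf = step(W2 @ d + b2)     # exactly one leaf unit fires
    return "ABC"[int(np.argmax(leaf))]
```

Because the mapping is exact off the split boundaries, the two predictors agree everywhere except at inputs lying exactly on a threshold; this invertibility (tree structure preserved in the weight pattern) is what allows the optimized network parameters to be mapped back to a forest, as the excerpt notes.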
“…50 Deep learning algorithms can also benefit from accurate segmentation information provided by 7T MRI for improving feature learning. 51 Although our method produces better segmentation results, it has some limitations. (1) The number of training subjects (with both 3 and 7T MR images) is small in our experiments.…”
Section: Discussion
confidence: 98%