2022
DOI: 10.3389/fcomp.2022.805166

From Shallow to Deep: Exploiting Feature-Based Classifiers for Domain Adaptation in Semantic Segmentation

Abstract: The remarkable performance of Convolutional Neural Networks on image segmentation tasks comes at the cost of a large amount of pixelwise annotated images that have to be segmented for training. In contrast, feature-based learning methods, such as the Random Forest, require little training data, but rarely reach the segmentation accuracy of CNNs. This work bridges the two approaches in a transfer learning setting. We show that a CNN can be trained to correct the errors of the Random Forest in the source domain …
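The two-stage idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it uses synthetic per-pixel feature vectors instead of images, and an MLP stands in for the correction CNN; all names and parameters here are illustrative assumptions. A shallow Random Forest is trained on sparse labels, a corrector learns to refine its probability maps in the source domain, and both are then applied in a shifted target domain where only the Random Forest is retrained on sparse target labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)


def make_domain(shift, n=2000):
    # Synthetic stand-in for per-pixel features: two Gaussian classes,
    # optionally shifted to simulate a domain gap.
    X = np.vstack([rng.normal(0.0 + shift, 1.0, (n, 4)),
                   rng.normal(1.5 + shift, 1.0, (n, 4))])
    y = np.concatenate([np.zeros(n, int), np.ones(n, int)])
    return X, y


X_s, y_s = make_domain(shift=0.0)   # source domain
X_t, y_t = make_domain(shift=0.5)   # target domain (covariate shift)

# Stage 1 (source): shallow classifier trained on a small sparse label set.
idx_s = rng.choice(len(X_s), size=200, replace=False)
rf_s = RandomForestClassifier(n_estimators=50, random_state=0)
rf_s.fit(X_s[idx_s], y_s[idx_s])
acc_rf_source = rf_s.score(X_s, y_s)

# Stage 2 (source): a corrector trained to refine stage-1 probability maps.
# The paper uses a CNN; a small MLP is used here to keep the sketch runnable.
probs_s = rf_s.predict_proba(X_s)
corrector = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                          random_state=0)
corrector.fit(np.hstack([X_s, probs_s]), y_s)

# Target domain: only the cheap shallow model is retrained on sparse labels;
# the source-trained corrector is reused as-is.
idx_t = rng.choice(len(X_t), size=200, replace=False)
rf_t = RandomForestClassifier(n_estimators=50, random_state=0)
rf_t.fit(X_t[idx_t], y_t[idx_t])
probs_t = rf_t.predict_proba(X_t)
acc_target_corrected = corrector.score(np.hstack([X_t, probs_t]), y_t)

print(f"source RF acc: {acc_rf_source:.3f}, "
      f"target corrected acc: {acc_target_corrected:.3f}")
```

The design point this sketch mirrors is that the expensive corrector is trained once in the source domain, while only the cheap feature-based classifier needs (sparse) labels in each new target domain.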


Cited by 6 publications (4 citation statements)
References 28 publications
“…Our method works reliably on small volumes for all initial amounts of sparse image annotations and all investigated datasets, suggesting general applicability. The use of pseudo, incomplete, or imprecise labels as training data is also not novel; neither is the idea of iterative bootstrapping to refine a model by acquiring more segmented data for training [23–26]. Others recently demonstrated an end-to-end pipeline that learned to correct swift and incomplete annotations like skeletons and seeds [27].…”
Section: Discussion
confidence: 99%
“…In either case, only a minimal amount of simple Python code is needed to integrate a pipeline of multiple networks with a user-friendly GUI [44]. … is available in the Zoo.…”
Section: A User-friendly Resource For the Whole Community
confidence: 99%
“…5). Here, we show how the pre-trained models of the Zoo can be exploited for domain adaptation through the approach proposed by Matskevych et al. [44]. We use their trained model for mitochondria segmentation in electron microscopy (EM) already available in the BioImage Model Zoo (model id: 10.5281/zenodo.6406756 or hiding-blowfish) and the images from the MitoEM challenge [45].…”
Section: A User-friendly Resource For the Whole Community
confidence: 99%