2019
DOI: 10.48550/arxiv.1911.10371
Preprint

Differentiable Meta-learning Model for Few-shot Semantic Segmentation

Abstract: To address the annotation scarcity issue in some cases of semantic segmentation, there have been a few attempts to develop segmentation models in the few-shot learning paradigm. However, most existing methods focus only on the traditional 1-way segmentation setting (i.e., each image contains a single object). This is far from practical semantic segmentation tasks, where the K-way setting (K > 1) is usually required to perform accurate multi-object segmentation. To deal with this issue, we for…
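
As a quick illustration of the K-way, N-shot setting described in the abstract, the sketch below shows one possible way to organize a segmentation episode as a data structure. The class and field names (Episode, support_images, and so on) are illustrative assumptions, not the authors' code.

from dataclasses import dataclass
import torch

@dataclass
class Episode:
    # One K-way, N-shot episode: support images with binary masks for each of
    # the K classes, plus query images to be segmented into K+1 labels
    # (the K classes and background).
    support_images: torch.Tensor   # (K, N, 3, H, W)
    support_masks: torch.Tensor    # (K, N, H, W), values in {0, 1}
    query_images: torch.Tensor     # (Q, 3, H, W)
    query_labels: torch.Tensor     # (Q, H, W), values in {0, ..., K}, 0 = background

def random_episode(K=2, N=1, Q=1, H=64, W=64):
    # Toy episode with random data, only to make the tensor shapes concrete.
    return Episode(
        support_images=torch.randn(K, N, 3, H, W),
        support_masks=torch.randint(0, 2, (K, N, H, W)).float(),
        query_images=torch.randn(Q, 3, H, W),
        query_labels=torch.randint(0, K + 1, (Q, H, W)),
    )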

Cited by 3 publications (6 citation statements)
References 13 publications

“…Maligo [143] ADVENT [120] Zou [121] Biasetton [144] Romera [145] xMUDA [146] Abdou [147] SqueezeSeg [11] SqueezeSegV2 [54] Shaban [148] CANet [149] Hu [150] Dong [151] Snell [153] PANet [154] Tian [155] Bucher [156] Barnes [124] Zhou [125] Migishima [126] Bruls [127] Saleh [157] Kolesnikov [158] Petrovai [159] Alonso [160] Mackowiak [161] Li [162] Zhang [163] Chen [164] Piewak [165] Varga [166] Luo [167] Maligo [143] VIPER [169] Krahenbuhl [170] SYNTHIA [171] VEIS [172] Fang [176][177]…”
Section: Few-shot Learning Transfer Learning (mentioning)
confidence: 99%
“…Wang et al [154] also made use of learned prototypes to distinguish various semantic classes. Different from previous methods, Tian et al [155] employed an optimization-based method that leverages a linear classifier instead of nonlinear layers for training efficiency. Bucher et al [156] moved forward from few-shot to zero-shot semantic segmentation.…”
Section: Few-shot Learning Transfer Learning (mentioning)
confidence: 99%
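
The prototype idea referenced in the statement above (e.g. PANet [154]) can be summarized by the short sketch below: a class prototype is obtained by masked average pooling over support features, and each query location is labeled by cosine similarity to the prototypes. The shapes, helper names, and the scale factor are assumptions for illustration, not the cited implementations.

import torch
import torch.nn.functional as F

def masked_average_pooling(features, mask):
    # features: (C, h, w) support feature map; mask: (H, W) binary foreground mask.
    # Resize the mask to the feature resolution and average the foreground features.
    mask = F.interpolate(mask[None, None].float(), size=features.shape[-2:],
                         mode="bilinear", align_corners=False)[0, 0]
    denom = mask.sum().clamp(min=1e-6)
    return (features * mask).flatten(1).sum(dim=1) / denom       # (C,)

def prototype_segment(query_feat, prototypes, scale=20.0):
    # query_feat: (C, h, w); prototypes: (K, C).
    # Per-pixel logits are scaled cosine similarities to each class prototype.
    q = F.normalize(query_feat, dim=0)
    p = F.normalize(prototypes, dim=1)
    return scale * torch.einsum("kc,chw->khw", p, q)             # (K, h, w)
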
“…Exploiting network components such as attention modules [39,44] and graph networks [41,40], recent works boost segmentation accuracy [21] and enable FSS with coarse-level supervision [42,22,24]. Exploiting learning-based optimization, [23,25] combine meta-learning with FSS. However, almost all of these methods assume abundant annotated (including weakly annotated) training data to be available, making them difficult to translate to segmentation scenarios in medical imaging.…”
Section: Few-shot Semantic Segmentation (mentioning)
confidence: 99%
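
The learning-based-optimization idea mentioned above ([23,25], and the linear-classifier strategy of Tian et al. [155]) can be sketched as fitting a lightweight linear head on support-pixel features with a few gradient steps and then applying it to the query. The step count, learning rate, and function names below are assumptions for illustration; a fully differentiable meta-learner would additionally backpropagate through this inner loop, which this sketch does not do.

import torch
import torch.nn.functional as F

def fit_linear_head(support_feats, support_labels, num_classes, steps=50, lr=0.1):
    # support_feats: (M, C) pixel features; support_labels: (M,) in [0, num_classes).
    C = support_feats.shape[1]
    weight = torch.zeros(num_classes, C, requires_grad=True)
    bias = torch.zeros(num_classes, requires_grad=True)
    opt = torch.optim.SGD([weight, bias], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(support_feats @ weight.t() + bias, support_labels)
        loss.backward()
        opt.step()
    return weight.detach(), bias.detach()

def predict_query(query_feat, weight, bias):
    # query_feat: (C, h, w) -> per-pixel class logits (num_classes, h, w).
    C, h, w = query_feat.shape
    flat = query_feat.flatten(1).t()           # (h*w, C)
    logits = flat @ weight.t() + bias          # (h*w, num_classes)
    return logits.t().reshape(-1, h, w)
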
“…However, training an existing few-shot semantic segmentation (FSS) model for medical imaging has not had much success in the past, as most FSS methods rely on a large training dataset with many annotated training classes to avoid overfitting [14,15,16,17,18,19,20,21,22,23,24,25]. In order to bypass this unmet need for annotation, we propose to train an FSS model on unlabeled images instead via self-supervised learning, an unsupervised technique that learns generalizable image representations by solving a carefully designed task [26,27,28,29,30,31,32,33].…”
Section: Introduction (mentioning)
confidence: 99%
“…Prior works have explored many ways to solve few-shot segmentation, for example network parameter imprinting [28,30], meta-learning [34], prototype learning [7,36], etc. Recently, many works [25,16,46,44,43,24] tackle the problem by measuring feature similarity between the annotated example and every …”
Section: Introduction (mentioning)
confidence: 99%