2019
DOI: 10.1109/access.2019.2953465

Meta-Seg: A Generalized Meta-Learning Framework for Multi-Class Few-Shot Semantic Segmentation

Abstract: Semantic segmentation performs pixel-wise classification for given images, which can be widely used in autonomous driving, robotics, medical diagnostics, etc. Recent advanced approaches have witnessed rapid progress in semantic segmentation. However, these supervised-learning-based methods rely heavily on large-scale datasets to acquire strong generalization ability, and are therefore subject to several constraints. Firstly, human annotation of pixel-level segmentation masks is laborious and time-consuming…
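As a reference point for the abstract's notion of pixel-wise classification, the following is a minimal sketch of a segmentation head that assigns a class label to every pixel. It assumes PyTorch and illustrative channel/class counts, and is not the Meta-Seg architecture described in the paper.

```python
# Minimal sketch of pixel-wise classification (illustrative, not Meta-Seg):
# a backbone feature map is projected to per-class logits at every spatial
# location, and an argmax over the class dimension labels each pixel.
import torch
import torch.nn as nn

class TinySegHead(nn.Module):
    def __init__(self, in_channels: int = 64, num_classes: int = 21):
        super().__init__()
        # 1x1 convolution = a per-pixel linear classifier over feature channels
        self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (B, C, H, W) -> logits: (B, num_classes, H, W)
        return self.classifier(features)

if __name__ == "__main__":
    head = TinySegHead()
    feats = torch.randn(2, 64, 32, 32)   # stand-in for backbone features
    logits = head(feats)                  # (2, 21, 32, 32)
    pred = logits.argmax(dim=1)           # (2, 32, 32): one class id per pixel
    print(pred.shape)
```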

Cited by 25 publications (7 citation statements); references 32 publications.
“…MAML [36] seeks to learn a model initialization, trained on multiple tasks, that can be adapted to the target task with a few annotated data points. It has been widely used in the field of computer vision, including visual tracking [37], incremental object detection [38], and semantic segmentation [39]. MAML has also been applied to many NLP tasks such as text classification [40], named entity recognition [41], and relation extraction [42].…”
Section: Related Work
confidence: 99%
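To make the cited idea concrete, below is a minimal first-order MAML sketch on a toy regression problem: an inner loop adapts a copy of the shared initialization on a task's few support examples, and an outer loop updates the initialization from the query-set loss. The model, learning rates, and task sampler are illustrative assumptions; this is a generic MAML-style recipe, not the Meta-Seg training procedure.

```python
# First-order MAML sketch (illustrative assumptions throughout).
import copy
import torch
import torch.nn as nn

def adapt(model, support_x, support_y, inner_lr=0.01, steps=1):
    """Clone the meta-initialization and take a few gradient steps on the
    task's small support set (the 'few annotated data points')."""
    task_model = copy.deepcopy(model)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        loss = loss_fn(task_model(support_x), support_y)
        grads = torch.autograd.grad(loss, list(task_model.parameters()))
        with torch.no_grad():
            for p, g in zip(task_model.parameters(), grads):
                p -= inner_lr * g
    return task_model

# Outer loop: query-set gradients of each adapted copy are applied to the
# shared initialization (the first-order approximation of MAML).
meta_model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
meta_opt = torch.optim.Adam(meta_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(100):                              # meta-iterations
    a = torch.rand(1) * 4 - 2                     # toy task: y = a * x
    sx, qx = torch.randn(5, 1), torch.randn(10, 1)
    sy, qy = a * sx, a * qx

    adapted = adapt(meta_model, sx, sy)
    query_loss = loss_fn(adapted(qx), qy)
    grads = torch.autograd.grad(query_loss, list(adapted.parameters()))

    meta_opt.zero_grad()
    for p, g in zip(meta_model.parameters(), grads):
        p.grad = g.clone()
    meta_opt.step()
```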
“…where F(x_p) represents the final prediction result; f_i(x_p) and C_i(x_p) denote the forecasting result of the i-th point forecaster and its weight, respectively. Considering the prediction model in Fig. 1, we can obtain the weight parameters of meta ensemble learning by minimizing the energy function of the training samples [35]. In this paper, we define the energy function as the sum of the mean square error of the training samples and the weight decay term, as shown below:…”
Section: B. Training Process of the Meta Ensemble Learning
confidence: 99%
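As a rough illustration of the weighting scheme quoted above, the sketch below fits ensemble weights by minimizing training mean square error plus an L2 weight-decay term, in closed form. It simplifies the cited setup by assuming constant weights rather than input-dependent C_i(x_p), so it is an assumption-laden stand-in rather than that paper's actual meta ensemble training.

```python
# Ridge-style fit of ensemble weights: min_w ||P w - y||^2 + decay * ||w||^2,
# where column i of P holds forecaster f_i's outputs on the training samples.
# Simplifying assumption: weights are constant, not input-dependent.
import numpy as np

def fit_ensemble_weights(preds: np.ndarray, y: np.ndarray, decay: float = 1e-2) -> np.ndarray:
    """preds: (N, K) outputs of K point forecasters on N training samples."""
    K = preds.shape[1]
    return np.linalg.solve(preds.T @ preds + decay * np.eye(K), preds.T @ y)

def ensemble_predict(preds: np.ndarray, w: np.ndarray) -> np.ndarray:
    """F(x) = sum_i w_i * f_i(x), applied row-wise."""
    return preds @ w

# Toy usage: three noisy forecasters of the same target.
rng = np.random.default_rng(0)
y = rng.normal(size=200)
preds = np.stack([y + rng.normal(scale=s, size=200) for s in (0.1, 0.3, 0.5)], axis=1)
w = fit_ensemble_weights(preds, y)
print(w, np.mean((ensemble_predict(preds, w) - y) ** 2))
```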
“…Although meta-learning has seen a lot of success with few-shot image classification, frequently reaching state-of-the-art results, these techniques have not been used as extensively for segmentation. Some examples include [17], [18]. We use the FSS-1000 [7] dataset for our models.…”
Section: Related Work
confidence: 99%