2019 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2019.8803632

Hallucinating A Cleanly Labeled Augmented Dataset from A Noisy Labeled Dataset Using GAN

Abstract: Noisy labeled learning methods deal with training datasets containing corrupted labels. However, the prediction performance of existing methods on small datasets still leaves room for improvement. With this objective, we present in this paper a GAN-based method to generate a clean augmented training dataset from a small, noisy labeled dataset. The proposed approach combines noisy labeled learning principles with state-of-the-art GAN techniques. We demonstrate the usefulness of the proposed approach through an …
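
This report does not reproduce the paper's actual architecture or losses (the abstract is truncated), so the following is only a minimal PyTorch sketch of the general recipe the abstract describes: train a label-conditioned generator on the noisy-labeled data, then sample (image, label) pairs from it to build an augmented dataset whose labels are consistent by construction. The dimensions, network, and conditioning scheme below are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the paper's actual architecture is not given in this report.
LATENT_DIM, NUM_CLASSES, IMG_DIM = 64, 10, 28 * 28

class Generator(nn.Module):
    """Label-conditioned generator: (noise, requested class) -> synthetic sample."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256),
            nn.ReLU(),
            nn.Linear(256, IMG_DIM),
            nn.Tanh(),
        )

    def forward(self, z, y):
        # Concatenate the latent code with the label embedding.
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def hallucinate_clean_dataset(generator, n_samples):
    """Sample (image, label) pairs from a trained conditional generator.

    Because each sample is generated for the label it was requested with,
    the pairs are consistently labeled by construction, which is the sense
    in which the augmented dataset is "clean".
    """
    z = torch.randn(n_samples, LATENT_DIM)
    y = torch.randint(0, NUM_CLASSES, (n_samples,))
    with torch.no_grad():
        x = generator(z, y)
    return x, y
```

Once such a generator models the clean class-conditional distributions, sampling it this way yields an arbitrarily large cleanly labeled training set; the difficult part, and the paper's contribution, is making the adversarial training robust to the label noise in the first place.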

Cited by 10 publications (13 citation statements). References 14 publications.

“…This method was shown to be highly effective at removing label noise and improving model performance. GANs were used by Chiaroni et al. (2019) to generate a training dataset with clean labels from an initial dataset with noisy labels.…”
Section: Label Cleaning and Pre-processing (mentioning)
confidence: 99%

“…• Weakly supervised learning: these techniques can be trained with a partially labeled dataset [6], possibly including a fraction of corrupted labels [8], [9]. Advantageously, these approaches drastically reduce the need for labeled data.…”
Section: Learning Methods (mentioning)
confidence: 99%

“…Concerning online self-evaluation, some of the presented systems require an analytically obtained baseline reference [19]. However, if we consider that the analytical processes used as ground-truth labeling techniques are likely to generate some noisy labels, it may be worth investigating in future research how to evaluate this prior noise from the learning model's point of view [11], and how to deal with it [9].…”
Section: Limitations and Future Challenges (mentioning)
confidence: 99%

“…By linearity of expectation, p_N would consequently be associated by D with an intermediate label between 0 and 1. A solution may be to extend the proposed framework to this noisy PU learning challenge by drawing on existing asymmetric noisy labeled learning techniques [Chiaroni et al., 2019]. More specifically, by modifying the training loss functions, it is possible to force G to learn the distribution corresponding to this intermediate label predicted by D for p_N.…”
Section: Limitations of the Proposed Approach (mentioning)
confidence: 99%
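
As a rough illustration of the loss modification this statement alludes to (a sketch under assumptions, not the cited framework's actual objective): instead of training G against the usual "real" target of 1, one can train it with a binary cross-entropy loss against the intermediate label that D is expected to assign to the noisy distribution p_N. The target value 0.7 below is an arbitrary stand-in for that intermediate label.

```python
import torch
import torch.nn.functional as F

def generator_soft_target_loss(d_on_fake, target_label):
    """BCE loss pushing D's score on generated samples toward an
    intermediate target in (0, 1) instead of the usual target of 1."""
    targets = torch.full_like(d_on_fake, target_label)
    return F.binary_cross_entropy(d_on_fake, targets)

# Stand-in for D(G(z)); in practice this comes from the discriminator.
d_out = torch.sigmoid(torch.randn(32, 1, requires_grad=True))
# Assumed: D scores samples from p_N around 0.7 on average.
loss = generator_soft_target_loss(d_out, target_label=0.7)
loss.backward()
```

Minimizing this loss drives G toward the distribution that D scores at the intermediate label, which is the behavior the quoted passage suggests exploiting.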