2018 24th International Conference on Pattern Recognition (ICPR)
DOI: 10.1109/icpr.2018.8545524

Transductive Label Augmentation for Improved Deep Network Learning

Abstract: A major impediment to the application of deep learning to real-world problems is the scarcity of labeled data. Small training sets are in fact of little use to deep networks: because of their large number of trainable parameters, they are very likely to overfit. On the other hand, enlarging the training set through further manual or semi-automatic labeling can be costly, if at all possible. Thus, the standard techniques to address this issue are transfer learning and data…

Cited by 21 publications (19 citation statements). References 28 publications.
“…θ ← OPTIMIZE(L_w(X, Y_L, Ŷ_U; θ)) ▷ mini-batch optimization; end for; end procedure. Our main idea therefore is that instead of just encouraging nearby examples to get the same predictions, we encourage all examples to get the same predictions as those we would obtain by transductive learning according to the quadratic cost (8) and its solution Z (6). Computing Z is efficient because it is performed outside our main optimization process, i.e.…”
Section: Discussion
confidence: 99%
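The transductive solution Z referenced in the statement above has a simple closed form for the standard quadratic label-propagation cost. The sketch below assumes the usual symmetrically normalized formulation with damping factor α; the citing paper's exact cost (8) may differ:

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.99):
    """Closed-form minimizer of a quadratic label-propagation cost:
    Z = (I - alpha * S)^(-1) Y, with S the symmetrically normalized
    similarity matrix. A sketch, not the citing paper's exact solver.

    W : (n, n) symmetric similarity matrix, zero diagonal
    Y : (n, c) one-hot rows for labeled points, zero rows for unlabeled
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt                       # normalized affinities
    Z = np.linalg.solve(np.eye(len(W)) - alpha * S, Y)    # one linear solve
    return Z

# Toy example: an unlabeled point sitting between two labeled ones.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
Y = np.array([[1.0, 0.0],    # labeled: class 0
              [0.0, 0.0],    # unlabeled
              [0.0, 1.0]])   # labeled: class 1
Z = propagate_labels(W, Y)
print(Z.argmax(axis=1))      # hard pseudo-labels for every point
```

Because Z comes from one linear solve on a fixed graph, it can indeed sit outside the mini-batch loop that optimizes θ.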
“…Learning by association [17] can be seen as two steps of propagation on a constrained bipartite graph between labeled and unlabeled examples. The Graph Transduction Game (GTG) [9], a form of label propagation, has been used for pseudo-labels [8] as in our work, but in that case the network is pre-trained, the graph remains fixed, and there is no weighting mechanism. We compare to this approach in Section 5.…”
Section: Related Work
confidence: 99%
“…The Graph Transduction Game (GTG) [6] is a semi-supervised learning method that has recently found renewed interest and has been successfully applied in different contexts, e.g. bioinformatics [19] and the label augmentation problem [5]. GTG casts the problem in terms of a non-cooperative multiplayer game, in which the objects (or images of a dataset) are the players while the possible strategies are the class labels.…”
Section: Graph Transduction Game
confidence: 99%
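The player/strategy formulation described above is typically solved with replicator dynamics, which iteratively reinforce the strategies that agree with similar neighbors. The following is a minimal sketch under that assumption; the payoff structure is illustrative, not the authors' implementation:

```python
import numpy as np

def gtg(W, P0, iters=100):
    """Graph Transduction Game via replicator dynamics (sketch).

    Each point i is a player; P[i] is its mixed strategy, i.e. a
    distribution over class labels. Payoffs reward similarity-weighted
    agreement with neighbors, so one-hot (labeled) rows stay fixed.

    W  : (n, n) similarity matrix between players
    P0 : (n, c) initial strategies: one-hot if labeled, uniform otherwise
    """
    P = P0.copy()
    for _ in range(iters):
        Q = W @ P                          # expected payoff per pure strategy
        P = P * Q                          # multiplicative replicator update
        P /= P.sum(axis=1, keepdims=True)  # renormalize to distributions
    return P

# Toy example: the unlabeled middle point is pulled toward class 0,
# its most similar labeled neighbor.
W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
P0 = np.array([[1.0, 0.0],    # labeled: class 0
               [0.5, 0.5],    # unlabeled: uniform strategy
               [0.0, 1.0]])   # labeled: class 1
P = gtg(W, P0)
```

Note that the multiplicative update preserves zeros, so labeled players never change class: the equilibrium reached is consistent with the given labels.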
“…This lack of generalization can easily be overcome by training a supervised classifier on the newly labeled dataset (cf. [3]). In GT the data are modeled as a graph G = (V, E, w) whose vertices are the observations in a dataset and whose edges represent similarities among them.…”
Section: Graph Transduction Games
confidence: 99%
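A minimal way to construct the graph G = (V, E, w) described above is a Gaussian similarity kernel on pairwise distances; the kernel and its bandwidth `sigma` are illustrative choices, not necessarily those used in the paper:

```python
import numpy as np

def similarity_graph(X, sigma=1.0):
    """Build a weighted similarity graph over the rows of X.

    Vertices are the observations; edge weight w(i, j) is the Gaussian
    similarity exp(-||x_i - x_j||^2 / (2 sigma^2)). Self-loops removed.
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # squared distances
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)     # no self-loops: w(i, i) = 0
    return W

# Nearby observations get heavy edges; distant ones get near-zero edges.
X = np.array([[0.0, 0.0],
              [0.1, 0.0],
              [5.0, 5.0]])
W = similarity_graph(X)
```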