2016 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2016.7532690

Mutual exclusivity loss for semi-supervised deep learning

Abstract: In this paper we consider the problem of semi-supervised learning with deep Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated by the observation that unlabeled data is cheap and can be used to improve the accuracy of classifiers. We propose an unsupervised regularization term that explicitly forces the classifier's predictions for multiple classes to be mutually exclusive and effectively guides the decision boundary to lie in the low-density space between the manifolds…
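To make the idea concrete, here is a minimal sketch of a mutual-exclusivity style regularizer for unlabeled samples. It assumes per-class probabilities (e.g. sigmoid outputs) and a product-form penalty that is largest when exactly one class probability is near 1; this is an illustrative PyTorch snippet, not the authors' released code, and the function name `mutual_exclusivity_loss` is our own.

```python
import torch

def mutual_exclusivity_loss(probs: torch.Tensor) -> torch.Tensor:
    """Mutual-exclusivity style regularizer for unlabeled samples.

    probs: (batch, num_classes) per-class probabilities, e.g. sigmoid outputs.
    Minimizes the negative of sum_k p_k * prod_{j != k} (1 - p_j), which is
    maximized when exactly one class probability is near 1 and the rest near 0.
    """
    eps = 1e-8
    log_q = torch.log(1.0 - probs + eps)              # log(1 - p_j), shape (B, K)
    log_prod_all = log_q.sum(dim=1, keepdim=True)     # sum_j log(1 - p_j), shape (B, 1)
    prod_except_k = torch.exp(log_prod_all - log_q)   # prod_{j != k} (1 - p_j), shape (B, K)
    per_sample = (probs * prod_except_k).sum(dim=1)   # shape (B,)
    return -per_sample.mean()
```

In a semi-supervised setup, a term of this kind would typically be added, with a weighting coefficient, to the usual supervised cross-entropy loss computed on the labeled portion of each mini-batch.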

Cited by 149 publications (180 citation statements)
References 19 publications
“…It is reported to be an essential technique required by DLAs to achieve good performance. [34] However, we did not apply data augmentation because it has a high potential to distort shape, margin, echogenicity, and calcification, which are essential for differentiating benignity and malignancy.…”
Section: Discussion (mentioning)
confidence: 99%
“…coming from two similar images, or made by two networks with related parameters, are encouraged to have similar network outputs. Sajjadi et al [34] is the first, to our knowledge, to use a consistency loss between the outputs of a network on random perturbations of the same image. Laine and Aila [23] rather apply consistency between the output of the current network and the temporal average of outputs during training.…”
Section: Related Work (mentioning)
confidence: 99%
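As a rough illustration of the consistency loss described in the passage above, the following PyTorch sketch penalizes disagreement between predictions on two random perturbations of the same unlabeled images. The names `model` and `augment` are placeholders, and the squared-difference form is one common choice rather than necessarily the exact loss used in [34] or [23].

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, augment):
    """Penalize disagreement between the network's predictions on two
    random perturbations of the same unlabeled batch (squared difference
    of softmax outputs)."""
    p1 = F.softmax(model(augment(x_unlabeled)), dim=1)
    p2 = F.softmax(model(augment(x_unlabeled)), dim=1)
    return ((p1 - p2) ** 2).sum(dim=1).mean()
```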
“…For example, Sajjadi et al [24] assume that every training image, labeled or not, belongs to a single category, a natural requirement on the classifier is to make a confident prediction on the training set by minimizing the entropy of the network output. Besides, [61], [62], [27], [63] focus on consistency loss, where two related cases, e.g., coming from two similar images, or made by two networks with related parameters, are encouraged to have similar network outputs.…”
Section: Deep Semi-supervised Learning (mentioning)
confidence: 99%
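The entropy-minimization idea mentioned in this passage can be sketched similarly. This is a hypothetical snippet under our own naming, with `logits` standing for the network's output on unlabeled images.

```python
import torch
import torch.nn.functional as F

def entropy_minimization_loss(logits: torch.Tensor) -> torch.Tensor:
    """Entropy of the softmax output, averaged over the batch; minimizing it
    pushes the classifier toward confident predictions on unlabeled data."""
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()
```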