2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2018.00366

Robust Classification with Convolutional Prototype Learning

Abstract: Convolutional neural networks (CNNs) have been widely used for image classification. Despite their high accuracy, CNNs have been shown to be easily fooled by adversarial examples, indicating that they are not robust enough for pattern classification. In this paper, we argue that this lack of robustness is caused by the softmax layer, which is a purely discriminative model built on a closed-world assumption (i.e., a fixed number of categories). To improve the robustness, we propose a nove…
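The abstract's core idea, replacing the closed-world softmax layer with distance-based matching against learned class prototypes, can be sketched as follows. This is a minimal illustration, not the paper's implementation: one prototype per class, squared Euclidean distance, and an illustrative temperature `gamma`.

```python
import numpy as np

def distance_probs(features, prototypes, gamma=1.0):
    """Softmax over negative squared distances to each class prototype.

    features:   (N, D) array of feature vectors
    prototypes: (K, D) array, one prototype per class
    returns:    (N, K) array of class probabilities
    """
    # Squared Euclidean distance from every feature to every prototype
    d = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    logits = -gamma * d
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

# Toy usage: two class prototypes in 2-D, one query feature near class 0
protos = np.array([[0.0, 0.0], [5.0, 5.0]])
feats = np.array([[0.2, 0.1]])
probs = distance_probs(feats, protos)
print(probs)  # probability mass concentrates on class 0
```

Because probabilities decay with distance to every prototype, a sample far from all prototypes gets low confidence everywhere, which is the property the paper exploits for robustness and open-world rejection.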

Cited by 292 publications (210 citation statements). References 31 publications (41 reference statements).
“…Results (method, remark, Acc. (%)): Soft-max, –, 99.28; Center loss [27], λ = 0.1, 99.62; Ring loss [23], λ = 0.1, 99.58; LGM loss [28], α = 1, 99.36; GCPL loss [37], λ = 0.1, 99.41; SL [41], η = 0.0, 99.32; GO loss, λ = 0.1, 99.66 ± 0.03. The dataset contains 10 categories of fashion products and is divided into 60,000 training samples and 10,000 testing samples. We adopt the same network and training parameters as MNIST.…”
Section: Methods
confidence: 99%
“…Regularizing the extracted features or adding regularization terms makes features of the same class compact and features of different classes well separated. Based on this idea, several loss functions for classification have been studied from the perspective of redesigning clusters [35,36], such as the GCPL loss [37] and the structure-aware loss [38].…”
Section: Related Work
confidence: 99%
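The compactness regularizers this excerpt refers to (center loss being the simplest) can be sketched as a penalty pulling each feature toward its class center. A hedged illustration: the centers here are fixed toy values, whereas in training they are learned jointly with the network.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Mean half squared distance between features and their class centers.

    features: (N, D) feature vectors
    labels:   (N,) integer class labels
    centers:  (K, D) one center per class
    """
    diffs = features - centers[labels]            # (N, D) feature-to-center offsets
    return 0.5 * (diffs ** 2).sum(axis=1).mean()

# Toy usage: two features that already sit exactly on their class centers
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = center_loss(feats, labels, centers)
print(loss)  # features coincide with their centers -> 0.0
```

Minimizing this term alongside the classification loss shrinks intra-class spread without directly touching inter-class geometry, which is why it is typically weighted by a small λ as in the table above.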
“…where S is the batch size, f_i is the feature of the i-th sample in the batch, and label_i is the ground truth. Second, the prototype loss, first proposed in [16], is modified to train the prototypes of the known categories in our framework. Third, we propose the prototype radius loss, which guides the model to learn the radius scope of each known category.…”
Section: Overview
confidence: 99%
“…Most recently, prototype learning was introduced to improve the robustness of CNNs. Yang et al. proposed CPL to improve robustness by using prototypes, and proposed the prototype loss (PL) to improve the intra-class compactness and inter-class separation of the feature representation [16]. Yang et al. also introduced a method to handle the open-set recognition problem by using prototypes in their paper.…”
Section: Introduction
confidence: 99%
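The prototype loss (PL) mentioned in this excerpt can be sketched as pulling each feature toward the nearest prototype of its own class. This is an illustrative sketch, not the paper's code; CPL allows several prototypes per class, so `prototypes` is shaped (K classes, M prototypes, D dims) and the loss takes the minimum over a sample's own-class prototypes.

```python
import numpy as np

def prototype_loss(features, labels, prototypes):
    """Mean squared distance to the nearest own-class prototype.

    features:   (N, D) feature vectors
    labels:     (N,) integer class labels
    prototypes: (K, M, D) M prototypes for each of K classes
    """
    own = prototypes[labels]                                 # (N, M, D) own-class prototypes
    d = ((features[:, None, :] - own) ** 2).sum(axis=-1)     # (N, M) distances
    return d.min(axis=1).mean()                              # pull toward the nearest one

# Toy usage: class 0 has two prototypes; the feature sits on the first
feats = np.array([[1.0, 1.0]])
labels = np.array([0])
protos = np.array([[[1.0, 1.0], [4.0, 4.0]]])
loss = prototype_loss(feats, labels, protos)
print(loss)  # feature coincides with a class-0 prototype -> 0.0
```

Combined with a distance-based classification loss, this term tightens each class's clusters around its prototypes, which is the intra-class compactness effect the citing authors describe.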