2018 IEEE 17th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)
DOI: 10.1109/icci-cc.2018.8482059
Style Memory: Making a Classifier Network Generative

Abstract: Deep networks have shown great performance in classification tasks. However, the parameters learned by the classifier networks usually discard stylistic information of the input, in favour of information strictly relevant to classification. We introduce a network that has the capacity to do both classification and reconstruction by adding a "style memory" to the output layer of the network. We also show how to train such a neural network as a deep multi-layer autoencoder, jointly minimizing both classification…
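The architecture described in the abstract — a classifier whose output layer is augmented with a "style memory", trained as an autoencoder with a joint classification-plus-reconstruction objective — can be sketched roughly as follows. This is a minimal NumPy forward pass under stated assumptions, not the authors' implementation: all layer sizes, weight shapes, activations, and the loss weighting `alpha` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
D_IN, D_HID, N_CLASSES, D_STYLE = 784, 128, 10, 16

# Encoder weights: input -> hidden -> (class logits + style memory)
W1 = rng.normal(0.0, 0.05, (D_IN, D_HID))
W2 = rng.normal(0.0, 0.05, (D_HID, N_CLASSES + D_STYLE))
# Decoder weights: (class distribution + style memory) -> hidden -> input
W3 = rng.normal(0.0, 0.05, (N_CLASSES + D_STYLE, D_HID))
W4 = rng.normal(0.0, 0.05, (D_HID, D_IN))

def softmax(z):
    # Numerically stable softmax over the last axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(x):
    """Return class logits and the reconstruction of x."""
    h = np.tanh(x @ W1)
    top = h @ W2
    logits, style = top[:, :N_CLASSES], top[:, N_CLASSES:]
    # Output layer = class distribution concatenated with style memory;
    # the decoder reconstructs the input from this combined code.
    y = np.concatenate([softmax(logits), np.tanh(style)], axis=1)
    x_hat = np.tanh(y @ W3) @ W4
    return logits, x_hat

def joint_loss(x, labels, alpha=0.5):
    """Jointly weight cross-entropy (classification) and MSE (reconstruction)."""
    logits, x_hat = forward(x)
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    mse = ((x - x_hat) ** 2).mean()
    return alpha * ce + (1.0 - alpha) * mse

x = rng.normal(0.0, 1.0, (4, D_IN))
labels = np.array([0, 1, 2, 3])
loss = joint_loss(x, labels)
```

Because the style memory sits alongside the class distribution in the output layer, the same code that drives classification also carries enough input-specific information for the decoder to reconstruct the input, which is what makes the classifier generative.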

Cited by 4 publications (3 citation statements)
References 13 publications
“…First, we describe the results we obtained with full training sets and compare with the state-of-the-art. On EMNIST-letters, we significantly outperform the state-of-the-art Wiyatno et al [14] by 4.09%. An average accuracy of 90.46% was achieved by our system for the EMNIST-balanced dataset, which outperforms the state-of-the-art Dufourq et al [13] by 2.16%.…” (Table fragment interleaved in the excerpt: [22] 89.7%; Bhatnagar et al [25] 92.54%; Zhong et al [26] 96.35%; TextCaps 93.71 ± 0.64% / 85.36 ± 0.79%)
Section: Handwritten Character Classification (mentioning)
confidence: 85%
“…Cohen et al [12]: 85.15% with full train set, – with 200 samp/class; Wiyatno et al [14]: 91.27% with full train set, – with 200 samp/class; TextCaps: 95.36 ± 0.30% with full train set, 92.79 ± 0.30% with 200 samp/class…”
Section: EMNIST-Letters Implementation (mentioning)
confidence: 99%