Regular Polytope Networks
2022
DOI: 10.1109/tnnls.2021.3056762

Cited by 11 publications (4 citation statements). References 49 publications.

“…For the continual re-identification problem, [63] shows that performance does not decrease while learning multiple datasets. This evidence also opens the door to recent methodologies for learning so-called compatible features [67], based on fixed classifiers [58][59][60], in which re-indexing of the gallery is no longer necessary when updating the feature representation.…”
Section: Fine Grained Visual Categorization (mentioning)
confidence: 88%
“…For the continual re-identification problem, [63] shows that performance does not decrease while learning from multiple datasets. This evidence also opens the door to recent methodologies for learning so-called compatible features [67], based on the concept of fixed classifiers [58][59][60], in which re-indexing of the gallery is no longer necessary when upgrading the feature representation.…”
Section: Fine Grained Visual Categorization (mentioning)
confidence: 91%
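The two statements above describe the mechanism only in passing: because every model version is trained against the same frozen class prototypes, gallery embeddings extracted by an old model remain directly comparable with query embeddings from an updated model. The following is a minimal illustrative sketch of that idea in PyTorch; the prototype construction, dimensions, and function names are assumptions made here for illustration, not details taken from the cited works.

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes, for illustration only.
num_classes, dim = 10, 64

# One fixed, non-trainable set of class prototypes, shared by every
# model version; both old and new feature extractors are trained
# against these (e.g., with a softmax loss over cosine logits).
torch.manual_seed(0)
prototypes = F.normalize(torch.randn(num_classes, dim), dim=1)

def logits(features: torch.Tensor) -> torch.Tensor:
    """Cosine logits against the shared fixed prototypes."""
    return F.normalize(features, dim=1) @ prototypes.t()

# Gallery features were extracted once with the old model; query
# features come from the updated model. Because both were aligned to
# the same fixed prototypes, they can be matched directly -- the
# gallery never needs to be re-indexed.
gallery_old = F.normalize(torch.randn(5, dim), dim=1)
query_new = F.normalize(torch.randn(3, dim), dim=1)
similarity = query_new @ gallery_old.t()   # shape (3, 5)
```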
“…Recently, it has been argued that the weights of the output layer can be fixed as non-trainable with no loss in accuracy [31], [32]. The main intuition is that, during training, input features and weight vectors align simultaneously; with the output-layer weights fixed, the network can still learn by adapting its input features.…”
Section: Neural Network Architecture (mentioning)
confidence: 99%
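As a concrete illustration of that intuition, here is a minimal PyTorch sketch in which the output layer's weights are frozen at initialization so that only the feature extractor trains. The architecture and sizes are illustrative assumptions, not the networks used in [31], [32].

```python
import torch
import torch.nn as nn

class FixedClassifierNet(nn.Module):
    """Feature extractor followed by a frozen (non-trainable) classifier."""

    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden, num_classes, bias=False)
        # Freeze the output layer: its weight directions never change,
        # so training can only align features with them.
        self.classifier.weight.requires_grad_(False)

    def forward(self, x):
        return self.classifier(self.features(x))

net = FixedClassifierNet()
# Only the trainable parameters (the feature extractor) go to the optimizer.
opt = torch.optim.SGD(
    [p for p in net.parameters() if p.requires_grad], lr=0.1
)
```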
“…Therefore, we fix the weight matrix $W_L$ of the output layer to be the Hadamard approximation of an orthonormal projection matrix $P \in \mathbf{R}^{n_{L-1} \times K}$, where $n_{L-1}$ is the size of the last hidden layer. The output $y$ of the network is given by [32]:…”
Section: Neural Network Architecture (mentioning)
confidence: 99%