2021
DOI: 10.48550/arxiv.2109.00893
Preprint

A Survey on Open Set Recognition

Atefeh Mahdavi,
Marco Carvalho

Abstract: Open Set Recognition (OSR) is about dealing with unknown situations that were not learned by the models during training. In this paper, we provide a survey of existing works on OSR and distinguish their respective advantages and disadvantages to help new researchers interested in the subject. A categorization of OSR models is provided, along with an extensive summary of recent progress. Additionally, the relationships between OSR and its related tasks, including multi-class classification and novelty detection, a…
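The core idea the abstract describes, rejecting inputs the model never learned instead of forcing them into a known class, can be illustrated with a minimal sketch. This is not a method from the survey; it is a generic softmax-confidence threshold, and the function names and the threshold value are chosen here purely for illustration:

```python
import math

def softmax(logits):
    """Convert raw classifier logits to a probability distribution."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def open_set_predict(logits, threshold=0.9):
    """Return the predicted class index, or None ("unknown") when the
    top softmax probability falls below the rejection threshold.
    The 0.9 threshold is an illustrative choice, not a recommendation."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best if probs[best] >= threshold else None

# A confident prediction is accepted; a near-flat distribution is rejected.
print(open_set_predict([8.0, 0.5, 0.2]))   # -> 0
print(open_set_predict([1.0, 1.1, 0.9]))   # -> None
```

Threshold-based rejection is only the simplest baseline; the surveyed OSR methods go well beyond it, since softmax scores alone are known to be overconfident on unseen classes.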


Cited by 4 publications (3 citation statements)
References 108 publications
“…Further downstream tasks and user/analyst acceptance (trust) are therefore improved when receiving well-calibrated posterior confidence scores. We further desire not to give highly confident predictions at test/inference time to "unwanted" examples, such as out-of-domain/out-of-class or adversarial examples, and thus extensions with Open Set Recognition [12,17] and adversarial detection techniques [2] are also of importance to include in real-world scenarios.…”
Section: Believability
Mentioning confidence: 99%
“…Elucidating the interplay of dataset sizes, model capacities, and distributional shifts, especially relaxation of sample preparation protocols (e.g., using knife cuts instead of progressive sanding, or sanding to a coarser grit [thus involving fewer steps and less time]) and the closed-world assumption (Scheirer et al. 2013), is likely to be an important challenge in the realization of general field-deployable CVWID systems. We expect the exploration of these ideas (e.g., Mahdavi and Carvalho 2021; Vaze et al. 2021; Yang et al. 2021) to be a fertile area for future work.…”
Section: Deployment Gap of Cross-Validation and Field Testing
Mentioning confidence: 99%
“…In the following years, other works considered one-class classifiers for the detection of new classes [25,26]. Although these methods can in general be used for OS classification, the different problem setting results in comparatively low detection performance [27,28]. To the best of the authors' knowledge, the literature provides no work applying classifiers inherently designed for the OS problem to event classification in active DNs.…”
Section: Thematically Related Work
Mentioning confidence: 99%
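A one-class classifier of the kind this statement describes is fit on known-class samples alone and flags everything that falls outside the learned region as new. As a hedged illustration only, and not any of the cited methods, the following sketch uses a centroid-plus-radius rule as a drastically simplified stand-in for a one-class classifier; the class name and the quantile parameter are invented here:

```python
import math

class CentroidNoveltyDetector:
    """Toy stand-in for a one-class classifier: learn a centroid from
    known-class points and reject anything farther than a radius set
    from a quantile of the training distances."""

    def fit(self, points, quantile=0.95):
        dim = len(points[0])
        # Centroid of the known-class training points.
        self.centroid = [sum(p[i] for p in points) / len(points)
                         for i in range(dim)]
        # Radius covering `quantile` of the training points.
        dists = sorted(self._dist(p) for p in points)
        self.radius = dists[min(int(quantile * len(dists)), len(dists) - 1)]
        return self

    def _dist(self, p):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, self.centroid)))

    def is_known(self, p):
        """True if p lies within the learned region of the known class."""
        return self._dist(p) <= self.radius

train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
det = CentroidNoveltyDetector().fit(train)
print(det.is_known((0.5, 0.5)))    # near the training data -> True
print(det.is_known((10.0, 10.0)))  # far away -> False
```

Real one-class methods (e.g., one-class SVMs) learn far more flexible decision boundaries; the citing authors' point is that even these remain weaker than classifiers designed for the open-set problem itself.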