2019
DOI: 10.1609/aaai.v33i01.33019801

Learning and the Unknown: Surveying Steps toward Open World Recognition
Abstract: As science attempts to close the gap between man and machine by building systems capable of learning, we must embrace the importance of the unknown. The ability to differentiate between known and unknown can be considered a critical element of any intelligent self-learning system. The ability to reject uncertain inputs has a very long history in machine learning, as does including a background or garbage class to account for inputs that are not of interest. This paper explains why neither of these is genuinely…

Cited by 103 publications (82 citation statements)
References 64 publications (52 reference statements)
“…The traditional approach maps logits to class probabilities with SoftMax, which fits end-to-end deep networks and keeps inference efficient. However, because SoftMax is inherently closed-set [16], it must assign a classification score to every input, and that score may be high even for an unknown category [11]. Recently, many methods [15], [16], [31] based on Extreme Value Theory have filled this gap for open-set deep networks.…”
Section: Related Work
confidence: 99%
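The closed-set behavior described above is easy to demonstrate: SoftMax normalizes logits into a distribution that must sum to one, so even an input unlike any training class can receive a high maximum probability. The sketch below uses hypothetical logits, not output from any of the cited models.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Hypothetical logits for an input the network has never seen:
# no class truly matches, yet one logit happens to dominate.
unknown_logits = np.array([0.5, 0.2, 6.0, 0.1, 0.3])
probs = softmax(unknown_logits)

# SoftMax still reports high confidence for class 2, so a plain
# confidence threshold cannot reliably flag the input as unknown.
print(probs.argmax(), probs.max())  # class 2, probability ~0.99
```

This is the failure mode that EVT-based open-set methods address by modeling the tail of activation or distance distributions instead of trusting the normalized score alone.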
“…However, the SoftMax layer is translation invariant, which, as our analysis shows, confuses known and unknown categories. For example, CAV 1 is [1, -1, -4, 5, 2] and CAV 2 is [11, 9, 6, 15, 12]. They have the same SoftMax scores and the same maximum probability of 0.9338.…”
Section: B. Class Activation Mapping Value
confidence: 99%
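The translation invariance claimed in the quote can be checked directly: adding a constant to every logit leaves the SoftMax output unchanged, so the magnitude information that might separate knowns from unknowns is discarded. This sketch uses the two vectors from the quote (CAV 2 is CAV 1 shifted by 10).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

cav1 = np.array([1.0, -1.0, -4.0, 5.0, 2.0])
cav2 = cav1 + 10.0  # [11, 9, 6, 15, 12], the second vector in the quote

p1, p2 = softmax(cav1), softmax(cav2)
print(np.allclose(p1, p2))  # True: a constant shift leaves SoftMax unchanged
print(round(float(p1.max()), 2))  # maximum probability ~0.93 for both
```

Because the two activation vectors differ greatly in magnitude but produce identical SoftMax distributions, any unknown-detection signal carried by activation magnitude is lost after normalization.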
“…Access to these uncertainties promises to separate, through output variability, what a model is truly confident about. However, misclassification is not prevented, and in a Bayesian approach uncertain inputs are not necessarily unknown; conversely, unknowns do not necessarily appear uncertain [3]. This has recently been observed at large empirical scale [19], and figure 1 illustrates this challenge.…”
Section: Introduction
confidence: 99%
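The point that unknowns need not look uncertain can be illustrated with a common Bayesian uncertainty measure: the entropy of the mean predictive distribution over Monte Carlo samples (e.g. from MC dropout). The sketch below uses simulated, hypothetical per-pass probabilities rather than a real model: every stochastic pass agrees on the unknown input, so its predictive entropy is as low as that of a genuinely known input, and an entropy threshold would not flag it.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the mean predictive distribution.

    mc_probs: (T, K) array of class probabilities from T stochastic
    forward passes (e.g. MC dropout samples).
    """
    mean = mc_probs.mean(axis=0)
    return float(-np.sum(mean * np.log(mean + 1e-12)))

# Known input: 20 simulated passes agree on class 0 -> low entropy.
known = np.tile([0.97, 0.01, 0.01, 0.01], (20, 1))

# Unknown input on which every pass still (wrongly) agrees on class 0:
# its Bayesian uncertainty is just as low, so it is not detected.
unknown = np.tile([0.95, 0.02, 0.02, 0.01], (20, 1))

print(predictive_entropy(known), predictive_entropy(unknown))
```

Both entropies are far below the maximum of ln(4) ≈ 1.39 for four classes, which is the gap between "uncertain" and "unknown" that the quoted passage highlights.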