2017
DOI: 10.1109/tnnls.2016.2514401
A Maximum Entropy Framework for Semisupervised and Active Learning With Unknown and Label-Scarce Classes

Cited by 27 publications (18 citation statements)
References 22 publications
“…2. We develop a novel unsupervised AD that goes well beyond [16], i) modeling the joint density of a deep layer using highly suitable null hypothesis density models (matched in particular to non-negative support for ReLU layers); ii) exploiting multiple DNN layers; iii) leveraging a "source" and "destination" class concept, source class uncertainty, the class confusion matrix, comprehensive low-order density modelling [38], and DNN weight information in constructing a novel decision statistic grounded in the Kullback-Leibler divergence.…”
Section: Contributions of This Work
confidence: 99%
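
As a concrete illustration of the excerpt above, here is a minimal sketch of a KL-divergence decision statistic over ReLU-layer activations. The per-unit exponential null model (a simple density with non-negative support) and all function names are assumptions for illustration, not the cited paper's exact construction.

```python
# Sketch: KL-divergence decision statistic on ReLU-layer activations.
# The exponential null model is an illustrative choice of a density
# with non-negative support, not the cited paper's exact model.
import numpy as np

def fit_null_scales(clean_activations):
    """MLE scale of a per-unit exponential null model (support >= 0)."""
    return clean_activations.mean(axis=0) + 1e-8

def kl_statistic(test_activations, null_scales):
    """Sum over units of KL(Exp(test scale) || Exp(null scale)).

    Closed form for exponentials with scales s1 (test) and s2 (null):
    KL = log(s2 / s1) + s1 / s2 - 1.
    """
    s1 = test_activations.mean(axis=0) + 1e-8
    s2 = null_scales
    return float(np.sum(np.log(s2 / s1) + s1 / s2 - 1.0))
```

A batch would then be flagged as anomalous when the statistic exceeds a threshold calibrated on held-out clean activations.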
“…Mixed strategies: Choose the sample by MEU (or random sampling) with probability p; otherwise by uncertainty sampling with probability 1 − p. Note that a non-zero proportion for uncertainty sampling is warranted because uncertainty sampling is a very good mechanism for discovering unknown classes that may be latently present in T_u [18]. At the same time, using uncertainty sampling plays into the hands of the attacker.…”
Section: B. Sample Selection Criteria for Active Learning
confidence: 99%
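
The mixed strategy quoted above translates directly into a selection rule. The sketch below assumes entropy as the uncertainty measure and takes precomputed MEU scores as input; both are stand-ins for the paper's exact criteria.

```python
# Sketch of the mixed strategy: MEU with probability p, otherwise
# uncertainty (entropy) sampling. MEU scores are assumed precomputed.
import numpy as np

def select_query(posteriors, meu_scores, p, rng=None):
    """posteriors: (n, k) class probabilities on the unlabeled pool T_u."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < p:
        return int(np.argmax(meu_scores))                  # MEU pick
    entropy = -np.sum(posteriors * np.log(posteriors + 1e-12), axis=1)
    return int(np.argmax(entropy))                         # uncertainty pick
```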
“…Huge false positive rates ensued when as little as 5% of training data consisted of these contrived emails. [12] considered active learning (AL), a promising framework for security applications, as the classifier can adapt to track evolving threats and also because oracle labeling may discover novel classes [18], [19] that may be zero-day threats. [12] demonstrated, using SVMs, that if an adversary "salts" the unlabeled data batch in a biased fashion near the current decision boundary (where AL seeks to choose samples for labeling), one can induce classifier degradation: each adversarial sample was chosen such that, if labeled, it would decrease accuracy the most.…”
Section: Introduction
confidence: 99%
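
The boundary-salting mechanism described in this excerpt is easy to reproduce in a toy setting: margin-based AL ranks pool points by |f(x)|, so points an adversary projects onto the current boundary are selected first. The data, model, and projection step below are illustrative, not the cited experimental setup.

```python
# Toy reproduction of boundary "salting": adversarial points planted on
# the SVM decision boundary dominate a margin-based AL query batch.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y_lab = np.array([1] * 50 + [0] * 50)
svm = LinearSVC(dual=False).fit(X_lab, y_lab)
w, b = svm.coef_[0], svm.intercept_[0]

X_pool = rng.normal(0, 3, (200, 2))                # honest unlabeled batch
X_salt = rng.normal(0, 3, (20, 2))
X_salt -= np.outer((X_salt @ w + b) / (w @ w), w)  # project onto f(x) = 0
X_mix = np.vstack([X_pool, X_salt])

picked = np.argsort(np.abs(svm.decision_function(X_mix)))[:20]
print("fraction of picks that are adversarial:", np.mean(picked >= 200))
```

Because the salted points sit exactly on the boundary, nearly the entire query batch is adversarial, matching the degradation mechanism the excerpt describes.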
“…The greatest challenge for AL methods is identifying the most informative samples so that the maximum prediction accuracy can be achieved. A number of sample-selection criteria have been applied to this task, including (1) query-by-committee (QBC), in which several distinct classifiers are used and the selected samples are those with the largest difference between the labels predicted by different classifiers [9][10][11]; (2) margin uncertainty sampling, wherein the samples are selected according to the maximum uncertainty based on their respective distances from the classification boundaries [12,13]; (3) max-entropy sampling, which uses entropy as the uncertainty measure via probabilistic modeling [14,15]; and (4) diversity sampling, which prefers selecting representative samples [16].…”
Section: Introduction
confidence: 99%
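
As one concrete instance of the criteria enumerated in that excerpt, here is a minimal sketch of query-by-committee (criterion 1) using vote entropy as the disagreement measure; the committee members are arbitrary illustrative choices.

```python
# Sketch of query-by-committee: query the pool point whose predicted
# label the committee disagrees on most (vote entropy).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def qbc_query(X_lab, y_lab, X_pool):
    committee = [LogisticRegression(max_iter=1000),
                 DecisionTreeClassifier(max_depth=3),
                 GaussianNB()]
    votes = np.stack([m.fit(X_lab, y_lab).predict(X_pool) for m in committee])
    disagreement = np.zeros(len(X_pool))
    for c in np.unique(y_lab):
        frac = (votes == c).mean(axis=0)            # vote share for class c
        disagreement -= np.where(frac > 0, frac * np.log(frac), 0.0)
    return int(np.argmax(disagreement))             # most contested sample
```

Margin and max-entropy sampling differ only in the score: they replace vote entropy with the distance to the decision boundary or the entropy of a single model's class posterior, respectively.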