2022 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/sp46214.2022.9833659
Transcending TRANSCEND: Revisiting Malware Classification in the Presence of Concept Drift

Cited by 17 publications (27 citation statements)
References 30 publications
Citation types: 2 supporting, 19 mentioning, 0 contrasting
“…First, we adapt TRANSCENDENT [8] to active learning. TRANSCENDENT [8] was originally designed to support classification with rejection, so that the classifier can decline to make any prediction for samples that appear to have drifted. In particular, they construct two scores to recognize drifted samples and decline to classify any sample whose scores are too low.…”
Section: Improved Active Learning Schemes
Citation type: mentioning, confidence: 99%
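The quoted passage describes TRANSCENDENT's rejection mechanism only at a high level. The sketch below is a minimal illustration, not the paper's exact procedure: it assumes a conformal-evaluation-style setup in which two scores (credibility and confidence) are computed against a held-out calibration set, and a sample is rejected when either score falls below a threshold. The nonconformity measure (1 minus the predicted class probability), the threshold values, and the helper names `nonconformity`, `credibility_confidence`, and `predict_with_rejection` are all assumptions made for illustration.

```python
import numpy as np


def nonconformity(model, X, labels):
    """Nonconformity of each sample w.r.t. a candidate label: here simply
    1 - predicted probability of that label (an assumed, illustrative choice)."""
    proba = model.predict_proba(X)
    cols = np.searchsorted(model.classes_, labels)
    return 1.0 - proba[np.arange(len(X)), cols]


def credibility_confidence(model, X_cal, y_cal, x):
    """Per-class conformal p-values for one test sample x, computed against a
    held-out calibration set; credibility = largest p-value,
    confidence = 1 - second-largest p-value."""
    p_values = []
    for c in model.classes_:
        alpha_cal = nonconformity(model, X_cal[y_cal == c], y_cal[y_cal == c])
        alpha_x = nonconformity(model, x.reshape(1, -1), np.array([c]))[0]
        # p-value: fraction of calibration points at least as nonconforming as x
        p_values.append((1 + np.sum(alpha_cal >= alpha_x)) / (1 + len(alpha_cal)))
    p_values = sorted(p_values, reverse=True)
    return p_values[0], 1.0 - p_values[1]


def predict_with_rejection(model, X_cal, y_cal, x, cred_thr=0.1, conf_thr=0.6):
    """Decline to classify x if either score is too low (thresholds are illustrative)."""
    cred, conf = credibility_confidence(model, X_cal, y_cal, x)
    if cred < cred_thr or conf < conf_thr:
        return None  # likely drifted: reject rather than guess
    return model.predict(x.reshape(1, -1))[0]
```

In the active-learning adaptation described by the citing work, the rejected (low-score) samples are natural candidates to prioritize for labeling rather than simply being discarded.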
“…On "present" data, performances of the reference and test models appear highly comparable (AUC_R = 0.9981 versus AUC_T = 0.9984), but on future data, model performances are known to degrade (and so will our estimates of them) as covariate shift, concept drift, and/or label drift mount [13][14][15]. Using Firenze on the unlabeled dataset, we investigate to what extent the change in model architecture improves performance by (i) increasing true malicious file identifications (true positives) by the model, without increasing false positives, and (ii) improving identification of benign files without increasing false negatives.…”
Section: Evaluating Malware Detection Models Using Firenze
Citation type: mentioning, confidence: 99%
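As a minimal sketch of the "present data" comparison quoted above, the snippet below computes ROC AUC for a reference model and a candidate (test) model on a labeled holdout set, assuming scikit-learn-style classifiers. The names `reference_model`, `test_model`, `X_holdout`, `y_holdout`, and `compare_models_on_present` are illustrative; the cited Firenze framework itself performs its checks on unlabeled future data.

```python
from sklearn.metrics import roc_auc_score


def compare_models_on_present(reference_model, test_model, X_holdout, y_holdout):
    """Return (AUC_R, AUC_T) for the reference and candidate models on labeled data."""
    auc_r = roc_auc_score(y_holdout, reference_model.predict_proba(X_holdout)[:, 1])
    auc_t = roc_auc_score(y_holdout, test_model.predict_proba(X_holdout)[:, 1])
    return auc_r, auc_t
```

On drifted future data such labels are typically unavailable, which is why the quoted evaluation falls back to indirect checks: whether the candidate model flags additional malicious files without adding false positives, and identifies more benign files without adding false negatives.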
“…Worse still, any effortfully collected labeled data carries a high risk of becoming obsolete, since adversaries frequently change tactics and move targets. Taken together, these peculiarities can induce severe concept drift, label drift, and covariate shift [13][14][15].…”
Section: Introduction
Citation type: mentioning, confidence: 99%