2022
DOI: 10.1145/3544746

FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments

Abstract: As advances in Deep Neural Networks (DNNs) demonstrate unprecedented levels of performance in many critical applications, their vulnerability to attacks is still an open question. We consider evasion attacks at testing time against Deep Learning in constrained environments, in which dependencies between features need to be satisfied. These situations may arise naturally in tabular data or may be the result of feature engineering in specific application domains, such as threat detection in cyber security. We pr…
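The abstract is truncated above. As a rough illustration of the attack setting it describes, below is a minimal sketch of a projected gradient evasion attack that re-imposes a feature dependency after every step. The model interface, the toy constraint (one feature equal to the sum of two others), and all names are assumptions for illustration; this is not FENCE's actual algorithm, which handles richer constraint families.

```python
# Hypothetical sketch: projected gradient evasion with a toy feature
# dependency. Not FENCE's algorithm; names and the constraint are
# illustrative assumptions.
import torch
import torch.nn as nn

def project(x):
    """Re-impose a toy dependency: feature 0 must equal the sum of
    features 1 and 2 (e.g., total_bytes = bytes_in + bytes_out),
    and all features must stay non-negative."""
    x = x.clone()
    x[:, 0] = x[:, 1] + x[:, 2]
    return x.clamp(min=0.0)

def constrained_evasion(model, x, y, eps=0.5, step=0.05, iters=50):
    """Maximize the classification loss on (x, y) inside an L-inf ball,
    projecting back onto the dependency set after every update."""
    x_adv = x.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay in the ball
            x_adv = project(x_adv)                    # restore dependencies
    return x_adv.detach()
```

Each gradient step can break the dependency, which is why the projection runs after every update rather than once at the end.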

Cited by 17 publications (6 citation statements). References 73 publications.
“…First, we performed evaluations on the raw data. As the multi-class classification models require a fixed input shape and an arbitrary elevation profile does not have a specific length, we divided the elevation profiles into equal-length (32) chunks and used the raw data to train and test the models. For all datasets and threat models, we used a slightly modified version of the soft voting ensemble method while testing with raw data.…”
Section: Evaluation Results
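A minimal reading of the chunking-plus-voting scheme described above, assuming an sklearn-style classifier with predict_proba; the cited work's "slightly modified" voting rule is not specified, so plain soft voting is shown and all names are hypothetical:

```python
import numpy as np

def chunk_profile(profile, size=32):
    """Split a variable-length elevation profile into fixed-length
    chunks of `size` samples, dropping any ragged tail."""
    n = len(profile) // size
    return np.asarray(profile[: n * size]).reshape(n, size)

def soft_vote(model, profile, size=32):
    """Label a whole profile by averaging per-chunk class probabilities
    (plain soft voting; the cited paper uses an unspecified variant)."""
    probs = model.predict_proba(chunk_profile(profile, size))
    return probs.mean(axis=0).argmax()  # class with highest mean probability
```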
“…in the image domain is a valid perturbation candidate. We emphasize, however, that this issue is not particular to the problem space we address in this paper, but is applicable to a range of problems in general, such as the software [29]–[31] and network domains [32], where the feature representation used for implementing the machine learning algorithm transforms the input by upholding a dependency among the features, which is not the case in the original image modality used in computer vision applications [33].…”
Section: Discussion
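To make the dependency point concrete, here is a minimal sketch of the kind of validity check such a feature representation implies; the feature names and rules are hypothetical examples, not taken from the cited works:

```python
def satisfies_dependencies(f: dict) -> bool:
    """A perturbed feature vector is a valid candidate only if the
    engineered dependencies still hold (hypothetical feature names)."""
    return (
        f["total_packets"] == f["packets_in"] + f["packets_out"]
        and f["duration_seconds"] >= 0
        and (f["total_packets"] > 0 or f["total_bytes"] == 0)
    )

# In the image domain, by contrast, any pixel value in the valid range
# is a candidate; no cross-feature check applies.
print(satisfies_dependencies(
    {"total_packets": 10, "packets_in": 6, "packets_out": 4,
     "duration_seconds": 1.5, "total_bytes": 4096}))  # True
```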
“…In the fourth problem, we propose algorithms for designing evasion attacks against ML classifiers to test the robustness of DNN models and understand possible countermeasures [36].…”
Section: Thesis Contributions
“…• Robustness of ML security measures: We design ML techniques to predict malicious or benign behavior in networks and study their robustness by developing a framework for mounting evasion attacks against ML models that are feasible in constrained cybersecurity environments [36].…”
Section: Thesis Contributions