2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8852056
Extreme Dimensionality Reduction for Network Attack Visualization with Autoencoders

Cited by 14 publications (7 citation statements). References 45 publications.
“…Different autoencoder architectures have been proposed to reduce the feature dimensionality of the most popular network intrusion datasets. These methods were implemented and evaluated on network traffic data from publicly available datasets, including KDD-Cup99 [49]–[56], NSL-KDD [49], [57]–[60], UNSW-NB15 [50], [59], [61] and CICIDS2017 [62]. Table I shows the autoencoder-based feature dimensionality reduction techniques in the literature.…”
Section: Related Work
confidence: 99%
“…A well-consolidated research stream has focused on the use of neural autoencoding models to lower the high dimensionality of original raw data in favour of compressed representations that exclude features prone to misclassification [10]–[13], [28]. Stacked autoencoders are also considered in combination with traditional classifiers (e.g., SVM, K-NN, Gaussian Naive-Bayes) [29].…”
Section: Related Work
confidence: 99%
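The "compressed representation fed to a traditional classifier" pipeline quoted above can be illustrated with a toy sketch. This is a minimal assumption-laden example, not code from any of the cited works: `W_enc` merely stands in for the encoder of an already-trained autoencoder, the data is random, and the classifier is a simple 1-nearest-neighbour rule.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: random "traffic" features and binary labels (illustrative only).
X_train = rng.normal(size=(60, 12))
y_train = (X_train.sum(axis=1) > 0).astype(int)
X_test = rng.normal(size=(20, 12))

# Stands in for the learned encoder weights of a trained (stacked) autoencoder.
W_enc = rng.normal(scale=0.3, size=(12, 4))

Z_train = X_train @ W_enc          # compressed representations
Z_test = X_test @ W_enc

def knn_predict(z):
    """Classify one compressed sample by its nearest training neighbour."""
    dists = np.linalg.norm(Z_train - z, axis=1)
    return y_train[np.argmin(dists)]

preds = np.array([knn_predict(z) for z in Z_test])
print(preds.shape)   # (20,)
```

In the cited setups the K-NN step would typically be replaced by an SVM or Gaussian Naive-Bayes model trained on `Z_train`; the design point is the same, namely that the classifier only ever sees the encoder's low-dimensional output.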
“…An autoencoder is an artificial neural network (NN) consisting of an encoder function, which maps the input to a hidden code, and a decoder, which produces the reconstructed input; both are learned by minimizing a loss function. As the hidden code commonly reduces the size of the data, autoencoders are mostly used by keeping the output of the encoder function for dimensionality reduction [9]–[13]. That said, few studies train autoencoders for purposes beyond dimensionality reduction, e.g., using the output of the decoder function for data denoising [14] or the loss (residual error) for anomaly detection [15]–[17].…”
Section: Introduction
confidence: 99%
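The encoder-code-decoder structure described above can be sketched with a minimal linear autoencoder trained by gradient descent on the reconstruction loss. This is a bare-bones NumPy illustration under assumed toy dimensions (10 inputs, 2-dimensional code), not the architecture of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples with 10 features (a stand-in for traffic features).
X = rng.normal(size=(200, 10))

d_in, d_code = 10, 2                                  # code is smaller than input
W_enc = rng.normal(scale=0.1, size=(d_in, d_code))
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))
lr = 0.01

for _ in range(500):
    code = X @ W_enc              # encoder: input -> hidden code
    X_hat = code @ W_dec          # decoder: hidden code -> reconstruction
    err = X_hat - X               # residual driving the reconstruction loss
    # Gradient steps on the mean squared reconstruction error.
    g_dec = code.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# For dimensionality reduction, only the encoder output is kept.
Z = X @ W_enc
print(Z.shape)   # (200, 2)
```

The three uses the excerpt distinguishes map onto three quantities here: `Z` (encoder output) for dimensionality reduction, `X_hat` (decoder output) for denoising-style reconstruction, and the magnitude of `err` per sample for anomaly scoring.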
“…1 (c). Compared with the previously sequentially executed training process, SSAE achieves extraction of high-level features that are highly correlated with both the input data x and the label information y, by simultaneously optimizing θ_AE and θ_cl [35], [40]–[43].…”
Section: Semi-supervised Autoencoder
confidence: 99%
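The joint optimization of θ_AE and θ_cl described in this excerpt can be sketched as a single loss combining a reconstruction term and a supervised classification term, with the encoder receiving gradients from both. The layer sizes, weighting factor `lam`, and data are all illustrative assumptions, not values from the cited work:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy labeled data: 100 samples, 8 features, binary labels (illustrative only).
X = rng.normal(size=(100, 8))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)

d_code = 3
W_enc = rng.normal(scale=0.1, size=(8, d_code))   # part of theta_AE (encoder)
W_dec = rng.normal(scale=0.1, size=(d_code, 8))   # part of theta_AE (decoder)
w_cl = rng.normal(scale=0.1, size=(d_code, 1))    # theta_cl (classifier head)
lr, lam = 0.02, 0.5                               # lam weights the supervised term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def joint_loss():
    code = X @ W_enc
    rec = np.mean((code @ W_dec - X) ** 2)                # reconstruction term
    p = sigmoid(code @ w_cl)
    ce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return rec + lam * ce                                 # combined objective

loss_before = joint_loss()
for _ in range(500):
    code = X @ W_enc
    err_rec = code @ W_dec - X
    err_cl = sigmoid(code @ w_cl) - y
    # The encoder gets gradient signal from BOTH objectives at once, which is
    # what ties the learned code to the input x and the labels y simultaneously.
    g_enc = X.T @ (err_rec @ W_dec.T + lam * err_cl @ w_cl.T) / len(X)
    W_dec -= lr * (code.T @ err_rec / len(X))
    w_cl -= lr * lam * (code.T @ err_cl / len(X))
    W_enc -= lr * g_enc

loss_after = joint_loss()
print(bool(loss_after < loss_before))   # True
```

A sequentially executed process, by contrast, would first minimize the reconstruction term alone, freeze `W_enc`, and only then fit `w_cl`; the simultaneous update of `g_enc` above is the distinction the excerpt draws.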
“…Lastly, visualization of the monotonic health state transition in 2D space has yet to be addressed by other research. Since 2D graphics provide the most obvious and readable spatial representation for the human eye, a 2D health feature space (HFS) can intuitively show diagnosis results [35].…”
Section: Introduction
confidence: 99%