2018 Wireless Telecommunications Symposium (WTS)
DOI: 10.1109/wts.2018.8363930
Autoencoder-based network anomaly detection

Cited by 238 publications (120 citation statements) · References 13 publications
“…The structure of the self-supervised voltage sag source identification method based on CNN is shown in Figure 4. … weights (or vectors) are used to compute the projection of the input data onto the weights and thereby to obtain the coding [35,36]. The basic structure is shown in Figure 3.…”
Section: Self-supervised Voltage Sag Source Identification Methods
Citation type: mentioning
Confidence: 99%
“…Algorithms such as recurrent neural networks (RNNs) and 1-D convolutional neural networks (CNNs) have been shown to provide state-of-the-art results on challenging activity-recognition tasks with little or no manual feature engineering, instead using feature learning on raw data [21]. Among deep learning algorithms, the autoencoder (AE) is an unsupervised way of learning the features of the training data [22]. It is a type of neural network whose prime function is to reconstruct the input data as its output.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
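The reconstruct-the-input idea described in this excerpt can be made concrete with a toy example. The sketch below is a hypothetical NumPy illustration, not code from the cited works: a one-hidden-layer linear autoencoder trained by gradient descent, whose hidden activations serve as a learned low-dimensional code for the data.

```python
# Hypothetical sketch (not from the cited papers): a linear autoencoder
# trained to reconstruct its input; the hidden code is a learned feature.
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden=2, lr=0.05, epochs=500):
    """Fit encoder W1 and decoder W2 to minimise ||X - X @ W1 @ W2||^2."""
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden))   # encoder weights
    W2 = rng.normal(scale=0.1, size=(hidden, d))   # decoder weights
    for _ in range(epochs):
        H = X @ W1          # code: projection of the input on the weights
        E = H @ W2 - X      # reconstruction error
        W2 -= lr * (H.T @ E) / n            # gradient step for the decoder
        W1 -= lr * (X.T @ (E @ W2.T)) / n   # gradient step for the encoder
    return W1, W2

# synthetic data lying in a 2-D subspace of a 5-D feature space
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))
W1, W2 = train_autoencoder(X)
mse = float(np.mean((X - X @ W1 @ W2) ** 2))
print(f"reconstruction MSE: {mse:.4f} (input variance: {np.mean(X**2):.4f})")
```

After training, the reconstruction error is far below the raw variance of the input, which is exactly the property anomaly detectors exploit: inputs unlike the training data reconstruct poorly.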
“…[Sakurada and Yairi, 2014] show that the non-linearity of autoencoders allows for better detection of anomalies than linear PCA, whilst being computationally cheaper than kernel PCA. Autoencoder variants are also commonly used, including recurrent [Chauhan and Vig, 2015; Malhotra et al., 2015], convolutional [Chen et al., 2018], denoising [Feng and Han, 2015] and variational models [An, 2015]. They have also been used just for feature learning, with a separate prediction network, such as a Gaussian Mixture Model [Bo et al., 2018], using the learnt features to make predictions.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
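The anomaly criterion these works share, scoring points by their reconstruction error, can be illustrated with the linear-PCA baseline that Sakurada and Yairi compare against. The data and threshold below are hypothetical, chosen only to show the mechanism.

```python
# Hypothetical illustration of reconstruction-error anomaly scoring with the
# linear-PCA baseline: points far from the principal subspace of the normal
# data reconstruct poorly and are flagged as anomalies.
import numpy as np

rng = np.random.default_rng(1)

# "normal" samples lie in a 2-D subspace of a 5-D feature space
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 5))
anomalies = rng.normal(size=(10, 5)) * 5.0       # off-subspace points

mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
V = Vt[:2].T                                     # top-2 principal directions

def recon_error(X):
    """Squared error of reconstructing X from its projection onto V."""
    C = X - mu
    return np.sum((C - C @ V @ V.T) ** 2, axis=1)

# threshold derived from the normal data's own error distribution
scores = recon_error(normal)
threshold = scores.mean() + 3.0 * scores.std()
flagged = recon_error(anomalies) > threshold
print(f"flagged {int(flagged.sum())} of {len(anomalies)} anomalies")
```

Replacing the PCA projection with a trained (possibly non-linear) autoencoder, while keeping the same thresholded reconstruction-error score, yields the detection scheme the cited works build on.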